I hate the word ‘should’ – it always implies there is something you have to do but would rather put off. Well, I recently came across a viewpoint article that made me think we really ‘should’ be doing more to make SRM more reliable. This commentary, in Proteomics (vol 9: 1124-1127), suggests that SRM is not quite foolproof… shame.
According to Duncan et al., we should all realise that quantification of a target protein cannot be achieved by targeting a single peptide, however straightforward it sounds. This is because SRM fails to take into account all the possible forms of each peptide in a given sample. These variants, such as those carrying PTMs, may be very closely related to the targeted form, yet could actually be more abundant than the form the SRM assay was designed to measure.
“When multiple progenitors exist, whether they be known or not, selectivity is often seriously compromised…Quantification based on a peptide that is common to multiple related forms leads to an overestimate of any single variant”
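The overestimation the authors describe is easy to see with some arithmetic. The sketch below uses entirely invented numbers (not from the paper): three hypothetical proteoforms share one tryptic peptide, so an SRM assay on that peptide reports the sum of all progenitors, not the amount of any single variant.

```python
# Toy illustration with hypothetical abundances (fmol) of three closely
# related proteoforms that all yield the same tryptic peptide.
proteoforms = {
    "unmodified": 10.0,
    "phosphorylated": 25.0,  # a PTM variant, here more abundant than the target
    "truncated": 5.0,
}

# The shared peptide's SRM signal aggregates every progenitor form.
srm_signal = sum(proteoforms.values())  # 40.0 fmol

# Attributing that signal to a single variant overestimates it.
target = "unmodified"
true_amount = proteoforms[target]        # 10.0 fmol
overestimate = srm_signal / true_amount  # 4.0x

print(f"SRM reports {srm_signal} fmol for a variant present at "
      f"{true_amount} fmol ({overestimate:.0f}x overestimate)")
```

The numbers are arbitrary, but the point stands for any sample where the targeted peptide has multiple progenitors: the error scales with how much of the signal comes from the forms you were not trying to measure.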
This gives us something to think about. Are we really measuring what we think we are? Is it prudent to work out the quantity of something by relying on the small fragments of it that you can see? Do these fragments really represent the protein variant as a whole?