
Q Exactive HF and deep phosphoproteomics papers

Posted: Wed Oct 29, 2014 1:59 am
by Kelstrup

Our group just published two technology papers that could be of interest to some of you.

Q Exactive HF for shotgun proteomics

Off-line high pH reversed-phase fractionation for in-depth phosphoproteomics

Comments/questions are welcome :)

Posted: Thu Oct 30, 2014 4:12 am
by tsbatth
Good work ;)

Posted: Thu Oct 30, 2014 11:27 am
by AAlpert
In response to your second paper (Batth et al., J. Proteome Res., online 22 Oct. 2014):

This concerns the use of high-pH reversed-phase (HpH) fractionation with concatenated fraction collection offline, followed by enrichment for phosphopeptides. The large number of phosphopeptides that you identified indicates that this approach is well worth considering.

Now, you compare this approach with peptide fractionation by SCX, which you put forth as the method to beat because of its widespread use. Fine. However, this isn't the first time that the two modes have been compared. Your ref. 17 (Wang et al., Proteomics 11 (2011) 2019) reported a 1.8- and 1.6-fold increase in peptide (i.e., tryptic peptides of all kinds, not just phosphopeptides) and protein identifications with concatenated HpH compared with SCX. Here you claim a 4-6-fold increase in phosphopeptide identifications over SCX. Why the disparity between the two studies?

Wang et al. used a PolySULFOETHYL A column. You used a Resource S column. Resource S is manufactured by GE Healthcare for use in cation exchange of proteins. It's a perfusive material with through-pores as wide as 10,000 Å and side pores of 200 Å; particle diameter 15 µm; surface area 70 m2/g. Getting back to PolySULFOETHYL A: the 200-Å version that most labs use for phosphopeptide work has a surface area of ~200 m2/g and a thicker coating than Resource S. Effectively, it has about 3-4x higher capacity for peptides. That means that the early-eluting peptides in SCX, including about 30% of the phosphopeptides, are better retained and resolved, so the phosphopeptides can be better separated from the weakly retained unmodified peptides. Resource S can't do this; no wonder the void-volume peak in your Fig. 1 (Fraction 1) is huge! You tried to compensate for this by performing two titania enrichments of this fraction in succession, but that doesn't come close to compensating for good fractionation in the first place (which is the entire rationale for the concatenation method).

Also, your Waters BEH column for HpH has a particle diameter of 3.5 µm, not 15 µm. Result: much sharper peaks, with fewer peptides eluting in peaks so wide that they're split between adjacent fractions. Basically, the SCX column that you chose is totally unsuitable for this comparison.
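For readers unfamiliar with the concatenation scheme referred to above: consecutive fractions are pooled round-robin so that each final pool samples the whole gradient, which is what makes orthogonality between the two dimensions matter. A minimal sketch (illustrative fraction and pool counts only, not the numbers used in either paper):

```python
# Concatenated fraction pooling: n_fractions consecutive HpH fractions are
# combined into n_pools by taking every n_pools-th fraction, so each pool
# contains early-, middle-, and late-eluting material from the gradient.

def concatenate_fractions(n_fractions: int, n_pools: int) -> list:
    """Assign fraction numbers 1..n_fractions to pools round-robin."""
    pools = [[] for _ in range(n_pools)]
    for frac in range(1, n_fractions + 1):
        pools[(frac - 1) % n_pools].append(frac)
    return pools

# Example: 60 collected fractions concatenated into 12 pools for LC-MS/MS.
pools = concatenate_fractions(60, 12)
print(pools[0])  # pool 1 combines fractions 1, 13, 25, 37, 49
```

The point of AAlpert's argument is that this scheme only pays off if the first dimension actually resolves peptides across all fractions; a huge void-volume peak defeats it.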

Moving on: in recent years, the ERLIC method has been shown to outperform both SCX and HILIC for fractionation of tryptic digests in general and for identification of phosphopeptides in particular. For phosphopeptides, there is a well-controlled comparison in X. Chen et al., J. Chromatogr. B 879 (2011) 25. Boichenko et al. (J. Sep. Sci. 36 (2013) 3463) found ERLIC to outperform HpH for identification of tryptic peptides in general. Finally, concatenated ERLIC was compared with concatenated HpH (P. Hao et al., J. Proteomics 82 (2013) 254). The ERLIC method identified nearly as many unique peptides and a greater number of proteins. This seems to cast doubt on the statement on pg. 3 of your paper that fractionation methods based on hydrophobic interactions, such as reversed-phase chromatography, are more efficient at separating peptides than charge-based separations like SCX. Of course, you could rationalize your statement by saying that ERLIC is a combination of charge-based separation plus HILIC.

Posted: Thu Oct 30, 2014 2:42 pm
by tsbatth
Hey AAlpert thanks for such a careful reading of the paper :) .

Regarding the Wang study from 2011, I think that comparison is hard to make because 1) they used a different column for their reversed-phase fractionation, 2) a different instrument to analyze the fractions, 3) a different cell line under different conditions, and lastly 4) they don't perform an enrichment step for phosphopeptides.

You're correct that there is a difference in the SCX column used between our studies, and our SCX fractionation method could definitely be optimized further. But as we mention on page 10, our purpose wasn't to use SCX because it's "the method to beat" but merely to provide a reference point, since it's a widely adopted protocol and had been used in our lab many times before. In fact, we cite several papers, including some of the references you provide (such as ERLIC), which show other methods outperforming SCX. However, I disagree that the SCX column is totally unsuitable for the comparison. In fact, Zarei et al. (2013, JPR) did a comparison and found that SCX (on the exact same Resource S column as ours, with a similar gradient) outperformed HILIC and ERLIC for phosphopeptide detection.

I couldn't locate the X. Chen paper, but in the Boichenko and Hao studies the comparisons concern non-phosphorylated peptides, which was not the goal of our study. You're correct that our statement was made in relation to traditional charge-based separations such as IEF, SCX and SAX, and not HILIC or ERLIC per se.

I would say the focal point of the manuscript wasn't to show how much better our method is compared to SCX (although it is much better) but to show its usefulness in an off-line 2D setup, how it can be optimized along with phospho enrichment (if using TiO2), and, more importantly, the importance of "sensitive" detection parameters for phosphoproteomics analysis with Orbitrap systems. Taking all these things into account lets you go much deeper into the phosphoproteome, with all the other benefits that come with that, as we describe in detail.

Thanks for the questions. Feel free to ask more!



Posted: Fri Oct 31, 2014 7:27 am
by AAlpert
Your data on the efficacy of the HpH approach stands on its own. I would like to respond to your correct observation that most of the ERLIC references I cited involved its use for fractionation of tryptic digests in general, not isolation of phosphopeptides. Your ref. 17 (a paper from Richard Smith's group at PNNL) also concerned tryptic digests in general. At ASMS 2011, Yuexi Wang from that lab gave a lecture (session WOA, 3:10 PM) in which he compared concatenated HpH with ERLIC, with fractions from both methods subsequently processed with IMAC to isolate the phosphopeptides. Their figures (HpH/ERLIC) were the following:
Unique phosphorylation sites: 19,745/15,976
Unique phosphopeptides: 25,057/19,454
Unique phosphoproteins: 4,720/4,377
pY sites: 458/409

The sets of phosphopeptides identified didn't have a high degree of overlap.

They never published this data. I have a copy of the slide presentation; contact me offlist if you want a copy.

Posted: Fri Oct 31, 2014 8:43 pm
by Christopher
Interesting papers guys.

Regarding the first one, it would have been nice to see you pick a different organism. Have you benchmarked this in yeast, perhaps? To be frank, HeLa (or any human sample, for that matter) is not that interesting in this context, in my opinion. What was interesting about the Orbi Fusion paper was that they could get a high percentage of total proteome coverage in a less complex organism. That is the area where this type of analysis (where you are stuffing 1 µg of material into 60 minutes of run time so you can actually get away with using a 15 ms fill time) is going to see actual use in the field. It would have been nice to see how it stacks up to that. Maybe you have this data? Or perhaps to see how many PRM masses you could fit into that short run.

I would also have liked to see how it performs on an extended gradient rather than with fractionation. Did you guys do this, perhaps, and just not show the data? The data obviously isn't going to scale with longer runs, but it would be interesting to see how you would compensate for that and potentially pull 6-, 7-, 8K proteins in a single-shot run, which is not that hard to achieve on the Fusion and is getting into a very interesting and very applicable area for human cell samples.

Not trying to sound negative...I enjoyed the paper! Just thinking out loud.

Posted: Mon Nov 03, 2014 2:05 am
by Kelstrup
Thanks Christopher for reading it and thinking aloud :)

Regarding the lack of yeast samples, this was also something I thought about. However, mainly for political reasons we are unfortunately no longer growing yeast, so there is a practical consideration too. A further requirement is of course sample prep, which the Coon lab seems to be very good at; it would take quite some effort for us to get to the same level on yeast, and we have spent a long time getting our protocols for human cell culture samples "clean". Practical considerations aside, I must admit I don't particularly like yeast with the target-decoy strategy, as very scratchy fragment scans can more often be enough to make an identification. These are by all means probably correct identifications; I just don't like it. And the Fusion together with yeast (ion-trap sensitivity/speed, small search space) will anyhow win any comparison (on yeast!), I think.

I did try longer gradients; as expected, I was unable to achieve a similar benefit. There is perhaps more on this in the Mann paper (MCP link). My purpose was to show the case where I think the Q Exactive HF benefits the most, i.e. situations where gradient length or sample numbers are a limitation, perhaps combined with off-line fractionation. In my view, a longer gradient can be seen as a sample dilution, i.e. not too interesting for fast-scanning methods. I'd rather consider going in the opposite direction, to even shorter gradients. :)

Thanks again for the comments!

Posted: Mon Nov 03, 2014 4:02 pm
by Christopher
Thanks for the reply!

Yes, I agree that it would be interesting to go to even shorter gradients, though not for something like a whole-cell lysate. I think this is especially true if you start to get into things like AP-MS-style experiments and medium-complexity samples, where you can profile the entire thing very fast. I would just worry about compromising the quant information by limiting the number of points you have across your peak, if you are using MS1 data. Definitely some potentially interesting areas, though.
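The sampling worry above comes down to simple arithmetic: the number of MS1 points across a chromatographic peak is roughly the peak width divided by the instrument's duty-cycle time. A quick sketch (the numbers are illustrative, not from either paper):

```python
# Back-of-envelope check of MS1 sampling across a chromatographic peak:
# shorter gradients compress peak widths, while the cycle time (one MS1 scan
# plus its MS/MS scans) stays fixed, so the points per peak drop.

def points_per_peak(peak_width_s: float, cycle_time_s: float) -> float:
    """Approximate number of MS1 sampling points across one peak."""
    return peak_width_s / cycle_time_s

# A 12 s wide peak sampled every 1.5 s gives 8 points across the peak;
# compressing the same peak to 6 s leaves only 4, marginal for MS1 quant.
print(points_per_peak(12.0, 1.5))
print(points_per_peak(6.0, 1.5))
```

This is why fast-scanning instruments like the HF are what make very short gradients viable for quantification in the first place.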