SP – Can you tell us a little about how you got interested in science as a career and how you chose (or ended up) in the field of proteomics?
Mike – I’ve always been interested in science. My father is an organic chemist and he got me excited about science at an early age. I went to college at the University of Vermont and, while I was a chemistry major, I never really foresaw myself doing “chemistry.” I have a short attention span and found myself more interested in spending time in the mountains skiing, climbing, and hiking than in doing traditional coursework. My grades were never good and I found myself teetering on academic probation year after year. It drove my parents crazy.
My first research experience was actually in the geology department. In my junior year I made 13C/12C isotope ratio measurements of carbonate rock for stratigraphy and of maple syrup to assess adulteration. Doing lab work and trying to make something work was much more exciting than the theory we spent our time learning in class. Interestingly, I found myself drawn most to the mass spectrometer itself, an old VG SIRA II viscous-leak isotope ratio instrument. I then tried to find an internship where I could learn more mass spectrometry that summer (1995). I was very fortunate to be given an opportunity to work in Patrick Griffin’s lab at Merck Research Labs. My research project was to get SEQUEST to work with post-source decay data from a MALDI-TOF, and we built a web interface for modifying the parameters, starting the jobs, and viewing the results in HTML. We were using Netscape 1.1 and this was my first foray into programming. We worked closely with John Yates’ lab on this project, and it got me excited about the possibility of actually pursuing a career in science at the interface of technology and biology.
I returned to Merck the next summer. It was right after the release of the Finnigan LCQ. Nathan Yates (no relation to John Yates) had joined Merck a couple of weeks prior, and between Nate and Pat I got a huge education in electrospray ionization, nanoLC, ion traps, etc… during that summer. The hook was set and I was definitely interested in trying to make a career out of biomedical applications of mass spectrometry. Unfortunately, I had applied to 14 graduate schools and received 14 rejections, so the path wasn’t that obvious.
I had heard that Dwight Matthews was going to be moving from Cornell Medical School to the University of Vermont Medical School with an appointment in Chemistry. I wrote Dwight before he moved to Vermont and literally begged him to take me as a student. He agreed and gave me a second chance at science. My doctoral research was on the fundamental limits that defined our ability to measure a small amount of a stable isotope labeled amino acid administered to humans in the presence of a huge endogenous unlabeled excess. Dwight taught me the fundamentals of quantitative analysis and isotopes, and that the details matter more in making something work than buzzwords and hype. He taught me how to write, how to give presentations, and pretty much forced me to grow up. My graduate education was kind of like a kid being told to eat his vegetables: you push back when it is happening, but years later you are thankful you were forced to do things you didn’t want to do.
Protein mass spectrometry was still a huge interest of mine so as I was finishing my thesis I emailed John Yates and asked him for a post-doc. He remembered me from when I was at Merck and also remembered me getting rejected for graduate school in his department at the University of Washington. He offered me a position and I started at the Scripps Research Institute in January 2001.
While I knew the basics of mass spectrometry and quantitative analysis, I had a lot to learn about protein biochemistry and cell biology. Christine Wu, a cell biologist in the lab, and I started working together, and she taught me almost everything I know about protein biochemistry and cell biology. We collaborated on a ton of projects: the first metabolically labeled rodent for proteomics, measurement of membrane proteins from numerous samples, identification of PTMs, development of some early quantitative analysis software, and many more that never saw publication.
That time in the Yates lab was incredible. It opened my eyes to the need for an interdisciplinary group for proteomics; there were many talented scientists with diverse backgrounds in the lab at the time, and it significantly altered my perception of what an academic research lab could be. Scientifically, it was one of the most memorable times of my career. We spent all day, every day dreaming about experiments and finding ways to make the impossible possible. We argued and debated everything, and John gave us the freedom to pursue some pretty ambitious projects.
In January 2004, I started my lab at the University of Washington. I was initially in John Yates’ old lab space and was pretty humbled by that. I knew that I wanted an interdisciplinary group that merged my expertise in isotopes, mass spectrometry, and informatics to work on proteomics problems. While I have learned a lot over the years, my weakness is still in the biology. I still rely on Chris Wu and other colleagues for expertise in the biochemistry and cell biology.
SP – Few labs have dedicated as many resources to continually developing and improving their software as your group has with Skyline. Can you tell us a little about how this project originated and why you decided to put so much effort into continuously improving it?
Mike – The plans for Skyline started at the ASMS meeting in 2008. I gave a presentation at the Thermo user meeting making a case that method development for the quantitation of peptides using selected reaction monitoring required different computational solutions than what was currently available for small molecule quantitation. I presented a bit on my thoughts on how this should be done but was very light on data — most of the talk was speculative. Dan Liebler and I started chatting during that meeting about what it would take to actually make it happen.
A couple of weeks after ASMS I met Brendan MacLean, a very experienced software engineer, and we began discussing what it would take to build a professionally developed, vendor-neutral, free, open source software tool for targeted proteomics. We were ambitious: we wanted the tool to offer state-of-the-art peak detection and quantitation, a simple and easy-to-use installer, a Windows graphical user interface, multilevel undo/redo, and extensive visualization capabilities; to be latency free and protein and peptide centric; and to support all major instrument vendors without file conversion. Brendan has since built an impressive group around this objective that has resulted in a software tool that has now been installed >15,000 times, supports five major mass spectrometry vendors, and is launched >3,500 times per week. While we started by supporting only QqQ data from selected reaction monitoring experiments, we have expanded to support MS1 data, targeted MS/MS, and more recently data independent acquisition (aka SWATH). While Skyline is being developed in an academic research lab, it is a professionally developed software tool.
The development of Skyline quickly became a community effort. As we have gained users, we have gained the support needed to ensure that the project is successful. We have an excellent group of people who help us test the software and provide feedback, and our users have been extremely generous in sharing data with us to make Skyline better. It is a continuous cycle, and Brendan is definitely the best person to manage this process. Skyline wouldn’t be Skyline if the project weren’t managed by Brendan.
I have learned a lot during this project. While I had written “code” for most of my scientific career, I now realize that I knew nothing about software engineering. Our field is filled with publications describing an algorithm that can identify more peptides, get better sensitivity, or do some unusual calculation. Most of these tools aren’t readily usable by another lab (I’m guilty of it too). Scientists aren’t generally rewarded for making an important problem routine for other labs. Most people don’t need the latest and greatest tool; they just need something that works every day, on any computer, and with any data file. We need to take routine analysis out of the hands of the bioinformatician and put it into the hands of the instrument operator. The goal of the Skyline project was to solve these basic problems from the beginning. We need more projects like this, where the goal is to build a free resource for the community that is supported, documented, and made to push our field forward, not just to generate a paper or gain a competitive edge over other labs.
SP – The Skyline users’ group meeting has been quite popular, especially for a free software package. Can you tell us a little about the users’ meeting and why you think it is so well attended?
Mike – Last year the Skyline Users’ Meeting exceeded our expectations. The meeting was initiated largely because our users asked for it: they wanted an opportunity to meet other users, see how others are using Skyline, and hear about the latest new features. Some attendees were experienced users who wanted to hear about the latest features, while others were new users who wanted to learn what sorts of experiments Skyline can be used for. We had 135 people attend last year and we hope to exceed that number this year. There will be a lot of great new features announced at ASMS in 2013, and we hope to get more of our users using Panorama, our server-side companion to Skyline. For info on Panorama see: http://panoramaweb.org.
SP – Targeted proteomics was named the method of the year by Nature Methods. Was there a particular development that you think led to this decision? Or is it just the cumulative effect of the growing popularity of the method?
Mike – I think the move to targeted proteomics is a logical progression. At some point we need to stop making lists and start making quantitative measurements that we can believe and that can be performed in any lab and between labs day-in and day-out. These experiments will ultimately require accurate signal calibration of proteins and peptides — a nontrivial problem.
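One common form of the signal calibration mentioned above is a curve built from known spiked-in amounts of a stable-isotope-labeled peptide standard, against which unknown samples are back-calculated. The sketch below is illustrative only: the amounts, peak areas, and helper names are hypothetical, and a real workflow (e.g., in Skyline) would add replicates, weighting, and quality control.

```python
# Minimal sketch of a peptide calibration curve: spiked amounts of a
# stable-isotope-labeled standard vs. measured peak areas (hypothetical data).

def fit_line(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Spiked amount (fmol on column) and measured peak area (arbitrary units)
amount = [1.0, 5.0, 10.0, 50.0, 100.0]
area = [2100, 10400, 20900, 103000, 208000]

slope, intercept = fit_line(amount, area)

def quantify(peak_area):
    """Back-calculate an amount from a measured area using the curve."""
    return (peak_area - intercept) / slope

print(round(quantify(52000), 1))  # unknown sample, roughly 25 fmol
```

The point of the sketch is that once the response is calibrated, an unknown's peak area maps directly to an amount, which is what makes measurements comparable between runs and between labs.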
SP – SWATH is an increasingly popular technique that some say is an alternative to SRM. What are your thoughts on this technique and how does it compare to SRM? Is it less labor intensive? How does it compare in terms of sensitivity? And is it possible that SWATH may someday replace SRM?
Mike – I don’t believe that data independent acquisition (the broader term for SWATH) will ever replace SRM, but I believe it might someday replace data dependent acquisition. If you need to measure one or two peptides, it is very difficult to beat the performance of SRM. If you need to measure many peptides in a single run with lots of simultaneous transitions, the performance of SRM will decrease relative to an ion trapping or time-of-flight instrument. I believe that most experiments in the future will start with a hypothesis (or 1,000 hypotheses), and we will collect and analyze data in a way that tests those hypotheses, as opposed to generating a list of semi-random identifications. SRM will be used when you have a few peptides you need to measure in many samples, and data independent acquisition will be used when you want to test many hypotheses in a sample.
The question of sensitivity is not straightforward. Unfortunately the answer is, “it depends.” It depends on the specificity of the precursor, the complexity of the sample, and whether the limit of detection is set by the number of ions or by specificity.
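For the ion-limited case, one widely used rule of thumb estimates the limit of detection from the variability of blank injections and the calibration slope (roughly LOD ≈ 3.3 × SD of the blank ÷ slope). Here is a minimal sketch with hypothetical numbers; note that this simple estimate breaks down when specificity, i.e. chemical interference, rather than ion statistics is what limits detection:

```python
# Rough illustration of a common limit-of-detection estimate:
# LOD ≈ 3.3 * (standard deviation of blank signal) / (calibration slope).
# All numbers are hypothetical.
import statistics

blank_areas = [110, 95, 130, 120, 105, 140, 100]  # repeated blank injections
slope = 2080.0  # peak area per fmol, taken from a calibration curve

sd_blank = statistics.stdev(blank_areas)
lod_fmol = 3.3 * sd_blank / slope
print(f"estimated LOD: {lod_fmol:.4f} fmol")
```

This captures the “number of ions” half of the trade-off; the specificity half depends on how unique the precursor and fragments are within the sample's background, which no blank measurement can reveal.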
University of Washington
Department of Genome Sciences
Mike is a leader in the field of quantitative proteomics. The focus of his lab is the development of high-throughput quantitative proteomic methods and their application to model organisms. During his post-doc he developed RelEx, one of the first tools to quantify proteins from stable isotope labeling experiments. His lab at the University of Washington has developed several widely used tools for quantitative proteomic analysis including Skyline, a free software package for the design and interpretation of targeted proteomics experiments.
MacLean B, Tomazela DM, Shulman N, Chambers M, Finney GL, Frewen B, Kern R, Tabb DL, Liebler DC, MacCoss MJ. Skyline: an open source document editor for creating and analyzing targeted proteomics experiments. Bioinformatics. 2010 Apr 1;26(7):966-8.
Preferred brand of running shoe?
I have tried pretty much every pair of running shoes imaginable. Currently I’m getting over some injuries and have been running in Hoka One One Stinson Evos. They are an unusual shoe but are pretty good at minimizing impact while also promoting a normal running gait. I also like the Inov-8 Roclite 295 and the Montrail Rogue Racer.
Favorite Seattle restaurant?
There are too many to name, and it depends on what you like to eat. I like oysters, and Elliott’s Oyster House is hard to beat.
How do you drink your coffee?
I always drink my coffee black, straight up. My wife bought a Nespresso maker and it is now my new favorite instrument. I haven’t figured out a safe way to take it IV yet.