December 1, 2011 (Vol. 31, No. 21)

J. Will Thompson, Ph.D.
Erik J. Soderblom, Ph.D.

Tips on How to Perform Proteomics Measurements with Consistency and Accuracy

For approximately 10 years, the field of proteomics has held the promise of helping researchers cross the “valley of death” in translational medicine—that is, of translating basic research findings from the bench to the bedside. While steady progress has been made in improving proteomic techniques and instrumentation, the field has so far largely failed to deliver blockbuster results in the form of novel disease biomarkers.

One approach we have used to address this shortfall and ensure reliable results from experiments performed in our laboratory is to go back to the basics of our training in analytical chemistry. Analytical chemistry teaches us that the most important features of a technique are accuracy, specificity, and reproducibility.

We would argue that some early failures in proteomics are at least partially due to insufficient attention to the analytical quality of the approaches deployed. In order to apply any novel methodology to a complex biological or translational medicine question, the methodology must first be vetted such that all analytical variables are sufficiently identified, measured, and minimized.

Proteomics is most commonly practiced in the manner known as bottom-up, which involves taking a complex mixture of proteins and proteolytically digesting it with a protease, such as trypsin, into fragments ~8 to 35 amino acids in length. This step makes the mixture even more complex by increasing the total number of unique molecules to be analyzed, but it is advantageous because the biophysical properties of peptides make them much easier to separate (using liquid chromatography) and to quantify and identify (using mass spectrometry) than the parent proteins.

With the advent of modern direct-flow nanoscale liquid chromatography systems and high-resolution mass spectrometers, the technical variability between measurements within a study has been drastically reduced; it is possible for thousands of peptides to be measured across tens to hundreds of samples with average coefficients of variation of less than 15%.
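To make this metric concrete, here is a minimal Python sketch that computes per-peptide CV% from a peptide-by-replicate intensity matrix; the values are invented for illustration, not drawn from a real dataset.

```python
import numpy as np

def cv_percent(intensities):
    """Coefficient of variation (CV%) of one peptide's intensity
    across replicate LC-MS runs."""
    intensities = np.asarray(intensities, dtype=float)
    return 100.0 * intensities.std(ddof=1) / intensities.mean()

# Hypothetical peptide-by-replicate intensity matrix (rows = peptides,
# columns = replicate injections); values are invented for illustration.
peptide_intensities = np.array([
    [1.02e6, 0.97e6, 1.05e6],
    [3.40e5, 3.10e5, 3.55e5],
    [7.80e4, 8.20e4, 7.50e4],
])
cvs = [cv_percent(row) for row in peptide_intensities]
print(f"median CV: {np.median(cvs):.1f}%")
```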

The more common challenge to reproducible proteomics measurements, both within and between studies, is now sample preparation. Metabolic labeling techniques (e.g., SILAC, 15N labeling) attempt to bypass variability introduced during sample preparation because the conditions to be compared are mixed, then processed together through all preparation steps.

The obvious disadvantage of such labeling strategies is that the number of biological conditions is limited by the number of labels available for use, and some systems, such as primary tissues, primary cells, and animal models, simply do not lend themselves well to labeling approaches.

Label-free quantitation uses the area under the curve of chromatographic peptide elution profiles to perform robust quantitation, and it can be utilized for both relative and absolute quantitation across sample cohorts without limitations on experimental design.
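As a simple illustration of how such a peak area might be computed, the sketch below integrates a hypothetical extracted ion chromatogram with the trapezoidal rule; in practice, dedicated label-free quantitation software handles peak detection and integration.

```python
import numpy as np

# Hypothetical extracted ion chromatogram for one peptide:
# retention time (min) and intensity at each MS scan.
rt = np.array([20.0, 20.1, 20.2, 20.3, 20.4, 20.5])
intensity = np.array([0.0, 2.0e5, 8.0e5, 7.5e5, 1.5e5, 0.0])

# Area under the elution profile by the trapezoidal rule; this AUC
# serves as the peptide's label-free abundance measurement.
auc = np.sum((intensity[:-1] + intensity[1:]) / 2.0 * np.diff(rt))
print(f"peak area: {auc:.3e}")
```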

Although techniques in sample preparation vary widely depending on the application, we offer practical considerations that have served to improve sample preparation reproducibility in our laboratory and that are particularly important for label-free quantitative proteomics experiments.

It is widely accepted that differences in sample preparation are a major factor, if not the dominant one, in the variation of proteomics results between laboratories. Therefore, before we explore specific techniques that can improve the reproducibility of sample preparation in proteomics research, it is worthwhile to spend a few words on general practices that can be used to improve and assess the reproducibility of results:

1) Be a minimalist. Utilize the simplest procedure possible, and seek to remove all unnecessary steps in a procedure. Even seemingly simple steps like a volume transfer can increase analytical variability.

2) Standardize where possible. Standardization of steps (such as digestion) within a protocol makes it easier to troubleshoot new protocols or methods, easier for others to replicate your results, and saves time because scientists within a lab can share reagent stocks. Establish and follow standard operating procedures (SOPs) once a protocol is mature.

3) Employ quality control (QC) metrics. QC metrics throughout a protocol allow for easier troubleshooting if unexpected results are observed. Track and utilize all available data, such as protein assay (Bradford) results, sample volumes, and sample storage time and conditions, to monitor the procedures. Accumulate the results from routine QC measurements in a common repository accessible to all scientists performing the assays, to enable easy identification of sample outliers (a minimal logging sketch follows).
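A minimal sketch of such a repository, assuming a shared append-only CSV file; the field names and sample values are illustrative only.

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical shared location for the lab's QC log.
QC_LOG = Path("qc_repository.csv")
FIELDS = ["date", "sample_id", "bradford_ug_per_ul", "volume_ul", "notes"]

def log_qc(sample_id, bradford_ug_per_ul, volume_ul, notes=""):
    """Append one sample's QC measurements to the shared CSV log."""
    write_header = not QC_LOG.exists()
    with QC_LOG.open("a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "sample_id": sample_id,
            "bradford_ug_per_ul": bradford_ug_per_ul,
            "volume_ul": volume_ul,
            "notes": notes,
        })

log_qc("S001", 2.4, 50, "post-lysis Bradford")
```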

Although it may seem that standardization and research are somewhat mutually exclusive activities, some standardization of common practices increases the throughput of research activities and, more importantly, improves reproducibility within and between proteomics studies.

Protein Solubilization

Bottom-up LC-MS based proteomic approaches are routinely applied to a wide range of starting biological matrices including cultured cell lines, tissues, biological fluids, etc. Regardless of the nature of the starting material, an effective and reproducible strategy needs to be employed to generate soluble protein prior to proteolytic digestion.

This is arguably the most critical step in a label-free proteomic experiment because of the possibility of broadly selecting for or against specific classes of proteins based on their solubility in the reagents selected. Although there are countless strategies described in the literature for proteome or subproteome (e.g., membrane fractionation) solubilization from these various matrices, there are several core concepts our laboratory routinely follows to maximize the robustness and reproducibility of the methods:

1) Exchange samples into a mass spectrometry-compatible buffer system (e.g., ammonium bicarbonate-based) prior to lysis. This often eliminates the requirement for downstream sample cleanups and potentially unnecessary sample manipulations.
2) Use mass spectrometry-compatible surfactants that can be easily removed through acidification and centrifugation. There are a number of effective commercially available surfactants (RapiGest, PPS, AALS, etc.) that can be utilized depending on the desired surfactant hydrophobicity.
3) Avoid reagents that can covalently modify amino acid residues. Although urea is a very effective chaotropic agent, it can carbamylate primary amines (e.g., lysine residues) even at modest temperatures (37°C). This often results in nonspecific partial derivatization of many lysine-containing peptides, which subsequently increases the complexity of the proteome.

Proteolytic Digestion

Regardless of solubilization strategy, samples should be subjected to high-speed centrifugation steps to remove insoluble material that could potentially interfere with downstream protein quantitation assays.

Protein assay (Bradford or BCA) calibration curves should be generated each time an assay is performed to account for reduction in reagent activity over time. All calibration-curve samples and test samples should be run at least in duplicate, and a buffer blank should always be used to assess background. Unexpectedly high background values in these assays may suggest an interferent, and an orthogonal measurement approach should be considered.
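The sketch below shows one way the curve fitting and back-calculation might be done, assuming hypothetical BSA standards read in duplicate against a duplicate buffer blank; a simple linear fit is used here, although real assay responses can require a nonlinear model.

```python
import numpy as np

# Hypothetical BSA standards (ug/uL) with duplicate A595 readings,
# plus a duplicate buffer blank; all values are invented.
conc = np.array([0.125, 0.25, 0.5, 1.0, 2.0])
a595 = np.array([[0.112, 0.118], [0.201, 0.198], [0.385, 0.392],
                 [0.744, 0.751], [1.402, 1.391]])
blank = np.mean([0.045, 0.047])

# Average the duplicates, subtract the blank, and fit a line.
signal = a595.mean(axis=1) - blank
slope, intercept = np.polyfit(conc, signal, 1)

def protein_conc(absorbance):
    """Back-calculate an unknown's concentration from the standard curve."""
    return (absorbance - blank - intercept) / slope

print(f"unknown sample: {protein_conc(0.520):.2f} ug/uL")
```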

We have found it critical for reproducible digestion that samples be concentration-normalized to minimize variation in digestion efficiency. We recommend protein concentrations of 1–5 µg/µL for digestions, but the most important parameter is that all samples be at an equivalent concentration. Our laboratory SOPs for in-gel and in-solution digestion have been made publicly available at www.genome.duke.edu/cores/proteomics/sample-preparation.
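The dilution arithmetic behind concentration normalization is straightforward; the following sketch uses illustrative stock concentrations and a hypothetical 1 µg/µL target.

```python
def normalization_volumes(stock_ug_per_ul, target_ug_per_ul, final_ul):
    """Volumes of sample and diluent that yield the target concentration;
    the stock must be at least as concentrated as the target."""
    if stock_ug_per_ul < target_ug_per_ul:
        raise ValueError("sample is below the target concentration")
    sample_ul = final_ul * target_ug_per_ul / stock_ug_per_ul
    return sample_ul, final_ul - sample_ul

# Normalize three hypothetical lysates to 1 ug/uL in a 100 uL digest.
for sid, stock in [("S001", 3.2), ("S002", 1.8), ("S003", 5.1)]:
    sample_ul, buffer_ul = normalization_volumes(stock, 1.0, 100.0)
    print(f"{sid}: {sample_ul:.1f} uL sample + {buffer_ul:.1f} uL buffer")
```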

As described under the general practices above, the core concept of this portion of the sample-preparation workflow is to be consistent in how protein assays are performed day to day. This will streamline troubleshooting of negative results by significantly reducing the possibility of error coming from the QC assay itself.

Sample-Enrichment Approaches: Immunodepletion

Human serum and plasma are attractive biological matrices for clinical biomarker discovery studies due to the relative ease and low cost of sample collection; however, they have an incredibly large dynamic range (at least 10¹⁰), and relatively few (~14) proteins make up 95% of the protein mass. The most common approach for increasing the number of proteins that can be quantified in serum/plasma using mass spectrometry is to utilize commercially available immunoaffinity columns packed with immobilized antibodies to deplete the most abundant proteins.

While the effectiveness of this approach at increasing the number of unique protein identifications has been well described in the literature, the reproducibility of the method when deployed for a biomarker study is unfortunately all too often ignored as a contributing factor to quantitative variability in the results.

Moreover, methods to assess the reproducibility and efficiency of the immunodepletion step are not readily available. Through our experience across several clinical biomarker discovery studies, we have developed three independent QC metrics aimed at assessing the immunodepletion process:

1) Protein assay QC. Perform Bradford assays on both the predepleted sample and the flow-through (unbound) fraction. This allows measurement of initial column loading and final protein yield and, importantly, the overall fraction of the sample that was removed during depletion.

2) For liquid chromatography column-based immunodepletions, record the peak areas (A280) for the unbound and bound fractions; a graphical display of the bound versus unbound AUC allows easy visualization of sample outliers (a minimal computation combining metrics 1 and 2 is sketched after this list).

3) Utilize SDS-PAGE gel separations to analyze aliquots of the immunodepleted sample, with protein bands visualized with Coomassie blue staining and quantitated using image densitometry. This allows for a high-level visualization of the most abundant proteins and gross detection of outliers. For instance, hemolysis during sample isolation will yield an intense hemoglobin band. This approach is also useful for detecting proteins differentially depleted over the course of a study due to nonspecific binding to the immunodepletion column.
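A minimal sketch combining QC metrics 1 and 2, using invented Bradford and A280 values; the outlier threshold is arbitrary and shown only to illustrate the idea.

```python
import numpy as np

def depleted_fraction(predepleted_ug, flowthrough_ug):
    """Fraction of total protein removed by the column, from Bradford
    results on the predepleted sample and the flow-through (metric 1)."""
    return 1.0 - flowthrough_ug / predepleted_ug

print(f"S001 depleted fraction: {depleted_fraction(1000.0, 120.0):.0%}")

# Metric 2: hypothetical A280 peak areas (unbound, bound) per sample.
samples = {"S001": (8.1e3, 3.9e4), "S002": (7.8e3, 4.1e4),
           "S003": (1.3e4, 3.2e4)}
ratios = {sid: unbound / bound for sid, (unbound, bound) in samples.items()}

# Flag samples whose unbound/bound ratio deviates strongly from the
# cohort median; the 50% threshold is arbitrary, for illustration only.
median_ratio = np.median(list(ratios.values()))
for sid, r in ratios.items():
    flag = "  <-- check" if abs(r - median_ratio) > 0.5 * median_ratio else ""
    print(f"{sid}: unbound/bound = {r:.3f}{flag}")
```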

Sample-Enrichment Approaches: Global Phosphopeptide Enrichment

Sample-enrichment strategies are necessary in order to access low-abundance protein or peptide species. Many approaches are available in the literature for protein and peptide enrichment, using antibodies or resin-based capture approaches, but few of the methods have associated metrics for reproducibility of the technique.

As an example, phosphorylation is a post-translational modification (PTM) of particular interest due to its role in cell signaling. The low stoichiometry of phosphorylated peptides relative to nonmodified peptides necessitates enrichment strategies for these specific peptides (most often TiO2 or IMAC). The enrichment is known to be sensitive to a number of experimental conditions; therefore, establishing robust enrichment methodologies, as well as internal QC metrics, is critical to maintaining a reproducible enrichment.

Critical practices to maintaining reproducible phosphopeptide enrichment include the following:

1) Keep constant the amount of input material relative to the binding capacity of the resin, as well as the final concentration of enrichment-modifying compounds (e.g., glycolic acid or dihydroxybenzoic acid).

2) When possible, utilize the same enrichment column to enrich multiple samples, because constant binding capacity is so important.

3) Perhaps most important is to assess the reproducibility of the enrichment by spiking a known quantity of an exogenous phosphorylated protein into the lysate prior to sample processing.

Our practice is to spike bovine alpha-casein at 25 fmol per µg of total lysate prior to sample digestion. Quantitative analysis of this protein within the biological background across the sample cohort provides an internal standard for measuring digestion and enrichment reproducibility.
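The spike amount converts to mass with simple unit arithmetic; the sketch below assumes an approximate molecular weight of ~23.6 kDa for bovine alpha-S1-casein.

```python
ALPHA_S1_CASEIN_DA = 23_600  # approximate MW of bovine alpha-S1-casein

def spike_mass_ng(lysate_ug, fmol_per_ug=25.0, mw_da=ALPHA_S1_CASEIN_DA):
    """Mass of spike protein needed for a given lysate amount:
    fmol * (g/mol) * 1e-15 (mol/fmol) * 1e9 (ng/g) = fmol * Da * 1e-6."""
    return lysate_ug * fmol_per_ug * mw_da * 1e-6

# e.g., a 350 ug lysate calls for roughly 207 ng of alpha-casein.
print(f"{spike_mass_ng(350):.0f} ng")
```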

With proper consideration and control of analytical processes, label-free quantitative proteomics measurements can be performed with remarkable consistency and accuracy across experiments and laboratories. We believe that by keeping sample-preparation processes as simple as possible, standardizing protocols, and employing appropriate quality control metrics, proteomic approaches will mature such that mass spectrometry-based proteomics will realize its potential as a powerful translational research tool.


Figure. Analytical and TiO2 enrichment variation from three 350 µg wild-type zebrafish embryo lysates subjected to independent TiO2 enrichments followed by triplicate LC-MS/MS analysis: (A) Coefficient of variation (CV%) distributions for all phosphorylated peptide extracted ion chromatogram intensities (n=99) for each set of analytical replicates (average median CV 7.0%) or across all three TiO2 enrichments (median CV 23.5%). (B) Coefficient of variation distributions of nonphosphorylated (n=88, median CV 44.0%), singly phosphorylated (n=57, median CV 23.4%), and multiply phosphorylated (n=42, median CV 19.7%) peptide intensities across all three TiO2 enrichments.

J. Will Thompson, Ph.D. ([email protected]), is senior lab administrator and Erik J. Soderblom, Ph.D., is laboratory analyst at Duke Proteomics Core Facility.
