December 1, 2010 (Vol. 30, No. 21)

Vicki Glaser, Writer, GEN

Navigating the Path between the Lab and the Clinic Is Becoming More Straightforward

Genomic data is pouring out of academic and government research labs, flowing into rapidly expanding databases—often overwhelming storage and informatics capabilities—and fueling a steady stream of publications proposing links between the “needles” being discovered in genomic haystacks and disorders ranging from common to uncommon, from cancer to Mendelian disease.

Researchers in hot pursuit of these discoveries are benefiting from rapid technological advances: new instruments and algorithms designed to sequence whole genomes faster, cheaper, and more accurately, to analyze gene expression, and to identify SNPs on ever-expanding microarrays, all with the aim of singling out variations that may distinguish “healthy” from “disease” or “at risk” genomes.

The promise riding on early efforts to apply these discoveries to real-world clinical scenarios and to diagnostic, prognostic, and therapeutic decision making is tempered by the complexity of the genomic landscape, the sheer volume of the data output, and the challenge of interpreting raw genomic data to yield clinically useful information and to bridge the gap between the laboratory and the practice of genomic medicine.

At the recent American Society of Human Genetics (ASHG) meeting held in Washington, D.C., a session entitled “Genomic Medicine: Current Status, Evidence Dilemmas, and Translation into Clinical Practice” presented a real-world view of the opportunities and obstacles in acquiring and applying the data from whole-genome sequencing (WGS) and genome-wide association (GWA) studies.

Although the technology is available to sequence an individual’s genome, it is still a relatively costly endeavor, and how best to analyze and interpret the clinical significance of the results is not yet clear. Session moderator Kelly Ormond, associate professor and director of the master’s program in human genetics and genetic counseling at Stanford University, described the tension that exists between two competing forces driving genomic medicine: the desire to accelerate early adoption of the technology in the clinic and the cautionary voices that question the clinical utility of the information.

In her overview of WGS technology, Debbie Nickerson, Ph.D., a professor in the department of genome sciences at the University of Washington, emphasized the impact that next-generation sequencing (NGS) and emerging third-generation single-molecule sequencing technology will have on advancing knowledge about human genome variation and the ability to link genetic and phenotypic variation.

The results of large-scale GWA studies are increasingly populating leading scientific journals and facilitating family-based linkage analysis and disease association studies, she noted. The speed and cost of these studies will continue to improve as high-density chips containing one million variant markers soon give way to arrays with as many as five million SNPs.

Dr. Nickerson described the new, higher-throughput, lower-cost genome sequencing strategies as “disruptive, game-changing” technology. Examples include systems developed by 454 Life Sciences (a Roche company), Illumina’s HiSeq sequencing by synthesis technology, the Applied Biosystems/Life Technologies SOLiD system, the Complete Genomics CGA platform, Pacific Biosciences’ Single Molecule Real-Time (SMRT) sequencing technology, “The Chip is the Machine” semiconductor chip-based system from Ion Torrent (recently acquired by Life Technologies), and Oxford Nanopore Technologies’ nanopore-based sequencing strategy.


Scientists continue to balance their wish to bring novel genomic technologies to the patient’s bedside against the actual clinical utility of the information that will be obtained via the application of these techniques. [Chepko Danil Vitalevich/ShutterStock Images]

$1,000 Genome

Dr. Nickerson concluded that, with continuous improvements in read length (10 kb and beyond now a reality), higher throughputs, and declining costs, single-molecule sequencing will propel the field closer to the goal of the $1,000 genome. She added, however, that this figure accounts only for the cost of generating the sequence, not for the expenses associated with data storage and analysis. Dr. Nickerson also described recent studies in which WGS is being used to identify somatic variants associated with cancer, as well as exome resequencing projects aimed at identifying gene variants in protein-coding regions associated with Mendelian disease.

Euan Ashley, D.Phil., an assistant professor of medicine and director of the Center for Inherited Cardiovascular Disease at Stanford University, discussed how dramatic declines in the cost of genome sequencing will drive an exponential increase in the data generated, as “thousands, if not tens of thousands” of genomes are sequenced in the next couple of years.

He described recent work by a Stanford team (Lancet 2010;375:1525–1535) focused on extracting clinical value from genomic data, linking rare variants to disease risk, and applying the results of GWA studies to individual patient genomes.

Dr. Ashley emphasized the need to reconfigure the current databases. Most of the genomic databases are basically catalogs of genes and variations. They need to be reformatted in a way that would make them easier to interpret and use in a clinical setting, he urged. Furthermore, he pointed to gaps in the existing information; in particular, the regulatory and non-coding genome “has been neglected,” he said. Another challenge in the field at present is learning how to explain genomic data to patients.

Russ Altman, M.D., Ph.D., chair of the department of genetics and bioengineering at Stanford, discussed the utility of the Pharmacogenomics Knowledge Base (PharmGKB), describing it as being “like Amazon for drugs and genes.” It allows users to search for genes and variants associated with drug dosing and efficacy, for specific drugs and related genomic data, for diseases, and for pharmacokinetic and pharmacodynamic pathways, all with supporting literature.

In his presentation Dr. Altman described the use of PharmGKB to link a heterozygous null mutation in the CYP2C19 gene (a member of the cytochrome P450 system of enzymes that determine drug metabolism) identified in an individual’s genome to a 50% reduced capacity to metabolize certain drug types. This information would be used to drive drug selection for this particular individual.
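By way of illustration only, the short Python sketch below shows how such a genotype-to-phenotype lookup might be coded. It is not PharmGKB’s data model or software; the allele-function table and category labels are simplified assumptions based on the commonly cited loss-of-function status of the CYP2C19 *2 and *3 alleles.

```python
# Illustrative sketch only: a simplified, hypothetical mapping of a CYP2C19
# diplotype to a predicted metabolizer category. Not PharmGKB's data model.

# Simplified star-allele function assignments: *1 is normal function,
# *2 and *3 are loss-of-function (null) alleles.
ALLELE_FUNCTION = {
    "*1": "normal",
    "*2": "none",
    "*3": "none",
}

def predicted_metabolizer(allele1: str, allele2: str) -> str:
    """Classify a CYP2C19 diplotype by counting functional alleles."""
    functional = [ALLELE_FUNCTION[a] for a in (allele1, allele2)].count("normal")
    if functional == 2:
        return "extensive (normal) metabolizer"
    if functional == 1:
        # One null allele, as in the heterozygous case described above:
        # roughly half the usual capacity to metabolize CYP2C19 substrates.
        return "intermediate metabolizer"
    return "poor metabolizer"

print(predicted_metabolizer("*1", "*2"))  # -> intermediate metabolizer
```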

Moving from the level of individuals to a broader view of how to use genomic data to reduce the burden of disease, close the gap between discovery and application of genomic information, and track outcomes, Cecelia Bellcross, Ph.D., a fellow with the Office of Public Health Genomics at the Centers for Disease Control and Prevention, noted the paucity of funding available to support the back end of the genomic medicine pathway (clinical testing, data interpretation/application, and outcomes) compared to the funds available for generating genomic data.

Noting that every individual genome is likely to have “abnormalities”—variants that may or may not have clinical significance—Dr. Bellcross underscored the need to tackle this “evidence dilemma” in genomic medicine in a systematic way and not on an individual basis. Strength of evidence does not imply strength of association, she cautioned, and genetic association does not necessarily equate to clinical risk.

Dr. Bellcross urged stakeholders, including funders, patients, clinicians, academicians/researchers, and lawyers, to work together to close the gap between evidence and clinical applicability. The public-private partnership known as GAPPNet (Genomic Applications in Practice and Prevention Network) is an example of a strategy to streamline the use of validated genomic knowledge.

PCR and Genome Sequencing Technologies Move into the Next Generation

Among the companies launching new products at ASHG was QuantaLife, which introduced droplet digital™ PCR (ddPCR™), a system reportedly capable of detecting DNA targets with absolute quantitation. Droplet digital PCR can achieve 10 times higher resolution and 25 times greater sensitivity than conventional real-time PCR techniques, without the need for standards, according to Kevin Ness, Ph.D., founder and vp of product development.

The dual-instrument system consists of a droplet generator and a droplet reader. Using microfluidics, a 20 µL sample is converted into 20,000 1-nL water-in-oil droplets; TaqMan-based PCR amplification takes place in each droplet, and the droplets are streamed past a two-color fluorescence detector, yielding 20,000 independent measurements for each sample. The system can process 20,000 droplets in 20 seconds, for a rate of about 48 samples per hour.
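The absolute quantitation rests on Poisson statistics applied to the droplet counts. The sketch below (using the 20,000-droplet, 1-nL figures cited above; the function is illustrative and not QuantaLife’s software) shows how a target concentration can be estimated from the fraction of droplets that fluoresce positive:

```python
from math import log

def copies_per_microliter(positive_droplets: int,
                          total_droplets: int = 20_000,
                          droplet_volume_nl: float = 1.0) -> float:
    """Estimate target concentration from the fraction of positive droplets.

    Targets distribute randomly across droplets, so the mean number of copies
    per droplet (lambda) follows Poisson statistics:
        P(droplet is negative) = exp(-lambda)  =>  lambda = -ln(1 - p_positive)
    """
    p_positive = positive_droplets / total_droplets
    lam = -log(1.0 - p_positive)        # mean copies per droplet
    copies_per_nl = lam / droplet_volume_nl
    return copies_per_nl * 1_000        # 1 µL = 1,000 nL

# Example: 5,000 of 20,000 droplets are positive
print(round(copies_per_microliter(5_000)))  # about 288 copies/µL
```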

Real-time PCR

> A range of new products and technologies featured by conference exhibitors focused on real-time PCR (rtPCR) applications, including the SmartChip Real-Time PCR System from WaferGen. The company presented a poster describing rtPCR gene-expression analysis using two profiling panels: the SmartChip Human Oncology Panel and the SmartChip Human microRNA Panel. Each 5,184-nanowell SmartChip panel can accommodate up to 384 samples.

> By the end of the month, Bioneer plans to introduce several new products, including the ExiPrep™ 16 Plus and ExiPrep 16 Pro automated, programmable nucleic acid extraction systems. Both instruments can process from 1 to 16 samples in parallel and include UV sterilization to prevent inter-assay contamination and a contamination shield to minimize the risk of intra-assay cross contamination.

Bioneer also launched a new gene-synthesis service at the ASHG meeting, with the capability to produce genes up to 20 kb at present, expanding to 50 kb in the near future. The company provides codon optimization using its GeneAdvantage software to improve protein expression and function.

> Roche highlighted its RealTime ready Cell Lysis Kit for use in performing one-step RNA purification and cDNA synthesis in preparation for rtPCR. Also new was the SeqCap EZ Human Exome Library v2.0 to facilitate sample preparation for targeted next-generation sequencing of the human exome.

> Promega’s new GoTaq® 2-Step RT-qPCR system combines the company’s GoScript™ reverse transcriptase and GoTaq® qPCR Master Mix. Promega also featured a family of products for purifying genomic DNA from blood, including the new ReliaPrep™ Blood gDNA Miniprep System for manual processing of up to 200 µL samples and the ReliaPrep™ Large Volume HT gDNA Isolation System for automated processing of up to 96 samples of 3–10 mL each.

Sequencing, Analysis, and Profiling

> Numerous exhibitors featured technologies and products to support DNA and RNA sequencing, genome analysis, and gene-expression profiling. For example, Agilent Technologies introduced the SureSelectXT Target Enrichment System, which integrates library preparation, genomic DNA preparative reagents, and target-enrichment workflows on an automated platform. The system can process two to three plates, or 192–288 samples, per week. The SureSelect platform currently supports the SOLiD and HiSeq sequencing systems, with plans to add the 454 sequencing system in the near future.

Agilent is also introducing an integrated biology initiative aimed at combining genomic, transcriptomic, proteomic, and metabolomic data to characterize and understand the regulatory processes controlling biological pathways.

> Planned for launch in 2011 and currently in beta testing is the nanoAnalyzer 1000 System, a single-molecule analysis platform developed by BioNanomatrix. The system includes disposable nano-channel array chips, a multicolor fluorescent imager with integrated analysis software, and reagent kits for generic and user-specific applications.

The nanoAnalyzer chips contain thousands of nanochannels that linearize DNA molecules up to 1 Mb in length; the unamplified molecules can then be analyzed with a throughput of several gigabases per hour. The company described the technology in a recently published paper.

> NuGEN Technologies unveiled its Ovation® WGA FFPE System and Ovation RNA-Seq FFPE System for DNA and RNA sample preparation from FFPE tissues upstream of next-generation sequencing (NGS) applications for genome and transcriptome profiling. The WGA FFPE System provides sufficient DNA yield to allow for array-based comparative genomic hybridization and NGS from a single reaction, according to the company.

The ability to capture and analyze RNA from FFPE tissues will facilitate transcriptome analysis in archived samples and contribute to biomarker discovery research in diseases such as cancer, helping to identify markers associated with disease progression, metastatic potential, and drug resistance.

> Coming soon from Sigma Life Science is a second-generation chromatin immunoprecipitation (ChIP) kit developed for next-generation sequencing applications. The Imprint® Ultra ChIP2 kit utilizes DNA-blocked Staph-Seq cells to minimize contaminating DNA and make it easier to study recruitment of low-abundance transcription factors in genome-wide location analysis experiments such as ChIP-chip and ChIP-Seq.

> Thermo Scientific highlighted its NanoDrop™ spectrophotometers, including the NanoDrop 2000 micro-volume instrument, which uses a sample-retention technology based on the surface tension between two fiber-optic cables to hold 1 µL samples in place on an optical surface. The instrument can measure DNA, RNA, and protein concentrations, covering a range of 2–15,000 ng/µL for double-stranded DNA.
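Micro-volume UV quantitation of this kind rests on the standard absorbance-to-concentration conversion factors. As a rough sketch (illustrative only, not Thermo Scientific’s firmware, and assuming a 1 mm optical path), an A260 reading can be converted into a nucleic acid concentration as follows:

```python
# Illustrative sketch: converting a UV absorbance reading into a nucleic acid
# concentration with the standard factors (ng/µL per A260 unit at a 10 mm path).
FACTORS = {"dsDNA": 50.0, "RNA": 40.0, "ssDNA": 33.0}

def concentration_ng_per_ul(a260: float, path_length_mm: float = 1.0,
                            sample_type: str = "dsDNA") -> float:
    """Scale the reading to its 10 mm path equivalent, then apply the factor."""
    a260_10mm_equivalent = a260 * (10.0 / path_length_mm)
    return a260_10mm_equivalent * FACTORS[sample_type]

# Example: an A260 of 0.30 measured over a 1 mm path for dsDNA
print(concentration_ng_per_ul(0.30))  # -> 150.0 ng/µL
```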

> NanoString Technologies premiered its nCounter® Copy Number Variation CodeSets that allow researchers to detect CNVs in up to 800 regions of the human genome in one multiplexed reaction.

> Signature Genomic Laboratories, a PerkinElmer company that performs microarray-based diagnostic testing and markets the SignatureChipOS™ oligonucleotide-based array for detecting cytogenetic abnormalities, will introduce its OncoChip™ technology in 2011.

> Life Technologies announced the availability of its digital PCR solution, which operates on the company’s OpenArray® Real-Time qPCR platform. The system can reportedly run three arrays simultaneously for a throughput of 9,216 reactions in a single run. A sample is partitioned into wells, and qPCR is performed in each well, enabling absolute quantitation of the number of copies of a gene without the need for a reference sample and enhancing the ability to detect rare alleles and low-copy-number genes.


QuantaLife’s droplet digital PCR (ddPCR) technology converts a DNA sample into 20,000 1-nL droplets. TaqMan-based amplification takes place in each droplet, followed by absolute quantitation of the number of copies of a gene target as the individual droplets stream past a fluorescence detector.
