July 1, 2014 (Vol. 34, No. 13)

Todd Skrinar, Ernst & Young
Thaddeus Wolfram, Ernst & Young

More ROI from Research and Development Is Urgently Needed

Cost pressures and new pay-for-performance payer approaches are making the demonstration of real-world outcomes as essential to the drug research and development (R&D) model as initial, premarket evidence. Tied to this trend, R&D organizations have an opportunity to improve productivity using big data and analytical capabilities.

Multiple factors currently limit the discovery and approval of new medicines. Much of the low-hanging fruit has already been picked, and regulatory requirements have tightened: new drugs must demonstrate a higher level of efficacy and safety while also differentiating themselves significantly from treatments already available.

In addition, payers are beginning to demand that new, pricier drugs demonstrate value commensurate with their cost. At the same time, the race to be first to approval with novel therapies puts an even greater premium on efficiency in clinical development. Together these factors have created an urgent need to get more ROI from R&D.

Historically, getting more from R&D has meant putting more money into it: more trials, more patients, more information generated, collected, and analyzed. That approach has reliably driven costs higher without improving outcomes. Organizations have worked hard to trim costs, but the demands for trial results far outweigh those efforts. A more disruptive solution is needed.

Big data is part of that solution. The evolution of data capture and analysis techniques allows for innovative approaches in which more goes into clinical trials and more comes out, without a corresponding increase in cost. Here the “more” going in is information and insight rather than additional time and resources. The “more” coming out is the set of tangible results that leaders in the field are starting to achieve, including improved patient recruiting and selection and shorter, less costly clinical trials.

And this brings us to the question many life science organizations are asking today: Where do we start when it comes to taking advantage of big data to improve the ROI of R&D?


Start with the Big Data of Today

Many applicable forms of data, and many routes of data access, already exist today, prompted to a large extent by the digitization of health records and medical imaging. Because this information originates from a diversity of sources—including claims data, electronic health records (EHRs), clinical studies, and social media—access must be approached differently in each case, especially given the challenges companies face when attempting to monitor and incorporate data generated outside the clinical setting.

For this reason, organizations must be smart about how they partner and collaborate in the greater healthcare/life sciences ecosystem. Both gaining access to big data and fully realizing value from its use require the right relationships within that ecosystem, making those relationships a critical success factor in improving R&D through big data.

Further complicating access and use of the data are the differing access classifications: some data is publicly available, while other data is private or anonymized. Understanding where your data comes from and what classifications it carries is the first step in devising an appropriate approach to access.
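
As a purely illustrative sketch of that first step (the source names, classifications, and access approaches below are hypothetical, not drawn from this article), cataloging candidate data sources by classification makes the appropriate access approach for each pool explicit:

```python
# Hypothetical sketch: cataloging candidate data sources by access
# classification so that each pool gets an appropriate access approach.
# Source names and classifications are invented for illustration.
from collections import defaultdict

data_sources = [
    {"name": "public_trial_registry", "classification": "public"},
    {"name": "partner_ehr_extract",   "classification": "anonymized"},
    {"name": "claims_feed",           "classification": "anonymized"},
    {"name": "internal_study_data",   "classification": "private"},
]

# A rough mapping from classification to how access might be handled
access_approach = {
    "public":     "download directly; verify licensing terms",
    "anonymized": "data-use agreement; confirm de-identification standard",
    "private":    "internal governance review; restricted analysis environment",
}

# Group sources by classification and pair each pool with its approach
pools = defaultdict(list)
for source in data_sources:
    pools[source["classification"]].append(source["name"])

for classification, names in pools.items():
    print(f"{classification}: {', '.join(names)} -> {access_approach[classification]}")
```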

The analytics that biopharma companies use today can be immediately leveraged to take advantage of big data. The industry has successfully used analytics to optimize dosing before beginning clinical trials, to mine publicly available competitor data to cut costs and accelerate programs, and to improve the quality and speed of trial design decisions. Moving forward, the models and algorithms behind these successes can be enhanced with big data and designed to deliver additional insights that increase the R&D return on investment.
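
To give one hedged illustration of what pre-trial dose optimization can look like in code (the model choice, data values, and the 80%-of-Emax decision rule below are assumptions for illustration, not the authors' method), a simple dose-response fit might be sketched as follows:

```python
# Hypothetical sketch: fitting an Emax dose-response model to early-phase data
# to inform dose selection before a larger trial. All data values are invented.
import numpy as np
from scipy.optimize import curve_fit

def emax_model(dose, e0, emax, ed50):
    """Standard Emax model: baseline effect plus a saturating dose response."""
    return e0 + (emax * dose) / (ed50 + dose)

# Invented observations: dose levels (mg) and mean observed response
doses = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
responses = np.array([1.1, 3.8, 5.9, 8.7, 10.2, 11.0])

# Fit the model; p0 provides rough starting guesses for E0, Emax, and ED50
params, _ = curve_fit(emax_model, doses, responses, p0=[1.0, 10.0, 20.0])
e0, emax, ed50 = params

# Simple decision aid: the smallest tested dose predicted to reach 80% of Emax
target = e0 + 0.8 * emax
candidates = [d for d in doses if emax_model(d, *params) >= target]
print(f"Estimated ED50: {ed50:.1f} mg")
print(f"Smallest tested dose reaching 80% of the maximal effect: {candidates[0]:.0f} mg")
```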


Be Ready for the Big Data of Tomorrow

It is also important to consider how quickly the tide of technology is turning and what that means for the additional data that will be generated. As sequencing individual patient genomes becomes more affordable, a whole new realm of data is opening up, with other omics data sources likely to follow soon. Combined with currently available data such as electronic literature, population data, and patient claims data, this information will enable entirely new ways to understand patient/disease/prevention/treatment combinations. As companies work out which insights they hope to gain, and therefore which data they want to access, mine, and analyze today, a view of what data will be available tomorrow should inform which value opportunities an R&D organization targets first from a big data standpoint.
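
As a minimal, hypothetical sketch of that kind of data combination (all identifiers, column names, and values below are invented for illustration), linking a genomic variant table to claims records on a shared anonymized patient identifier already hints at the patient/disease/treatment views such data could support:

```python
# Hypothetical sketch: joining a genomic variant table to claims records on a
# shared, anonymized patient identifier to explore variant/treatment/outcome
# combinations. All column names and values are invented for illustration.
import pandas as pd

# Invented genomic data: one row per patient with a simplified variant call
genomics = pd.DataFrame({
    "patient_id": ["p01", "p02", "p03", "p04"],
    "variant":    ["BRCA1+", "BRCA1-", "BRCA1+", "BRCA1-"],
})

# Invented claims data: treatment received and a simple coded outcome
claims = pd.DataFrame({
    "patient_id": ["p01", "p02", "p03", "p04"],
    "treatment":  ["drug_A", "drug_A", "drug_B", "drug_B"],
    "outcome":    [1, 0, 1, 1],  # 1 = responded, 0 = did not respond
})

# Join the two data pools on the anonymized identifier, then summarize
merged = genomics.merge(claims, on="patient_id", how="inner")
response_rates = (
    merged.groupby(["variant", "treatment"])["outcome"]
          .mean()
          .rename("response_rate")
)
print(response_rates)
```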

Data access will also continue to evolve. Efforts in the U.K. to sequence patient genomes and link them directly to EHR data point toward a more robust and comprehensive coupling of data pools, while the 2013 appointment of the NIH's first Associate Director for Data Science demonstrates that serious efforts are under way in the U.S. to increase access to medically relevant big data. Looking forward, organizations will need to understand how best to tap into this information and how they will address the many challenges that come with patient privacy rights, the transfer of high volumes of data, and interfacing with disparate data sources.

Build Your Big Data Capability

Creating value out of big data requires the right combination of people, process, and technology, and it is not a capability an organization can simply flex into existence. Extracting the desired insights from the data requires a cohort of data scientists with a range of experience and expertise. R&D organizations do not necessarily lack data scientists; the greater focus will usually be on getting those scientists collaborating with the leaders who set R&D's strategic direction and who look to big data for innovative ways to answer the most challenging questions efficiently.

Innovating with big data requires a technical environment outside the structure of traditional IT. That environment should complement the analytical tools the organization brings in to enhance its current capabilities, and it should align with the skills of the data scientists applying big data and advanced analytics.

Lastly, the organization should seek to avoid, or overcome, the inertia that can result from data overload. Effective governance and an organizational structure that incentivizes the right analytics behaviors and encourages new and innovative thinking are important for both establishing and maintaining momentum on the big data front.

Taking a balanced approach to analytics is another effective way to avoid getting bogged down in the data. Rather than spending an overabundance of time and effort trying to tackle one challenge with big data, organizations should make sure they are progressing against a variety of near-, mid-, and long-term challenges of varying complexity. The low-complexity, near-term opportunities offer quick wins and early value that lend momentum to broader big data and advanced analytics efforts. As a result, the organization can overcome the initial hurdle of data overload and be poised to continue using big data to improve R&D productivity.

Todd Skrinar is a principal in the Advisory Life Sciences practice of Ernst & Young.
Thaddeus Wolfram is a manager in the Advisory Life Sciences practice of Ernst & Young.

The views expressed in this article are those of the authors and do not necessarily reflect the views of Ernst & Young.
