At AAAS 2017, a pair of panel discussions addressed the reproducibility crisis in science, particularly biomedical science, and suggested that it is manageable, provided stakeholders continue to demonstrate a commitment to quality. One panel, led by Leonard P. Freedman, Ph.D., president of the Global Biological Standards Institute (GBSI), was comprehensive, prescribing a range of initiatives:

  1. Drive quality and ensure greater accountability through strengthened journal and funder policies.
  2. Create high-quality online training and proficiency testing and make them widely accessible.
  3. Engage the research community in establishing community-accepted standards and guidelines in specific scientific areas.
  4. Enhance open access to data and methodologies.

The other panel, led by Rochelle Tractenberg, Ph.D., an associate professor of neurology at Georgetown University Medical Center and chair of the Committee on Professional Ethics of the American Statistical Association, focused on good practices in data analysis.

Dr. Freedman followed up on GBSI’s Reproducibility2020 challenge and action plan for the biomedical research community. According to Dr. Freedman, GBSI has found encouraging progress toward its goal of significantly improving the quality of preclinical biological research by the year 2020.

Detailed findings appeared in “Reproducibility2020 Report: Progress and Priorities,” a paper posted on bioRxiv. The report identifies actions taken and impacts achieved by the life science research community and outlines priorities going forward. It addresses progress in four major components of the research process: study design and data analysis, reagents and reference materials, laboratory protocols, and reporting and review.

“By far the greatest progress over these few years has been in stakeholders recognizing the severity of the problem and the importance of taking active steps for improvement,” said Dr. Freedman. “Every stakeholder group is now addressing the issues, including journals, NIH, private funders, academicians, and industry. That’s crucial because there is not one simple fix—it is a community-wide problem and a community-wide effort to achieve solutions.”

The report highlights tangible examples of community-led action, ranging from the implementation of new funding guidelines and accountability measures to the development of industry-wide research standards and incentives for compliance. For example, the report notes that several initiatives have been undertaken to ensure the quality of published material:

• Guidelines adopted by the Biophysical Journal establish reporting standards in four key areas: rigorous statistical analysis, transparency and reproducibility, data and image processing, and materials and data availability.

• Authors submitting to the Nature Publishing Group family of journals must complete a reporting checklist to ensure compliance with established guidelines, including a requirement that authors detail if and where they are sharing their data.

• STAR Methods guidelines (Structured, Transparent, and Accessible Reporting) are designed to improve reporting across Cell Press journals. These guidelines remove length restrictions on methods, provide standardized sections and reporting standards for methods sections, and ensure that authors include adequate resource and contact information.

• As of January 2017, the Registered Reports initiative through the Center for Open Science allows selected reviewers to comment on study design and methods prior to data collection. Part of the Open Science Framework, the Transparency and Openness Promotion (TOP) guidelines include templates for journals interested in implementing their own reproducibility policies.

Advances on the reagent quality front include the Research Resource Identification Initiative, which establishes unique identifiers for reagents, tools, and materials used in experiments, reducing ambiguity in methods descriptions. Also highlighted in the report was progress on antibody validation, a topic Dr. Freedman has also addressed in GEN.

“We are confident that continued transparent, global, multistakeholder engagement is the way forward to better, more impactful science,” added Dr. Freedman. “We are calling on all stakeholders—individuals and organizations alike—to take action to improve reproducibility in the preclinical life sciences by joining an existing effort, replicating successful policies and practices, providing resources to replication efforts, and taking on new opportunities.”

The panel led by Dr. Tractenberg was entitled “Promoting Ethical Science and Policy with Ethical Statistical Practice.” It focused on the ways statisticians, data analysts, and data scientists could enhance responsible research and help resolve the reproducibility crisis. A survey of more than 1,500 investigators, published in a 2016 issue of Nature, showed that more than 70% of researchers have tried and failed to reproduce other scientists' experiments, and more than half have failed to reproduce their own experiments.

“My focus on promoting ethical statistical practice arose because a scientific credibility crisis is emerging due partly to scientists who do not conduct—or insist upon—appropriate statistical analysis or interpretation, or both,” said Dr. Tractenberg. “If ethical statistical practice becomes the norm across statistics and data science, it may then be taken up into other domains where data analysis makes important contributions.”

Several elements of a study can lead to irreproducible results, Dr. Tractenberg indicated, including incorrect analysis, improper interpretation of data, cherry-picking of results, and failure to transparently report the number of analyses that were performed. Avoiding these pitfalls is a principle of ethical statistical practice as well as of responsible conduct in research.
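The last of these pitfalls lends itself to a quick back-of-the-envelope calculation. The short Python sketch below illustrates the general statistical argument (it is not an example presented at the panel): with a conventional 0.05 significance threshold, the probability that at least one of n independent tests on pure noise appears “significant” is 1 − (1 − 0.05)^n, so the odds of a spurious finding grow quickly when analyses go unreported.

```python
# Illustrative sketch (not from the panel): the family-wise error rate,
# i.e., the chance that at least one of n independent tests on noise-only
# data crosses the significance threshold by luck alone.

ALPHA = 0.05  # conventional significance threshold

for n_tests in (1, 5, 20, 100):
    fwer = 1 - (1 - ALPHA) ** n_tests
    print(f"{n_tests:3d} analyses -> {fwer:.0%} chance of at least one false positive")
```

Running the sketch shows that 20 unreported analyses already carry a roughly 64% chance of producing at least one false positive, which is why transparent reporting of every analysis performed is treated as a core element of ethical statistical practice.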

“Although it can often seem that data analysis is secondary to the 'main' science or study purpose,” Dr. Tractenberg continued, “the analytic method and its interpretation are essential attributes of both rigor and reproducibility, and this is true for their own work and for their peer review of others' work.”

A large number of these irreproducible studies might never have been published had peer reviewers who were unable to evaluate the statistics “just told the editor they don't feel qualified to evaluate the study's statistical argument, and that a formal statistical review is needed,” she noted. Having a formal statistical review does not guarantee reproducibility or rigor, but not having one, or not insisting on one, virtually guarantees the continuation of the reproducibility crisis.

Dr. Freedman brought up yet another reproducibility issue, arguing that scientific practice would be more likely to improve if scientific culture placed a higher priority on rigor. “The research culture, particularly at academic institutions, must also seek greater balance between the pressures of career advancement and advancing rigorous research through standards and best practices,” he noted. “Additional leadership and community-wide support will be needed, and we believe that the many initiatives described in this report add needed momentum to this emerging culture shift in science.”
