Quality: Your “Q” to Increase R&D Productivity

In 2014, the FDA approved an unprecedented 41 new molecular entities (NMEs) [1]. Interestingly, only 44% of those approvals came from Big Pharma [2]. Now more than ever, innovative NMEs developed by biotechnology companies have a real opportunity to reach approval. This increase in approvals reflects not only the success of the performance goals and procedures in the FDA’s recent PDUFA V reauthorization, but also an overall rise in R&D productivity. This is good news, given that only 5 years ago NME approval rates had slumped so badly that some called it the dawn of a pharmaceutical “ice age”.

Phase 2 and 3 attrition has, by far, the largest impact on R&D cost and productivity. If the Phase 2 attrition rate rises to 75% (it typically sits at approximately 66%), the total cost of developing an NME jumps by 29%, to 2.3 billion USD [3]. Attrition is typically associated both with the complexity of pursuing new drug targets and with regulators’ heightened scrutiny of the efficacy and safety data that feed benefit-risk assessments [3]. Controlling the complexity of drug targets may not be realistic, but ensuring the highest clinical trial data quality is a “low-hanging fruit” for increasing regulator confidence in benefit-risk assessments. So, how does one improve clinical trial data quality? Here are 5 simple defence strategies for your fight against poor data quality.
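To see why Phase 2 attrition dominates cost, it helps to think of the expected cost per approved NME as each phase’s cost multiplied by the number of candidates that must enter that phase to yield one approval. The sketch below uses that logic with purely illustrative phase costs and success rates (they are not the figures from Paul et al. [3]):

```python
# Toy model of expected out-of-pocket cost per approved NME.
# Phase costs (USD millions) and success probabilities below are
# ILLUSTRATIVE placeholders, not the figures from Paul et al. [3].
phases = [
    ("Preclinical", 5,   0.69),
    ("Phase 1",     15,  0.54),
    ("Phase 2",     40,  0.34),   # ~66% attrition
    ("Phase 3",     150, 0.70),
    ("Submission",  40,  0.91),
]

def cost_per_launch(phases):
    """Sum each phase's cost weighted by the number of candidates
    that must enter that phase to produce one approval."""
    total = 0.0
    for i, (_, cost, _) in enumerate(phases):
        # Probability that a candidate entering this phase reaches approval
        p_launch = 1.0
        for _, _, p in phases[i:]:
            p_launch *= p
        total += cost / p_launch
    return total

base = cost_per_launch(phases)

# Raise Phase 2 attrition from ~66% to 75% (success 0.34 -> 0.25)
worse = [(n, c, (0.25 if n == "Phase 2" else p)) for n, c, p in phases]
print(f"baseline: ${base:,.0f}M per launch; "
      f"higher Phase 2 attrition: ${cost_per_launch(worse):,.0f}M per launch")
```

Because every candidate that fails in Phase 2 carries all of its earlier-phase spending with it, even a modest rise in Phase 2 attrition inflates the cost per launch disproportionately.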

First, carefully consider the method used to collect clinical trial data: the Case Report Forms (CRFs). Whether electronic or paper, CRFs must collect data appropriately to accomplish the goals of the study. It may seem obvious, but careful CRF design is arguably one of the processes most neglected by clinical scientists and operational leads during study start-up. As a result, CRFs can require a re-design later in the study, or extra data handling may be needed to accommodate an earlier mistake in data collection. Either way, this wastes time and money and decreases the confidence that you and the regulators will have in your data.

Second, a careful review of all data validation documents ensures that your Data Management (DM) team catches data errors and missing data before database lock. This is particularly important for data that are vital to the analysis. For example, I once discovered that a validation document neglected to instruct the DM team to query for missing data that were key to the primary analysis. Discovering this after the database was locked could have been a disaster.
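A minimal sketch of such an edit check, using pandas and entirely hypothetical column names (“SUBJID”, “VISIT”, “PRIMARY_ENDPOINT” are invented for illustration), might look like this:

```python
import pandas as pd

# Hypothetical extract of a clinical database; the column names and
# values are invented for illustration only.
df = pd.DataFrame({
    "SUBJID": ["001", "002", "003", "004"],
    "VISIT": ["Week 12"] * 4,
    "PRIMARY_ENDPOINT": [5.1, None, 4.8, None],
})

# Flag records that should trigger a query to the site: the primary
# endpoint value is missing at the analysis visit.
queries = df[df["PRIMARY_ENDPOINT"].isna()]
print(queries[["SUBJID", "VISIT"]])
```

The point is not the code itself but the discipline: every variable that feeds the primary analysis should have an explicit check like this documented in the validation plan before lock, not discovered afterwards.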

Third, another key element for increasing data quality is a systematic review of individual patient data. The data should remain blinded in the case of a blinded study, but can be reviewed by independent reviewers in the case of an open-label study. Scrutinizing patient data allows scientists to discover unreported adverse events, as well as other indication-specific data issues that may not be immediately obvious to a DM team.
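One way such a review can surface unreported adverse events is by cross-checking one data domain against another. The sketch below is a hypothetical example (invented datasets and thresholds, again using pandas): it flags subjects whose lab values are markedly abnormal but who have no adverse event on file.

```python
import pandas as pd

# Hypothetical datasets: a lab extract and an adverse event (AE) log.
# Subject IDs, values and the 3x-ULN threshold are illustrative only.
labs = pd.DataFrame({
    "SUBJID": ["001", "002", "003"],
    "TEST": ["ALT"] * 3,
    "VALUE": [28, 160, 35],   # U/L
    "ULN": [40, 40, 40],      # upper limit of normal
})
aes = pd.DataFrame({"SUBJID": ["003"], "AETERM": ["headache"]})

# Flag subjects with ALT above 3x the upper limit of normal but no AE
# recorded -- candidates for a possibly unreported adverse event that
# a routine DM edit check might not catch.
elevated = labs[labs["VALUE"] > 3 * labs["ULN"]]
unreported = elevated[~elevated["SUBJID"].isin(aes["SUBJID"])]
print(unreported["SUBJID"].tolist())
```

A clinical reviewer would then follow up on each flagged subject, since only medical judgment can decide whether the finding is truly an unreported event.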

The fourth defence against poor data quality is to use a data safety monitoring board (DSMB): an independent board of medical doctors, typically joined by a statistician, that examines the data with subject safety in mind. Regular assessment of the trial safety data by the DSMB can reveal previously unknown safety issues. This allows a company to put the best interests of its study subjects first and to take immediate action, mitigating any potential safety issue or stopping the trial if the concern is severe.

The last strategy is taking time to consider how your analysis results will be displayed. Developing shell (dataless) tables, figures and listings lets the statistical programmers know exactly how to export the analysis results. This prevents the endless analysis re-runs that often require extensive reprogramming and, as a result, waste both time and money.
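To make the idea concrete, a shell table is simply the final layout agreed up front, with placeholders where results will later go. A minimal, hypothetical example (the row labels and column headers are invented):

```python
# A minimal "shell" (dataless) summary table: the layout is agreed
# before analysis, with "xx.x" placeholders where the statistical
# programmers will later insert results. All labels are illustrative.
rows = ["Age (years), mean (SD)", "Male, n (%)", "BMI, mean (SD)"]
header = f"{'Characteristic':<28}{'Placebo (N=xx)':>16}{'Drug (N=xx)':>16}"
lines = [header, "-" * len(header)]
for r in rows:
    lines.append(f"{r:<28}{'xx.x (xx.x)':>16}{'xx.x (xx.x)':>16}")
shell = "\n".join(lines)
print(shell)
```

Once statisticians, clinicians and programmers have signed off on shells like this, the export code can be written once and run once, rather than reworked after every review cycle.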

All 5 of these strategies can have a big impact on a company’s ability to shorten clinical trial cycle times, increase confidence in its data and reduce development costs. High-quality data contribute to a definitive, confident benefit-risk assessment that can stand up to regulator scrutiny. It’s amazing how these 5 simple and effective steps can improve your R&D productivity. Let quality be your “Q.”

 

References:

  1. Food and Drug Administration. 2015. Novel New Drugs 2014 Summary, January 2015. http://www.fda.gov/downloads/Drugs/DevelopmentApprovalProcess/DrugInnovation/UCM430299.pdf
  2. Munos B. 2015. 2014 New Drug Approvals Hit 18-Year High. Forbes Magazine Online, http://www.forbes.com/sites/bernardmunos/2015/01/02/the-fda-approvals-of-2014/
  3. Paul SM, Mytelka DS, Dunwiddie CT, Persinger CC, Munos BH, Lindborg SR and Schacht AL. 2010. How to improve R&D productivity: the pharmaceutical industry’s grand challenge. Nature Reviews Drug Discovery 9:203-214

 

 

About the Author

Dr. Victoria Levesque is a Clinical Development Scientist with Novateur Ventures who is particularly passionate about data quality. When not consulting for Novateur Ventures, one usually finds Victoria running after her 2 little boys, but she also enjoys cycling, making jewelry and spending time with her family.