Examining the advantages of a program-wide data strategy in advancing high-efficiency drug development.
Data is one of the most valuable assets in clinical development. For a new therapy to reach patients, the clinical evidence package must be of the highest quality to gain approval from key decision-makers, including regulators, prescribers, and healthcare payers. Therefore, adopting a program-wide strategy that protects data quality and ensures its optimal utility across all developmental stages and throughout the product’s lifecycle is an essential requirement for pharmaceutical companies.
While planning a data strategy undoubtedly requires the right expertise, an equally crucial requirement is effective timing, something that many organizations overlook. Attempting to implement a data strategy partway through a clinical trial, when copious amounts of data have already been generated, increases the risk of overlooking or even losing critical information. Such errors are costly and can significantly impact development timelines, ultimately reducing the chances of bringing valuable new therapies to patients. Thus, a comprehensive, program-wide data strategy is the essential first step, and indeed the basis on which the success of the whole clinical development program depends.
This article outlines the best practices and tactics for planning such a data strategy, including what’s involved, who’s responsible, and when it should take place. The key benefits that can be gained from a well-planned, program-wide data strategy will also be discussed, highlighting why such an approach is vital for successful high-efficiency clinical development.
Pharmaceutical companies are striving to bring ever-greater speed and efficiency to the drug discovery process while still satisfying the growing regulatory focus on data reliability and accuracy. As a result, data has become a powerful asset that can be used to maximize every opportunity in clinical trials: analytics and adaptations, prediction and simulation, biomarker substratification, dose-response modeling and QPP, all underpinned by data quality and easy access to good metadata structures.
Data quality is a vital component of compliance with the FDA and the European Medicines Agency (EMA), with both regulators putting a high priority on data recording, reporting, and transparency. Meanwhile, clinical data is no less important for other stakeholders, including trial participants, funders and sponsors, research institutions, journals, and professional societies.1
Even small errors in trial data can be detrimental to a company’s reputation or to the advancement of a promising new drug candidate. Deprioritizing the clinical data strategy jeopardizes the chances of regulatory approval and market acceptance, putting company viability and assets at risk. It can also lead to trial outcomes that are ultimately difficult to translate into clear evidence of effectiveness and benefit to patients.2
To address these emerging demands, data management systems are becoming increasingly sophisticated, enabling the collection of higher-quality data on which to base subsequent analyses, while also increasing the chances of detecting unanticipated trends. Higher-quality data and easy access to good metadata structures create opportunities to optimize clinical studies, ultimately increasing the chances of trial success, whether bringing a therapy to market or licensing it to another company.
While a robust data strategy enables risks to be managed and minimized, the importance of implementing it at the earliest stages of clinical trials cannot be overstated. Indeed, to obtain a database of the highest quality, clinical data management should start early, even prior to the completion of the study protocol.3 A well-planned data strategy should cover each of the four stages of the traditional clinical development process (i.e., be program-wide), identify where data will be sourced from, and define the approaches to data collection, storage, and analysis. Figure 1 summarizes the best practices for data strategy formulation. The crucial issue of timing is explored in more detail ahead, but first we examine what exactly is meant by a well-planned, program-wide clinical data management strategy.
Although most companies formulate data strategies for their clinical programs, these strategies are often not program-wide and are put into action too late in the development process. When implementation comes this late, even the best-case scenario means the program will not reap the full potential of a well-planned data strategy. In the worst case, the evidence package will be of insufficient quality to gain regulatory approval for a promising new therapy.
The traditional four-phase clinical trial process is becoming increasingly connected. This means that a data issue encountered during one phase can now easily have significant downstream consequences. Mitigating this potential issue is best achieved by planning a data strategy for the entire program, from start to finish, i.e., a program-wide strategy.
Instead of planning trial by trial, formulating a program-wide strategy for data management and trial design well before the commencement of Phase I enables the right data to be collected, at the right time, and in the right way. This approach will generate an evidence package of the highest quality and demonstrate to stakeholders that data integrity has been maintained throughout the entire program.
The following sections discuss the key considerations and best practices in implementing a data management plan, with an emphasis on Phases I and II of the clinical trial, given the importance of implementing a data strategy at the very earliest opportunity, and the tendency for implementation at these stages to be overlooked.
Phase I trials can involve unique and unforeseen challenges, such as the timely introduction of data standardization processes and the requirement for frequent protocol amendments during trial setup. There is also a need to manage and integrate data from multiple vendors, a task that requires significant time and organizational resources. Having a data strategy in place that anticipates and manages these issues can thus help mitigate risks and ultimately expedite the development pathway. The contribution of dedicated statistical programmers in addressing these challenges is particularly important at this stage. They build databases that strengthen the evidence package, assess existing data to inform trial design, and perform pharmacokinetic analyses, all of which are critical to avoiding spiraling costs and unwanted delays.
As discussed, the key aims of a data strategy are to protect the quality of clinical data, to identify and plug information gaps (such as when assessing the target product profile), and to define the trial’s approach to data collection, storage, and analysis (such as the selection of suitable electronic data capture [EDC] systems). A well-planned data strategy will also ensure that early-phase trials generate the evidence necessary for later phases of development.
Another crucial objective of a data strategy is to inform the choice of adaptations in trial design. While there are many types of trial designs to suit a variety of programs, a well-planned data strategy implemented at the earliest stages has the advantage of facilitating the necessary flexibility to implement innovative, adaptive trial designs. Trials with adaptive designs are often more efficient, informative, and ethical than trials with a traditional fixed design, since they can maximize the use of time and money, and may require fewer participants.4 Some adaptive trial designs allow newly enrolled participants to be assigned to a more promising treatment arm after interim analysis, whereas others change the recruitment eligibility criteria to enroll patients who are more likely to benefit. Changing parameters midstream in this way can create complex statistical challenges, which a well-planned data strategy will help organizations adequately prepare for and resolve.
While frequentist methodologies are used to design most clinical trials, adaptive trials also accommodate Bayesian methods. The flexibility of Bayesian approaches enables the integration of different sources of information and uncertainty,5 which allows for smaller trials, more precise dose escalation, and better patient care. They also enable the prediction of timelines and help determine how a patient cohort will respond to treatment (based on historical data or available patient-outcome information), all while minimizing the risk of introducing bias or impairing interpretability. However, Bayesian approaches are only available to investigators who have access to high-quality data. A Bayesian trial requires a well-designed data strategy that is implemented from the outset, without which the scientific validity of results may be put at risk.
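To make the Bayesian idea concrete, the sketch below shows a minimal interim look under an assumed Beta-Binomial model for a binary response endpoint, written in Python with SciPy. The prior counts and interim results are hypothetical; the point is simply how historical information and accumulating trial data combine into posterior quantities a monitoring committee might review.

```python
# Minimal sketch of a Bayesian interim look, assuming a Beta-Binomial model for a
# binary response endpoint; prior counts and interim data are hypothetical.
from scipy import stats

# Prior informed by historical data: ~30% response rate, worth roughly 20 patients of information.
prior_alpha, prior_beta = 6, 14

# Interim data from the current trial arm (hypothetical): 12 responders out of 25 patients.
responders, n = 12, 25

# Conjugate update: posterior is Beta(prior_alpha + responders, prior_beta + non-responders).
posterior = stats.beta(prior_alpha + responders, prior_beta + (n - responders))

# Quantities a data monitoring committee might review at an interim analysis.
posterior_mean = posterior.mean()
prob_above_30pct = 1 - posterior.cdf(0.30)  # probability the true response rate exceeds 30%

print(f"Posterior mean response rate: {posterior_mean:.2f}")
print(f"P(response rate > 0.30 | data): {prob_above_30pct:.2f}")
```

In practice such posterior probabilities would feed pre-specified decision rules (for example, expanding a promising arm or stopping for futility), which is exactly where a program-wide data strategy must guarantee clean, timely data.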
Phase I and II trials study patient populations with the disease of interest to generate preliminary evidence, particularly on safety. The key challenge at this stage is to ensure that adequate data is collected to fill any knowledge gaps in the target product profile, as well as to plan for future evidence needs; it is vital that data generated at this stage satisfies sponsors, regulators, funders, and agencies. A range of models are available to address any such knowledge gaps, from non-compartmental analysis (NCA) to more advanced statistical models such as pharmacokinetics/pharmacodynamics (PK/PD) and model-based meta-analysis (MBMA), which optimize the use of available data.
When conducting early-phase studies, global regulators require submission of NCAs that measure factors such as the extent and rate of exposure to a drug, without relying on strong assumptions or complex models. Through the use of simple methods such as the linear trapezoidal rule, NCAs make it relatively easy to measure the concentration of a drug in the body over time. They can capture the duration of exposure and the time of peak exposure, without the challenges of models that require independent validation.6
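As a rough illustration of the idea, the following Python snippet computes AUC via the linear trapezoidal rule, along with peak exposure (Cmax) and its timing (Tmax), from a hypothetical concentration-time profile. It is a minimal sketch, not a validated NCA implementation.

```python
# Minimal NCA sketch: AUC by the linear trapezoidal rule, plus Cmax and Tmax.
# The concentration-time values are illustrative only, not from any real study.
import numpy as np

time_h = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])  # hours post-dose
conc = np.array([0.0, 1.2, 2.4, 3.1, 2.2, 1.1, 0.6, 0.1])      # concentration (mg/L)

# Linear trapezoidal rule: sum of trapezoid areas between successive sampling times.
auc_0_last = np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(time_h))

cmax = conc.max()              # peak exposure
tmax = time_h[conc.argmax()]   # time of peak exposure

print(f"AUC(0-last) = {auc_0_last:.2f} mg*h/L, Cmax = {cmax:.2f} mg/L, Tmax = {tmax:.1f} h")
```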
NCAs are a common subject of regulatory inquiries and can complement other models such as PK/PD analysis. PK/PD models enable researchers to evaluate and present data on drug safety and potency, and to test the impact of trial designs on the potential outcomes. Reliable data at this stage also ensures that Phase III decisions about dosage are based on knowledge gathered during the entire Phase I to III analysis. PK/PD modeling can maximize the chances of success by ensuring that the correct data on safety is recorded.
PK/PD models are becoming more commonplace in quantitative pharmacometrics as the basis for strong dose-response models in Phase II. It is well known that unreliable dose-response models in Phase II can create problems for Phase III: of applications eventually approved after an initial rejection, only 13.2% were first rejected on grounds of efficacy; more common reasons are dose selection, choice of endpoints, and other challenges that better Phase II modeling can prevent.7 Working with statistical experts from Phase I onwards can therefore ensure that knowledge gleaned from NCAs is used to build stronger Phase II models, thus avoiding these Phase III pitfalls.
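As an illustrative sketch of this kind of dose-response work, the snippet below fits a standard Emax curve to hypothetical Phase II summary data using SciPy. The dose levels, responses, starting values, and the 80%-of-Emax dose target are all assumptions chosen for illustration, not a prescribed method.

```python
# Sketch of fitting a simple Emax dose-response model, E = E0 + Emax*dose/(ED50 + dose),
# to hypothetical Phase II summary data.
import numpy as np
from scipy.optimize import curve_fit

def emax_model(dose, e0, emax, ed50):
    """Standard Emax dose-response curve."""
    return e0 + emax * dose / (ed50 + dose)

dose = np.array([0.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # mg (illustrative dose levels)
response = np.array([1.1, 3.0, 4.8, 6.2, 7.1, 7.6])      # observed mean response (illustrative)

# Fit E0, Emax, and ED50; starting values and bounds keep the optimizer in a plausible range.
params, _ = curve_fit(emax_model, dose, response,
                      p0=[1.0, 7.0, 30.0], bounds=(0, [10.0, 20.0, 500.0]))
e0, emax, ed50 = params
print(f"E0 = {e0:.2f}, Emax = {emax:.2f}, ED50 = {ed50:.1f} mg")

# A Phase III dose might then be chosen to target, say, 80% of the estimated maximal effect.
target_dose = 0.8 / (1 - 0.8) * ed50   # dose giving 80% of Emax under this model form
print(f"Dose giving ~80% of Emax: {target_dose:.0f} mg")
```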
Another important tool is MBMA, which allows investigators to take advantage of knowledge gained across several clinical trials, showcasing the benefits and challenges of confirming safety and efficacy across various endpoints. Trial sponsors can move forward knowing they have made the best available choice from perspectives as wide-ranging as patient safety and expected financial value. When coupled with forecasting and simulation tools, MBMAs also highlight possible pitfalls, allowing trial planners to anticipate them.
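A full MBMA models dose, time, and trial-level covariates across studies; as a minimal flavor of the underlying idea, the sketch below pools a treatment effect across three hypothetical trials using inverse-variance weighting.

```python
# Minimal flavor of the meta-analytic idea behind MBMA: inverse-variance pooling of a
# treatment effect across trials. Real MBMA additionally models dose, time, and
# trial-level covariates; the effect estimates below are hypothetical.
import numpy as np

effects = np.array([0.42, 0.55, 0.38])   # treatment effect estimates from three trials
std_err = np.array([0.15, 0.20, 0.12])   # their standard errors

weights = 1.0 / std_err**2                          # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
print(f"95% CI: [{pooled - 1.96*pooled_se:.2f}, {pooled + 1.96*pooled_se:.2f}]")
```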
MBMAs also help with the construction of surrogate endpoints, which are useful for detecting early signs of efficacy, and go/no-go rules that allow statisticians to benefit from expertise gleaned across the therapeutic sector. The accumulation of this broad range of expertise means that studies moving to Phase III not only have early phase knowledge that validates the enterprise, but they can also rely on numerous other studies that signal the probability of Phase III success.
A program-wide data management approach is critical to the success of a clinical trial. However, given the advantages of establishing the right data strategy as early as possible, the focus here has been on solutions for Phases I and II, which have an immense impact on the success of the later stages. While a similar examination of Phases III and IV is beyond the scope of this article, a brief mention of some of the key data requirements for these phases follows:
A predetermined strategy on what data should be collected, how it should be formatted and organized, and how it should be analyzed simplifies the whole process, from database submission to demonstrating regulatory compliance.
However, data strategy planning often occurs quite late in development, which means there is rarely enough time to address the complex considerations involved in the planning process. Consequently, fewer benefits will be gained and, in the worst case, the quality of the clinical evidence could be compromised.
A well-planned data strategy can strengthen clinical evidence by ensuring that the data is efficiently collected, handled properly, and interpreted correctly. Moreover, it can minimize risk by identifying, quantifying, and mitigating data issues. A clear plan also allows any problems and roadblocks to be resolved quickly, helping to streamline the development pathway and potentially expedite a therapy’s time to market.
Natasa Rajicic is Principal of Strategic Consulting, Cytel
References
1. Committee on Strategies for Responsible Sharing of Clinical Trial Data; Board on Health Sciences Policy; Institute of Medicine. Sharing Clinical Trial Data: Maximizing Benefits, Minimizing Risk. Washington (DC): National Academies Press (US); 2015. https://www.ncbi.nlm.nih.gov/books/NBK286000/
2. Heneghan C, Goldacre B, Mahtani KR. Why clinical trial outcomes fail to translate into benefits for patients. Trials. 2017;18:122. doi:10.1186/s13063-017-1870-2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5348914/
3. Krishnankutty B, Bellary S, Kumar NBR, Moodahadu LS. Data management in clinical research: An overview. Indian J Pharmacol. 2012;44(2):168-172. doi:10.4103/0253-7613.93842. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3326906/
4. Pallmann P, Bedding AW, Choodari-Oskooei B, et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16:29. https://doi.org/10.1186/s12916-018-1017-7
5. Costa MJ, He W, Jemiai Y, Zhao Y, Di Casoli C. The Case for a Bayesian Approach to Benefit-Risk Assessment: Overview and Future Directions. Ther Innov Regul Sci. 2017;51(5):568-574. doi:10.1177/2168479017698190.
6. Gabrielsson J, Weiner D. Non-compartmental analysis. In: Computational Toxicology. Totowa, NJ: Humana Press; 2012:377-389.
7. Sacks LV, Shamsuddin HH, Yasinskaya YI, Bouri K, Lanthier ML, Sherman RE. Scientific and regulatory reasons for delay and denial of FDA approval of initial applications for new drugs, 2000-2012. JAMA. 2014;311(4):378-384.