Data Management Efficiencies Through Risk-Based Approaches and Innovations

Commentary

Because of drastic changes in the clinical trial space, there is a need to advance and streamline trial design and data management through risk-based methodologies and innovations.

Image credit: WrightStudio | stock.adobe.com

The complexity and number of clinical trials have significantly increased over the past 15 years. According to surveys for the periods of 2001-2005 and 2011-2015,1 the complexity of endpoints increased by 86%, procedures by 70%, and data points collected by 88%.

The number of clinical trials has also increased dramatically, from 2,119 in 2001-2005 to 115,909 in 2011-2015. The number of principal investigators grew by 63%, much of that growth in developing countries, making trials an increasingly global undertaking. These drastic changes create a need to advance and streamline clinical data management through risk-based methodologies and innovations.

Risk-based approaches originally stemmed from the FDA’s risk-based monitoring (RBM) guidance.2 That guidance focused on risk monitoring and critical data rather than on an end-to-end risk management process and quality by design. Successive revisions of ICH E6 have emphasized the importance of data credibility, patient protection, and quality.

Quality is considered a degree of excellence and should be built into the enterprise ecosystem; it cannot be achieved by oversight or monitoring alone.3 Recently, MHRA guidance discussed broadening the scope of centralized statistical data monitoring,4 while ICH E6(R3)5 has continued expanding on topics related to data quality, a fit-for-purpose approach, and actions proportionate to risks:

  • “Systems and processes that aid in data capture, management and analyses…should be fit for purpose, should capture the data required by the protocol and should be implemented in a way that is proportionate to the risks to participants and the importance of acquired data.”
  • “The quality and amount of the information generated in a clinical trial should be sufficient to address trial objectives, provide confidence in the trial’s results.”
  • “The systems and processes that help ensure this quality should be designed and implemented in a way that is proportionate to the risks to participants and the reliability of trial results.”

The recent transformation of the clinical data management process is driven by:

  • The complexity of protocol designs leading to an increased number of data points
  • Inefficiencies due to the data complexity and volume
  • Disruptive novel technologies and advanced data science
  • The expansion of risk-based methodologies allowing focus on what matters the most

Quality risk management embraces the fact that data quality does not mean error-free data, but rather fit-for-purpose data that sufficiently supports conclusions equivalent to those derived from error-free data. This philosophy transforms a culture that wastes a disproportionate amount of resources and time on activities that add no value.

Innovations as risk-based data management enablers

A variety of automation, artificial intelligence (AI), machine learning (ML), and other innovations (Figure 1) are increasingly used to optimize processes, improve data quality, reduce costs, and set a foundation for continuous process improvement.

FIGURE 1: Innovation tools across the clinical trial cycle

For example, AI and ML enable the identification of adverse events and safety signals hidden in unstructured data, such as eDiaries and social media, accommodating a shift in the amount of patient data being collected via non-CRF sources (Figure 2), such as wireless, smartphone-connected products, wearables, implantables, ingestibles, and others.6,7

FIGURE 2: SCDM categorization of data sources into active (orange) and passive (gray)
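
To make this concrete, here is a minimal sketch of the kind of text-classification step such tools perform: scoring free-text eDiary entries for possible adverse event mentions. It uses scikit-learn for brevity; the training examples, entries, threshold, and variable names are all invented for illustration, and production systems rely on large labeled corpora and clinical NLP models.

```python
# Illustrative only: a tiny text classifier that scores eDiary entries
# for possible adverse event (AE) mentions. All data here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = entry describes a possible AE)
train_texts = [
    "felt dizzy and nauseous after the morning dose",
    "severe headache lasting all day",
    "rash appeared on both arms",
    "walked three miles, feeling great",
    "no issues today, slept well",
    "watched a movie with family",
]
train_labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score new, unlabeled entries and route likely AEs to safety review
new_entries = ["felt dizzy again after the evening dose", "lovely walk in the park"]
for entry, prob in zip(new_entries, model.predict_proba(new_entries)[:, 1]):
    action = "flag for safety review" if prob >= 0.5 else "no action"
    print(f"{entry!r}: p(AE)={prob:.2f} -> {action}")
```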

Over the past few years, the FDA has seen rapid growth in the number of submissions that reference AI and ML: such submissions grew roughly tenfold between 2020 and 2021, when 132 were received.8 Recognizing the trend, in 2023 the FDA9 and EMA10 released papers to initiate discussion on future guidance for the use of AI and ML in drug development.

End-to-end risk-based data management process

Implementing these principles in data management requires incorporating risk-related elements into study conduct and across enterprise processes. This framework reduces workload, burden, and timelines by targeting unnecessary complexity, focusing on critical processes, and embedding flexibility.

Study-level activities start prior to protocol finalization and continue through execution, with a deep impact on traditional CDM processes11 (Figure 3). Risks should be identified, reviewed, and managed across all stakeholders and at the trial, process, and vendor levels.

FIGURE 3: Study-level risk-based data management flow (risk-based elements are in green)

Process- and enterprise-level analytics and governance enable continuous monitoring, identification, and action at all levels and across dimensions covering processes, vendors, studies, regions, phases, and therapeutic areas. The rapid growth in the use of AI, ML, intelligent automation, and robotics improves overall process efficiency by reducing data handovers, manual errors, inconsistencies, quality issues, and delays in data availability.

Prior to study execution: Implementing Quality by Design

Quality by Design (QbD) relies on optimal protocol design as a critical step toward success.12 It requires proactively planning for quality during the design phase to reduce unnecessary complexity by means of the following (a minimal risk-register sketch appears after the list):

  • Conducting a protocol risk assessment
  • Identifying critical processes and data
  • Assessing vendor and data technology and collection risks
  • Defining and documenting controls and mitigations (IQRMP or DMP, SRMP, monitoring plan, etc.)
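
As a minimal illustration of the last point, risks can be kept in a structured register so that each identified risk stays linked to its mitigation and to the plan that documents the control. The field names below are assumptions made for this sketch, not a regulatory standard.

```python
# Sketch of a structured risk-register entry; field names are
# illustrative assumptions, not a regulatory standard.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk: str                 # what could go wrong
    critical_data: list[str]  # critical data or processes affected
    mitigation: str           # planned control or mitigation
    documented_in: list[str]  # where the control lives, e.g., IQRMP, DMP
    owner: str                # accountable function or vendor

entry = RiskEntry(
    risk="eDiary non-compliance in an elderly population",
    critical_data=["primary endpoint diary scores"],
    mitigation="On-site device training plus a compliance dashboard with alerts",
    documented_in=["IQRMP", "monitoring plan"],
    owner="Data management",
)
print(entry)
```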

The protocol risk assessment focuses on the factors that have a critical impact on the credibility and reliability of the trial results; it minimizes noncritical data collection, optimizes study design, and prevents or mitigates risks arising from ambiguous data collection (e.g., a geriatric population forced to use novel apps). It also helps to think through options for streamlining or automating data flows to reduce risks from delays and unnecessary hand-offs.13

A number of critical factors14 should be considered during protocol risk assessment. Driven by the study objectives, they may include the following (a simple scoring sketch appears after the list):

  • The complexity of the protocol design
  • The complexity of the data flow
  • Characteristics of the participating countries and patient population
  • The nature and format of protocol-required procedures, including patient and site burden and the feasibility of certain technologies and devices (e.g., local regulations and Wi-Fi availability)
  • The technologies used to collect data, including patients bringing their own devices, internet access, and the patient population’s fluency with any required novel technologies
  • The number and experience of the vendors
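
One common way to turn such factors into priorities is a risk priority number (RPN), the product of impact, likelihood, and detectability scores. The sketch below assumes 1-5 scales; the factor names echo the list above, and the scores and escalation threshold are invented for illustration.

```python
# Illustrative risk scoring: combine assessed factors into a risk
# priority number (RPN = impact x likelihood x detectability).
# Scales, scores, and the escalation threshold are invented.
factors = {
    "complex adaptive design":          {"impact": 5, "likelihood": 3, "detectability": 2},
    "multi-vendor data flow":           {"impact": 4, "likelihood": 4, "detectability": 3},
    "BYOD app in geriatric population": {"impact": 4, "likelihood": 4, "detectability": 2},
    "single experienced central lab":   {"impact": 3, "likelihood": 1, "detectability": 1},
}

ESCALATION_THRESHOLD = 30  # arbitrary cutoff for this sketch

scored = {
    name: s["impact"] * s["likelihood"] * s["detectability"]
    for name, s in factors.items()
}
for name, rpn in sorted(scored.items(), key=lambda kv: -kv[1]):
    action = "escalate and mitigate" if rpn >= ESCALATION_THRESHOLD else "monitor"
    print(f"{name}: RPN={rpn} -> {action}")
```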

AI and ML tools such as Trials.ai are widely used in QbD for optimizing protocol design, benchmarking against best industry practices, and automating the development of downstream systems. AI-driven protocol assessment embeds those practices by:

  • Reducing unnecessary protocol complexity and site and patient burden
  • Decreasing costs and time by reducing unnecessary assessments and data collection
  • Reducing confusion by providing a consistent protocol design format
  • Reducing the number of amendments and post-production updates

AI and robotics help harmonize and streamline data management activities related to data collection modules, database design, programming, specifications, and plans. They enable automated development of downstream systems and documents, including informed consent forms, CRFs, and other data modules; a toy sketch of the idea follows.
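
As a toy illustration of that idea (not any specific product’s method), the sketch below expands a machine-readable list of protocol assessments into a draft eCRF field specification. Real automation maps to standards such as CDISC CDASH; every name and rule here is invented.

```python
# Toy sketch: derive a draft eCRF field specification from a
# machine-readable list of protocol assessments. All names invented.
protocol_assessments = [
    {"name": "systolic_bp", "type": "integer", "units": "mmHg",
     "visits": ["screening", "week4"]},
    {"name": "fatigue_score", "type": "integer", "units": "0-10 scale",
     "visits": ["week4"]},
]

def draft_crf_spec(assessments):
    """Expand each assessment into one draft CRF field per visit."""
    spec = []
    for a in assessments:
        for visit in a["visits"]:
            spec.append({
                "field": f'{a["name"]}_{visit}',
                "type": a["type"],
                "units": a["units"],
                "edit_check": f'{a["name"]} at {visit} must be a valid {a["type"]}',
            })
    return spec

for row in draft_crf_spec(protocol_assessments):
    print(row)
```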

Study execution and enterprise process oversight

The QbD activities described above set a strong foundation for efficient study execution through:

  • Scaling down activities by focusing on those that are critical and “errors that matter”
  • Reducing low-value activities
  • Improving the oversight of processes
  • Implementing flexible, risk-driven processes

Periodic review and risk reassessment during study execution are required due to the dynamic nature of risks and changing regulatory requirements. Reassessment may lead to changes in risks, mitigations, controls, scoring, detection methods, and even processes.

Risk reviews must be cross-functional, covering various areas of risk. Systematic issues should also be identified across the enterprise at the process and vendor levels.

It is recommended to establish a consistent set of performance and quality metrics reflecting the end-to-end process across multiple dimensions, such as portfolio, assets, therapeutic areas, countries, phases, milestones, studies, vendors, and sources (a small computational sketch appears after the list):

  • Timelines: Protocol final to DB release, last patient last visit to DB lock
  • Data availability: Delays in data transfer, data entry lags, missing data
  • Data quality: Data correction rates, uncoded terms, unit conversion issues, abnormal values
  • Process performance: Number of post-production updates, database unlocks, proportion of global standard data collection modules and edit checks
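
Below is a minimal sketch of how two of these metrics, data entry lag and data correction rate, might be computed and flagged against thresholds. The records, column names, and cutoffs are invented for illustration.

```python
# Illustrative metric computation over hypothetical study records;
# the data, field names, and thresholds are invented for this sketch.
from datetime import date
import statistics

visits = [  # visit date vs. the date the data were entered into the EDC
    {"site": "101", "visit_date": date(2024, 3, 1), "entered": date(2024, 3, 4)},
    {"site": "101", "visit_date": date(2024, 3, 8), "entered": date(2024, 3, 9)},
    {"site": "202", "visit_date": date(2024, 3, 2), "entered": date(2024, 3, 20)},
]
corrections = {"101": 4, "202": 31}    # data corrections per site
datapoints = {"101": 900, "202": 850}  # data points entered per site

ENTRY_LAG_LIMIT_DAYS = 7  # arbitrary key risk indicator threshold

for site in datapoints:
    lags = [(v["entered"] - v["visit_date"]).days for v in visits if v["site"] == site]
    mean_lag = statistics.mean(lags)
    correction_rate = corrections[site] / datapoints[site]
    flag = " <-- review" if mean_lag > ENTRY_LAG_LIMIT_DAYS or correction_rate > 0.02 else ""
    print(f"site {site}: mean entry lag {mean_lag:.1f}d, "
          f"correction rate {correction_rate:.1%}{flag}")
```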

The right set of metrics, combined with intelligent automation, reduces subjectivity, enables governance and global fixes, and reveals continuous improvement opportunities. These efforts eliminate work duplication across studies, improve data quality, and reduce the effort spent on data cleaning and analysis. Through the course of the study, data science and innovations can be broadly applied to automate data cleaning, validation, reconciliation, and analysis, including the detection of duplicate participants, imputation of missing data values,15 and fraud detection; a minimal imputation sketch follows.
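
As one small example, here is a sketch of model-based imputation in the spirit of the XGBoost-regression approach of reference 15, with scikit-learn’s gradient boosting standing in and invented toy data; in practice, any imputation strategy needs upfront statistical justification.

```python
# Minimal sketch of model-based imputation, in the spirit of the
# XGBoost-regression approach cited in reference 15; scikit-learn's
# gradient boosting stands in, and the toy lab data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Rows: [age, weight_kg, baseline_lab]; one baseline_lab value is missing
data = np.array([
    [54, 81.0, 4.2],
    [61, 77.5, 4.8],
    [47, 90.2, 3.9],
    [58, 83.1, np.nan],
    [66, 72.4, 5.1],
])

missing = np.isnan(data[:, 2])
model = GradientBoostingRegressor().fit(data[~missing, :2], data[~missing, 2])
data[missing, 2] = model.predict(data[missing, :2])
print("imputed baseline_lab:", data[missing, 2])
```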

FIGURE 4: Types of automation

Industry transformation

A risk-based approach changes the wasteful culture of spending a disproportionate amount of effort on easily avoidable, no-value-added activities caused by suboptimal study design and a lack of focus on the activities of greatest importance. We have to embrace that quality data is not necessarily error-free data, but rather fit-for-purpose data that sufficiently supports conclusions equivalent to those derived from error-free data.

QbD is a crucial step in study-level activities, which should start prior to protocol finalization and continue through consistent, periodic risk reassessment during protocol execution. A risk-based strategy provides the opportunity for continuous learning and improvement, not just at the study level but also across processes and vendors.

Governance mechanisms should be established and supported by a framework with a consistent set of customized metrics to reduce subjectivity, identify systematic risks, and enable a proportionate course of action. Data science and innovations enhance the intelligence of the risk-based framework, allowing it to streamline processes and minimize risks: providing real-time access, reducing manual effort and data handovers, improving data quality, and seamlessly accommodating the growing complexity of data and trial designs, including adaptive studies.16

This approach is a considerable cultural change for the industry, shifting the mindset from a checklist and one-size-fits-all mentality to the paradigm of critical thinking, open dialogue, innovations, and quality being infused into company culture and corporate strategy.

About the Author

Vera Pomerantseva is an associate partner at ZS, where she works on risk-based approaches to data management.

References

1. Statista, “Increase in clinical trials’ complexity between 2001-2005 and 2011-2015,” https://www.statista.com/statistics/732558/complexity-of-clinical-trials-increase/ (December 2020).

2. Food and Drug Administration, A Risk-Based Approach to Monitoring of Clinical Investigations. Questions and Answers, (FDA, Rockville, MD, April 2023).

3. Food and Drug Administration, Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring, (FDA, Rockville, MD, August 2013).

4. Medicines and Healthcare products Regulatory Agency, “Risk Adapted Approach to Clinical Trials and Risk Assessment,” https://www.gov.uk/government/publications/risk-adapted-approach-to-clinical-trials-and-risk-assessments/risk-adapted-approach-to-clinical-trials-and-risk-assessments (January 28, 2022).

5. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use, Good Clinical Practice (GCP) E6 (R3), (May 19, 2023).

6. Food and Drug Administration, Digital Health Technologies for Remote Data Acquisition in Clinical Investigations, (FDA, Rockville, MD, December 2021).

7. Society for Clinical Data Management, The Evolution of Clinical Data Management to Clinical Data Science (Part 2: The technology enablers), (March 5, 2020).

8. Qi Liu et al., “Landscape analysis of the application of artificial intelligence and machine learning in regulatory submissions for drug development from 2016 to 2021,” Clinical Pharmacology & Therapeutics 113 (4) 771-774 (April 2023).

9. Food and Drug Administration, “Using Artificial Intelligence and Machine Learning in the Development of Drug & Biological Products,” (FDA, Rockville, MD, May 2023).

10. European Medicines Agency, Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle, (July 13, 2023).

11. Society for Clinical Data Management, The adoption of risk-based CDM approaches, (July 2022).

12. Society for Clinical Data Management, The Evolution of Clinical Data Management to Clinical Data Science (Part 3: The evolution of the CDM role), (August 31, 2020).

13. Society for Clinical Data Management, Good Clinical Data Management Practices, (October 2013).

14. Clinical Trials Transformation Initiative, CTTI Quality by Design Project – Critical to Quality (CTQ) Factors Principles Document, (May 2015).

15. Xinmeng Zhang et al., “Predicting Missing Values in Medical Data via XGBoost Regression,” Journal of Healthcare Informatics Research 4, 383–394 (2020).

16. John Scott, “A Regulatory Perspective on Adaptive Design in the Confirmatory Phase,” (accessed August 15, 2023).
