Applied Clinical Trials
How to assure that software used in clinical trials will support regulatory scrutiny during pre-approval inspections and application review.
The FDA and other regulatory bodies have issued multiple guidance documents addressing technology tools used in clinical trials. FDA guidance documents include Computerized Systems;1 Software Validation;2 eSource;3 Mobile Devices;4 Electronic Health Records;5 eInformed Consent;6 and the use of Social Media.7 An FDA webinar on eSource8 is also available online. In 2010, the European Medicines Agency (EMA) issued a Reflection Paper on electronic source records in clinical trials,9 and more recently, the Medicines and Healthcare products Regulatory Agency (MHRA)10 issued a document on data integrity. In 2016, the Chinese FDA issued Technical Guidelines on Electronic Data Capture for Clinical Trials.11 Finally, as far back as 2006, CDISC12 issued a document addressing the use of CDISC standards to facilitate the use of electronic source data in clinical trials.
The regulatory push toward the paperless clinical trial has occurred despite a pharmaceutical industry that has been risk-averse in adopting modern-day technology tools that could support clinical trials. This risk aversion is in part due to fear by sites and sponsors of receiving an FDA Form 483.13 What if the FDA turns down an application because it discovers, for example, that a patient born in 1982 is recorded in the study database as born in 1983, even when there is no impact on the study results? Clinical sites also fear losing business as a result of any FDA Form 483 finding, however minor.
The major stakeholders in the clinical trial enterprise include study subjects/patients, sponsors, clinical researchers, regulators and the public at large. Now that the electronic world has entered the clinical trials space, as part of routine pre-approval inspections there will be an assessment of the impact of
electronic systems on 1) informed consent; 2) data integrity and accountability; 3) protocol compliance; 4) monitoring of the study by the sponsor, including risk-based monitoring; 5) patient safety, by assuring that all safety events are accurately collected and reported; and 6) drug/device accountability.
The following are examples of regulated clinical trial software currently being used:
Software validation
It is strongly recommended that a risk-based approach to software validation be applied to any medical software/app, even software not being used in clinical trials, as there are moral and ethical issues if any software used for medical decision-making does not behave as intended.
For software to be “reliable,” it needs to be validated. Assuring that software is in a validated state is therefore a key factor when regulators look at software products used in clinical trials. Fortunately, in 2002, FDA provided validation guidelines for software supporting regulated products.2
The following sections address some of the needed steps to assure that software used in clinical trials will be adequate to support regulatory scrutiny during pre-approval inspections and the review of marketing applications.
Intended use
Here are some critical questions to be asked concerning the “intended use” of the software; depending on the answers, appropriate steps must be put in place (see the sketch after this list).
1. Why am I using the software?
2. How is it being used?
3. Will the data be used to support decision-making, such as:
a) primary and secondary outcomes?
b) the subjects’ safety?
4. How will patient privacy and confidentiality be maintained?
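As a hedged illustration of how the answers to these questions might feed a risk-based validation decision, the sketch below maps intended use to a level of validation rigor. The categories and rules are invented for illustration, not drawn from any guidance or specific sponsor SOP.

```python
from enum import Enum

class ValidationRigor(Enum):
    FULL = "full validation with documented UAT and traceability"
    STANDARD = "standard validation per the sponsor/vendor life cycle"
    BASIC = "basic verification of installation and configuration"

def validation_rigor(supports_safety: bool,
                     supports_primary_or_secondary_outcomes: bool,
                     handles_identifiable_patient_data: bool) -> ValidationRigor:
    """Map intended-use answers to an (illustrative) level of validation rigor."""
    if supports_safety or supports_primary_or_secondary_outcomes:
        return ValidationRigor.FULL
    if handles_identifiable_patient_data:
        return ValidationRigor.STANDARD
    return ValidationRigor.BASIC
```

In practice, the risk categories, and the validation activities attached to each, would come from the sponsor’s own procedures rather than from a simple rule like this; the point is only that the intended-use answers drive the depth of the validation effort.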
Validation vs. compliance
A software product used in clinical trials must be validated and compliant with the Code of Federal Regulations (CFR) incorporated in Title 21 (Food and Drugs), Chapter I, Subchapter A, Part 11.14 21CFR Part 11, as it is commonly referred to, sets forth the criteria under which the FDA considers electronic records, electronic signatures, and handwritten signatures executed to electronic records to be trustworthy, reliable, and generally equivalent to paper records and handwritten signatures executed on paper.
There is a distinction between validation and compliance. Validation is objective evidence which demonstrates that a given piece of software performs as intended against the user needs and the documented specifications and requirements.2
While it is true that software requirements and resulting features within the software can support compliance with 21 CFR Part 11, it is also true that no software or hardware system can be 21 CFR Part 11 compliant on its own. The software implementation, including processes and procedures for use of the system, are what constitute 21 CFR Part 11 compliance.
And validation of the software goes beyond just the software requirements and resulting features necessary for Part 11 compliance. Validation is necessary for the entire software application and should be a defined part of the Software Life Cycle (SLC).
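To make the distinction above concrete, here is a hedged sketch of one feature that supports, but does not by itself constitute, Part 11 compliance: a signature manifestation carrying the signer’s printed name, the date and time of signing, and the meaning of the signature, bound to the record it applies to. The class and function names are invented for illustration and are not taken from any particular system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

@dataclass(frozen=True)
class ElectronicSignature:
    """Hypothetical signature manifestation: who signed, when, and what the
    signature means, linked to the exact content that was signed."""
    signer_printed_name: str
    signed_at_utc: datetime
    meaning: str            # e.g., "review", "approval", "authorship"
    record_checksum: str    # binds the signature to the signed record content

def sign_record(record_text: str, signer_printed_name: str, meaning: str) -> ElectronicSignature:
    # A checksum of the record content links the signature to its record.
    checksum = hashlib.sha256(record_text.encode("utf-8")).hexdigest()
    return ElectronicSignature(
        signer_printed_name=signer_printed_name,
        signed_at_utc=datetime.now(timezone.utc),
        meaning=meaning,
        record_checksum=checksum,
    )
```

Even with such a feature in place, who is authorized to sign, how identities are verified, and how the feature is used in practice remain procedural matters, which is exactly why compliance cannot reside in the software alone.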
Software life cycle
There should be very clear SLC management documentation that covers the software development process from cradle to grave. In fact, many SLCs go beyond the development process to include ongoing support and even consideration of end-of-life for the software.
In terms of popular development methodology choices, there is the Agile development methodology versus the traditional Waterfall approach. With Waterfall, when it is very clear what the deliverable will be, one makes a detailed plan and then implements that plan. In contrast, the Agile approach accepts that there will be changes during software development and builds in the flexibility to change direction at a moment’s notice.
The first validation element is the software concept/requirements, or in other words, what is the problem we are trying to solve and why do we need this software? Usually an executive or the person driving the software writes the concept document to convey the mission and rationale of the new endeavor. Next, the “requirements” phase must clearly document what the software is intended to do. For example, the software should allow study subjects to enter study outcome data over the web using any device and any browser.
Once the requirements are buttoned down, there needs to be clarity on how the requirements will be executed, which comes under the heading of “functional specifications.” For example, if the requirement is that “changes to the database need to be tracked,” then the functional specification could state that there must be an audit trail displaying the original value, the new value, who made the change, and the reason for the change.
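To make the requirement-to-specification step concrete, here is a minimal, hypothetical sketch of an audit-trail entry capturing the elements named above. The class and field names are illustrative assumptions, not taken from any specific EDC product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class AuditTrailEntry:
    """One immutable audit-trail row for a single data change."""
    field_name: str
    original_value: str
    new_value: str
    changed_by: str          # user who made the change
    reason_for_change: str   # e.g., "corrected per query resolution"
    changed_at_utc: datetime

class AuditedField:
    """A data field whose changes are never overwritten, only appended."""
    def __init__(self, field_name: str, initial_value: str) -> None:
        self.field_name = field_name
        self.current_value = initial_value
        self.trail: List[AuditTrailEntry] = []

    def update(self, new_value: str, changed_by: str, reason: str) -> None:
        self.trail.append(AuditTrailEntry(
            field_name=self.field_name,
            original_value=self.current_value,
            new_value=new_value,
            changed_by=changed_by,
            reason_for_change=reason,
            changed_at_utc=datetime.now(timezone.utc),
        ))
        self.current_value = new_value

# Example use (hypothetical values):
# dob = AuditedField("date_of_birth", "1982")
# dob.update("1983", changed_by="site_coordinator", reason="query resolution")
```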
The next phase is left to the programmers and architects, who create the very technical “design document.” This document is basically written in “geek-speak,” and we leave it to the specialists. Once programming begins, the programmers work closely with the “testers” to test and debug the program until, at some point, the development team says the software is ready for a formal user acceptance test (UAT). In an Agile methodology, the design document, test plan and test scripts are written concurrently with development, iterating as features are refined. In a Waterfall methodology, the test plan and test scripts are often written along with the design document, all of which is completed before development even begins.
Once the “test plan and test scripts” are written and signed off, formal testing can usually begin right away. During testing, the testers document the “errors, bugs and fixes,” and when all that is completed, the software is “released” and signed off by those responsible and accountable. Post-release, “version control and change management” kick in. A plan is also put in place for the retirement of the software when it will no longer be supported or commercially available.
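As a hedged sketch of how requirements, functional specifications, and test scripts can be tied together for release sign-off and later inspection, the structure below links the three; all identifiers and field names are invented for illustration and not drawn from any particular system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestScript:
    script_id: str
    steps: List[str]
    expected_result: str
    actual_result: str = ""
    passed: bool = False

@dataclass
class TraceabilityEntry:
    """Links a user requirement to its functional specification and tests."""
    requirement_id: str       # e.g., "REQ-007: track all changes to the database"
    functional_spec_id: str   # e.g., "FS-012: audit trail fields and behavior"
    test_scripts: List[TestScript] = field(default_factory=list)

    def fully_verified(self) -> bool:
        # A requirement counts as verified only when every linked
        # test script has been executed and passed.
        return bool(self.test_scripts) and all(t.passed for t in self.test_scripts)
```

A matrix of such entries is one common way to demonstrate, during an inspection, that every documented requirement was actually tested before the software was released.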
Risk assessments
A quality by design (QbD) methodology, which includes a risk assessment and risk mitigation strategies, is a critical exercise when building software. To borrow a Boeing 787 analogy, a plane has several electrical systems, controlling everything from the engines to the coffee machine. We should not assign the same risk to the coffee machine failing to deliver hot coffee as we do to an engine failing on takeoff. Therefore, in a clinical trial, we propose the following for each risk:
What to add to the protocol
A well-designed and well-written protocol can be viewed as one approach to an ideal quality manual. Therefore, in addition to the standard protocol elements that we are all familiar with, the protocol should identify where and when electronic systems will be used, and then explicitly describe, at a minimum: 1) all electronic systems being used; 2) the reason for their use; 3) how the systems will be used; 4) how the software will be controlled; 5) the basic elements of the risk assessment and risk mitigation strategies; and 6) data ownership and access controls.
The inspectors will not usually do an assessment of the protocol design, as that was already done by the clinical team at the regulatory agencies. What the inspectors will do is attempt to determine whether 1) the protocol was followed; 2) there are SOPs supporting the protocol; 3) the site and sponsor followed what was stated in the protocol and SOPs; 4) the electronic case report form (eCRF) was intelligently designed to support protocol compliance; 5) there were system-generated management reports of site communications and protocol deviations/violations; 6) there was an integrated risk management plan and risk mitigation strategy; 7) there was a substantiated and complete root cause analysis of observed issues/events; and 8) there were follow-up corrective and preventative action plans (CAPAs) and that the follow-up actions were resolved as planned.
From the safety perspective, the inspectors will want assurances that there was accurate reporting of safety events by evaluating 1) how safety data were collected; 2) the process for determining the source of original safety data based on data source agreements; 3) a sample of source data from disparate sources, including review of source records written on pieces of paper, if relevant; and 4) subject records maintained at the study site.
Are electronic source data reliable and fit for purpose?
We are often asked by those committed to paper records, whether sponsors or regulators: if the system is fully electronic, how do we know the data are real when there are no source records against which to compare the electronic record? Ironically, the same question can be asked about paper records; unless there are major controls over paper, notebooks, etc., nothing prevents a piece of paper from being destroyed and replaced by a new one. Bottom line, since some suspicious data trends are impossible to spot through manual review of records, fraud and dishonesty can usually only be identified through adaptable, validated checks and the use of informatics and multidisciplinary expertise. Clearly, then, misconduct is not dependent on the collection medium of the original record (paper vs. electronic). In a double-blind, randomized trial, one effect of fraud, if it occurs, is that a potentially effective drug could appear ineffective from a statistical perspective (Type 2 error). This is where study monitoring techniques and fraud detection programs become vital.
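As one hedged example of an “adaptable, validated check” of the kind referred to above, a central monitor might flag sites whose reported values show implausibly little variability compared with the rest of the study. The threshold, minimum sample size, and variable are illustrative assumptions, not a published algorithm.

```python
from statistics import pstdev
from typing import Dict, List

def flag_low_variability_sites(
    values_by_site: Dict[str, List[float]],
    ratio_threshold: float = 0.3,
) -> List[str]:
    """Flag sites whose standard deviation for a measurement is far below the
    pooled standard deviation across all sites. Unusually 'clean' data is a
    signal worth investigating, not proof of misconduct."""
    pooled = [v for values in values_by_site.values() for v in values]
    pooled_sd = pstdev(pooled)
    if pooled_sd == 0:
        return []
    flagged = []
    for site, values in values_by_site.items():
        # Require a handful of observations before judging a site's spread.
        if len(values) >= 5 and pstdev(values) < ratio_threshold * pooled_sd:
            flagged.append(site)
    return flagged

# Example (invented numbers): a site reporting nearly identical blood pressure
# readings would be flagged while sites with normal spread would not.
```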
Therefore, when using off-the-shelf or proprietary software systems, the following are some of the quality questions that should be considered:
Pre-approval inspections of an eSource program
Based on our experience and the experience of others, when the regulators/inspectors arrive, they will need to review:
As of this publication, two regulatory marketing applications have been submitted to FDA in which direct data entry at the time of the office visit took place in lieu of maintaining paper source records. From the perspective of the pre-approval inspections at the clinical research sites, there were no Form FDA 483 findings related to study monitoring oversight or to any of the electronic systems used for data collection. For the first application, inspections occurred at three of the five major medical centers; for the second, at five of the 20 outpatient clinical research sites.
The following are redacted excerpts from the FDA inspection of the CRO (Target Health Inc.) for the first program to receive marketing approval, in which the clinical research sites entered the majority of the data directly into the EDC system at the time of the clinic visit and all documents were maintained in an electronic trial master file (eTMF; Target Document).
Report preamble:
From the detailed report:
1. The clinical project manager develops a clinical data monitoring plan and safety monitoring plan for each study, using a company-specific template; these plans are approved by the sponsor before any site monitoring takes place.
2. This study utilized direct entry of data, by the sites, into the EDC system using the eSource process (Target e*CTR; eClinical Trial Record).
3. Two electronic systems were used for the protocol.
a) The document system (eTMF) is a Part 11 compliant system and is the same system used by the clinical investigator sites.
b) The EDC system (Target e*CRF) is a separate software application from the document system; both are online databases.
c) Queries were handled within the EDC application; only users assigned the in-field and in-house monitor roles could generate queries.
4. The CRO also performed remote/central reviews of data on an ongoing basis, between on-site monitoring visits.
a) Central monitoring reports (CMRs) were generated when remote monitoring of the sites occurred.
b) Items requiring follow-up/queries were specified on these reports.
c) The necessary follow-up action and responsible person were indicated, as well as whether and when the issue had been resolved.
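As a hedged sketch only, the tracking described above could be represented by a record like the one below. The class and field names are invented for illustration and are not taken from the CRO’s actual report format.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CentralMonitoringItem:
    """One line item on a central monitoring report (illustrative fields only)."""
    site_id: str
    description: str          # the item requiring follow-up, or the query text
    follow_up_action: str
    responsible_person: str
    resolved: bool = False
    resolved_on: Optional[date] = None

    def close(self, resolved_on: date) -> None:
        # Closing an item records both that it was resolved and when.
        self.resolved = True
        self.resolved_on = resolved_on
```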
Discussion and Conclusion
For more than a decade, technologies and processes allowing for the paperless clinical trial have been substantiated and encouraged from both the business and regulatory perspectives. A frequent question is “what are the barriers to entry that have prevented the pharmaceutical and medical device industries
from widely adopting the paperless clinical trial?” Beyond the frustration felt by those who consider technology essential to progress, we face the potential obsolescence of drug and device developers who do not recognize the current world of data acquisition and monitoring technologies that allow for contemporaneous data flow and data review. The time is ripe to put aside fears of regulatory sanctions when using modern technology and to modernize and optimize the drug and device development processes in ways that will ultimately, and positively, support the regulatory approval process.
Jules Mitchel, MBA, PhD, is President, Target Health Inc., email: jmitchel@targethealth.com; Jonathan Helfgott, MS, is Coordinator of the Regulatory Science Graduate Program at Johns Hopkins University. He was previously the Associate Director for Risk Science at FDA CDER OSI and the main author of FDA’s eSource Guidance; email: jhelfgo1@jhu.edu

Acknowledgement: The authors would like to thank Joyce Hays and Les Jordan for editing the manuscript.
References
1. Guidance for Industry: Computerized Systems Used in Clinical Investigations. May 2007.
http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm070266.pdf
2. General Principles of Software Validation; Final Guidance for Industry and FDA Staff. January 2002.
http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm085281.htm
3. Guidance for Industry: Electronic Source Data in Clinical Investigations. September 2013.
http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM328691.pdf
4. Mobile Medical Applications: Guidance for Industry and Food and Drug Administration Staff. February 9, 2015.
www.fda.gov/downloads/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm263366.pdf
5. Use of Electronic Health Record Data in Clinical Investigations.
http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm501068.pdf
6. Use of Electronic Informed Consent in Clinical Investigations. March 2014.
www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm436811.pdf
7. Guidance for Industry: Internet/Social Media Platforms with Character Space Limitations - Presenting Risk and Benefit Information for Prescription Drugs and Medical Devices. June 2014.
http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM401087.pdf
8. FDA Webinar on the Final Guidance for Industry titled “Electronic Source Data in Clinical Investigations.” (http://www.fda.gov/Training/GuidanceWebinars/ucm382198.htm); (https://collaboration.fda.gov/p89r92dh8wc/?launcher=false&fcsContent=true&pbMode=normal)
9. EMA. Reflection Paper on Expectations for Electronic Source Data and Data Transcribed to Electronic Data Collection Tools in Clinical Trials. 2010.
(http://www.ema.europa.eu/docs/en_GB/document_library/Regulatory_and_procedural_guideline/2010/08/WC500095754.pdf)
10. MHRA. GxP Data Integrity Definitions and Guidance for Industry.
(https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/538871/MHRA_GxP_data_integrity_consultation.pdf)
11. Chinese FDA. Announcement on Issuance of Technical Guidelines on Electronic Data Capture for Clinical Trials. 2016 (No. 114, July 29).
12. CDISC. Leveraging the CDISC Standards to Facilitate the Use of Electronic Source Data within Clinical Trials. 2006.
(http://cdisc.org/system/files/all/reference_material_category/application/pdf/esdi.pdf)
13. Mitchel, J., Park, G., and Hamrell, M.R. Time to Demystify the Myths and Misunderstandings About Form FDA 483. InSite 2014; 2nd Quarter: 21-25.
14. 21 CFR Part 11. Code of Federal Regulations.
http://www.ecfr.gov/cgi-bin/text-idx?SID=c654e9b1077d05f0b0bb0fbdd45f00b8&mc=true&tpl=/ecfrbrowse/Title21/21cfr11_main_02.tpl