Stepped-up training and real-time application key to avoiding compliance pitfalls.
When the Big Pharma giant Pfizer abruptly ended its relationship with decentralized clinical trial (DCT) specialist CareAccess six months ago, the company issued a terse statement about its decision: Pfizer blamed CareAccess for committing unspecified good clinical practice (GCP) violations. Hence, CareAccess could no longer recruit participants for the huge, joint Pfizer-Valneva DCT for their Lyme disease vaccine candidate; those it had recruited were cut from the study. Since then, not a peep out of Pfizer, even after CareAccess told the world Pfizer was wrong. Of the 96 or so trial sites, CareAccess had 28.1
One would expect all hell to have broken loose at CareAccess. Yes, people lost their jobs, and the negative press certainly wasn’t welcome. One also would expect other CareAccess clients to have broken their ties, or at least conducted audits of CareAccess’ conduct. Yet the CareAccess website shows numerous recruitment efforts still ongoing, and the company is hiring for multiple positions. As one source said, “Projects are continuing at CareAccess and things are still working. It is a strange thing.”
In a statement about its practices, CareAccess said: “Beyond the required GCP course all employees are required to complete, we also provide hands-on training in small groups both when employees are onboarded and every year as a refresher. Further, we run regular simulations at our sites to learn how to adapt to and address challenges that might arise and to ensure high-quality participant care.”
CareAccess declined to answer questions regarding the trial; Pfizer did not respond to email requests for comment.
Pfizer’s decision to begin new recruitment efforts won’t be cheap.
“It is extremely expensive for Pfizer to do this,” says Francis Crawley, executive director, Good Clinical Practice Alliance – Europe. There is “no sense in working with a site if there is no confidence in the data.”
If it’s a question of confidence in the data, then what would help instill that? Training.
Adhering to GCP requires more than just knowing the rules, says Crawley. It requires understanding them and knowing how to apply them in real time. It is like following a recipe: you have an ingredients list, “but if you don’t know how to cook, you can’t do it,” he notes.
The list of GCP items will only get longer due to more digitalization, the move to DCTs, and the use of massive amounts of real-world data to support new molecular entity submissions.
“We live in a digital world,” says Crawley.
Another reason for good training: it is likely that many clinical trial sponsors and contract research organizations (CROs) haven’t been fastidious about GCP. One study from 2011 showed that of 80 published trials reviewed, 32% did not report any protocol violations, and none reported on all of the protocol violation types the study examined, including patient adherence infractions, data collection infractions, enrollment violations, and randomization violations.2
Getting through a trial without some sort of protocol violation is a long shot. Jonathan Cotliar, MD, chief medical officer, Science 37, says it is virtually impossible to avoid protocol deviations. Participants and study team personnel are human beings—humans make mistakes, he says. Protocols ask study participants to behave in ways that are not natural, and often create challenges beyond the obstacles they already face as individuals living with certain medical conditions.
Jerry Chapman, senior good manufacturing practice (GMP) quality expert, Redica Systems, says that protocol deviations are to be expected because clinical trials are about research. “There will always be problems with compliance,” Chapman tells Applied Clinical Trials. “We ask [clinical investigators] to do an ungodly amount of work.”
A salient point to make here: a clinical trial sponsor spends more money on finding the necessary number of people to prove an investigational drug’s effect than on anything else in the study. The second biggest expenditure is determined by the number of visits those patients make to the clinical site over the trial’s duration.3 Wrote the study authors: “Our statistical model showed trial costs rose exponentially with these two variables.”
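To make the shape of that claim concrete, a model exponential in both variables could take the schematic form below; the symbols are illustrative only, since the cited study’s actual specification is not reproduced in this article.

```latex
% Schematic cost model (illustrative; not the cited study's fitted equation).
% n = number of enrolled patients, v = site visits per patient,
% C_0, \alpha, \beta = fitted constants.
\mathrm{Cost}(n, v) \approx C_0 \, e^{\alpha n + \beta v}
```

Under such a form, adding patients or visits multiplies, rather than merely adds to, the total cost, which is why these two variables dominate trial budgets.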
DCTs could help with these two variables, but the irony is that doing so will involve copious amounts of technology—and the technology will involve copious amounts of training.
Training is an area often overlooked and often treated as a checkbox, contends Chapman.
“You have to be able to test someone on their knowledge, to ensure they absorbed it, or they can demonstrate certain skills,” he says. “You can’t read about how to dose a patient and say, ‘I can do it.’” Redica Systems has published an analysis, drawing on years of FDA Form 483 inspection reports and warning letters, that details the GCP violations made in clinical trials. Just saying that something was a protocol violation doesn’t help, says Chapman.
He has known quality professionals, he adds, who will look at the boilerplate information on an FDA inspection report and the references to the Code of Federal Regulations (CFR) “and not go further. That is a mistake.” It is necessary to read the whole report, because “there are really important parts that let you determine [violation] severity so you can determine how to remedy those violations with action items,” says Chapman. “You can use this data for internal retrospection.”
Whether by coincidence or calculation, FDA, within a span of a few months, has released new guidances covering training, International Council for Harmonization (ICH) E6(R3), risk-based trial monitoring, and the conduct of DCTs.4-6 The agency last addressed the first two topics in 2013. The DCT document, long awaited by DCT providers such as Science 37, advocates for “appropriate training, oversight, and up-front risk assessment and management.” Crawley points out that the ICH GCP guideline includes information on the “appropriate management of data integrity, traceability, and security.”
Another training need in waiting: regulators like real-time monitoring. A 2020 report on a US-European regulators’ symposium said, “The ICH recommends use of centralized monitoring to ID protocol deviations. It emphasizes the need to focus on trial activities that are vital to safety and trial data reliability.”7
FDA does not regulate GCP training details—not the scores, not the topics. Sponsors should provide the clinical investigator site team with training specific to the study and their roles as necessary, said an FDA spokesperson via email.
CITI Program is one of many organizations that offer GCP certifications. Jaime A. Arango, EdD, vice president, content and education, said in an email exchange that the program’s subscribers determine their own needs and minimum-score thresholds for GCP training. CITI Program recommends an 80% passing score per module; the core level comprises about 13 modules.
Over the years, CITI has added video case studies, webinars, and new courses, including for clinical trial recruiting, DCTs, and remote and/or centralized monitoring.
Crawley says he has tested online training in which the answers were wrong. It is difficult, he notes, to learn online. GCP requires not only knowing the guidelines, but also being able to apply them, he adds. Classroom training far exceeds online training, but what is really needed is actual experience applying GCP in a wide variety of situations.
At that 2020 symposium, representatives of FDA and the UK Medicines and Healthcare products Regulatory Agency parsed what GCP requires for preserving data integrity. It’s an important question, considering that data generation is no longer exclusive to the trial itself, but also includes data from electronic health records (EHRs), real-world evidence, and more. For regulators to reconstruct study conduct, a digital hair cannot be out of place.
The attendees, according to that report, said sponsors must monitor the use of e-systems and address data issues, be they missing values, inconsistencies, or outliers. They need to fix tech programming issues and “potential protocol deviations that may be indicative of systemic or significant errors.”
In its risk-based approach guideline, FDA said centralized monitoring would allow sponsors to aggregate and then compare data coming from the various sites to “detect potential anomalies,” including data entry delays and incomplete entries.5
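As a rough illustration of the kind of cross-site comparison the guideline describes, the sketch below flags sites whose average data-entry delay stands out from their peers. The data structure, field names, and cutoff factor are hypothetical, not anything FDA prescribes.

```python
from statistics import mean, median

def flag_slow_sites(site_delays, factor=2.0):
    """Flag sites whose mean data-entry delay exceeds `factor` times the
    median of all sites' mean delays -- a simple, robust cross-site screen.

    site_delays: dict of site_id -> list of entry delays in days
    (hypothetical; a real system would pull these from EDC audit trails).
    """
    site_means = {site: mean(d) for site, d in site_delays.items()}
    cutoff = factor * median(site_means.values())
    return {site: m for site, m in site_means.items() if m > cutoff}

# Example: site_C enters data far later than its peers and gets flagged.
delays = {
    "site_A": [1, 2, 1, 3],
    "site_B": [2, 2, 3, 1],
    "site_C": [9, 12, 10, 11],
    "site_D": [1, 1, 2, 2],
}
print(flag_slow_sites(delays))  # {'site_C': 10.5}
```

The same pattern extends to incomplete entries or any other site-level metric a sponsor chooses to aggregate centrally.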
In this guideline, FDA even advises sponsors on when to do assessments: early in the trial. This way, if a few protocol deviations are found, they can be, proverbially speaking, nipped in the bud.
In practice, risk-based monitoring is, albeit at a snail’s pace, becoming more reality than theory in the field. A 2020 report that reviewed 6,000 published trials found that 77% had at least one risk-based quality monitoring component in play—such as centralized monitoring or key risk indicators—compared to the prior year’s 47%. Said the authors: “Risk-based approaches have been around for a long time, but adoption has been slow.”8
Considering the operational complexity, technical prowess, administrative detail, and skills necessary to help patients, one wonders whether a day’s worth of instruction can cover it all, especially given the use of telehealth in a DCT.
A passing comment on inspections: they have been down, across the board, over the past few years. But now, thanks to new legislation passed in December (the Food and Drug Omnibus Reform Act), FDA can request records before a bioresearch monitoring program inspection.9
As for the drop in those inspections, the pandemic is certainly one cause, but a US Government Accountability Office (GAO) report from January found another: there are fewer independent institutional review boards (IRBs), due to consolidation and private equity investment.10
FDA, when asked, did not supply an explanation for the drop in inspections.
Redica Systems, which has designed software to identify regulatory and quality issues so its clients can reduce their own regulatory pitfalls, was curious about what lay behind the 483s and the warning letters. Protocol violations aplenty, but, says Chapman, the firm wanted to dig deeper. Using artificial intelligence (AI), machine learning, and natural language processing, along with IRB and clinical investigator (CI) reports, the company trained its software to look at the violations through a clinical inspector’s eyes and using their vocabulary. “Based on the IRB and CI, we can drill down on where the issues are coming from,” says Chapman.
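Redica’s actual pipeline is proprietary, but the general technique the description suggests (turning citation text into features and classifying it into inspector-style categories) can be sketched in a few lines. The example citations, category labels, and model choice below are all hypothetical.

```python
# Toy text-classification sketch: TF-IDF features plus a linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented 483-style citation snippets with invented category labels.
train_texts = [
    "source documents were not retained for transcribed case report forms",
    "audit trail entries could not be attributed to a specific user",
    "subject eligibility criteria were not verified prior to enrollment",
    "participants were enrolled despite failing inclusion criteria",
    "investigational product was administered outside the protocol window",
    "randomization codes were broken without documented justification",
]
train_labels = [
    "data_integrity", "data_integrity",
    "enrollment", "enrollment",
    "protocol_deviation", "protocol_deviation",
]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

# Classify an unseen citation into one of the invented categories.
print(model.predict(["original records were missing for several entries"]))
```

A production system would train on thousands of labeled citations and likely a far richer model, but the workflow is the same: label, vectorize, classify, then aggregate by category to see where issues cluster.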
They looked at FDA warning letters dated over a seven-year period. The biggest problem they found was that sponsors and investigators were not adhering to their responsibilities. They found that 75% of the data integrity violations concerned issues like attribution or accuracy. They also found a steep drop in voluntary inspections—82 in 2021 vs. 182 in 2017.
Analysis of the 483s, for which Redica Systems used a data pool spanning a 20-year period, found that data integrity issues were the most common—these included legibility, accuracy, completion, sourcing, and availability issues. Drilling deeper, Redica Systems found enrollment problems such as failure to properly determine subject eligibility. Some participants had supplied bogus names, phone numbers, and Social Security numbers.
The severity of a violation, according to Chapman, can be difficult to determine and measure.
“A data integrity issue means maybe somebody didn’t save the original document that they transcribed. It may not be faulty, but it means the data is suspect. That gets kind of murky,” he says. “When you talk about a warning letter, beyond a 483, what you will see, for example, is maybe ‘not according to plan.’ That is one of the bigger citations, but how serious is that? That part is in the text that follows. Seriousness isn’t determined by the number of violations.”
Operations at Science 37 seem to reflect regulators’ proposed guidance: one virtual trial site with real-time protocol inspections across all patient data entries—in short, Sauron-like vision from all digital angles. Not necessarily fewer protocol deviations, but if the trends are there, they are easier to identify. One virtual site, says Cotliar, “provides probably a clearer path.”
And that path is pebbled with study team documentation, from the trial investigators to the visiting nurses to those running the platform. “I don’t have to rely on site files to look at other single platforms, to achieve proper oversight,” adds Cotliar.
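In miniature, a real-time protocol check on a data entry might look like the sketch below, which validates a visit date against an allowed protocol window. The rule, field names, and numbers are invented for illustration and do not describe Science 37’s actual platform.

```python
from datetime import date

# Hypothetical protocol rule: visit 2 must occur 14 +/- 3 days after visit 1.
VISIT2_TARGET_DAYS = 14
VISIT2_WINDOW_DAYS = 3

def check_visit_window(visit1: date, visit2: date) -> list[str]:
    """Return deviation messages for a visit-timing rule, or [] if compliant.
    Running this at entry time surfaces deviations the moment they occur."""
    gap = (visit2 - visit1).days
    lo = VISIT2_TARGET_DAYS - VISIT2_WINDOW_DAYS
    hi = VISIT2_TARGET_DAYS + VISIT2_WINDOW_DAYS
    if not lo <= gap <= hi:
        return [f"visit 2 occurred {gap} days after visit 1; "
                f"allowed window is {lo}-{hi} days"]
    return []

# A 19-day gap falls outside the 11-17 day window and is flagged immediately.
print(check_visit_window(date(2023, 5, 1), date(2023, 5, 20)))
```

Catching the deviation at entry, rather than at a later monitoring visit, is the practical difference real-time checks make.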
Training at Science 37, as one would expect, involves various levels, from issues that are protocol specific to getting into the weeds of the technology. “We spend a lot of time in training, its sole focus being, does everybody understand how to use the platform?” says Cotliar.
Is everyone comfortable with it? According to Cotliar, the visiting nurses go through mock visits to trial participants’ homes. They set up the technology that is automatically linked to a unified platform, “to make sure we are complying.” They learn how to handle the technology if the power goes out. All of this, adds Cotliar, takes repeated practice.
Crawley believes that as clinical trials become more digitized, more AI-dependent, attitudes regarding GCP need to advance.
“You have to operationalize GCP,” he asserts. “Training will be important if we are going to have confidence in trials that are increasingly digitized and conducted through different systems of AI.”
Christine Bahls is a freelance writer for medical, clinical trials, and pharma information.