Applied Clinical Trials
Faced with a near-perfect storm of rising drug development costs, fewer drug approvals, and heightened scrutiny of drug safety, biopharmaceutical manufacturers are under enormous pressure to make their operations leaner and more efficient. Among the many clinical development processes that need to be conducted in a smarter, more cost-effective manner, clinical data monitoring stands out as a promising area in which operational efficiencies can not only reduce costs but also improve research quality and patient safety.
Although the U.S. FDA guidance and the International Conference on Harmonization (ICH) guidelines for Good Clinical Practice call for an "adequate" level of clinical monitoring to ensure subject protection and data quality—and acknowledge that there is generally a need for on-site monitoring throughout a clinical trial—they stop short of dictating the frequency of on-site visits. Reflecting a historically conservative practice, trial sponsors and CROs have therefore traditionally monitored every site in a study every four to eight weeks. Given the volume and size of studies, and the ever-growing cost of bringing a new compound to market, organizations can instead adopt risk-based clinical monitoring using triggering techniques, which lets them focus resources on high-priority sites without compromising the safety or quality of research.
Techniques in efficiency
Risk-based monitoring is a program of risk assessment for clinical conduct and data collection that applies available monitoring resources according to the identified risks—and reassesses those risks on a regular basis throughout the study. Under risk-based monitoring using triggering techniques, the fundamental premise is that monitoring sites where there's little or nothing to monitor is not a useful proposition.
In a typical study, for example, one-third or more of the sites do not enroll a single patient. Yet the standard industry practice is to visit these nonenrolling sites just as frequently as enrolling ones, and sometimes even more frequently under the guise of boosting subject enrollment. A trial sponsor that is not using risk-based monitoring techniques, and is simply deploying clinical research associates (CRAs) to every site based on "usual practice," limits the resources that could be deployed to high-enrollment sites where monitoring is needed. In addition, the attention of limited and costly resources is dissipated and directed away from problem areas instead of toward them. Likewise, the average clinical trial site enrolls only three to four patients, yet every site is treated equally from a monitoring perspective. This practice is an enormous waste of resources and energy that could instead be focused where the needs are.
To ensure that valuable resources are allocated in the most efficient manner, risk-based monitoring promotes the use of data to initiate a site visit only when justified by on-site workload or other quality triggers. The method involves the identification of risks and then links each risk with appropriate triggers that will initiate source data verification. Study risks may include past site performance; the number of subjects and rate of site recruitment; staff feedback on protocol compliance, site contact, and record keeping; information received from data management, such as missing case report forms (CRFs), query rates, and CRF completion delays; inaccurate or repetitive data; and safety issues. It's important to emphasize that risk-based clinical monitoring using triggering techniques doesn't compromise quality, but rather improves it.
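To make the risk-to-trigger linkage concrete, the sketch below shows one way such a mapping might look in practice. It is an illustrative example only, not the authors' system; the indicator names and thresholds are assumptions.

```python
# Illustrative sketch: represent the site-level risk indicators listed above and
# link each one to a named trigger. Field names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class SiteRiskIndicators:
    enrolled_subjects: int            # number of subjects and recruitment to date
    missing_crfs: int                 # missing case report forms
    open_queries_per_crf: float       # query rate reported by data management
    crf_entry_delay_days: float       # average delay between visit and CRF completion
    unresolved_safety_events: int     # outstanding safety issues

def applicable_triggers(site: SiteRiskIndicators) -> list:
    """Return the names of the triggers this site's current data would fire."""
    triggers = []
    if site.unresolved_safety_events > 0:
        triggers.append("safety")        # safety issues always warrant attention
    if site.missing_crfs >= 5 or site.open_queries_per_crf > 0.5:
        triggers.append("data_quality")  # missing CRFs or a high query rate
    if site.crf_entry_delay_days > 14:
        triggers.append("entry_lag")     # CRF completion delays
    return triggers

# A clean but slow-entering site fires only the entry-lag trigger:
print(applicable_triggers(SiteRiskIndicators(3, 0, 0.1, 21.0, 0)))  # ['entry_lag']
```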
Triggering visits
One way to manage risk and use data as a trigger for on-site visits is to monitor the workload at each site. Predictive algorithms and historical data can be used at the beginning of a study to predict patient visits, establish a schedule of events, and estimate the amount of source data verification work expected. These calculations can then be used to establish work-volume thresholds that justify a monitoring visit. Once the pre-established volume of work accumulates at a site, the system triggers a monitoring visit. The time-and-events schedule established at the study's outset adjusts daily as new information is collected about each site, and is used to direct the deployment of CRAs.
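A simplified sketch of such a volume trigger follows. The effort-per-page figure and the visit threshold are illustrative assumptions, not values drawn from an actual monitoring plan.

```python
# Illustrative volume trigger: predicted source data verification (SDV) work
# accumulates as CRF pages are entered, and a visit is requested once a
# pre-established threshold is crossed. All constants are assumptions.

HOURS_PER_CRF_PAGE = 0.2        # assumed SDV effort per entered CRF page
VISIT_THRESHOLD_HOURS = 16.0    # assumed workload needed to justify a trip

def accumulated_sdv_hours(crf_pages_since_last_visit: int) -> float:
    """Estimate the on-site verification work waiting at the site."""
    return crf_pages_since_last_visit * HOURS_PER_CRF_PAGE

def volume_trigger_fired(crf_pages_since_last_visit: int) -> bool:
    """True once enough verifiable work has built up to justify an on-site visit."""
    return accumulated_sdv_hours(crf_pages_since_last_visit) >= VISIT_THRESHOLD_HOURS

print(volume_trigger_fired(40))   # 8.0 hours of pending work  -> False
print(volume_trigger_fired(120))  # 24.0 hours of pending work -> True
```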
What if an adverse event occurs at a research site before enough work has accumulated to set off a volume-based trigger? Protocol-based monitoring can be used in tandem with volume monitoring conditions. A site scheduled for a visit in one month's time will trigger an immediate visit in the event that a death occurs, for example. Under current standard industry practice, that same site would have gone without a visit for an additional month.
To ensure quality, time thresholds can also be built into risk-based monitoring systems: if a visit has not been triggered within a certain time frame by volume, protocol, or other risk-based measures, a site visit occurs regardless. Data quality issues and unacceptable lags between patient visits and data entry into the system are additional examples of risk-based triggers used to initiate a monitoring site visit.
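Taken together, these trigger types amount to a single decision evaluated continuously for each site, as the sketch below illustrates. The 90-day backstop and 21-day entry-lag limits are assumed for illustration, as is the workload threshold used in the previous example.

```python
# Illustrative composite trigger: a visit is due if any of the volume, safety,
# data-quality, or time-based conditions discussed above is met.
from datetime import date

MAX_DAYS_WITHOUT_VISIT = 90   # time backstop: visit regardless after this long
MAX_ENTRY_LAG_DAYS = 21       # acceptable lag between patient visit and data entry
VISIT_THRESHOLD_HOURS = 16.0  # workload threshold, as in the volume sketch above

def visit_due(pending_sdv_hours: float,
              new_safety_event: bool,
              worst_entry_lag_days: int,
              last_visit: date,
              today: date) -> bool:
    """Combine volume, protocol/safety, data-quality, and time-based triggers."""
    return (
        pending_sdv_hours >= VISIT_THRESHOLD_HOURS              # volume trigger
        or new_safety_event                                     # safety trigger
        or worst_entry_lag_days > MAX_ENTRY_LAG_DAYS            # data-entry lag
        or (today - last_visit).days > MAX_DAYS_WITHOUT_VISIT   # time backstop
    )

# A reported death triggers a visit even though only 2 hours of SDV work have accrued:
print(visit_due(2.0, True, 5, date(2024, 1, 15), date(2024, 2, 1)))  # True
```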
Triggered monitoring reduces risk because site visits are initiated in reaction to what's happening at the site, rather than as a result of arbitrarily applied time intervals. Subject protection as well as data collection and oversight are still very much the priority—and can be done more intelligently and more efficiently.
This concept of using triggering techniques to identify risk isn't a radical idea in the pharmaceutical industry. FDA's Division of Scientific Investigations (DSI) is discussing the same approach, as it too must make data-driven decisions about whom to inspect.
As such, DSI staff would apply triggering techniques to develop systems that answer the questions "What are the risks?" and "What triggers can we establish that would permit the inspection of sites and, by implication, the certification of data?" By using a trigger-based system for detecting signals, DSI would be able to deploy personnel only to facilities that require a physical inspection—thus saving time, energy, and valuable resources.
At the ACRP Global Conference in April 2009, Dr. Leslie Ball, DSI Director, succinctly stated that the current system of clinical monitoring "encourages an approach similar to old-fashioned manufacturing systems: Produce the product, catch the defective ones, and throw them out after the fact." Furthermore, this type of system does little, if anything, to ensure the end goal of obtaining quality data that could be used to establish confidence in the safety and efficacy of the compound in question.
Although the FDA and other regulatory agencies are actively encouraging the use of risk-based clinical monitoring, there's still considerable uncertainty among biopharmaceutical manufacturers about its implementation. Part of the issue is the concern that regulators may be less willing to accept data from studies run under risk-based monitoring plans, given the lack of consistent application of these ideas. Because an unproven practice could jeopardize a product's future market potential, pharmaceutical companies find it difficult to champion such practices without more guidance from regulators and other stakeholders. In the field, this translates into a continuation of business as usual: regular visits to every site in a study at the traditional intervals.
Demonstrating value
From a risk-based perspective, the best way to attack risk is to identify the risk—and then try to mitigate it. In the present industry model for clinical monitoring, the risk is usually not well characterized. As such, we're not able to focus our resources on precisely where the risk is, and essentially, we end up fighting the war on multiple fronts.
With risk-based monitoring using triggering techniques, enormous cost savings can be realized through a more thoughtful, coordinated approach to identifying risk. As previously discussed, triggered monitoring replaces time intervals as the basis for monitoring frequency and substitutes a combination of metrics (e.g., data quantity, subject enrollment, safety signals) to predict risk and trigger site visits. Similarly, instead of treating all sites equally, as a traditional model does, a triggered model distinguishes sites by metrics such as workload, data quality, safety issues, and subject enrollment and retention. Most significantly for subject safety, a traditional monitoring model treats all safety events equally, whereas a triggered model catches safety signals as they happen and responds with a site visit. Quite simply, triggered monitoring enables focused attention on areas of high risk and/or high benefit while promoting efficient use of time and resources.
A recent Quintiles pilot exercise comparing a traditional monitoring model to a triggered monitoring model produced some encouraging results. Using triggering techniques, the number of site visits was reduced by 25%, and on-site CRA utilization increased from 10% in the traditional model to 75% in the triggered model (see Figure 1). These results show that trial sponsors and clinical research organizations can not only do less work and conserve resources but also obtain better output.
Other studies lend support to the value of this model as well. A postmarketing study for a cardiovascular drug conducted in 2008 by a large pharmaceutical company demonstrated quality improvement with the use of a hybrid on-site, centralized monitoring method, as compared with sites monitored exclusively by CRAs. The hybrid method produced significantly fewer data clarification forms (DCFs) as compared to the traditional on-site CRA method. In addition, the hybrid approach reached 100% DCF resolution six days earlier than the traditional approach.
Eisenstein demonstrated significant cost savings with the use of EDC and modified site management practices, with as much as 20% of a trial's total cost saved with the use of a more centralized approach to monitoring, as compared with traditional on-site evaluation methodologies.1
In a review of study design and acceptable monitoring practices, Baigent suggests that central statistical monitoring should guide the frequency and content of on-site visits.2
In fact, the ICH-GCP guidelines acknowledge the potential for central statistical monitoring to play a prominent role in any monitoring plan, stating: "In general there is a need for on-site monitoring, before, during, and after the trial; however, in exceptional circumstances, the sponsor may determine that central monitoring in conjunction with procedures, such as investigators' training and meetings, and extensive written guidance, can assure appropriate conduct of the trial in accordance with GCP. Statistically controlled sampling may be an acceptable method for selecting the data to be verified."
Tools of the trade
Applying a triggered monitoring algorithm requires three primary tools. First, a robust predictive modeling tool is needed, using statistical algorithms that take into account variables such as historical data and the statistical distribution of patients. This enables sponsors to predict enrollment and build an accurate time-and-events schedule. Without this capability, there is a high risk that the fundamental assumptions underpinning the monitoring plan will prove inaccurate.
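As a rough illustration of the kind of prediction involved, the sketch below projects a site's expected workload from an assumed historical enrollment rate. A production tool would use far richer statistical models; every figure here is an assumption.

```python
# Illustrative projection of site workload from a historical enrollment rate,
# used to plan SDV effort and a time-and-events schedule up front.

def expected_site_workload(monthly_enrollment_rate: float,
                           months_remaining: int,
                           visits_per_subject: int,
                           crf_pages_per_visit: int) -> float:
    """Rough expected number of CRF pages a site will generate."""
    expected_subjects = monthly_enrollment_rate * months_remaining
    return expected_subjects * visits_per_subject * crf_pages_per_visit

# A site assumed to enroll 0.8 subjects/month over 6 months, with 5 protocol
# visits per subject and 6 CRF pages per visit:
print(expected_site_workload(0.8, 6, 5, 6))  # 144.0 pages to plan monitoring around
```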
Second, an electronic system is needed to function as a bridge between scheduling systems and EDC. This tool must be capable of extracting data from multiple sources (EDC, IVR, etc.) and feeding them into the CRF scheduling mechanism. For example, if a death occurs at one of the study sites and is noted in the EDC/CRF system, this program must be able to translate the event into a trigger that initiates a monitoring visit.
Finally, a scheduling tool is necessary. Once the study's time-and-events schedule has been built, there must be a program capable of performing the scheduling and adjusting that schedule as new data are incorporated.
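The sketch below illustrates, under assumed record layouts and field names, how the bridge and scheduling roles might interact: an incoming EDC/IVR record is inspected, and a qualifying safety event pulls the site's next monitoring visit forward.

```python
# Illustrative bridge-and-scheduler interaction; the record structure, field names,
# and event types are assumptions for the sake of the example.
from datetime import date, timedelta

def reschedule_on_event(edc_record: dict, next_planned_visit: date, today: date) -> date:
    """Translate an incoming EDC/IVR record into a scheduling decision."""
    if edc_record.get("event_type") in {"death", "serious_adverse_event"}:
        return today + timedelta(days=1)   # pull the visit forward immediately
    return next_planned_visit              # otherwise keep the planned date

record = {"site_id": "1042", "event_type": "serious_adverse_event"}
print(reschedule_on_event(record, next_planned_visit=date(2024, 3, 20),
                          today=date(2024, 2, 20)))  # 2024-02-21
```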
Smarter development processes
With pressure on industry to control costs and do more with less, sponsors can institute smarter processes at every step of the development continuum. Simply put, given the benefits and potential efficiencies triggered monitoring provides, industry should continue supporting triggered monitoring projects and initiatives to define the value of this method and criteria for implementation. Only through these efforts can industry determine how triggered monitoring can improve clinical research while also promoting quality and safety.
Scott Cooley is Executive Director, Product Management, email: [email protected], and Badhri Srinivasan, PhD, is Vice President, Enterprise Transformation Unit, at Quintiles, Durham, NC.
References
1. E. Eisenstein, R. Collins, B. Cracknell et al., "Sensible Approaches to Reducing Clinical Trial Costs," Clinical Trials, 5, 75-84 (2008).
2. C. Baigent, F. Harrell, M. Buyse, J. Emberson, D. G. Altman, "Ensuring Trial Validity by Data Quality Assurance and Diversification of Monitoring Methods," Clinical Trials, 5, 49-55 (2008).