Using performance indicators in CRO/vendor oversight ranges from the basic to the unreliable to emerging innovations and strategies.
The field of risk-based monitoring (RBM) has introduced many structured approaches to assessing and addressing clinical trial risk. Specifically, RBM has driven the industry to develop standardized key risk and performance indicators for clinical studies; nonetheless, the industry has yet to develop standardized analytical measurements to assess CRO/vendor performance. I recently attended eXl Pharma's CROWN Congress, where there was considerable discussion about assessing vendor performance, and it was apparent that the industry lacks standardized analytical measurements for doing so.
During the conference, many industry and technology presenters focused their CRO/vendor oversight approaches on basic performance metrics: milestone delays (including IRB/protocol approval, first patient in, and last patient out), screen failure rates, and protocol deviations, to name a few. While these metrics can add value, especially from an RBM standpoint, they do not assess the performance of a vendor/CRO, because they do not differentiate between CRO and site performance. Such insights can help sponsors optimize site performance during trial conduct by focusing on poor performers; however, they merely direct efforts at study sites rather than comprehensively uncovering underlying issues with CROs, such as CRA turnover and performance, the strength of the CRO's relationship with the study site, and the site burdens associated with the user friendliness of clinical technology systems.
A few sponsors spoke about their approaches to evaluating CRO performance, which involved the traditional vendor feasibility process and scorecards. Selection criteria included therapeutic expertise, technology systems (e.g., CTMS, EDC, ePRO), enrollment strategies, risk identification/mitigation approaches, project management tools, resources allocated per site, SOP infrastructure, how the sponsor's study team interacts with the CRO's study team, and cost.
However, without underlying performance analytics, this evaluation process can be unpredictable and risky. To analogize, it is similar to hiring a candidate who performed well in the interview but did not meet expectations in the role; it doesn't always happen, but it does happen, and the risk is higher if the sponsor is inexperienced with CRO oversight or is evaluating an inexperienced CRO. Selecting an underperforming CRO not only delays timelines but also poses significant risks to the study's budget, especially if the sponsor must select another CRO in rescue mode.
More experienced sponsors leveraged advanced strategic planning techniques and analytical tools to define and measure CRO performance indicators. Notable indicators included study conduct (e.g., site relationship health, investigational medicinal product accountability and deviations, consent form quality), treatment and compliance (e.g., percentage of dose reductions and days on treatment), safety reporting (e.g., AE/SAE rates), and data management (e.g., percentage of overdue forms, query rates, and query resolution time). PPD's Preclarus system was mentioned as a CRO performance measurement technology. These factors can be linked directly to CRO performance; nevertheless, there was no mention of leveraging standardized, benchmarked performance metrics to evaluate CRO performance during the feasibility process. Moreover, the presentations neglected assessing CROs from the study site's standpoint, a critical area that affects study performance.
In one interesting presentation, a head clinical research coordinator at a study site spoke about challenges stemming from poor sponsor/CRO performance, including contract/budgetary challenges, IRB approval hindrances, and software/technology issues (e.g., too many different technology systems and dual data entry). Clinical SCORE, a service provider, introduced a market research solution that uncovers the site's perspective on study performance issues, specifically unveiling CRO and sponsor performance, allowing study teams to optimize decision-making based on anonymous and highly granular study site feedback. The system compares the study's performance to a normative data set of global trial performance to evaluate areas of outperformance/underperformance. Some of the most common site hindrances included issues with CRAs, sponsors, passwords/software, and hardware/vendors. "It is important for study teams to rapidly identify all issues hindering completion of clinical studies by gathering key insights directly from investigators and coordinators. Comparing these findings to a normative database of global clinical trial performance is very powerful at removing roadblocks and getting trials back on track," said Alec Pettifer, VP of Operations at Clinical SCORE.
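To make the normative-comparison idea concrete, the sketch below shows one simple way a study team could score observed metrics against a benchmark distribution and flag large deviations. This is a hypothetical illustration, not Clinical SCORE's actual method: the metric names, values, and z-score threshold are invented for the example.

```python
# Hypothetical sketch of a normative benchmark comparison.
# All metric names, values, and the threshold are illustrative.

from statistics import mean, stdev

def flag_outliers(study_metrics, normative_data, z_threshold=1.5):
    """Flag metrics where the study deviates notably from the benchmark norm.

    study_metrics:  {metric_name: observed_value for this study}
    normative_data: {metric_name: [values from comparable trials]}
    Returns {metric_name: z_score} for metrics beyond the threshold.
    """
    flags = {}
    for metric, observed in study_metrics.items():
        norm = normative_data.get(metric)
        if not norm or len(norm) < 2:
            continue  # not enough benchmark data to compare against
        mu, sigma = mean(norm), stdev(norm)
        if sigma == 0:
            continue  # no variation in the norm; z-score undefined
        z = (observed - mu) / sigma
        if abs(z) >= z_threshold:
            flags[metric] = round(z, 2)
    return flags

# Example: query resolution time is far above the benchmark norm,
# while the screen failure rate sits comfortably within it.
study = {"query_resolution_days": 14.0, "screen_failure_rate": 0.22}
norms = {
    "query_resolution_days": [6.0, 7.5, 8.0, 6.5, 7.0],
    "screen_failure_rate": [0.18, 0.25, 0.20, 0.22, 0.19],
}
print(flag_outliers(study, norms))  # → {'query_resolution_days': 8.85}
```

A real normative database would of course cover many more metrics and trials, and a production tool would account for therapeutic area, study phase, and region when choosing comparators; the z-score here simply stands in for "how unusual is this study relative to its peers."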
Another technology enterprise, CRO Analytics, leverages Quality Performance Indicators (QPIs) aimed at shortening clinical trial duration and reducing costs. CRO Analytics’ software application measures functional and professional skill factors that impact clinical trial performance, and that data enables mid-trial adjustments, prioritizes functional improvements, and creates data-driven vendor relationships. “Sponsors have to measure service quality to understand what is driving their key performance indicators,” said Peter Malamis, CEO of CRO Analytics. “If the industry continues to simply rely on operational metrics to guide performance improvement efforts, the frustration they feel now will only grow,” added Malamis.
The biopharmaceutical industry recognizes that it needs better ways to measure vendor performance. Some biopharmaceutical enterprises are exploring innovative approaches to vendor performance measurement; however, these methods are applied mid-trial, without any form of benchmarking or predictive risk mitigation during feasibility. New service providers offering benchmarking services are entering the industry with innovative solutions to better measure vendor performance.