AI Already Starting to Deliver Faster, Safer, More Effective Clinical Trials

Commentary

From design and trial start-up to conduct and analysis, there is enormous potential for applications of artificial intelligence within clinical trials to have a profound impact on human health.


Artificial intelligence (AI) has had an undeniable impact on society in the past few years, fundamentally changing how people across diverse industrial sectors do their jobs day to day. Within pharma, drug discovery has received most of that attention.

Last month alone, we saw Insilico announce that its lead candidate had met its primary endpoints and that it is designing a Phase IIb trial, while Isomorphic Labs, a DeepMind spinout, issued £182 million in new shares. It is also worth looking at the impact of AI on trial management and clinical efficacy, an area still in its infancy, as we think this could grow into an even bigger force, fundamentally transforming how we approach drug development.

There are many areas in which AI could improve speed, reduce complexity, and consequently improve efficiency in workflows and costs. From design and trial start-up to conduct and analysis, there is enormous potential for applications of AI within clinical trials to have a profound impact on human health.

One of the most promising areas for AI to improve clinical research is site feasibility and selection. Picking the right sites, optimized for the eligibility criteria and trial specifications, is perhaps the single most important decision affecting the success or failure of a trial because of its direct impact on enrollment rates.

Trials that don’t enroll quickly enough risk failing to achieve the statistical power needed to conclude that the drug under investigation is safe and effective, and this remains a top cause of trial failure today. AI prediction engines, which use large language models (LLMs) to read a protocol, understand its context, and select the “best” sites for the protocol, are starting to become available.

Yet these engines are, at best, on par with (and sometimes worse than) human performance on the same task. The reason is a lack of available site performance data aligned with the key metrics that would drive successful predictions.

The key to overcoming this is making better use of the existing data available for training. We use data pipelines that aggregate KPIs and raw data from past protocols, while being careful to clean these pipelines so that no sensitive or client-confidential information is inappropriately disclosed.
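As a rough illustration of what such a pipeline might look like, the sketch below aggregates per-site enrollment KPIs from historical study extracts and drops columns that could carry sensitive or client-confidential information. The column names and file layout are assumptions for the sake of the example, not a description of any specific production system.

```python
import pandas as pd

# Columns assumed (hypothetically) to exist in historical study extracts.
SENSITIVE_COLUMNS = ["sponsor_name", "investigator_name", "site_contact_email"]

def load_and_scrub(path: str) -> pd.DataFrame:
    """Load one historical study extract and drop sensitive or confidential fields."""
    df = pd.read_csv(path)
    return df.drop(columns=[c for c in SENSITIVE_COLUMNS if c in df.columns])

def aggregate_site_kpis(paths: list[str]) -> pd.DataFrame:
    """Aggregate per-site KPIs (enrollment and screen-failure rates) across studies."""
    history = pd.concat([load_and_scrub(p) for p in paths], ignore_index=True)
    return (
        history.groupby(["site_id", "therapeutic_area"])
        .agg(
            studies=("study_id", "nunique"),
            median_enrollment_rate=("patients_per_site_month", "median"),
            screen_failure_rate=("screen_failures", "mean"),
        )
        .reset_index()
    )

# Example: site_kpis = aggregate_site_kpis(["study_001.csv", "study_002.csv"])
```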

This is something contract research organizations (CROs) and sponsors could learn from and attempt to implement. Another approach to improving reliability is to combine the model with publicly available data so that it can make better predictions.

Taking this a step further, we can apply machine learning (ML) techniques that incorporate feedback from trial outcomes to further improve site feasibility and selection. The more we do this as an industry, the bigger the data pool and the greater the accuracy.
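One simple way to picture that feedback loop, purely as a sketch with assumed feature names, is a model that is refit as each completed trial contributes observed enrollment outcomes for the sites it used:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical site-level features; real systems would use richer inputs.
FEATURES = ["past_enrollment_rate", "screen_failure_rate", "staff_count", "competing_trials"]

def refit_site_model(history: pd.DataFrame) -> GradientBoostingRegressor:
    """Refit the site-performance model on all (site, trial) outcomes observed so far."""
    model = GradientBoostingRegressor()
    model.fit(history[FEATURES], history["observed_enrollment_rate"])
    return model

def rank_candidate_sites(model: GradientBoostingRegressor, candidates: pd.DataFrame) -> pd.DataFrame:
    """Score candidate sites for a new protocol and rank by predicted enrollment."""
    scored = candidates.copy()
    scored["predicted_enrollment_rate"] = model.predict(candidates[FEATURES])
    return scored.sort_values("predicted_enrollment_rate", ascending=False)
```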

The next use for AI is to help shorten timelines before and after the study conduct period. As an example, the protocol authoring process is slow and ripe for disruption.

Medical writers prepare to author protocol drafts by reading synopses and relevant medical information about the disease being studied, and then start the authoring process with a protocol template, using past protocols as examples. What if this research gathering could be conducted at scale by an AI, using a standardized and templated approach?

For example, LLMs can synthesize large amounts of information and, using carefully crafted prompts, autofill a draft much as a medical writer would. Good training data can make these drafts highly accurate; they can then be passed to a human editor to ensure a quality protocol is written, essentially removing the grunt work and leaving trial designers to add experience and finesse.
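A minimal sketch of that autofill step could look like the following. The prompt wording, the model choice, and the use of the OpenAI Python client are assumptions for illustration rather than a description of any particular product; any LLM provider could be substituted.

```python
from openai import OpenAI  # any LLM client could be swapped in here

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a medical writer. Draft the requested protocol section from the "
    "synopsis provided, follow the sponsor's template structure, and flag any "
    "missing information as [TO CONFIRM] rather than inventing it."
)

def draft_protocol_section(synopsis: str, section_name: str) -> str:
    """Produce a first draft of one protocol section for human review and editing."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is illustrative
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Section: {section_name}\n\nSynopsis:\n{synopsis}"},
        ],
    )
    return response.choices[0].message.content

# Example: draft = draft_protocol_section(synopsis_text, "Inclusion and Exclusion Criteria")
```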

It’s much like how junior lawyers create base drafts for senior partners to tailor. In the same way, AI prediction engines can estimate site and patient burden by comparing protocol drafts to past protocols, ingesting aggregated performance data from those protocols, and summarizing it for the author. The writing team can use these summaries to consider design changes that would reduce burden, and the AI could also suggest such changes.

By our best estimates, protocol digitization capabilities can reduce build times by as much as 30%, and we invested in this area recently. Our approach was to turn unstructured protocol documents into structured Excel files, in which each important element is extracted into a format that is both human- and machine-readable. This Excel file can then be loaded into various electronic data capture (EDC) build systems. It’s a simple idea in many ways, but it provides the standardization needed to build complete study models.
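The sketch below shows the flavor of that digitization output, writing a handful of extracted protocol elements into a structured workbook. The sheet names, columns, and values are hypothetical, not the actual format used by any specific EDC build system; in practice these tables would be populated by parsing the protocol document.

```python
import pandas as pd

# Hypothetical extracted elements; in practice these come from parsing the protocol.
visit_schedule = pd.DataFrame(
    {
        "visit_name": ["Screening", "Baseline", "Week 4", "Week 8"],
        "study_day": [-14, 1, 28, 56],
        "window_days": [3, 0, 7, 7],
    }
)

crf_elements = pd.DataFrame(
    {
        "form": ["Vital Signs", "Vital Signs", "Adverse Events"],
        "field": ["Systolic BP", "Diastolic BP", "AE Term"],
        "data_type": ["integer", "integer", "text"],
    }
)

# One workbook, one sheet per structured element: human-readable and machine-loadable.
with pd.ExcelWriter("protocol_structured.xlsx") as writer:
    visit_schedule.to_excel(writer, sheet_name="VisitSchedule", index=False)
    crf_elements.to_excel(writer, sheet_name="CRFElements", index=False)
```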

The next step is to build a “metadata ingestion capability” into your eClinical platform. Putting the jargon aside, this means extracting structured data elements from the protocol, such as visit schedules, cohorts, and case report form data elements, and feeding them to an algorithm that can automatically generate electronic case report forms, visit schedules, edit checks, and other database configurations. Once the data is structured in this way, you start to reap the rewards of the technology, but you must build this in from inception.
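Stripped to its essentials, and reusing the hypothetical workbook layout from the previous sketch, the ingestion step reads those structured sheets and emits a draft build configuration for the downstream system:

```python
import pandas as pd

def build_edc_config(workbook_path: str) -> dict:
    """Turn structured protocol metadata into a draft database build configuration."""
    visits = pd.read_excel(workbook_path, sheet_name="VisitSchedule")
    fields = pd.read_excel(workbook_path, sheet_name="CRFElements")

    return {
        "visit_schedule": visits.to_dict(orient="records"),
        "forms": [
            {"form": form, "fields": grp["field"].tolist()}
            for form, grp in fields.groupby("form")
        ],
        # Simple auto-generated edit checks: require every extracted field to be populated.
        "edit_checks": [
            {"field": row["field"], "rule": "required", "type": row["data_type"]}
            for _, row in fields.iterrows()
        ],
    }

# Example: config = build_edc_config("protocol_structured.xlsx")
```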

Using this approach, the database build stage can potentially be completed weeks earlier than before, ensuring systems are ready in production as soon as sites are activated. Another application of AI is to help shorten the time to database lock and delivery of the clinical study report.

For example, in mid-2022, we achieved automatic data standardization to a relational format based on CDASH, with SDTM controlled terminology incorporated. The benefit is that the automation does about 95% of the mapping required for an FDA submission (full SDTM).
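To make the idea concrete, here is a deliberately simplified sketch of what automated mapping from CDASH-style collected fields to SDTM variables can look like. It covers only two vital-signs fields with assumed source column names, and is illustrative rather than a description of the actual rule set.

```python
import pandas as pd

def map_vitals_to_sdtm(collected: pd.DataFrame) -> pd.DataFrame:
    """Reshape collected vital-signs fields into SDTM VS records (one row per test).

    Assumes collected columns SUBJID, VSDAT, SYSBP, and DIABP, a hypothetical
    CDASH-style layout used only for illustration.
    """
    records = []
    for _, row in collected.iterrows():
        for testcd, value in (("SYSBP", row["SYSBP"]), ("DIABP", row["DIABP"])):
            records.append(
                {
                    "DOMAIN": "VS",
                    "USUBJID": row["SUBJID"],
                    "VSTESTCD": testcd,     # SDTM controlled terminology code
                    "VSORRES": value,       # result as originally collected
                    "VSDTC": row["VSDAT"],  # ISO 8601 date expected in SDTM
                }
            )
    return pd.DataFrame(records)
```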

More generally, having standard data available upfront, and in real time after collection, is a huge boon to data management. It enables you to write edit checks against the standard data instead of study-specific data dictionaries, which leads to much greater reuse of library content and helps find a similar or greater number of issues in the data with fewer checks.
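Because the checks are expressed against standard variable names rather than a study-specific dictionary, the same rule can run unchanged across studies. A minimal sketch, with an assumed plausibility window rather than a clinical reference range, might be:

```python
import pandas as pd

def check_systolic_bp_range(vs: pd.DataFrame, low: float = 60, high: float = 250) -> pd.DataFrame:
    """Flag implausible or missing systolic blood pressure values in an SDTM-shaped VS dataset.

    Works on standard variables (VSTESTCD, VSORRES, USUBJID), so the same check is
    reusable across studies without study-specific configuration.
    """
    sysbp = vs[vs["VSTESTCD"] == "SYSBP"].copy()
    sysbp["value"] = pd.to_numeric(sysbp["VSORRES"], errors="coerce")
    flagged = sysbp[(sysbp["value"] < low) | (sysbp["value"] > high) | sysbp["value"].isna()]
    return flagged[["USUBJID", "VSDTC", "VSORRES"]]
```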

In layman’s terms, this standardization provides greater reusability of code for reports and analyses, meaning the lessons of previous trials stay learned. When you put all this together, you can build a portfolio of interactive reports and pipelines that automate the generation of tables, figures, and listings. Our estimate is that you can reach analysis-related milestones in half the time they would otherwise take.
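A toy example of that reusability: a summary table that works for any study with a standardized demographics dataset, because it relies only on standard variable names.

```python
import pandas as pd

def age_summary_by_arm(dm: pd.DataFrame) -> pd.DataFrame:
    """Summarize age by treatment arm from a standard DM (demographics) dataset.

    Relies only on the standard ARM and AGE variables, so the same function can
    produce this table for any study with standardized data.
    """
    return (
        dm.groupby("ARM")["AGE"]
        .agg(n="count", mean="mean", sd="std", min="min", max="max")
        .round(1)
        .reset_index()
    )
```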

Another example is risk-based monitoring and quality management (RBQM). Here, the goal is to quickly identify problems in clinical trials, allowing for timely adaptation and intervention when necessary.

AI-enhanced tooling can optimize the risk assessment and mitigation plans for a trial, given the protocol, critical data, and processes, by learning across trials. It can then help automatically detect issues in data quality, compliance, and safety. Real-time monitoring of these identified risks is tied back to the mitigation plan and allows for timely intervention, greatly reducing inefficiencies in addressing a trial's underlying operational risks.

Emmes, in fact, famously used risk-based monitoring technology as far back as the 2009 H1N1 pandemic, so it has some pedigree. AI models can take this much further, enabling us to detect risk across sites or among specific people involved in trial operations and driving adjustments to the monitoring strategy so that available resources go to the sites with the highest risk. The AI acts as a “big brother” that is always watching, and in clinical trials, that’s a good thing.
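A simple way to picture cross-site risk detection, using made-up metric names, is to standardize each site's operational metrics against the study-wide distribution and rank sites for monitoring attention:

```python
import pandas as pd

# Hypothetical per-site operational metrics; higher values assumed to mean higher risk.
RISK_METRICS = ["query_rate", "protocol_deviation_rate", "ae_underreporting_score"]

def rank_sites_by_risk(site_metrics: pd.DataFrame, top_n: int = 5) -> pd.DataFrame:
    """Rank sites by a composite risk score built from z-scored operational metrics.

    `site_metrics` has one row per site with the columns listed above; the top-ranked
    sites are candidates for additional monitoring resources.
    """
    scored = site_metrics.copy()
    z = (scored[RISK_METRICS] - scored[RISK_METRICS].mean()) / scored[RISK_METRICS].std()
    scored["risk_score"] = z.mean(axis=1)
    return scored.sort_values("risk_score", ascending=False).head(top_n)
```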

While it’s tempting to view the potential of AI in clinical trials through rose-colored glasses, the reality is that AI tools are still only as good as their training (and, for LLMs, their prompts). Evaluating the performance and reliability of an AI platform in clinical trials requires building guardrails into the solution, which is a must-do from a safety and ethics standpoint.

For example, engineered prompts can control response quality, and independent AI instances can critically review outputs for correctness and safety risks. Solutions that deliver medical knowledge should be designed with a human reviewer in the loop to protect patient safety, maintaining vital human accountability in the process.
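As one illustration of the independent-reviewer pattern, a second model instance with its own prompt can critique a generated draft before it ever reaches the human reviewer. The prompt, model choice, and client are assumptions made for the sake of the sketch.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REVIEWER_PROMPT = (
    "You are an independent reviewer. Examine the draft below for factual errors, "
    "unsupported medical claims, and patient-safety risks. List each concern with "
    "the exact text it refers to; reply 'NO ISSUES FOUND' only if none exist."
)

def independent_review(draft: str) -> str:
    """Ask a separate model instance to critique a generated draft before human review."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is illustrative
        messages=[
            {"role": "system", "content": REVIEWER_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content
```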

From a regulatory point of view, AI doesn’t change the requirements for auditability or traceability of the data, nor does it change who is accountable for each aspect of the clinical trial. The biggest challenge to growing AI use is acceptance and the change required of its users.

In summary, AI will be most effective when it augments human capabilities. To achieve this, we need to consider not only the technology, but also how it fits into workflows and how to upskill CRO teams to get the most out of it. Over the next five years, we will see AI solutions design better studies, reduce site and participant burden, accelerate information exchange, reconcile data, and automatically draft patient profiles and clinical study reports.

We should all be rooting for these solutions, as improving technology will allow more clinical trials to be run, deliver results more quickly, and ultimately lead to faster treatments and cures for diseases that impact all of humanity.

About the Author

Noble Shore, VP, Technology Strategy & Product Adoption, Veridix AI.
