Dominique Demolle, CEO of Tools4Patient, discusses how artificial intelligence is being used to address the statistical impact of the placebo response in clinical trials.
The placebo response has a significant biostatistical impact on clinical trials; significant enough to make a trial fail or to skew data toward unfavorable outcomes. However, novel statistical approaches to mitigating the placebo response are emerging that use artificial intelligence (AI) to create a personalized baseline for each patient, ultimately strengthening statistical results. In this interview, Dominique Demolle, CEO of Tools4Patient, discusses the effect AI is having on the statistical impact of the placebo response in clinical trials.
Moe Alsumidaie: Please describe the clinical trial placebo response and how it presents challenges in placebo-controlled studies.
Dominique Demolle: The gold standard of clinical trials is the placebo-controlled study, in which a therapy’s benefit is compared to a placebo. The placebo response consists of patients perceiving or experiencing improvements, and some of this symptomatic relief may result from the psychological effects of receiving treatment. As much as 69% of Phase II and III clinical trial failures occur due to safety issues or the inability to demonstrate clear superiority of the tested therapy versus a placebo, regularly leading to the premature abandonment of entire development programs.
In indications such as pain, Parkinson’s disease, inflammatory diseases and psychiatric conditions, the placebo response may have a pronounced effect on the primary outcomes of clinical studies and jeopardize a trial’s ability to demonstrate efficacy, amounting to a false negative. A high placebo response is also significant across a range of other therapeutic areas, including dermatology, ophthalmology and women’s health.
Alsumidaie: How can clinical operations and medical monitoring personnel design protocols to avoid these problems? What study designs prevent them, and are they feasible given the FDA’s requirements for statistical analysis?
Demolle: Several strategies exist. In terms of study design, a common one is a placebo “lead-in” phase, in which all patients first receive a placebo for up to several weeks. Placebo responders are then either stratified across the study arms or excluded before randomization into the treatment phase of the trial. The removal of placebo responders has to be done with caution because, in some indications, placebo responders can also be good drug responders. This strategy also would not apply to later-stage trials, as the patient population exposed to the drug would not be representative of the overall population. Other techniques to reduce errors in the measurement of disease intensity include training of site personnel and patients; in some indications, this may also help to reduce the variability associated with the placebo response. These strategies, however, do not solve the crippling impact of the placebo effect on clinical trials. Regulators appreciate the need for a new tool that enables the evaluation of true drug efficacy by reducing the negative impact of the placebo effect, and novel methods are using AI to help overcome this problem.
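To make the lead-in paradigm concrete, here is a minimal sketch in Python of how a sponsor might flag placebo responders during a run-in period and then either exclude them or stratify randomization by responder status. The 20% improvement threshold, the column names and the simulated data are purely illustrative assumptions, not a regulatory standard or Tools4Patient’s method.

```python
import numpy as np
import pandas as pd

# Simulated lead-in data: every patient receives placebo before randomization.
rng = np.random.default_rng(0)
lead_in = pd.DataFrame({"patient_id": range(200),
                        "baseline_score": rng.normal(60, 10, 200)})
lead_in["end_of_lead_in_score"] = lead_in["baseline_score"] - rng.normal(5, 8, 200)
lead_in["pct_improvement"] = ((lead_in["baseline_score"] - lead_in["end_of_lead_in_score"])
                              / lead_in["baseline_score"])

# Flag placebo responders (illustrative threshold: >=20% improvement on placebo alone).
lead_in["placebo_responder"] = lead_in["pct_improvement"] >= 0.20

# Option A: enrichment design - exclude responders before randomization.
eligible = lead_in[~lead_in["placebo_responder"]]

# Option B: keep everyone but stratify randomization by responder status,
# so responders are balanced between the active and placebo arms.
arms = []
for _, group in lead_in.groupby("placebo_responder"):
    alternating = np.where(np.arange(len(group)) % 2 == 0, "active", "placebo")
    arms.append(pd.Series(alternating, index=group.index))
lead_in["arm"] = pd.concat(arms)

print(lead_in.groupby(["placebo_responder", "arm"]).size())
```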
Alsumidaie: Can you discuss these novel placebo response methods?
Demolle: There are innovative solutions that improve test sensitivity by characterizing and managing the individual placebo response using novel covariates, in a variety of disease states where the placebo effect masks the true efficacy of potentially essential therapies. This method calculates a baseline covariate for each patient: before receiving the first study dose, each patient is characterized with respect to his or her placebo response propensity. The method integrates each clinical trial participant’s personality traits, disease characteristics and demographics into a model powered by an AI-based algorithm. The covariate can then be used to inform internal decisions or to support decisions in regulatory-compliant statistical analyses.
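A minimal sketch of the general idea follows, assuming a supervised model trained to predict placebo response from baseline characteristics; the feature names, the gradient-boosting model choice and the function names are hypothetical illustrations, not Tools4Patient’s proprietary algorithm.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical baseline features: personality traits, disease characteristics
# and demographics captured before the first study dose.
FEATURES = ["optimism", "expectation_of_benefit", "baseline_severity",
            "disease_duration_years", "age", "sex_male"]

def fit_propensity_model(placebo_patients: pd.DataFrame) -> GradientBoostingRegressor:
    """Learn to predict the change observed under placebo from baseline features."""
    model = GradientBoostingRegressor(random_state=0)
    model.fit(placebo_patients[FEATURES], placebo_patients["observed_placebo_change"])
    return model

def add_baseline_covariate(model, patients: pd.DataFrame) -> pd.DataFrame:
    """Score patients before the first dose; the prediction becomes a baseline covariate."""
    out = patients.copy()
    out["placebo_propensity"] = model.predict(out[FEATURES])
    return out
```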
Alsumidaie: Are these baselines founded on individual patient characteristics, or do they aggregate these data to create an amalgamated baseline? How does that work when you’re evaluating a specific endpoint?
Demolle: Novel methods that are now coming into practice involve generating a composite – or amalgamated – baseline covariate, in which multiple factors are combined in a unique way for each disease state. This method utilizes AI to calibrate the model for each disease, thus determining the respective features that are included in this composite covariate and their weights. This calibration exercise must first be conducted on a subset of placebo-treated patients for the particular endpoints that are being evaluated in clinical trials for a specific disease state. Once this calibration is complete, the model can be applied to all patients (including those treated with placebo and active drug) in subsequent analyses.
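Continuing the hypothetical sketch above, the two-step workflow described here could look like the following: calibrate the model on a subset of placebo-treated patients for the endpoint of interest, then apply the frozen model to every randomized patient and carry the resulting covariate into the efficacy analysis. The ANCOVA formula, the variable names and the data frames (placebo_calibration_subset, all_randomized) are assumptions for illustration only.

```python
import statsmodels.formula.api as smf

# Step 1: calibrate on placebo-treated patients only (one calibration per
# disease state and endpoint, as described above).
model = fit_propensity_model(placebo_calibration_subset)

# Step 2: apply the frozen model to all randomized patients, placebo and active.
analysis_df = add_baseline_covariate(model, all_randomized)

# The treatment effect is then estimated adjusting for the AI-derived covariate.
ancova = smf.ols(
    "change_from_baseline ~ treatment + baseline_severity + placebo_propensity",
    data=analysis_df,
).fit()
print(ancova.summary())
```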
Alsumidaie: How do these novel methods impact data submissions to regulatory authorities?
Demolle: Regulators expect to see a justification of sample size based on conventional statistical methods, historical data and previous clinical data. With the deployment of new placebo response statistical methods, sponsors will be able to reduce sample sizes in the foreseeable future; today, we are enabling sponsors to make better use of existing sample sizes in cases where the placebo response may substantially impact data variability. Baseline covariates have been used in clinical trial data analysis for several decades to reduce errors in the measurement of efficacy. With this approach, study leaders are better able to characterize the population, the disease and the response to treatment, thereby more accurately estimating the power of the study. Baseline covariates are commonly used in the analysis of primary endpoints in pivotal trials and are generally considered acceptable by the FDA and EMA.
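As a back-of-the-envelope illustration of why a variance-reducing baseline covariate can shrink the required sample size, here is the standard two-sample approximation in which a covariate explaining a fraction R² of the outcome variance reduces the residual variance, and hence the required number of patients per arm, by roughly the factor (1 - R²). The effect size, standard deviation and R² below are made-up numbers, not results from any Tools4Patient study.

```python
from scipy.stats import norm

def n_per_arm(delta, sigma, r_squared=0.0, alpha=0.05, power=0.80):
    """Approximate patients per arm for a two-sample comparison of means,
    with residual variance reduced by a covariate explaining r_squared."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z ** 2) * (sigma ** 2) * (1 - r_squared) / delta ** 2

print(round(n_per_arm(delta=3, sigma=10)))                 # about 174 per arm, unadjusted
print(round(n_per_arm(delta=3, sigma=10, r_squared=0.3)))  # about 122 when the covariate explains 30% of variance
```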