A look at the prevalence of placebo response in clinical trials and how advanced solutions can mitigate the risk it poses to drug development.
Every year, the pharmaceutical industry spends millions in an effort to improve the evaluation of efficacy in clinical trials – including strategies to reduce risk relating to the placebo response. To an extent, the industry has learned to live with this issue because efforts to truly reduce the placebo response appear to have reached their maximum potential. The result? Increased costs. Longer timelines. And persistently high trial failure rates in Phase II and III. In fact, drugs in Phase III have only about a 50% chance of approval in areas like CNS and cardiovascular disease.1
In this article, we discuss the prevalence of placebo response in clinical trials, the history of how trials have addressed it to date, and new approaches leveraging machine learning to predict placebo response and reduce the risk of trial failure.
In principle, a randomized, placebo-controlled trial is designed to clearly demonstrate the efficacy of an experimental therapy over placebo with an acceptable safety profile. In practice, it is rarely this simple, and the data are rarely this clear.
In part, this is because of the placebo response, the measured improvement of a patient in a trial after receiving a sham treatment. Though the placebo contains no active ingredient, patients receiving it experience real clinical improvements that make it difficult to discern drug efficacy. A high placebo response does not necessarily mean the treatment doesn't work, but the variability it introduces generates noise that obscures the efficacy signal. So, trials fail.
The placebo response can account for as much as 60% of the measured treatment response to pharmacologic interventions, regardless of indication.2 High placebo response rates are well recognized in the development of CNS therapies, and they have been implicated in a wide variety of other therapeutic areas, including immunology3–5, dermatology6,7, ophthalmology8 and others. The issue is even more pronounced for drugs targeting pain and central nervous system conditions, which have some of the lowest success rates.1 For example, in osteoarthritis pain, the placebo response accounts for as much as 75% of pain reduction, 71% of functional improvement and 83% of stiffness reduction in patients receiving active drug.9 In depression, the placebo response is responsible for 67% of the measured treatment response, regardless of whether assessments are conducted by the patient or the physician.10 This undoubtedly has jeopardized the ability to clearly demonstrate efficacy of study drugs, preventing needed therapies from reaching patients.
To understand more about why trials fail due to the placebo response, let’s discuss where it actually comes from.
By nature, the placebo response is influenced by many factors, including study biases, symptom reporting errors, regression to the mean, clinical site factors, demographics and, finally, the placebo effect–a complex psychobiological phenomenon with significant psychosocial components.
Some of these influences are extrinsic, meaning they can be addressed by altering certain external circumstances. Others, like the placebo effect, are intrinsic, meaning they are unique to the patient and, thus, are more difficult to influence. For example, the placebo effect is a real, multifactorial change in biochemical pathways in the brain, producing a real and beneficial change in symptoms. It’s inherently patient-specific, influenced by the patient’s expectations for improvement and certain well-defined personality traits.
So, when a patient takes a treatment, there is a chance that the patient will experience a placebo effect, contributing to the overall placebo response. Consequently, the patient-specific nature of the placebo effect may introduce bias and variability into randomized controlled trials (RCTs). This bias and variability have proven difficult, if not impossible, to prevent.
Historical attempts to identify placebo responders involved placebo run-in periods and the exclusion of high placebo responders. With this approach, patients showing an improvement upon administration of the placebo during the pre-study period were excluded from the study.
Unfortunately, results have consistently failed to show a substantial benefit from this approach. A meta-analysis of antidepressant RCTs demonstrated that placebo lead-in phases neither decreased the placebo response nor increased the difference in response between active drug and placebo groups.11,12 Beyond this, run-in periods require additional exposure of patients to experimental therapies, longer study timelines and increased costs–all for little or no benefit.
Trials have also implemented patient training to address symptom reporting errors and site training to moderate staff-patient interactions and avoid expectation inflation. Patient training helps patients learn how to report their symptoms consistently and reliably before the study begins, eliminating some bias caused by inconsistent symptom reporting. Many clinical trials also train site personnel to use certain communication styles and body language techniques that limit any subtle cues, as the empathy of a clinical investigator may influence patient expectations.
For example, training of patients to accurately report pain symptoms improved consistency (correlation between multiple pain reporting instances) by approximately 10%, yet this difference failed to reach statistical significance (p=0.10).13 While these methods may have been moderately successful in reducing data variability, they are also resource demanding (both for the patient and investigator site staff).
Indeed, no universal solution has yet been found to limit the impact of the placebo response across the variety of diseases in drug development. Optimizing study designs may be helpful but may also be a significant source of study bias, and the strategy has been met with regulatory resistance. For example, use of a Sequential Parallel Comparison Design (SPCD) to identify placebo responders was cited as a potential reason for rejection of a recent application for a drug to treat depression.14 Patient and investigator site training has been widely employed in pain trials, although it can be challenging to implement in large, multi-center trials. It is also far more difficult to train patients to accurately report symptoms in studies with endpoints that are highly complex and less intuitive than pain (e.g., Quality of Life scales).
Because no universal solution has been available to the industry, sponsors continue to invest in patient and site training with minimal payoff. The need persists as drug development costs continue to rise and placebo response rates climb.
The advent of advanced data analytic techniques like artificial intelligence (AI)–which includes the sub-discipline of machine learning (ML)–has opened an opportunity to approach this decades-old problem with new tools. AI has primarily been used in drug discovery to predict things like bioactivity, toxicity and drug-protein interactions.15 Applications in clinical trials and drug development have been more limited, focusing on areas like optimizing trial startup, matching patients to trials, and accelerating enrollment and recruitment. Given the complex and dynamic nature of the placebo response, AI presents a ripe opportunity to take the next step forward in mitigating its impact.
Indeed, predictive models powered by machine learning have been built that can predict the individual nature of the placebo response–in other words, predict which patients will experience a very high placebo response and which will experience essentially none.
Predicting placebo responsiveness for every trial patient would allow statisticians to adjust for this variation across the entire clinical trial population. This reduces data variability in both drug-treated and placebo-treated groups, thus improving the ability to detect true drug efficacy.
Once each patient’s placebo responsiveness can be predicted, clinical trial statisticians can account for it in clinical data analysis. The Placebell method developed by Cognivia has been shown to improve assay sensitivity (i.e., the ability to distinguish placebo treatment from drug treatment) by nearly 40%16 and to improve study power by 14% (i.e., decreased risk of trial failure due to false-negative results). This approach can be implemented for about 1-3% of the total per-patient cost of the trial and can be applied to virtually any therapeutic area or indication.
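The mechanism behind this kind of power gain is general: adjusting for a baseline covariate that is prognostic of the outcome shrinks residual variance, which raises the effective standardized effect size. The sketch below illustrates the principle with entirely assumed numbers (effect size, covariate-outcome correlation, sample size) and standard power formulas; it is not a reconstruction of the Placebell analysis.

```python
# Illustrative only: how a prognostic baseline covariate (e.g., a
# predicted placebo-responsiveness score) increases power by reducing
# residual variance. All numbers below are hypothetical assumptions.
import numpy as np
from statsmodels.stats.power import TTestIndPower

d = 0.30          # assumed unadjusted standardized effect size
rho = 0.50        # assumed correlation between covariate and outcome
n_per_arm = 150   # assumed patients per arm

analysis = TTestIndPower()
power_unadjusted = analysis.power(effect_size=d, nobs1=n_per_arm, alpha=0.05)

# Adjusting for a covariate correlated rho with the outcome shrinks the
# residual SD by sqrt(1 - rho**2), inflating the standardized effect size.
d_adjusted = d / np.sqrt(1 - rho**2)
power_adjusted = analysis.power(effect_size=d_adjusted, nobs1=n_per_arm, alpha=0.05)

print(f"Power without covariate: {power_unadjusted:.2f}")
print(f"Power with covariate:    {power_adjusted:.2f}")
```

With these assumed values, power rises noticeably even though the sample size is unchanged, which is exactly why a well-chosen covariate lowers the risk of a false-negative trial.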
The value of this statistical method is clear. So, how do you implement it? To do this, we need two things: a predictive model and patient psychology data.
To predict placebo responsiveness, you need a properly constructed predictive model that combines standard machine learning techniques with deep insight into clinical trials and patient characteristics.
A few key things need to come together to build this model.
First, the right features need to be selected as model inputs. For example, patient characteristics like personality, expectation, baseline disease intensity and demographics may be important.
Second, performance evaluation and model selection should be conducted with robustness in mind, rather than maximum variance explained. Linear models are a good place to start because they are simple and easier to interpret. Finally, it is critical to optimize methods to ensure translatability between trial designs, modes of drug administration and other trial factors. A minimal sketch of this kind of model follows.
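To make these steps concrete, here is a minimal sketch in Python with scikit-learn. Everything in it is illustrative: the file name, feature names and target column are hypothetical, and the model is a generic regularized linear regression under the assumptions above, not Cognivia's actual method.

```python
# A minimal sketch of a placebo-response prediction model. Assumes a
# training set of placebo-arm patients with baseline measurements and
# an observed placebo response (e.g., change from baseline).
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical baseline features: questionnaire-derived personality and
# expectation scores plus demographics and baseline disease intensity.
features = ["optimism", "expectation", "baseline_severity", "age"]
train = pd.read_csv("placebo_arm_training_data.csv")  # hypothetical file
X, y = train[features], train["placebo_response"]

# Favor a simple, regularized linear model for robustness and
# interpretability over maximum variance explained.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))

# Cross-validated R^2 guards against overfitting; a modest but stable
# score is preferable to a high but fragile one.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"CV R^2: {scores.mean():.2f} +/- {scores.std():.2f}")

model.fit(X, y)  # final model used to score new trial patients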
After training and evaluating the predictive model in disease-specific applications, it can be used to define predicted placebo responsiveness for patients based on their unique psychological characteristics. This brings us to the second critical element: gathering patient psychology data.
As discussed, we know that placebo response results from not only extrinsic factors but also factors intrinsic to the patient. To understand and predict placebo response, then, trials should gather data on patient psychology and expectations. This is most easily accomplished with a questionnaire.
This questionnaire must address specific facets of a patient’s personality, their expectations of the treatment, perceptions of the trial, beliefs and motivations. These intrinsic characteristics are critical to understanding how likely a patient is to respond to placebo.
When these data are combined with other baseline data (like demographics and medical history) as inputs to the predictive model, the model can assign each individual patient a score representing their predicted placebo responsiveness.
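Continuing the earlier sketch, assigning the score is then a single prediction step at baseline. The file and column names remain hypothetical, and this snippet assumes the fitted `model` and `features` from the sketch above.

```python
# Score each newly enrolled patient at baseline using the fitted model
# from the sketch above. "baseline_visit_data.csv" is hypothetical and
# must contain the same feature columns used in training.
import pandas as pd

new_patients = pd.read_csv("baseline_visit_data.csv")
new_patients["placebo_score"] = model.predict(new_patients[features])
```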
This allows statisticians to characterize the range of placebo responsiveness at the beginning of the study and define it as a covariate, reducing the impact of the placebo response in the final data analysis.
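In its simplest form, this is an ANCOVA-style adjustment: the predicted score enters the primary analysis model as a pre-specified baseline covariate. Below is a minimal sketch assuming hypothetical column names and a continuous endpoint; in a real trial, the exact model would be defined in the statistical analysis plan.

```python
# ANCOVA-style adjustment: the predicted placebo-responsiveness score
# enters the analysis model as a baseline covariate. Column names are
# hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

trial = pd.read_csv("trial_data.csv")  # hypothetical: one row per patient

# Outcome modeled on treatment arm plus baseline covariates. The
# treatment coefficient then estimates the drug effect with
# placebo-related variance absorbed by the score.
fit = smf.ols(
    "change_from_baseline ~ C(arm) + baseline_severity + placebo_score",
    data=trial,
).fit()
print(fit.summary())
```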
Historically, the industry has used the methods available at the time–ranging from excluding placebo responders identified through placebo run-in periods to training patients to reproducibly report symptoms. Unfortunately, the impact of these methods is modest at best–and has plateaued, even as the placebo response continues to increase over time.17–19 Ultimately, the overall risk of trial failure remains too high, meaning more repeat trials, lost time and premature abandonment of programs.
Using predictive models based on machine learning gives drug developers more advanced options to manage the placebo response, at lower cost and with less patient and site burden than historical methods.
With the help of modern tools, we can begin to think about placebo response in new ways, helping more clinical trials succeed.
Erica Smith, PhD, Chief Business Officer, Cognivia