Are Any Data in Clinical Trials Better than No Data At All?

Commentary
Applied Clinical Trials, November 2024, Volume 33, Issue 11

The answer comes down to context of use—and knowing the motives for "missingness."

Scottie Kern, Executive Director, eCOA Consortium at Critical Path Institute


The expectation placed on clinical trial participants to provide accurate and timely data detailing their perspective and experience during a trial demands careful consideration. Electronic capture of clinical endpoint data is becoming increasingly dominant, and yet it is notable that, in some respects, the industry continues to critique electronic methods in a way that paper-based methods are not.

One such dynamic is the persistent anxiety around technical issues arising with electronic systems and the consequences when they do. They do arise, and to pretend otherwise is an exercise in futility, but the setup of any clinical trial should incorporate a thorough risk analysis that factors in what happens when the tools used to collect data are, for whatever reason, unavailable or unusable. To that end, paper could be considered a reasonably robust technology in itself; the issues with paper pertain to how it is used. Key facets of electronic data capture, such as accuracy, contemporaneousness, and the associated enhancement of quality, can be compromised if the primary mode of data capture isn't employed consistently throughout a trial, so backup strategies are planned for and occasionally employed.

In the context of electronic clinical outcome assessment (eCOA), this may mean alternative electronic modes (e.g., a web backup) or the use of an interview or assisted-completion approach. From an analysis perspective, this raises an interesting philosophical question: are data collected via a mechanism other than the primary mode dictated in the clinical trial protocol, data that may well be of lower quality than the primary mode, preferable to having no data at all? This question was the subject of a recent discussion between eCOA and allied service providers of the Critical Path Institute's (C-Path) eCOA Consortium, sponsors from C-Path's Patient-Reported Outcomes (PRO) Consortium, and colleagues from the FDA.

Ultimately, context is everything: what the data are and how they will be used. The fourth guidance in the FDA's patient-focused drug development guidance series1 gives clear direction that the reasons for data being missing are key and ultimately inform the analysis of such data. At the heart of the matter are two concepts: dealing with missing data and understanding why they are missing. From an eCOA perspective, there are two areas of interest: missing item responses within a COA measure, and an entire COA being missing. In either case, the key concern is whether data are missing intentionally or accidentally. Missingness is generally classified as "missing completely at random" (MCAR), "missing at random" (MAR), or "missing not at random" (MNAR), the last being of the highest concern because the missing data systematically differ from the observed values. In any case, we need to know why something is missing.
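
To make that taxonomy concrete, the minimal sketch below simulates the three mechanisms on a hypothetical 0–10 pain item. The variable names, probabilities, and thresholds are illustrative assumptions, not drawn from any real measure or trial.

```python
# Illustrative sketch only: a toy simulation of the three missing-data
# mechanisms using a hypothetical PRO "pain score" (0-10). None of the
# variable names or probabilities come from a real trial or measure.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(18, 85, size=n),
    "pain_score": rng.integers(0, 11, size=n).astype(float),
})

# MCAR: every response has the same 10% chance of being missing,
# regardless of anything observed or unobserved.
mcar = df["pain_score"].mask(rng.random(n) < 0.10)

# MAR: missingness depends on an *observed* variable (here, age);
# older participants are assumed to skip the item more often.
p_mar = np.where(df["age"] >= 65, 0.25, 0.05)
mar = df["pain_score"].mask(rng.random(n) < p_mar)

# MNAR: missingness depends on the *unobserved* value itself;
# participants with severe pain are assumed to skip the item.
p_mnar = np.where(df["pain_score"] >= 8, 0.40, 0.05)
mnar = df["pain_score"].mask(rng.random(n) < p_mnar)

# Under MNAR, the observed mean is systematically biased low, which is
# why that mechanism is the greatest concern.
print(f"True mean:          {df['pain_score'].mean():.2f}")
print(f"Observed mean MCAR: {mcar.mean():.2f}")
print(f"Observed mean MAR:  {mar.mean():.2f}")
print(f"Observed mean MNAR: {mnar.mean():.2f}")
```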

eCOA and ‘missingness’

In the context of eCOA systems, some sponsors have endeavored to collaborate with eCOA providers to define reasons for missingness, which appears to me to be an opportunity for cross-industry collaboration. But asking trial participants to record such explanatory data points raises the specter of additional burden, and we do not yet have empirical evidence to prove whether that is the case. In fact, there is a dearth of analysis around the burden of using eCOA systems in general.

And yet, the potential risk is high; where key endpoints are affected by missing data, there may be significant consequences for the power calculations underpinning a trial and, hence, for the value of the trial. Similarly, missing data can induce bias in the estimation of any treatment effect or safety signal. In this scenario, it's not unreasonable to conclude that a general strategy of capturing every bit of data we can, irrespective of its inherent quality, is the better course. And yet, no one rule governs all; in trials with comparatively small populations, such as in rare diseases, the importance of every data point is amplified.
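
To illustrate the power consequence alone (bias depends on the missingness mechanism and is not captured here), the minimal sketch below shows how a complete-case analysis loses power as primary-endpoint data go missing in an assumed two-arm parallel design. The effect size, alpha, target power, and missingness rates are illustrative assumptions, not taken from any specific trial.

```python
# Illustrative sketch only: power erosion from missing primary-endpoint
# data in an assumed two-arm parallel design analyzed by a t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.4   # assumed standardized effect (Cohen's d)
alpha = 0.05

# Sample size per arm needed for 90% power with complete data.
n_per_arm = analysis.solve_power(effect_size=effect_size, alpha=alpha,
                                 power=0.90, alternative="two-sided")
print(f"Planned n per arm for 90% power: {n_per_arm:.0f}")

# If a share of primary-endpoint observations is missing and simply
# dropped from a complete-case analysis, power falls accordingly.
for missing_rate in (0.0, 0.10, 0.20, 0.30):
    effective_n = n_per_arm * (1 - missing_rate)
    power = analysis.power(effect_size=effect_size, nobs1=effective_n,
                           alpha=alpha, alternative="two-sided")
    print(f"{missing_rate:.0%} missing -> power {power:.2f}")
```

The sketch only quantifies the loss of sample size; it says nothing about the bias that can arise when the missingness is not at random, which is the deeper concern discussed above.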

As obvious as it seems, avoiding missing data is unquestionably the best approach, and we know how important it is to provide impactful training and to foster among participants a genuine understanding of what data are being collected and why. We also need to look at what clinical trial protocols incorporate in that regard. While there is evidently growing concern around the increasing complexity of trials and the associated protocols that govern them, the protocol is key to defining strategies to avoid data missingness.

Trial designs must account for missing data, and a good trial protocol will dictate what happens in scenarios where data might be lost for whatever reason. Absent explicit instruction to the contrary, the default is likely to be to collect data by any available means. By doing so, a decision can be made about the utility of those data at a later stage, which is not an option if the data aren't collected at all.

Reviewing missing data trends is also crucial, as this can provide insights into the appropriateness of the trial design or even the selected COA measures themselves. This also drives us to consider what a COA measure developer dictates around the use of their measure and what is acceptable in terms of missing data that may affect a calculated score.
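
As a simple illustration of the kind of rule a measure developer might dictate, the sketch below applies a hypothetical "prorate the domain score if at least half of its items were answered" rule. The function, threshold, and item counts are assumptions for illustration only; the developer's scoring manual always governs.

```python
# Illustrative sketch only: a hypothetical prorated-scoring rule for a
# COA domain with missing item responses. Not any specific measure's
# published scoring algorithm.
import math
from typing import Optional, Sequence

def prorated_domain_score(items: Sequence[Optional[float]],
                          min_answered_fraction: float = 0.5) -> Optional[float]:
    """Return a prorated domain score, or None if too many items are missing."""
    answered = [v for v in items if v is not None]
    if len(answered) < math.ceil(len(items) * min_answered_fraction):
        return None  # insufficient data under the assumed rule
    # Prorate: mean of the answered items scaled to the full item count.
    return sum(answered) / len(answered) * len(items)

# Example: a hypothetical six-item domain with two missing responses.
responses = [3.0, 4.0, None, 2.0, None, 5.0]
print(prorated_domain_score(responses))   # 21.0 (prorated from 4 answered items)
```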

Having a backup plan

Backup approaches also require careful consideration, and we know FDA has advised sponsors to have a justified backup plan. Beyond the previously described web backups, and the potential for bring-your-own-device approaches, both sponsors and eCOA providers have anecdotally described an increase in the use of assisted completion, sometimes termed “interview administered.” Some COA measures have specific interview versions developed intentionally for that purpose, but that is not the rule. Assisted completion allows a trained supporter to complete a PRO on behalf of the participant. There may still be questions around the qualitative comparability of self-report versus assisted completion, but the consensus is that, if appropriate training is provided, assisted completion still delivers participant-reported data.

However, this approach is not a universal salve for missing data; questions of a personal nature need a different solution. In general, electronic backup approaches for electronic COA systems are a good idea. However easy paper is perceived to be, paper-derived COA data do not sit on the same plane of quality as data captured electronically. Paper fails quite easily at the hurdle of attribution.

Eyeing digital equity

We must also reflect, as ever, on the issue of digital equity and how accessible these systems really are. This is, again, an area that demands more guidance and research. Challenges to the use of electronic systems can range from physical limitations to religious objections. Despite technology's expanding impact on us all, there can be a distrust of these systems that we must account for to assure equitable access to clinical research.

Reflecting on whether there is a broadly applicable answer to the question in the title of this article and the discussion that inspired it, it's hard to argue against the premise that bad data are worse than no data. The goal of a clinical trial is to arrive at an answer, and we need good data to achieve that goal. If we arrive at an answer based on bad data, then we are failing on every level.

Scottie Kern is Executive Director of the eCOA Consortium at Critical Path Institute.

Reference

1. FDA. Patient-Focused Drug Development Guidance Series for Enhancing the Incorporation of the Patient’s Voice in Medical Product Development and Regulatory Decision Making. February 14, 2024. https://www.fda.gov/drugs/development-approval-process-drugs/fda-patient-focused-drug-development-guidance-series-enhancing-incorporation-patients-voice-medical
