In an interview with ACT editor Andy Studna at SCOPE, Deyle, VP & GM of Clinical Research at Flatiron Health, discusses real-world evidence in clinical trials.
ACT: How can the use of RWD advance a trial compared to more traditional data sources such as earlier trials and publications?
Deyle: This is an area that Flatiron has been spending a lot of time looking into and talking about with our collaborators and partners. To start, one of the things you hear all over SCOPE is the different challenges happening with clinical trials: the feasibility of identifying the right patients, activating sites, and actually being able to deliver clinical trials on the timelines that were anticipated. A lot of times people look for individual solutions to those problems and don't realize that some of the problems actually start way up front. What we've seen when we partner with our customers is that there's a huge bias toward the status quo in how protocols are designed, which is by and large copy-and-paste of past clinical trial protocols. What we've found is there's a huge opportunity to use clinically deep, contemporaneous real-world data to generate new insights into how care is actually being delivered today across the US, and then to optimize your protocol so it is more likely to be feasible. When you think about why real-world data versus other sources of information, I think the answer is that it's never just one source above others; it's always going to be a combination approach. You always need to use your organizational understanding of a disease area to inform a trial design, looking at past clinical trials and talking to key opinion leaders. But the reality is, real-world data can help inform the choice of a comparator regimen based on what's actually happening at the point of care, and can provide data-driven guidance on inclusion and exclusion criteria to make trials more representative and to increase the ability to accrue to a clinical trial.
ACT: When using RWD in their trials, what regulatory concerns should stakeholders be keeping top of mind?
Deyle: What we hear again and again from the FDA and other health authorities is this concept of fit for purpose: understanding what is the use case or application for which you're planning to use the real-world data, and then how do you make sure that the data you're using and the evidence you're generating are fit for purpose for that? To start, I'm going to talk here at SCOPE tomorrow with one of my colleagues, Galen Ritter from Bristol Myers Squibb, about how you can use real-world data to inform protocol optimization and trial design. That's actually an area where I think there's a huge amount of opportunity without a lot of concern or regulatory risk around the use of that data, because it's being used as an input to inform clinical design and clinical studies. We're actually seeing FDA guidance encouraging more use of real-world data in that context. As an example, the FDA has recently put out guidance on ensuring that clinical trials are designed to accrue patient populations that look like the patients who will receive those drugs in the real world. One of the core recommendations in that guidance is to use real-world data to inform not only how you select your inclusion and exclusion criteria, but also which sites you activate for clinical trials. There's real positive momentum and encouragement from health authorities for the use of real-world data in that setting. If I look at the spectrum of different use cases, maybe on the other end of the spectrum from protocol optimization would be the actual use of real-world data as an external control arm to contextualize a single-arm clinical trial. I think in those cases, it really becomes important to think about that question of fit-for-purpose data.
At Flatiron, we actually recently put out a publication on this topic, commented on by members of the FDA, defining what quality looks like for real-world data and how you define it. The reality is that a lot of different health authorities and agencies have actually done a really good job of trying to provide guidance here, and have put out multiple frameworks on what it means to define quality. But the challenge is that when you look across different groups, you see different, and not always congruent, frameworks for defining it. So, Flatiron recently did some research where we looked at a number of different quality frameworks for real-world data and synthesized what we saw as the common themes across them. It really comes down to understanding the relevance of the data for the question at hand, and the reliability of that data for informing the scientific inferences that will be drawn for that particular application.
ACT: What do you think is lacking to further industry adoption of RWD?
Deyle: I actually think we've come a really long way. The focus on real-world data and real-world evidence has moved so far since the FDA started talking about this, in the context of putting out their initial frameworks and initial guidances around the adoption of real-world data, which has been great. In terms of what I think is holding us back the most, there are potentially three things. One is actually having access to the right data for the right research question. This goes back to that question of understanding fit-for-purpose data, and finding the right source that has the right level of quality and the right depth of clinical variables. That said, I'm a little bit biased coming from Flatiron: I think we have now made significant strides in making high-quality real-world data in oncology available for the vast majority of potential real-world evidence questions. The second thing is moving beyond just having access to the data to having the right analytic capabilities to generate the real-world evidence that comes out of that data. This is an area that is really evolving. There's been a ton of really exciting work around advancing methods for the appropriate way to analyze real-world data to generate meaningful and reliable real-world evidence, whether it's understanding appropriate methods related to endpoints, or assessing the impact of things like missingness in real-world data. How do you account for potential unknown bias that might exist in data that wasn't originally collected for that research question?
But that's another area where I think we've made huge progress as an industry, and I'm proud to say that I think Flatiron has been really leading the way in that effort around new methods development for the appropriate use of real-world data to generate meaningful and reliable real-world evidence.
The third thing is actually getting clear and specific about the different use cases where real-world data and real-world evidence can have the greatest impact. That's where there's been a lot of evolution. If I think back a couple of years, there was a lot of excitement around the opportunity for real-world data and real-world evidence to completely get rid of the need for RCTs. We've since learned a lot about the real opportunities and challenges, and the fit-for-purpose nature of those applications. There are still opportunities to leverage real-world data in those types of examples, but I think we're finding there are a lot more examples where real-world data can be infused into the clinical development cycle to accelerate the generation of insights and access for patients.