In a recent interview and software demonstration, Rohit Nambisan, CEO of Lokavant, and Craig Lipset, founder of Clinical Innovation Partners and Lokavant advisory board member, briefed Applied Clinical Trials on what current uses of real-world data and analytics can mean to clinical trial execution.
Nambisan, a neuroscientist by training, spent 10 years in academic and clinical research before moving into industry, working for 16 years in healthcare and technology. Three years ago, he launched Lokavant as an incubation within Roivant Sciences. There, Lokavant identified its opportunity to develop an automated clinical trial intelligence platform to bring data-driven decision-making to the industry. In 2020, the company entered its first multi-year enterprise license agreement with Parexel and has since partnered with ERGOMED as well as H1 to help with both performance and planning for clinical trials. Last December, Lokavant became a separate entity through Series A funding.
In the Q&A ahead, Nambisan and Lipset answer questions from ACT. The conversation is edited for length.
Applied Clinical Trials: What does Lokavant’s solution mean to you?
Rohit Nambisan: It is a great opportunity to show the value of data—not just in collecting it to prove that a therapeutic is safe and efficacious, but also in ensuring that clinical trials and clinical research are running in the most efficient manner.
Craig Lipset: I’ve been with Lokavant since the early days, out of my interest in how we can leverage historical data, together with data from existing studies, to better inform decisions.
Spending time with [Nambisan] and his team over these last few years has been great because they’re really smart and apply that intelligence and energy to current challenges in clinical trials, addressing the industry’s biggest pain points.
ACT: What do you see as the most pressing current pain point?
Nambisan: Diversity. Sponsors are being required [by FDA] to submit diversity plans. I want to be very specific…this is not a pie-in-the-sky goal in terms of needing to have “x” amount of representation across all studies. It’s very specific. If you’re going to develop a therapeutic for a disease, you need to understand the epidemiology of that disease and the demographic makeup of the populations most affected by that disease, and you want your studies to reflect that population. This is important not only from the perspective of access and making sure that participants feel comfortable in the study, but also to guard against any type of safety or efficacy concern—for instance, an epidemiological facet of a particular underrepresented group that may not present itself unless the therapy is actually tested in that group in the trial.
Lipset: Our industry is good at coming up with many tactics that we look to deploy for diversity. But what we don’t see is a lot of successful execution. In fact, there is no penalty for failing to provide or abide by a diversity plan. The guidance just says “if you miss on the plan, come and talk to us, and we’ll talk about post-marketing surveillance.”
It’s almost an expectation that these plans are going to be hard to execute, and there aren’t a lot of robust tools for holding people accountable around delivery.
Nambisan: Part of why these plans are difficult to execute is the current state of clinical research. Often sponsors will work with one CRO and/or go to the same sites and facilities they have experience with. That makes good sense—they have contracts in place and/or preferred relationships, and they know what to expect in terms of the time to set up a trial site, the time to get contracting documents done, etc. In terms of expediting a trial, that’s a really valid perspective. However, research has indicated for many years that clinical trials have been limited in diversity, and if we continue our current practices, we shouldn’t expect diversity to change in the trials going forward.
Another factor is that the industry looks at historical enrollment performance on indications that are very similar to the indication about to be studied. But the speed at which sites were able to enroll is just one factor among many that should be assessed in site performance.
There are good reasons for using historical data, but there’s an inherent bias in that data and in the approach. We can reduce the bias by accessing multiple types of data and populations: for example, data on diverse investigators and diverse sets of participants that doesn’t preclude the preferred sites, but that also brings in historical clinical trial data across multiple sponsors and multiple CROs, as well as real-world data and social determinants of health information. Overlaying the data, you see where particular patient types go for their medical care. What is the demographic and socioeconomic makeup of those participants?
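As a rough illustration of that kind of overlay (a sketch, not Lokavant’s actual pipeline), the snippet below joins hypothetical pooled site-performance data with hypothetical county-level social-determinants data; every column name and figure is invented.

```python
import pandas as pd

# Hypothetical historical trial performance, pooled across sponsors/CROs.
sites = pd.DataFrame({
    "site_id": ["S001", "S002", "S003"],
    "county": ["Fulton", "Cook", "Harris"],
    "enroll_rate": [2.1, 1.4, 1.8],  # patients per site per month (invented)
})

# Hypothetical real-world / social-determinants-of-health data by county.
sdoh = pd.DataFrame({
    "county": ["Fulton", "Cook", "Harris"],
    "pct_black": [0.44, 0.23, 0.20],
    "pct_hispanic": [0.08, 0.26, 0.44],
    "median_income": [68000, 72000, 65000],
})

# Overlay: what is the demographic and socioeconomic makeup of the
# population around each candidate site?
overlay = sites.merge(sdoh, on="county")
print(overlay)
```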
Lipset: We hear a lot of people talk about social determinants of health data to drive more enrollment in a trial. That’s helpful, but what I think people miss is that it goes beyond someone having difficulty participating in a trial. There are many other unmet needs out there. By bringing all that data together, we can ask: for patients with low social determinants of health, what happens to them over time? Are they the people we’re losing to discontinuation or protocol deviations? And so on. This ability to segment, slice, and layer these predictive elements together becomes particularly important when we’re using more diverse and representative sources for data.
ACT: What does real-time feedback look like for diversity data?
Nambisan: Real-time feedback means that you have direct access to the demographic information generated at each site. You can understand how many participants of a particular demographic are enrolled, and whether you are on track to meet your diversity goals or deficient at this point.
And it’s not sufficient just to have [feedback] loops with the sponsor or CRO. Sites have for too long been disenfranchised by not having this type of real-time feedback for themselves. Putting that information in front of the site coordinators or the site staff can help them self-modulate, as well.
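A minimal sketch of the on-track check Nambisan describes, assuming targets derived from the disease’s epidemiology; the groups, goals, and counts below are invented for illustration.

```python
# Hypothetical diversity targets (from epidemiology) and live enrollment counts.
targets = {"Black": 0.30, "Hispanic": 0.25, "Other": 0.45}
enrolled = {"Black": 18, "Hispanic": 9, "Other": 53}

# Compare each group's enrolled share against its goal.
total = sum(enrolled.values())
for group, goal in targets.items():
    actual = enrolled[group] / total
    status = "on track" if actual >= goal else "deficient"
    print(f"{group}: {actual:.0%} enrolled vs. {goal:.0%} goal -> {status}")
```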
ACT: After diversity, what other efficiencies can be gained by data-driven analytics?
Nambisan: Once you overlay different types of data sets and extract that information, you can create metrics specific to that study. You can look at the quality of those sites in relation to screen failure rates, discontinuation, how long site activation takes, major protocol deviations, etc.
Maybe enrollment rate is very important for the study you’re planning to select sites for, as is screen failure, but discontinuation is less of an issue. Or maybe it’s a mega trial and you expect many participants. Then you can weight specific factors and develop a composite ranking that is much more holistic and comprehensive than looking at enrollment alone.
In another example, many of the sites that ranked at the top based solely on enrollment rate ended up dropping to the bottom under an overall composite measure. On the other hand, many sites that would not have been considered or brought into the study based only on enrollment rate ranked much higher with the composite measure. This is a powerful example of what it means to bring in multiple forms of data to “unbias” the data, and then look at the ability to perform, recruit, and address diversity needs, or other goals, in a holistic manner.
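One simple way such a weighted composite could be computed is sketched below, assuming min-max normalization across candidate sites; the sites, metrics, and weights are hypothetical, not Lokavant’s method.

```python
import pandas as pd

# Hypothetical per-site metrics; weights reflect what matters for this study.
metrics = pd.DataFrame({
    "site_id": ["S001", "S002", "S003", "S004"],
    "enroll_rate": [2.5, 1.1, 1.9, 0.9],      # higher is better
    "screen_fail": [0.45, 0.15, 0.30, 0.10],  # lower is better
    "discontinue": [0.20, 0.05, 0.12, 0.04],  # lower is better
}).set_index("site_id")

weights = {"enroll_rate": 0.5, "screen_fail": 0.3, "discontinue": 0.2}

# Normalize each metric to 0..1, flipping the "lower is better" ones.
norm = (metrics - metrics.min()) / (metrics.max() - metrics.min())
norm[["screen_fail", "discontinue"]] = 1 - norm[["screen_fail", "discontinue"]]

# Weighted composite score and holistic ranking.
metrics["composite"] = sum(norm[m] * w for m, w in weights.items())
print(metrics.sort_values("composite", ascending=False))
```

Under these invented numbers, the fastest-enrolling site drops in the composite ranking because of its high screen-failure and discontinuation rates, mirroring the reversal Nambisan describes.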
We have another use case where, every day within a study, we forecasted how likely the study was to achieve its recruitment goal within the recruitment window. And on many of the studies we deployed that forecast on, we identified that there was absolutely zero-percent chance.
That gives the study team the opportunity to make changes in flight, and first in silico, before they actually make them in the real world. They can understand what to adjust instead of trying something very expensive or complex without getting the desired results.
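One simple way to produce that kind of daily probability (a sketch, not Lokavant’s model) is to simulate the remaining enrollment from the observed accrual rate; all study numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical study state on a given day.
enrolled_so_far = 120
goal = 400
weeks_left = 20
weekly_rate = 9.5  # observed patients/week across active sites

# Simulate remaining enrollment as Poisson arrivals, many times over,
# and estimate the probability of hitting the goal within the window.
sims = rng.poisson(weekly_rate, size=(10_000, weeks_left)).sum(axis=1)
p_success = np.mean(enrolled_so_far + sims >= goal)
print(f"P(goal met in window) ~ {p_success:.1%}")
```

With these invented numbers the study needs 280 more participants but can expect only about 190, so the simulated probability comes out near zero: the kind of early warning Nambisan describes.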
Lipset: There is this word in [clinical] development that leaders love to use: predictability. They like to do things that are predictable. I grew up in New York, so to me predictability was a negative word. But for operations leads or executives, this is what they want. If you could make diversity more predictable in terms of the effort that’s going to be required, then why wouldn’t you do it? When we throw tactic after tactic out there, sometimes stacked on top of one another, we don’t even know what’s moving the needle, even incrementally.