Martin Gouldstone, global SVP of business development at Owkin, speaks to Applied Clinical Trials about the use of artificial intelligence (AI) in data practices and approaches for clinical research, and what distinguishes “ethical AI.” Gouldstone was chief business officer of life sciences at Sensyne when this Q&A was conducted as part of an ACT podcast.
Applied Clinical Trials: Can you explain what ethical AI and ethical data mean?
Gouldstone: Ethical AI and ethical data sharing are essentially a trust model for patient data. Organizations that generate high-value findings, in particular from the analysis of patient data, describe how the technology both protects the identity of the patient and still generates incredibly valuable analyses and insights from the patient data sets. These can be used to drive the successful execution of new drug development while ensuring that patient identity and privacy remain secure.
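The privacy mechanics Gouldstone alludes to can be pictured with a short sketch. The record fields, the salt, and the `pseudonymize` helper below are hypothetical and purely illustrative; real de-identification pipelines follow formal standards (for instance HIPAA Safe Harbor) and are considerably more involved.

```python
import hashlib

# Hypothetical salt; in practice this would be a secret held by the data custodian.
SALT = "example-secret-salt"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:16]

def de_identify(record: dict) -> dict:
    """Strip direct identifiers while keeping the analytically useful fields."""
    decade = (record["age"] // 10) * 10
    return {
        "patient_key": pseudonymize(record["patient_id"]),
        "age_band": f"{decade}-{decade + 9}",
        "diagnosis_code": record["diagnosis_code"],
        "outcome": record["outcome"],
    }

raw = {"patient_id": "NHS-1234567", "name": "Jane Doe", "age": 57,
       "diagnosis_code": "C50.9", "outcome": "responder"}
print(de_identify(raw))
# The name is dropped, the age is banded, and the ID is no longer reversible,
# yet the record still supports aggregate analysis.
```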
ACT: We are hearing more and more about the use of synthetic control arms in clinical trials. Why do you think that is?
Gouldstone: I think the reason is that we’ve reached a kind of inflection point where the technology, the ability to handle large data sets, and access to the data have all come together and made synthetic control arms possible. Before, there just wasn’t the technology to support managing the data. For instance, all of our work is done in the cloud; we don’t have racks of big servers anymore, and that has enabled us to manage and integrate large data sets rapidly. The other challenge the industry faces, or has faced, is data interoperability. Getting the [numerous] data sets to communicate is a real challenge.
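The interoperability problem is easy to see with two toy extracts that describe the same measurement in different shapes. The site names, column names, and units below are invented for illustration; a real harmonization effort would map to a shared standard such as OMOP or FHIR.

```python
import pandas as pd

# Two hypothetical site extracts describing the same measurement differently.
site_a = pd.DataFrame({"pt": ["A01", "A02"], "wt_kg": [70.0, 82.5]})
site_b = pd.DataFrame({"patient_ref": ["B01"], "weight_lbs": [176.0]})

# Harmonize each extract into one agreed schema before analysis.
common_a = site_a.rename(columns={"pt": "patient_key", "wt_kg": "weight_kg"})
common_b = site_b.rename(columns={"patient_ref": "patient_key"})
common_b["weight_kg"] = (common_b.pop("weight_lbs") * 0.45359237).round(1)

combined = pd.concat([common_a, common_b], ignore_index=True)
print(combined)
#   patient_key  weight_kg
# 0         A01       70.0
# 1         A02       82.5
# 2         B01       79.8
```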
ACT: Can you explain the difficulty of integrating all of that data so it makes more sense?
Gouldstone: A patient’s health record has many different components. Some are called structured data, and they’re quite simple; for a prescribed medication, for example, the data is the medication name and the frequency of dosing. But then there are doctors’ notes, which are unstructured data; imaging data, including X-rays, CAT scans, and MRI scans; and pathology reports, which are in PDF format. So you have multiple formats and multiple information sources, all on one patient, and probably in different locations. You’ve got to make sense of all of that and bring it together for the patient, all of it de-identified, anonymized, and aggregated into data sets that make sense for disease areas, indications, and more. Then there are also outcomes, which from my perspective are probably the biggest value driver of the data for pharma. So bringing all of that together in a common data model is hugely challenging.
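One way to picture the common data model Gouldstone describes is a single patient-level structure that ties every modality to one de-identified key. The class and field names below are hypothetical, a sketch rather than any real standard such as OMOP.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    """Hypothetical common data model linking every modality to one de-identified patient."""
    patient_key: str                                        # de-identified identifier
    medications: list = field(default_factory=list)         # structured: name + dosing frequency
    notes: list = field(default_factory=list)               # unstructured clinician text
    imaging: list = field(default_factory=list)             # references to X-ray/CT/MRI files
    pathology_reports: list = field(default_factory=list)   # extracted from PDF reports
    outcomes: list = field(default_factory=list)            # the key value driver for pharma

record = PatientRecord(
    patient_key="a1b2c3d4",
    medications=[{"name": "tamoxifen", "frequency": "once daily"}],
    notes=["Patient reports reduced fatigue since last visit."],
    imaging=["mri/2021-03-14.dcm"],
    pathology_reports=["path/2021-02-01.pdf"],
    outcomes=[{"measure": "progression-free survival", "months": 14}],
)
print(record.patient_key, len(record.medications), record.outcomes[0]["months"])
```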
ACT: What do you think is the most significant change in drug development in the past five to 10 years?
Gouldstone: Early in my career, I was in clinical research. I started my pharmaceutical career carrying the bag for a year and then went into clinical research, running clinical trials in areas as diverse as female health and oncology. The increased use of Wi-Fi-enabled technologies has made a big difference to processes such as source data verification for [clinical research associates].
As I said earlier, this data revolution has gone hand in hand with the development of technologies that enable access to that data in a secure environment, and with the ability to apply it not just to the clinical research process, but also to market access for new drugs, pricing models, and so on.
I think there’s still a long way to go. The pharmaceutical industry has always been quite slow to adopt new technologies, particularly large pharma, because of scale. It costs a lot of money just to move to the next MS Word package, for instance, let alone to change how you run processes. But I do think that is where these new AI technologies, such as machine learning, make a difference.