In this video interview, Mark Melton, vice president of biospecimen data and operations at Slope, touches on the challenges of handwritten data collection and how they can be addressed with AI.
In a recent video interview with Applied Clinical Trials, Mark Melton, vice president of biospecimen data and operations at Slope, discussed navigating data challenges in clinical trials, emphasizing the need to understand complex data sources and ensure a proper chain of custody for samples. Melton highlighted the importance of data mapping to standardize reporting across different labs and the necessity of secure data transfer to protect patient privacy.
A transcript of Melton’s conversation with ACT can be found below.
ACT: Is there any potential for the use of artificial intelligence (AI) or machine learning (ML) when it comes to data management or sample collecting/tracking?
Melton: The biggest trend that's grown with precision-based medicine and more complex testing is that we're having trouble tracking samples and ensuring the integrity of those samples. What we've done, which is pretty normal for most businesses, is throw people at the problem when we don't have a technology solution, say, if we don't know how to leverage AI. This isn't to say that AI will replace those individuals; it will give them a better tool to do their job.

Right now, the bulk of the industry is working with what we call non-quality-control data. You collect a sample at a hospital or a clinical research site, and it goes out to other facilities, like I've been mentioning. When those facilities receive it, the primary method to transfer the data, say I was a physician or a nurse who collected the samples and sent them outside my facility, is, believe it or not, paper. It's a carbon-copy piece of paper, and they write the information down by hand. My handwriting is horrible, as an example. Let's say the form gets down there and they can't read it. They're processing tens of thousands of samples a day. They have to get through that, and they have KPIs to hit and contracts to meet to get that information out.

Leveraging AI to help facilitate that process would be tremendous in our industry: not just the ability to interpret the data, but also to help direct people to where there are probably errors, as opposed to how that's done now. That work is often very manual, and even in the best case, where we're doing some programmatic checks, that programming is dependent on the data available. It would be really interesting to see where AI can get into interpreting that data while keeping the reporting databases in context. AI having that ability, I think, has the biggest potential that we haven't even tapped yet, that we're just starting to get into.

To be honest, to my knowledge, there isn't any company doing that yet. Maybe someone is doing it and is going to roll it out, but right now, the bulk of AI is really around interpretation of results and trying to work through what we call high-throughput data. The biggest potential is probably upstream, before you have all of that data to work through: leveraging AI to ensure integrity up to the point of testing. It would be really interesting to see how that plays out.
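To make the error-flagging idea Melton describes concrete, here is a minimal sketch (not from the interview, and not a description of any existing product) of how handwritten requisition-form fields, once transcribed by an OCR or handwriting-recognition model, might be triaged for human review. The field names, ID pattern, date format, and confidence threshold are illustrative assumptions.

```python
# Illustrative sketch only: assumes handwritten requisition fields have already
# been transcribed by an OCR/handwriting model that returns a confidence score.
# Field names, patterns, and the threshold below are hypothetical.
import re
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ExtractedField:
    name: str          # e.g., "sample_id", "collection_date"
    value: str         # transcribed text
    confidence: float  # model confidence, 0.0-1.0

REVIEW_THRESHOLD = 0.85  # assumed cutoff below which a person re-reads the form

def needs_review(field: ExtractedField) -> list[str]:
    """Return the reasons, if any, that this field should go to manual review."""
    reasons = []
    if field.confidence < REVIEW_THRESHOLD:
        reasons.append(f"low transcription confidence ({field.confidence:.2f})")
    if field.name == "sample_id" and not re.fullmatch(r"[A-Z]{2}\d{6}", field.value):
        reasons.append("sample ID does not match the expected format")
    if field.name == "collection_date":
        try:
            datetime.strptime(field.value, "%Y-%m-%d")
        except ValueError:
            reasons.append("collection date is not a valid date")
    return reasons

if __name__ == "__main__":
    form = [
        ExtractedField("sample_id", "AB123456", 0.97),
        ExtractedField("collection_date", "2024-O2-15", 0.62),  # "0" misread as "O"
    ]
    for f in form:
        reasons = needs_review(f)
        status = "flag for review: " + "; ".join(reasons) if reasons else "ok"
        print(f"{f.name}={f.value!r} -> {status}")
```

In this sketch, fields that pass both the confidence check and the simple validity rules flow straight through, while anything suspect is routed to a person, which mirrors the "direct them to where there's probably errors" role Melton envisions for AI rather than replacing the reviewers themselves.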