In this video interview, Mark Melton, vice president of biospecimen data and operations at Slope, discusses how different collection methods are creating complexities.
In a recent video interview with Applied Clinical Trials, Mark Melton, vice president of biospecimen data and operations at Slope, discussed navigating data challenges in clinical trials, emphasizing the need to understand complex data sources and ensure a proper chain of custody for samples. Melton highlighted the importance of data mapping to standardize reporting across different labs and the necessity of secure data transfer to protect patient privacy.
A transcript of Melton’s conversation with ACT can be found below.
ACT: How can sponsors better navigate some of the current data challenges in clinical trials?
Melton: The first step with any problem is understanding what the problem is, and that means understanding every facet of it as much as you can. This issue is really complex because it involves multiple stakeholders and multiple sources of data, so it's not like you're getting data from one database. Moreover, there are different companies you're dealing with. As much as we'd like to think there's a ton of regulation around data reporting, at least for sample data, there's not, so you have to understand how each database functions, what goes into that data reporting ahead of time, and then how you're going to navigate it.

I think the biggest challenge I see, because I've spent the past decade specializing in this area, is that there's a host of different approaches, different viewpoints, and different concepts, but I think we should simplify it. The simplifying answer is that we're in the process of treating people who are sick, we're collecting their samples, and we're giving them experimental drugs, so at the very least, we need to ensure, as soon as possible, that the data we're collecting, specifically around samples, ties the right sample to the right patient.

Basically, data follows samples. At the site, samples are collected, and they go to a host of different places within the hospital; people don't realize that. Especially in a clinical trial, they go to commercial labs with top scientists in the world who will eventually test those samples. You can't put a name, date of birth, and those sorts of things on them; it's pseudo-anonymized data. You have to have the ability to compare what happened, say, at a hospital system, and then, when those samples go downstream, ensure it's the same sample. Moreover, what the labs will end up doing is processing those samples into derivatives. Think about when you get blood work at a doctor's office: they're not actually looking at your blood, they're looking at derivatives of that blood, like serum or plasma. The ability to have that chain of custody is huge. You have to understand, for every location you're sending samples to, how they handle that chain of custody, but most importantly, how they report on it. How does that fit into how you're handling data?