Quality metrics are hard to define and measure, but are crucially important.
This month, let’s take a look at a quality metric. Since “what gets measured gets fixed,” if you don’t define enough quality metrics, people will concentrate on improving the cycle time and timeliness metrics and may compromise quality to hit those targets.
Why this metric is important: Sponsors contract with Central Labs to analyze blood and/or tissue samples collected by clinical sites. The results of these analyses provide important safety and endpoint data. During the trial, you need to be sure that there is minimal sample loss so you’re not risking patient safety or sacrificing data for your analyses and reports. This metric should be checked regularly. Furthermore, lost data may be an indication of site, protocol, training or shipping problems. Low metric values or changes over time should be immediately investigated to minimize future problems.
Definition: At the MCC, we define Reportable Lab Tests as the number of tests that are reported by the lab divided by the total number of tests received by the lab for testing. Tests that are canceled are not included in these numbers.
How to calculate this metric: Divide the number of tests reported by the lab by the total number of tests received by the lab for testing. To express the result as a percentage, multiply it by 100.
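If you track this in a spreadsheet or script, the arithmetic is simple to automate. Here is a minimal Python sketch; the function name is illustrative, not part of any MCC tooling:

```python
def reportable_lab_tests_pct(tests_reported: int, tests_received: int) -> float:
    """Reportable Lab Tests metric, expressed as a percentage.

    tests_received should exclude canceled tests, per the MCC definition.
    """
    if tests_received <= 0:
        raise ValueError("tests_received must be greater than zero")
    return tests_reported / tests_received * 100
```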
So as a simple example, 200 samples are received for calcium testing (just one calcium test performed per sample) and the lab reports results for 180 samples. Thus 90% (180/200 x 100) of the calcium tests are reportable.
Here’s a slightly more complicated example: Let’s say each of the 180 samples that are testable from our previous example now has 10 required tests. The lab is able to complete all 10 tests on 160 of those samples. But for some reason, the lab can only run 5 tests on the remaining 20 samples. Then the metric calculation would be (160 x 10 + 20 x 5) / (180 x 10) x 100 = 1,700/1,800 x 100, so roughly 94% of the required tests are reportable.
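Plugging the two worked examples into the hypothetical function sketched above (and assuming, for the second example, that the denominator counts the 10 required tests on each of the 180 testable samples):

```python
# Simple example: 200 samples received, 180 reported.
print(reportable_lab_tests_pct(180, 200))                      # 90.0

# Second example: 180 samples x 10 required tests received;
# all 10 tests reported on 160 samples, only 5 on the other 20.
reported = 160 * 10 + 20 * 5                                   # 1,700 tests reported
received = 180 * 10                                            # 1,800 tests received
print(round(reportable_lab_tests_pct(reported, received), 1))  # 94.4
```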
What you need in order to measure this: You need the number of samples received and the number of tests that should be run on each sample. Additionally, you need the number of tests that are successfully run and reported. If possible, the data should be available at both the protocol and site level.
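As a sketch of what that data might look like, here is one way to roll the counts up by protocol and site; the per-sample record layout, protocol and site identifiers, and field names are assumptions for illustration, not any lab’s actual export format:

```python
from collections import defaultdict

# Hypothetical per-sample records; protocols, sites, and field names are illustrative.
samples = [
    {"protocol": "ABC-101", "site": "1001", "tests_required": 10, "tests_reported": 10},
    {"protocol": "ABC-101", "site": "1001", "tests_required": 10, "tests_reported": 5},
    {"protocol": "ABC-101", "site": "1002", "tests_required": 10, "tests_reported": 10},
]

# Roll the counts up by (protocol, site) so low-performing sites stand out.
totals = defaultdict(lambda: [0, 0])  # (protocol, site) -> [reported, received]
for s in samples:
    totals[(s["protocol"], s["site"])][0] += s["tests_reported"]
    totals[(s["protocol"], s["site"])][1] += s["tests_required"]

for (protocol, site), (reported, received) in sorted(totals.items()):
    print(f"{protocol} / site {site}: {reported / received * 100:.1f}% reportable")
```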
What makes performance on this metric hard to achieve: This metric is tracked by most central labs and is available for monthly reporting. However, some labs are not able to report the results at the protocol and site level, which makes it hard to figure out where any problems might lie. In addition, if you are not using a single central lab, you are going to have to collect this data piecemeal and integrate it to get a complete picture of the study.
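When you do have to piece the picture together from more than one lab, pool the raw counts rather than averaging each lab’s percentage, so that larger labs carry proportionally more weight. A minimal sketch with made-up numbers:

```python
# Hypothetical per-lab counts pulled from each central lab's monthly report.
lab_counts = {
    "Lab A": {"reported": 1700, "received": 1800},
    "Lab B": {"reported": 450, "received": 500},
}

# Sum numerators and denominators across labs, then compute the study-level metric.
reported = sum(c["reported"] for c in lab_counts.values())
received = sum(c["received"] for c in lab_counts.values())
print(f"Study-level Reportable Lab Tests: {reported / received * 100:.1f}%")
```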
Things that you can do to improve performance: If you determine there is a trend in the number of unreportable tests, you can look to see whether the problems are concentrated at particular sites, under particular protocols, or with particular tests or shipments.
Once you’ve narrowed the problem down a bit, you can look for issues in site initiation and training, or contracting problems with shippers, that may be causing the data loss.
Companion metrics: Another metric that you should consider in tandem with this metric is Tests Reported Within Expected Turnaround Time.
Example: In the following graph, the blue line represents the metric value over time. As you can see, performance is stable through month 6 and then suddenly starts to drop off. The team sees this sudden change and immediately starts to do problem diagnosis and remediation. By month 11 they have resolved the problem and performance returns to former levels.
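If you want an automated nudge toward that kind of prompt diagnosis, a simple rule such as flagging any month that falls well below its recent baseline is enough to start with. A minimal sketch with made-up monthly values (not the data behind the graph); the five-point threshold is an arbitrary choice for illustration:

```python
# Hypothetical monthly metric values, in percent.
monthly_pct = [95, 96, 95, 96, 95, 95, 88, 82, 84, 90, 95, 96]

# Flag any month that falls more than 5 points below the average of the
# preceding three months -- a simple trigger for problem diagnosis.
for month in range(3, len(monthly_pct)):
    baseline = sum(monthly_pct[month - 3:month]) / 3
    if monthly_pct[month] < baseline - 5:
        print(f"Month {month + 1}: {monthly_pct[month]}% "
              f"(recent baseline {baseline:.1f}%) -- investigate")
```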
Dave Zuckerman
CEO, Metrics Champion Consortium
[email protected]
Linda Sullivan, COO, Metrics Champion Consortium
[email protected] www.metricschampion.org