A clearly defined set of performance measures is an integral part of the central laboratory selection and management process.
A variety of economic and social pressures have fueled a widespread search for reductions in the cost of research and development (R&D) in the health care industry. Pharmaceutical companies (pharma) and clinical research organizations (CROs) have been working vigorously to improve their productivity and return on investment by modifying processes, decreasing cycle times, and implementing improvements wherever possible.1 Furthermore, experts believe that pharma companies previously averse to strategic alliances are being pushed to form more long-term development partnerships. This trend is expected to continue because of the enormous competitive emphasis on demonstrating a positive return on R&D investment and the continuing need to bring significant new products to market.2 While ultimately beneficial, managing relationships with multiple strategic partners further complicates efforts to reduce the cost of R&D, improve operational efficiencies, and benchmark performance. Consequently, better managing the efficiency, productivity, and economics of long-term drug discovery through the use of performance metrics has become a top priority for outsourcing teams involved in clinical trials. Performance metrics now serve a dual purpose: to measure and compare performance across service providers, and to analyze operational proficiency so that specific elements can be adjusted during the conduct of the clinical trial.
One aspect of clinical research that offers opportunity for application of performance metrics is that of central laboratory services. A central laboratory is a network of people, capabilities, and facilities structured to process human specimens for diagnostic testing that generates information subsequently reported to participating clinicians and the organization sponsoring the research. The primary role of a central laboratory is to obtain accurate, timely diagnostic testing data to support a clinical trial protocol.
Due to the diverse nature of clinical research, many companies have invested substantial time and resources in developing standard central laboratory performance metrics that are used to benchmark service providers. When central laboratory performance metrics are well designed and well applied, a number of benefits can be realized.
Full realization of those benefits requires thought and planning in the design, selection, and implementation of the metrics, as well as careful management of the overall process.
Responsibility for developing a performance metrics scorecard rests equally with the sponsor and the central laboratory participating in a clinical trial. When a critical, actionable, and measurable metric is identified and mutually agreed upon, process improvement becomes possible. Recently, some companies have begun collecting actionable metrics as a means of facilitating rapid change through performance gap analysis while a study is underway. Although outsourcing groups have traditionally used metrics primarily to manage costs, recent reports indicate that their primary responsibility is now to increase supplier productivity.3 Selecting an appropriate set of performance metrics is therefore critical to both the sponsor and the central laboratory. In the clinical research environment, the use of metrics can easily be overdone: given the scope of an average global clinical trial in a challenging therapeutic area, decision makers can find it tempting to track a large number of metrics, which leads to a disproportionate amount of time being spent managing the data as it accumulates. Clinical researchers should therefore adopt a "vital few" or "less is more" approach, collecting only those metrics that are actionable and meaningful.
A combination of laboratory metrics, financial outcomes, and study team satisfaction ratings typically forms the core of a comprehensive central laboratory services scorecard. These categories encompass the performance expectations of the various interested parties within the sponsor organization. Performance metrics are identified and categorized on the premise that they will drive process improvement; in practice, however, outcomes are analyzed far more routinely than processes. A well-balanced scorecard should incorporate both process-based and outcome-based metrics.
While financial outcomes are bottom-line oriented, they are not particularly useful in driving rapid change, nor do they routinely affect productivity. The value of financial metrics is to provide information about the actual "burn rate" of the project budget in terms of service fees, pass-through costs, and other expenses.
Study team satisfaction ratings are becoming more common on scorecards because they offer easily quantifiable metrics that reveal trends in the sponsor–supplier relationship. Depending on the methodology used to obtain customer feedback, a detailed view of functional areas can be highlighted to uncover specific opportunities to increase efficiency and improve outcomes. This information can be used both to drive short-term changes during the conduct of a study and to illuminate issues that require long-term process enhancements.
Well-designed central laboratory performance metric categories essentially reflect the scope of a central laboratory's responsibilities within the clinical trial process. There are three general categories of scorecard metrics: Study Start, Study Management, and Study Close. Grouping performance metrics into these categories both differentiates between functional areas and provides a means of driving process improvements. The categories comprise fundamental processes and critical outcomes, many of which are project specific. In practice, grouping project-specific metrics under these categories is the first step toward identifying a set of common benchmarks for use across the central laboratory industry. Outcome-based metrics are considered by industry experts to be common to central laboratories, e.g., number of kits shipped, number of demographic holds, turnaround time (TAT) of specimen results, and time from last patient/last visit (LP/LV) to database lock. Because performance metrics must be actionable in order to drive change, they must focus on processes and standard operating procedures (SOPs), which vary from company to company.
Performance metrics that pinpoint operational processes can drive efficiencies and improve productivity. These process-based metrics uncover issues that can sometimes be resolved by modifying SOPs, increasing training, or allocating additional resources to meet study timelines and contractual obligations. For example, to measure improvement over time, a metric that measures how a patient specimen is handled is far more valuable than one that measures only the outcome, i.e., the number of specimens handled.
Within each category there are specific processes and outcomes to be analyzed that reflect progress toward achieving study goals. The Study Start processes are arguably the most important to a clinical trial because of their direct impact on project timelines, which in turn significantly affect the overall project budget. The activities surrounding communication, setting expectations, and establishing project timelines are therefore critical. The number of change orders (COs) made to the original, approved scope of work (SOW) is a key indicator of whether the initial communications were well managed. Another example is adherence to the study timelines that govern getting initial test supplies to the study sites, programming the study database, and validating the database. The outcomes associated with each of these processes are on-time completion of the task and the number of days taken to complete it.
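As a minimal illustration, the sketch below derives these Study Start outcomes from planned versus actual milestone dates and a running count of change orders. The milestone names, record layout, and figures are assumptions made for the example, not a standard central laboratory data model.

```python
# Hypothetical sketch: Study Start outcome metrics (on-time completion, days late,
# change orders against the original SOW). All names and dates are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class Milestone:
    name: str
    planned: date   # completion date agreed in the scope of work
    actual: date    # date the task was actually completed

def study_start_outcomes(milestones, change_orders):
    """Summarize Study Start outcomes for a scorecard."""
    rows = []
    for m in milestones:
        rows.append({
            "milestone": m.name,
            "days_late": max((m.actual - m.planned).days, 0),
            "on_time": m.actual <= m.planned,
        })
    return {
        "milestones": rows,
        "pct_on_time": 100 * sum(r["on_time"] for r in rows) / len(rows),
        "change_orders": change_orders,  # count against the original, approved SOW
    }

milestones = [
    Milestone("Initial supplies to sites", date(2005, 3, 1), date(2005, 3, 4)),
    Milestone("Study database programmed", date(2005, 3, 15), date(2005, 3, 15)),
    Milestone("Study database validated", date(2005, 3, 22), date(2005, 3, 21)),
]
print(study_start_outcomes(milestones, change_orders=2))
```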
Once a study has been initiated and sites are enrolling patients, Study Management performance metrics are tracked and analyzed for the duration of the study, which can run for months or years. Performance metrics collected during this phase have the greatest impact on driving process improvements, provided the metrics selected are actionable and can positively affect study outcomes. For example, once sites receive their initial supplies, the processes that control resupply and getting specimens to the laboratory become an ongoing priority for the project managers and coordinators handling materials and logistics. Shipping specimens to the central laboratory for testing in accordance with project-specific requirements is only the first step in a series of interrelated Study Management processes and outcomes. Other well-known Study Management performance metrics include TAT of laboratory results, time to report results to sites, percent of demographic holds, and percent of data corrections. The outcomes associated with these processes usually involve the time required to complete the task, although actual counts and percentages are also frequently tracked.
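For example, the sketch below computes two of these outcomes, mean TAT from specimen receipt to reported result and percent of demographic holds, from a handful of invented specimen records. The field layout is an assumption made purely for illustration.

```python
# Hypothetical sketch of Study Management outcome metrics: turnaround time (TAT)
# of results and percent of demographic holds. Records are invented for the example.
from datetime import datetime
from statistics import mean

specimens = [
    # (received at lab,             result reported,              demographic hold?)
    (datetime(2005, 6, 1, 8, 0),   datetime(2005, 6, 2, 9, 30),  False),
    (datetime(2005, 6, 1, 10, 0),  datetime(2005, 6, 3, 11, 0),  True),
    (datetime(2005, 6, 2, 7, 30),  datetime(2005, 6, 2, 22, 0),  False),
]

# TAT measured here as hours from specimen receipt to reported result.
tats_hours = [(reported - received).total_seconds() / 3600
              for received, reported, _ in specimens]

pct_demographic_holds = 100 * sum(hold for _, _, hold in specimens) / len(specimens)

print(f"Mean TAT: {mean(tats_hours):.1f} h")
print(f"Demographic holds: {pct_demographic_holds:.1f}%")
```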
The completion of a study, in terms of central laboratory responsibilities, is usually defined as the period from LP/LV to final database transfer. The processes and outcomes associated with Study Close performance metrics are relatively few. The most common are the activities that occur between LP/LV and database lock, and between database lock and final database transfer. The outcome associated with each of these processes is either the time taken or the number of transfers that occurred.
Figure 1. Common central laboratory performance metrics by study phase.
Figure 1 indicates common central laboratory performance metrics by study phase. Some performance metrics, such as financial measurements and customer satisfaction ratings, are outcomes that transcend all three metric categories because they can be linked to processes in more than one category. Examples of financial outcome metrics include percent of total spend versus budget forecast and deviation from the original project budget due to change orders. Customer satisfaction performance metrics can be very general if the ratings address subjective questions such as "comparison to other central labs"; specific feedback should also be obtained that highlights operational performance in one or more of the three study categories. An example of a customer satisfaction outcome is the number of comments made about a specific area of operations. Figure 2 indicates examples of outcomes used on central laboratory scorecards.
Figure 2. Examples of outcomes used on central laboratory scorecards.
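The two financial outcome metrics named above reduce to simple ratios. The sketch below works through them with invented budget figures, purely for illustration.

```python
# Hypothetical sketch of the financial outcome metrics discussed above: percent of
# total spend versus budget forecast, and deviation from the original project budget
# attributable to change orders. All figures are invented for the example.
original_budget = 1_200_000.00      # approved scope of work
change_order_value = 85_000.00      # cumulative value of approved change orders
forecast_to_date = 640_000.00       # spend forecast for the current period
actual_spend_to_date = 702_500.00   # service fees + pass-through costs to date

pct_spend_vs_forecast = 100 * actual_spend_to_date / forecast_to_date
pct_deviation_from_change_orders = 100 * change_order_value / original_budget

print(f"Spend vs. forecast: {pct_spend_vs_forecast:.1f}%")
print(f"Budget deviation due to change orders: {pct_deviation_from_change_orders:.1f}%")
```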
A clearly defined set of central laboratory performance metrics is essential to selecting and managing a central laboratory service provider. An optimized set of metrics is a valuable asset in managing the sponsor–supplier relationship and can be used to drive efficiencies during the life of the trial by focusing on both outcomes and processes. Measurement of operational processes helps pinpoint areas for improvement, which are in turn measured through study outcomes. The resulting improvements drive long-term efficiencies and have a direct, positive impact on productivity and on managing the cost of research and development. By following the "process versus outcome" model and designing and implementing relevant, meaningful central laboratory performance metrics, sponsor teams can develop well-balanced scorecards that accurately measure central laboratory performance.
Mark M. Engelhart is vice president, sales & marketing, Anthony J. Santicerma, MS, is director of strategic alliance and global marketing, and Jay E. Zinni, MBA, is manager, bids & contracts, all with Quest Diagnostics Clinical Trials, 1201 South Collegeville Road, Collegeville, PA 19426, (610) 454-6542, fax (610) 983-2120, email: Anthony.J.Santicerma@questdiagnostics.com
1. E. Pena, "Making Metrics Matter: The Changing Paradigm of R&D Metrics," PharmaVoice, 8–20 (March 2005).
2. L.D. Fitzsimons, "Stronger Together," R&D Directions, 11 (5) 36 (May 2005).
3. K.A. Getz, "Entering the Realm of Flexible Clinical Trials," Applied Clinical Trials, 14 (6) 44 (June 2005).