Quantitative imaging offers a clear window into drug effects and can provide safety and efficacy evidence from smaller, shorter trials.
Over the past decade, quantitative medical imaging has played an important role in an increasing proportion of clinical trials. This is particularly true in areas where both the disease and the effects of treatment manifest as structural or functional changes that can be directly imaged. Imaging is, of course, less useful in areas such as pain management or psychological illness, where it offers little insight into the underlying biological processes, and in areas where convenient chemical biomarkers are already available.
The reasons for including a quantitative imaging component in a particular trial are varied, but in general investigators have been attracted by the potential to provide a window into the direct biological effects of a drug, augmenting or replacing both subjective radiological interpretation and downstream endpoints such as subjective pain scores or clinical assessments. The driving force behind this shift is the increasing need to demonstrate mechanism of action and to assess both safety and efficacy with small numbers of experimental subjects in the early stages of drug development. Achieving this requires an endpoint with both tight linkage to treatment effects and low measurement variability, and in many cases quantitative imaging meets both requirements.
One important example of an area where quantitative imaging can provide vital insight is the use of anti-angiogenic or vascular disruptive agents in the treatment of solid tumors. Numerous molecules in these classes are in the development pipelines of a wide range of pharmaceutical and biotechnology companies. Experience with drugs of this type that are already in clinical use, such as Avastin, has shown that the correlation between semiquantitative assessments such as Response Evaluation Criteria in Solid Tumors (RECIST)1 and patient survival is very poor for these compounds.2 This follows from the mechanism of action of anti-angiogenic and vascular disruptive agents, which is to alter the blood supply to the interior of the tumor, thereby either directly inducing cell death or increasing access to the tumor core for a second cytotoxic agent in combination therapy. A typical early effect of treatment is therefore either diffuse cell loss within the tumor or necrosis and cavitation rather than outright shrinkage. Neither of these effects is captured by a RECIST assessment.
A more useful examination of treatment effect in a small trial in such a case might be to combine measurement of tumor radio-density using computerized axial tomography (CT)3 with direct measurement of tumor blood flow and vascular permeability using dynamic contrast-enhanced MRI (DCE-MRI).4,5 These assessments are targeted directly at the compound's mechanism of action and are sufficiently reproducible to permit confident determination of treatment effect with a relatively small sample size.
Osteoarthritis is another area where quantitative imaging can have a major impact on study timelines, cost, and success. The disease is characterized by structural changes in the joints affecting both the cartilage and the bone. Disease progression is typically assessed using subjective pain scoring and/or measurement of joint space from plain film x-ray images. Both of these measures carry high measurement variability, and joint space changes slowly enough that subjects must typically be followed for years before significant changes are seen. Quantitative MRI, however, can directly track cartilage changes such as local volume loss or the development of focal defects, as well as bone marrow edema and the development of osteophytes.6,7 Additionally, using a technique known as delayed gadolinium-enhanced MRI of cartilage (dGEMRIC), it is possible to track changes in the structural integrity of cartilage that occur long before the onset of joint space loss.8,9 Taken together, these techniques have the potential to greatly reduce both the size and duration of a clinical trial designed to assess the mechanism of action or efficacy of a disease-modifying osteoarthritis drug (DMOAD): from several hundred to a thousand or more subjects followed for two to five years down to 100 or fewer subjects followed for 12–24 months.
These examples are not meant to suggest that a Phase III trial could be carried out using these techniques with 50 or 100 subjects. However, quantitative imaging may permit the decision point on mechanism of action and efficacy to be moved from Phase II/III back to Phase I in many cases, allowing a pharmaceutical company to reallocate resources from ineffective compounds to effective ones far more efficiently. The Tufts Center for the Study of Drug Development, in its Impact Report (Volume 4, Number 5, September/October 2002), estimated that it costs $808 million, on average, to develop and win market approval for a new drug in the United States. The report stated that better preclinical screens, which would increase success rates from the current 21.5% to one in three, could reduce capitalized total cost per approved drug by $242 million. These are exactly the types of changes that are needed if drug development costs are to be brought under control, and the increasing incorporation of quantitative imaging into both early- and late-phase studies is one critical tool for bringing them about.
While quantitative imaging has the potential to provide significant savings to the drug development process in terms of both time and total cost, it is important to understand that the imaging and analysis techniques required can be complex, and the value provided will be in direct proportion to the quality of the data and analysis. Quality in the context of a clinical trial is expressed primarily as the measurement coefficient of variation (CV). This is defined as the standard deviation of repeated measurements divided by their mean, and it represents the amount of measurement variation expected in the absence of any true biological change.
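The calculation itself is simple; the following minimal sketch (in Python, with hypothetical scan–rescan values) shows how a measurement CV would be computed from repeated measurements of a quantity that has not truly changed.

```python
import statistics

def coefficient_of_variation(measurements):
    """CV = standard deviation of repeated measurements divided by their mean."""
    return statistics.stdev(measurements) / statistics.mean(measurements)

# Hypothetical scan-rescan tumor volume measurements (mL) for one subject,
# with no intervening treatment, so the spread reflects measurement noise only.
repeat_scans = [41.8, 43.1, 42.5, 44.0]
print(f"Measurement CV: {coefficient_of_variation(repeat_scans):.1%}")
```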
To see why measurement CV is so important, consider the case of a cardiovascular clinical trial for a compound that is expected to reduce plaque in the carotid arteries. Plaque burden is to be measured using ultrasound imaging at baseline and again after six months of treatment. The control group is expected to show a mean increase in plaque burden of 2% ± 2%. The treatment group is expected to show a decrease in plaque burden of similar magnitude and variability. With perfect measurement (i.e., CV = 0), a statistically significant result (p < .05) can be achieved with only seven subjects per arm. Unfortunately, this is generally not achievable. If the measurements are made using a well-controlled semi-automated analysis system (CV = 4%), the number of subjects required for statistical significance increases to 28 per arm, a reasonable size for a Phase I trial. If the measurements are made manually at the imaging site (CV = 16%), the number required increases to 353 per arm. The last option may well be the least expensive on a per-subject basis, but going down this path will clearly lead either to an immensely longer and more costly trial or to a failure to achieve a statistically significant result.
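The power and variance assumptions behind these figures are not stated, but under one plausible set of assumptions (a two-sided two-sample t-test at α = 0.05 with 90% power, and measurement error adding in quadrature to biological variability) a sketch along the following lines reproduces the general pattern, if not the exact numbers quoted above.

```python
from math import ceil, sqrt
from statsmodels.stats.power import TTestIndPower

def subjects_per_arm(effect, biological_sd, measurement_cv,
                     alpha=0.05, power=0.9):
    """Approximate subjects per arm for a two-sample t-test on percent change
    in plaque burden, with measurement error (CV, in the same percent units)
    assumed to add in quadrature to biological variability."""
    total_sd = sqrt(biological_sd ** 2 + measurement_cv ** 2)
    d = effect / total_sd  # standardized effect size (Cohen's d)
    return ceil(TTestIndPower().solve_power(effect_size=d,
                                            alpha=alpha, power=power))

# Control arm: +2% +/- 2%; treatment arm: -2% +/- 2%  ->  a 4-point difference.
for cv in (0.0, 4.0, 16.0):
    n = subjects_per_arm(effect=4.0, biological_sd=2.0, measurement_cv=cv)
    print(f"measurement CV {cv:4.0f}%  ->  ~{n} subjects per arm")
```

The qualitative lesson is independent of the exact assumptions: required sample size grows roughly with the square of the total variability, so even modest increases in measurement CV inflate trial size dramatically.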
There are a number of points in the image acquisition and analysis chain where measurement variability can be introduced. It is vital for a trial incorporating imaging to address them all. As an example, consider an oncology trial incorporating DCE-MRI to measure treatment-induced changes in tumor vasculature. This technique involves repeated MR imaging, typically at 5- to 10-second intervals, over a period of several minutes, during which the subject is given an injection of a gadolinium-based contrast agent. Because gadolinium is a paramagnetic element, local gadolinium concentration at a particular time can be inferred from observed changes in MR signal from baseline. A linear system model of the vascular bed is then used to calculate biologically relevant parameters such as blood flow and vascular permeability.
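As a rough illustration of the kind of linear-system (compartmental) model involved, the standard Tofts formulation4 expresses tissue tracer concentration as the convolution of the arterial plasma concentration with an exponential kernel governed by the transfer constant Ktrans and the extravascular extracellular volume fraction ve. The sketch below uses a toy arterial input function and illustrative parameter values; it is not the analysis pipeline used in the studies described here.

```python
import numpy as np

def tofts_tissue_curve(t, cp, ktrans, ve):
    """Standard Tofts model: tissue concentration Ct(t) is the convolution of
    the arterial input function Cp(t) with Ktrans * exp(-(Ktrans / ve) * t)."""
    dt = t[1] - t[0]
    kernel = ktrans * np.exp(-(ktrans / ve) * t)
    return np.convolve(cp, kernel)[:len(t)] * dt

# Hypothetical acquisition: one image every 5 seconds for 6 minutes.
t = np.arange(0, 360.0, 5.0)                    # seconds
cp = 5.0 * (t / 30.0) * np.exp(-t / 30.0)       # toy bolus-shaped AIF (mM)
ct = tofts_tissue_curve(t, cp, ktrans=0.25 / 60.0, ve=0.3)  # Ktrans: 0.25/min
print(f"Peak tissue concentration: {ct.max():.3f} mM")
```

In an actual analysis the relationship is inverted: Ktrans and ve are estimated for each voxel or region by fitting curves of this form to the measured tissue and arterial concentration data.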
This analysis assumes first and foremost a spatially invariant and well-defined relationship between gadolinium concentration and signal change. However, problems with the MR coils, pulse sequence, or magnetic field homogeneity can render this assumption, and therefore the entire analysis, invalid (see Figure 1). DCE-MRI analysis is also vulnerable to subject motion during the imaging session. Respiratory motion in the chest and abdomen is particularly troublesome, and typical solutions such as navigator pulses and respiratory gating are ruled out by the need for rapid imaging. A DCE-MRI analysis package therefore requires a robust image co-registration capability. Perhaps most significantly, DCE-MRI analysis requires the estimation of an arterial input function (AIF), defined as the tracer time-concentration curve in arterial plasma. This presents a problem because flowing blood corrupts the apparent MR signal, so it is typically not sufficient to simply observe a region in a large artery in order to estimate the AIF. Several groups have developed automated methods for calculating an accurate AIF from standard DCE-MRI data.10–12 These or similar methods are critical to achieving minimal measurement variability in DCE-MRI analysis.
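The published AIF-detection methods10–12 differ considerably in detail (one, for example, is based on fuzzy clustering of voxel time courses). Purely as a sketch of the general idea, an automated approach might rank voxels by how arterial their enhancement curves look (tall, early peaks) and average the best candidates, rather than relying on a hand-drawn arterial region; the function below is illustrative only and is not the algorithm of any of the cited papers.

```python
import numpy as np

def estimate_aif(curves, times, n_select=20):
    """Crude automated AIF selection: keep the earliest-enhancing voxels,
    then average the n_select with the tallest peaks. Real methods are
    considerably more sophisticated; this only sketches the concept."""
    peaks = curves.max(axis=1)
    t_peak = times[np.argmax(curves, axis=1)]
    early = np.where(t_peak <= np.percentile(t_peak, 10))[0]  # earliest 10%
    best = early[np.argsort(peaks[early])[-n_select:]]
    return curves[best].mean(axis=0)

# curves: (n_voxels, n_timepoints) array of estimated gadolinium concentration
# over the dynamic series; times: (n_timepoints,) acquisition times in seconds.
```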
Figure 1. Data showing the relationship between MR signal increase and gadolinium concentration from a scan of a DCE-MRI phantom. In this case the relationship should be roughly linear; the data instead indicate a severe problem with spatially varying coil sensitivity for this system.
DCE-MRI has historically been severely limited in its clinical utility by poor reproducibility, even in untreated subjects over very short time frames, with scan–rescan CVs ranging from 18% to 25%.13 However, recent studies undertaken by VirtualScopics in conjunction with MediciNova (San Diego, CA) and others have demonstrated that CVs of 7%–10% are achievable with a properly designed protocol, strict quality controls on the imaging site equipment, and a well-conceived analysis system that addresses the major issues outlined earlier.
The central dilemma faced by an investigator who wishes to incorporate quantitative imaging into a clinical trial is this: Current radiological practice is geared toward subjective interpretation rather than objective quantification. Figure 2 shows a uniformity and linearity phantom scan obtained during study initiation QA of an MRI system that was in clinical use and had passed standard QA procedures. The obvious distortion seen in this scan was not noted by the site radiologists because that level of distortion is simply not relevant to subjective interpretation. However, if the goal of an imaging session is to obtain precise structural measurements, distortion of this sort is completely unacceptable. Moreover, imaging site technologists and radiologists are often not familiar with the image acquisition techniques that are necessary for obtaining quantitative biomarkers, and very few site radiologists are in possession of or trained in using the analysis software packages necessary for extracting biomarkers like cartilage volume, plaque burden, or blood flow from image data. Finally, it is vital in a multisite study to ensure that the quality and consistency of data obtained from the various sites are comparable despite the differences that may exist among sites in scanner type and locally available expertise.
Figure 2. MRI scan of a uniformity and linearity phantom obtained using a well-maintained system (left). MRI scan of the same phantom from a clinical scanner that was in routine use at the site (right).
All of these considerations point to the necessity of applying centralized control to the image acquisition and analysis process for clinical trials if reliable quantitative results are to be obtained. Such control can be applied in a wide variety of ways—by a CRO, an imaging core lab or even an expert group within the sponsor's organization—but it must be applied early in the process and consistently throughout the trial. Specifically, it is necessary to have an imaging protocol designed with the particular requirements of the desired imaging biomarkers in mind. It is necessary to perform a thorough QA check on the scanning systems at all imaging sites, with the understanding that it is not possible to rely on the site's own QA procedures, and to review scanner quality periodically throughout the course of the trial. It is also necessary to provide thorough training to the site technologists and radiologists in the particulars of the study protocol and to ensure that each site is able to implement the designed protocol precisely. Finally, it is necessary to perform the analysis in a robust and consistent way, automating as many aspects as possible to reduce the possibility of measurement variability and human error.
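As one small, hypothetical example of what such scanner QA might involve, a geometric linearity check could compare marker positions detected in a phantom scan against their known layout and flag the system when the worst displacement exceeds a study-defined tolerance. The function and tolerance below are illustrative assumptions, not a description of any specific core lab's procedure.

```python
import numpy as np

def linearity_check(measured_mm, nominal_mm, tolerance_mm=1.0):
    """Compare detected phantom marker positions (in mm) with their nominal
    positions and report the worst geometric displacement. The tolerance is
    illustrative; a real study would derive it from the precision required
    of the imaging biomarker."""
    errors = np.linalg.norm(np.asarray(measured_mm) - np.asarray(nominal_mm),
                            axis=1)
    worst = float(errors.max())
    return worst, worst <= tolerance_mm

# measured_mm / nominal_mm: (n_markers, 3) arrays of marker coordinates, e.g.
# extracted from the uniformity and linearity phantom scan at site initiation.
```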
Introducing an imaging component to a clinical trial without sufficient planning and attention to detail will generally result in a tremendous amount of wasted time and money. However, quantitative imaging has the potential, by improving the researcher's ability to assess efficacy early in the development process, to relieve many of the bottlenecks that currently make drug development such a time-consuming and expensive endeavor.
1. P. Therasse, S. Arbuck et al., "New Guidelines to Evaluate the Response to Treatment in Solid Tumors," JNCI, 92, 205–216 (2000).
2. R. Twombly, "Criticism of Tumor Response Criteria Raises Trial Design Questions," JNCI, 98, 232–234 (2006).
3. R.S. Benjamin, H. Choi et al., "Response of Gastrointestinal Stromal Tumors (GISTs) to Imatinib by Choi Criteria and Response Evaluation Criteria in Solid Tumors (RECIST) as Surrogates for Survival and Time to Progression," 2006 ASCO Annual Meeting Proceedings Part I, 9506 (2006).
4. P. Tofts, "Modeling Tracer Kinetics in Dynamic Gd-DTPA MR Imaging," J Magn Reson Imag, 7, 91–101 (1997).
5. G. Liu, H. Rugo et al., "Dynamic Contrast-Enhanced Magnetic Resonance Imaging as a Pharmacodynamic Measure of Response After Acute Dosing of AG-013736, an Oral Angiogenesis Inhibitor, in Patients with Advanced Solid Tumors: Results from a Phase I Study," J Clin Oncol, 23, 5464–5473 (2005).
6. F. Eckstein, F. Cicuttini et al., "Magnetic Resonance Imaging (MRI) of Articular Cartilage in Knee Osteoarthritis (OA): Morphological Assessment," Osteoarthritis Cartilage, 14, A46–75 (2006).
7. J. Tamez-Pena, M. Barbu-McInnis, S. Totterman, "Knee Cartilage Extraction and Bone-Cartilage Interface Analysis from 3D MRI Data Sets," Proc. SPIE, 5370, 1774–1784 (2004).
8. A. Bashir, M.L. Gray et al., "Nondestructive Imaging of Human Cartilage Glycosaminoglycan Concentration by MRI," Magn Reson Med, 41, 857–865 (1999).
9. D. Burstein, J.H. Velyvis et al., "Protocol Issues for Delayed Gd(DTPA)2- Enhanced MR Imaging (dGEMRIC) for Clinical Evaluation of Cartilage," Magn Reson Med, 45, 36–41 (2001).
10. E. Ashton, T. McShane, J. Evelhoch, "Inter-Operator Variability in Perfusion Assessment of Tumors in MRI Using Automated AIF Detection," LNCS, 3749, 451–458 (2005).
11. M. Rijpkema, J. Kaanders et al., "Method for Quantitative Mapping of Dynamic MRI Contrast Agent Enhancement in Human Tumors," J Magn Reson Imag, 14, 457–463 (2001).
12. K. Murase, K. Kikuchi et al., "Determination of Arterial Input Function Using Fuzzy Clustering for Quantification of Cerebral Blood Flow with Dynamic Susceptibility Contrast-Enhanced MR Imaging," J Magn Reson Imag, 13, 797–806 (2001).
13. S. Galbraith, M. Lodge et al., "Reproducibility of Dynamic Contrast-Enhanced MRI in Human Muscle and Tumours: Comparison of Quantitative and Semi-Quantitative Analysis," NMR Biomed, 15, 132–142 (2002).
Edward A. Ashton is chief scientific officer with VirtualScopics, Inc., 350 Linden Oaks, Rochester, NY 14580, email: ed_ashton@virtualscopics.com