Because costs and the value of services vary widely among the stakeholders in clinical trials, there is often disagreement on fair market value.
“Fair market value” (FMV) can be defined as the price in an arm’s length transaction between a willing seller and a willing buyer. This simple concept gets complicated in clinical research, where the sellers (clinical research sites), the buyers (study sponsors and CROs), and the services (study execution) are highly variable. In a given study, the study sponsor and the site probably will not agree on the FMV of the site’s services. Various authors have addressed these complications, but confusion persists.1-7
In the clinical research industry, the concepts of cost and value are often conflated. While cost is often thought to determine FMV, it is value that matters. “If a site moves to expensive new quarters that are more convenient to patients, the value to the study sponsor is not created by the higher rental cost but by the site’s ability to enroll and retain patients more quickly,” says Alethea Wieland, COO and founder of Clinical Research Strategies, a leading valuation consultancy to study sponsors.
“Study sponsors are subject to the Anti-Kickback Statute and other U.S. federal regulations that prohibit paying undue inducements to healthcare providers. In particular, the U.S. government does not want manufacturers of medical products, e.g., pharmaceuticals, to pay excessive prices to healthcare providers for clinical studies as an inducement to prescribe the manufacturer’s products to patients covered by Medicare and other government programs,” says Darshan Kulkarni, principal attorney at the Kulkarni Law Firm.
From the study sponsor’s perspective, including a standard budget template in every clinical trial agreement for a given study is the simplest and safest way to proceed, since it may have to defend that template in an audit by the U.S. Centers for Medicare & Medicaid Services (CMS). Any significant exceptions may invite CMS scrutiny.
Federal regulations do not protect study sponsors from site overcharges. However, charging a study sponsor a lower price for a procedure or assessment than the site charges Medicare would violate the Medicare Best Price Rule.
When a study sponsor prepares the budget template for a study, it may refer to the prices it has paid previously for similar activities in similar studies. It may utilize a commercial database. It may hire a specialized consultant to calculate FMV prices for procedures, assessments, and other activities. Whatever methodology it employs, it is likely to create a single price list applicable to all sites, with only limited, well-defined variations permitted, e.g., for institutional overhead rates. In other words, the resulting price list (budget template) implicitly assumes a commodity market for clinical research services.
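As an illustration only, the sketch below shows one way a sponsor might reduce prices paid in prior, similar studies to a single budget template, with institutional overhead as the only permitted variation. The activities, prices, and 25% overhead rate are hypothetical assumptions, not figures from any actual study.

```python
from statistics import median

# Prices paid for the same activities in prior, similar studies (assumed figures).
prior_paid_prices = {
    "ECG": [105.00, 98.00, 112.00],
    "Blood draw": [42.00, 45.00, 40.00],
}

def build_budget_template(prior_prices, overhead_rate=0.25):
    """Reduce prior paid prices to one base price per activity (the median),
    with institutional overhead as the only permitted variation."""
    template = {}
    for activity, prices in prior_prices.items():
        base = median(prices)
        template[activity] = {
            "base_price": round(base, 2),
            "with_overhead": round(base * (1 + overhead_rate), 2),
        }
    return template

print(build_budget_template(prior_paid_prices))
```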
When a healthcare system establishes its charge master (price list) for regular clinical services, it generally considers its costs, third-party reimbursement rates, and competitor pricing. It may hire consultants with FMV expertise to help with the research and calculations and provide an authoritative stamp of approval. When developing a rate card (price list) for clinical research services, it may use its charge master as is, adjust it for clinical research, or simply base prices on a fixed percentage increase over Medicare rates. Whatever methodology it employs, the resulting rate card implicitly assumes that the same price should be charged for a given service on every study under any conditions. In other words, it assumes that each service exists in a commodity market, with only limited, well-defined variations permitted, e.g., across therapeutic departments.
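The fixed-markup approach can likewise be reduced to a few lines. The sketch below assumes hypothetical Medicare rates and a hypothetical 40% markup; it is not a recommendation, only a way of making the commodity-pricing assumption explicit.

```python
# Research rate card built as a fixed percentage increase over Medicare rates.
medicare_rates = {"MRI": 400.00, "Office visit": 90.00}  # assumed figures

def build_rate_card(rates, markup=0.40):
    """Apply one markup to every service, regardless of study or conditions."""
    return {service: round(rate * (1 + markup), 2) for service, rate in rates.items()}

print(build_rate_card(medicare_rates))  # e.g., {'MRI': 560.0, 'Office visit': 126.0}
```

Nothing in this calculation depends on the study itself, which is exactly the commodity-market assumption described above.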
Both study sponsors and sites thus tend to standardize pricing based on FMV principles, ignoring variations in the value of a given service for a given study at a given point in time. For example, two study sponsors may be racing to market with competitive medications. A given site may have only a few patients with the relevant, rare medical condition. Everything else being equal, preferential access to those patients carries a very high FMV. A similar situation exists when a study sponsor finds itself in need of “rescue” sites that can quickly and reliably enroll the last few patients needed to bring a study to a timely conclusion. Thus, there are clearly situations in which study execution services are not just commodities. In such situations, the standardized-pricing mentalities of either the study sponsor or the site may stand in the way of correct FMV pricing. The parties on both sides of the transaction should have the flexibility to price accordingly.
Not only does FMV-based higher pricing attract and motivate sites that provide higher-value services, but it may also prompt sites to increase their capacity to provide those services, e.g., by securing expensive equipment. Increasing payments for study rescue would encourage more sites to develop that capacity. With more sites set up for study rescue, a study sponsor could start a study with fewer sites than previously, knowing that it can easily bring on rescue sites if needed. (Peaker plants provide a similar service in the power generation industry.)
Nevertheless, study sponsors often pay sites widely varying prices based on institutional overhead rates, for example. In theory, study sponsors can defend paying high overhead rates based on the extra value a high-profile institution offers. An academic medical center may have specialized equipment, a compounding pharmacy, a rare patient population, a renowned key opinion leader, or simply a lustrous reputation that the study sponsor wants to associate with its study. However, if the study does not need the specialized equipment, the compounding pharmacy, the rare patient population, the renowned key opinion leader, or the lustrous reputation, the study sponsor would gain no value from them and should not pay extra for them. Similarly, if you buy an automobile for driving around Phoenix, Arizona, snow tires may provide no value to you, so you would not want to pay extra for them. You may not want them at all.
A site in an expensive city like San Francisco does not provide value simply because San Francisco is an expensive city. It provides value when a study requires regional diversity, including expensive cities like San Francisco, for statistical, patient population, or other reasons that genuinely benefit the study.
An independent site may not have specialized equipment, a compounding pharmacy, a rare patient population, a renowned key opinion leader, or a lustrous reputation that the study sponsor wants to associate with its study. Nevertheless, its rate card may be substantially higher than the industry average based on the value it provides, e.g., reliably rapid enrollment, high retention and adherence, clean data, and impeccable GCP compliance. Because of its consistently high performance, the site may charge premium prices—what the market considers its FMV—that are much higher than a given study sponsor’s value-based budget template would anticipate. The site may run at capacity because other study sponsors—the market—are satisfied that the high value of its services justifies its high prices.
Study sponsors may want to employ the institution with a high overhead rate or the high-performing independent site, but they do not relish the task of defending the exception to a CMS auditor. As mentioned above, study budget templates must be defensible. Any exceptions must also be defensible. The key point is that, to be safe, the study sponsor must have a valid, consistent, and well-documented process for making FMV exceptions. The study sponsor should document how it reconciles the institution’s high overhead rate to the benefits provided. It should not simply take the high-priced independent site’s word that its prices are supported by the market; it should demand evidence from the site or do its own due diligence.
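What such documentation might capture can be sketched as a simple record. The fields, site name, and figures below are hypothetical; they merely illustrate the kind of evidence a sponsor could keep on file for each exception.

```python
# Hypothetical sketch: a minimal record for documenting an FMV exception,
# so the deviation from the standard budget template is defensible later.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FMVException:
    site: str
    service: str
    template_price: float        # price in the standard budget template
    approved_price: float        # price actually agreed with the site
    justification: str           # the specific value that supports the premium
    evidence: List[str] = field(default_factory=list)  # metrics, quotes, reports

record = FMVException(
    site="Example Independent Site",        # hypothetical
    service="Per-patient study execution",
    template_price=8500.00,
    approved_price=10200.00,
    justification="Historically enrolls well above the median rate with high retention",
    evidence=["Prior-study enrollment report", "Rate quotes from other sponsors"],
)
```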
There is no FMV rationale for the study sponsor to pay for irrelevant high performance. For example, the ability to enroll 20 patients provides no value to the study sponsor if the statistical analysis allows only five patients from each site.
If a study sponsor is confident that it can achieve its study goals with average sites, a high-performing, high-priced site may appear to provide no extra value. However, all costs and benefits must be considered. For example, if a site enrolling 20 patients eliminates the cost of selecting, starting up, and monitoring three 5-patient sites, that site provides real, measurable value to the study sponsor. If the high-enrolling (or high-retaining) site helps the study complete a week earlier, that saves a week of study overhead costs. If the saved week helps the study sponsor beat a competitor to market, the value could be very high indeed.
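To make the arithmetic concrete, here is a minimal sketch in which the per-site startup cost, the weekly overhead, and the premium charged by the high-enrolling site are all illustrative assumptions.

```python
# Hypothetical cost-benefit arithmetic: one 20-patient site versus four 5-patient sites.
site_startup_cost = 30000.00    # selection, startup, and monitoring per site (assumed)
weekly_overhead = 50000.00      # study-wide overhead per week (assumed)
premium_per_patient = 500.00    # extra price the high-enrolling site charges (assumed)

extra_cost = 20 * premium_per_patient                  # premium paid to the one big site
savings = 3 * site_startup_cost + 1 * weekly_overhead  # three sites avoided, one week saved

print(f"Premium paid: ${extra_cost:,.0f}")   # Premium paid: $10,000
print(f"Value gained: ${savings:,.0f}")      # Value gained: $140,000
```

Under these assumptions, the value gained dwarfs the premium paid, before counting any value from reaching the market sooner.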
A single high-performing site may not enable a study to complete a week or even a day earlier, but a policy of working with and paying for high-performing sites certainly can.
Clinical studies are exercises in uncertainty. A highly reliable site that reduces this uncertainty provides significant value to the study sponsor. As an analogy, a reliable automobile provides the driver with far more value than the mere avoidance of repair costs; a roadside breakdown could have serious consequences.
Consider the unfortunate study manager who must guarantee that a study will be completed faster than normal. Willpower will probably not get the job done. This study will need superior, highly motivated sites ready to start immediately. FMV pricing will have to reflect the required high level of performance.
Consider the unfortunate study manager with an unappealing study. The drug may appear to provide marginal benefits, the safety profile may be problematic, the protocol may be very complex, or the study sponsor itself may have a poor reputation. Under these conditions, good sites will likely prefer to apply their resources to more appealing studies. If the study sponsor cannot remedy the unattractive aspects of the study, it can, of course, pay a premium. In other words, the FMV for an unappealing study may very well be higher than an industry database would suggest.
Consider the fortunate study manager with an extremely appealing study. The study manager can pick and choose sites, almost regardless of pricing. CMS is not concerned about pricing that falls below normal FMV.
Think of sites as square pegs and budget templates as round holes. You can always fit a square peg in a round hole if the peg is small enough, but who wants to run a clinical study with only small pegs?
FMV pricing is not a simple, one-size-fits-all proposition. In some cases, an industry-standard budget template may be sufficient—all the pegs will fit in the standard holes. However, in some cases, all the holes will need to be larger than normal. In some cases, you will need large pegs that do not fit in standard holes, so those holes must be enlarged. In such cases, defensibility requires a valid, consistent, and well-documented process for significant, value-based deviations from standard FMV pricing.
Study sponsors can make a choice: They can explain to high-performing sites why their high performance does not justify high prices. Or, they can take the risk of having to explain to average sites why they are not receiving the same high prices that high-performing sites receive. The best way out of this conundrum is to develop a valid, consistent, and well-documented process for making FMV exceptions. There may still be some awkward discussions, but they will be with average (and worse) sites, which may be motivated to improve their performance. And, most importantly, high-performing sites will be highly motivated to apply their capabilities to the sponsor’s studies.
Norman M. Goldfarb is executive director of the Site Council and executive director of the Clinical Research Interoperability Standards Initiative (CRISI). Previously, he was chief collaboration officer of WCG Clinical, founded and led the MAGI conferences, and published the Journal of Clinical Research Best Practices.