The Site Council Value Calculator: A New Tool for Optimizing Site Payments

Feature Article

Giving sites and sponsors an effective tool for principled negotiations.

Norman M. Goldfarb, executive director of the Site Council and executive director of the Clinical Research Interoperability Standards Initiative (CRISI)

Study sponsors want clinical research sites to enroll patients and generate high-quality data in a safe, ethical and timely manner. Study sponsors expect to pay fair market value (FMV) for these services.1-8 With this intention, they create budget templates based on their own experience, commercial databases, advice from consultants who specialize in fair market valuations, current market conditions, and other factors. A major problem with this methodology is that it treats sites largely as commodities, which they clearly are not. It does not account for significant variations in site performance—it generates budget templates that are lower than high-performing sites deserve and higher than low-performing sites deserve. When a typical study sponsor negotiates a study budget for a typical study with a typical site under typical market conditions, this methodology may work well. However, many sites are not typical, many studies are not typical, many study sponsors are not typical, and market conditions are often not typical.

Clinical research sites expect to be paid what they consider FMV for their services. With this expectation, they create price lists (“rate cards”) based on experience, costs, the competitive landscape, and other factors. (Institutional sites often employ healthcare consultants who specialize in fair market valuations for their clinical (not research) services.) These rate cards—based on the site’s assessment of its FMV—often do not align well with the study sponsor’s idea of FMV.

A common FMV problem in study budgets is conflating cost with value. When you are shopping for a product for your own personal use, do you care more about the supplier’s costs or the value of the product to you? If you see similar products on sale for different prices, do you ever buy the more expensive product without first asking what justifies the higher price? You may not need a good reason, but you do need a reason.

A hospital in New York City does not deliver high value to a study sponsor because healthcare costs are high in New York City. It may deliver high value because it has unique access to the patients the study needs, the investigator is a key opinion leader, or other reasons.

Imagine an oversimplified scenario in which a study sponsor is launching a new study. Based on its past experience, it expects 50% of the study participants to drop out and generate no useful data. It develops its budget template accordingly. Now imagine a site that, based on its past experience, can guarantee the study sponsor 100% retention. The value of that site to the sponsor may be double that of a typical site, and the parties may agree on a budget that is double the standard amount.
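
To make the arithmetic concrete, here is a minimal sketch in Python; the dollar figure and variable names are purely illustrative assumptions, not part of any actual budget.

```python
# Illustrative arithmetic for the retention scenario above (hypothetical numbers).
# Assume the sponsor's value per enrolled participant scales with the share of
# participants who complete the study and generate useful data.

value_per_completer = 20_000        # hypothetical value of one completed participant
typical_retention = 0.50            # sponsor's planning assumption: 50% drop out
this_site_retention = 1.00          # this site's track record: 100% retention

value_typical_site = value_per_completer * typical_retention    # 10,000 per enrollee
value_this_site = value_per_completer * this_site_retention     # 20,000 per enrollee

multiple = value_this_site / value_typical_site
print(f"Value per enrolled participant vs. a typical site: {multiple:.1f}x")  # 2.0x
```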

Now imagine a realistic scenario in which a site offers varying degrees of value on many dimensions. For example, its retention rate may be excellent, its data entry may be above average, the diversity of its patient population may be below average, and its expertise in the study’s therapeutic area may be limited. What will an FMV-based negotiation look like in this case?

Complicating the matter, the parties are unlikely to agree on how to select, define, measure and value the various elements of site value for a specific study. As a result, the parties can discuss the site's strengths and weaknesses for the study at hand in only general terms that do not lend themselves to any definitive impact on the study budget.

The end result is that high-performing sites—those sites most important to study sponsors—often conclude that negotiated study budgets do not reflect the site’s expected contributions to a study. The study sponsor’s claim that its hands are tied because of FMV considerations sounds to the site like a bargaining tactic. “The term ‘FMV’ has fallen into disrepute because, from the site’s perspective, FMV is why high-performing sites should get the premium price we deserve,” said Kurt Mussina, CEO of Paradigm Clinical Research.

In reality, the study sponsor’s hands may very well be tied. It knows the study would benefit from reallocating its funds from low-performing sites that deserve less than the standard budget to high-performing sites that deserve more than the standard budget. It fully appreciates the value of high-performance sites for its study, but it simply does not have the tools to satisfy its lawyers that significant exceptions will comply with the Anti-Kickback Statute and other US federal regulations—or to explain to other sites why their budgets are relatively lower. “We are constantly asking our study sponsors how we can improve our performance. We need a framework with explicit attributes of site performance that can justify variations in site payments more objectively,” said Carlos Orantes, president & CEO of Alcanza Clinical Research.

The solution to this conundrum is to give both parties the tools to conduct principled negotiations—not just bargaining sessions—based on quantitative measures of site performance.

The Appendix explains in more detail how markets work in general and the implications for the clinical research site services market.

Clinical research is not a commodity

In commodity markets, e.g., for crude oil, every barrel carries the same value because every barrel is, for practical purposes, identical, i.e., homogeneous. The commodity market establishes the FMV of crude oil. However, there can be many different types and grades of a commodity, each with its own price (FMV). For example, 13 different types of Texas crude oil each have their own price because each delivers a different value to buyers.9

In labor markets, people with different experience, expertise, certifications, etc., earn different wages. For example, in the United States, an apprentice electrician earns an average of $41,925 per year, a journeyman electrician earns an average of $65,475 per year, and a master electrician earns an average of $73,987 per year.10 Master electricians earn 76% more than apprentice electricians because they deliver more value. Electrical contractors count on a master electrician’s work to be more efficient, more reliable, and of higher quality than that of an apprentice electrician, so they can win more jobs at higher profit margins from general contractors. General contractors benefit because they can complete projects more quickly, with higher quality, and with less schedule risk, not to mention that the buildings are less likely to burn down. As with electricians, site performance varies, so FMV should also reflect differential value across the entire range of performance and its impact on a study.

Clinical research sites are highly heterogeneous, the opposite of homogeneous. When a clinical research site proposes a relatively high budget for a study because of the high value it expects to deliver, that high value is justified not only by the speed and quality of the site’s work but also by larger implications. While payments to sites constitute only a small fraction of the total cost of a study, a problematic site’s performance can impact the entire cost. High-performing sites reduce the cost, risk and timelines of clinical research studies. Every day that a clinical study continues is one more day of payments to the study sponsor’s project team, one more day in getting to market, and one less day of patent protection. Every error a high-performing site avoids is one less problem that could delay the study, reduce the quality of the data, or increase the regulatory risk. These problems can add up.

The Site Council Value Calculator

The Site Council is tackling the FMV problem with a new tool: the Site Council Value Calculator. This tool assesses site performance across 10 dimensions, each of which can be prioritized and scored.

Preparation

Before the Calculator can be used for a study, it must be prepared for that study.

The first step in calculating a site's value to a study is to determine which dimensions matter the most. (See Table 1.)

Table 1. Performance dimensions

The second step is to set the priority (weight) for each dimension of performance. For example, safety should be a relatively high priority in a potentially dangerous study, while speed should be a relatively high priority in a study with a short timeline.

Three core performance dimensions—productivity, quality and safety—are more important than the others because a low score on any of these dimensions curtails a site's possible overall value, regardless of its performance on other dimensions. Therefore, these three dimensions should carry more weight than the other dimensions. However, every dimension matters to some extent. For example, low-enrolling sites can be very attractive if that enrollment is reliable.

The third step is to determine the minimum acceptable score for each dimension. Productivity, quality and safety should have relatively high minimum scores because they are essential. The minimum score for safety may be nine, while the minimum score for speed may be only five. As with weights, minimum scores can vary across studies.

Performance dimensions cannot all have the same priority; setting them all at the same level means that none of them has priority. This uniformity encourages people to be influenced by their own (perhaps implicit) priorities and allows priorities to drift unintentionally based on circumstances. Setting priorities is always a balancing act. For example, it is easy to say that safety should never be compromised, but the only way to achieve absolute safety is to enroll zero patients. The only way to achieve absolute speed is to ignore quality.

The leadership dimension of performance is an exception. Key opinion leaders are key opinion leaders regardless of their other attributes. Also, leadership may be a very low priority for all but one site in a study, but for that site, it is the top priority.

The fourth step is to determine how to adjust a site’s budget proposal based on its expected performance. The study sponsor may, for example, offer a site that is expected to perform at a minimally acceptable level the standard budget, a good site a 20% premium, a very good site a 50% premium and an excellent site a 100% premium. A 100% premium may sound excessive, but in current practice, it is not uncommon for top sites.
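
As a minimal sketch of how the four preparation steps could fit together, consider the following Python fragment; the dimension names, weights, minimum scores, and premium tiers are illustrative assumptions, not the Calculator’s actual settings.

```python
# Hypothetical sketch of the preparation steps. All names and numbers are
# illustrative; a real study would set its own dimensions, weights and tiers.

weights = {            # step 2: priority (weight) per dimension
    "productivity": 3.0, "quality": 3.0, "safety": 3.0,
    "speed": 1.5, "retention": 1.5, "diversity": 1.0,
}
minimum_scores = {     # step 3: minimum acceptable score (0-10) per dimension
    "productivity": 7, "quality": 7, "safety": 9,
    "speed": 5, "retention": 5, "diversity": 3,
}
premium_tiers = [      # step 4: composite score threshold -> budget premium
    (9.0, 1.00),       # excellent site: 100% premium
    (8.0, 0.50),       # very good site: 50% premium
    (7.0, 0.20),       # good site: 20% premium
    (0.0, 0.00),       # minimally acceptable site: standard budget
]

def site_premium(scores: dict[str, float]) -> float | None:
    """Return the budget premium for a site, or None if it misses any minimum."""
    if any(scores[dim] < minimum_scores[dim] for dim in weights):
        return None                                   # disqualified
    composite = sum(weights[dim] * scores[dim] for dim in weights) / sum(weights.values())
    for threshold, premium in premium_tiers:
        if composite >= threshold:
            return premium
    return 0.0

example_site = {"productivity": 9, "quality": 8, "safety": 9,
                "speed": 7, "retention": 10, "diversity": 6}
print(site_premium(example_site))   # 0.5 -> a 50% premium on the standard budget
```

In this sketch, a site that misses any minimum score is excluded before the premium is computed, mirroring the disqualification step described below.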

Application

Once the Site Council Value Calculator has been prepared for a study, it can be applied to each site. If a study sponsor is entirely unfamiliar with a site, it may assume that the site will perform in line with similar sites or perhaps a bit lower to be conservative. Since site performance changes over time—hopefully for the better—sponsors can use Bayesian analysis to update a site’s expected performance on each new study.
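
One way such an update might look, as a minimal sketch assuming a single metric (retention) modeled with a Beta prior; the prior, counts, and function name are illustrative assumptions rather than the Calculator’s method.

```python
# Hypothetical Bayesian update for one performance metric (retention rate).
# A Beta prior over the site's retention rate is updated after each study
# with the observed numbers of completers and dropouts.

def update_retention_belief(alpha: float, beta: float,
                            completed: int, dropped: int) -> tuple[float, float]:
    """Beta-Bernoulli update: prior Beta(alpha, beta) plus new study outcomes."""
    return alpha + completed, beta + dropped

# Prior roughly centered on an assumed industry-typical 50% retention.
alpha, beta = 5.0, 5.0

# After a study in which the site retained 28 of 30 participants:
alpha, beta = update_retention_belief(alpha, beta, completed=28, dropped=2)

expected_retention = alpha / (alpha + beta)
print(f"Updated expected retention: {expected_retention:.0%}")  # ~82%
```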

After rejecting any disqualified sites, the study sponsor then determines whether each site deserves a premium to the study’s standard budget and, if so, what that premium should be. The study sponsor can then propose the adjusted budget to each site.

Study sponsors have two choices: They can keep the site’s scores confidential or reveal them to the sites. Keeping the scores confidential offers the advantages of minimizing debate and the possibility of hard feelings. However, it yields an inferior form of interaction: bargaining based on the relative power of the parties. Revealing the scores yields a superior form of interaction: principled negotiation based on the study sponsor’s explicit priorities. A frank discussion of a site’s performance may cause the study sponsor to adjust the site’s scores and give the site constructive guidance for improving its performance. Relationships of trust require open communication, especially on difficult topics. As a study sponsor develops a relationship of trust with a site, it should become more comfortable discussing sensitive issues of performance.

Important implications

Today, sites understand that study sponsors want them to enroll patients and generate high-quality data in a safe, ethical and timely manner. However, this high-level statement does not give sites a detailed understanding of the study sponsor’s priorities, which can vary based on the study sponsor, the therapeutic area, the study design, the study manager, the competitive landscape, and other factors. Without that detail, sites proceed with only a vague understanding of the study sponsor’s priorities, which may serve the study sponsor poorly when a choice must be made between competing priorities. The Site Council Value Calculator not only makes the study sponsor’s priorities explicit; it also communicates them in financial terms that motivate sites to proceed accordingly.

Study sponsors cannot assume that their own people will understand their priorities for a specific study. As a result, they cannot expect their protocols to be designed in accordance with their priorities. They cannot expect their CROs and other solution providers to intuit their priorities. They cannot expect their study leaders—especially new ones—to select the best sites for a specific study and to manage those sites in accordance with their priorities.

Over time, study sponsors can adjust their priorities based on past experience. They can identify the sites that best address their priorities. Sites can focus their attention on improving their performance in high-priority dimensions. They can find study sponsors that best appreciate their performance profile. To some extent, this sorting process already occurs, based, for example, on the ability of various sites to handle more or less challenging study phases.

Study sponsors can use historical Site Council Value Calculator scores in site selection. Sites can use their scores to identify priority areas for improvement and to select studies that match their performance profiles.

Risk in a study is not just a question of quality; there is risk in every dimension of site performance. Risk-based site management should, therefore, consider the study sponsor’s priorities across all performance dimensions and the past and current performance of each site in each dimension. "Health systems must routinely manage and mitigate various risks. The first step is to identify and characterize potential risks. Then, prioritize based on potential significance, prevalence, and/or likelihood. While patient safety and regulatory risks related to clinical research are more obvious, there are other peripheral risks, including financial risks, that should be carefully considered and managed,” said Bishoy Anastasi, senior director, UCLA clinical research finance & strategy.

Regulatory perspectives

US federal laws and regulations do not specify how study sponsors should construct study budgets. However, payments to sites must comply with a thicket of U.S. federal laws and regulations, including the False Claims Act, the Anti-Kickback Statute, the Stark Law, the Civil Monetary Penalties Law, the Beneficiary Inducement Statute, and 21 CFR Part 56. Most of these rules are designed to protect Medicare, Medicaid and other federal healthcare programs from overbilling. State laws and regulations may also apply, as well as professional codes of ethics.11,12

The US Code of Federal Regulations does not explicitly address study sponsor payments to sites. However, 21 CFR Part 56 protects study participants from improper clinical research design and conduct, which may be influenced by study budgets. For example, excessive financial incentives for patient recruitment may motivate sites to enroll ineligible, ill-informed or reluctant patients in studies.11,12

While most regulatory compliance concerns relate to overpayments, underpayments can also have deleterious consequences. For example, inadequate compensation may cause sites to underinvest in quality management systems. It may also cause financially challenged sites to operate in a low-cost, unsafe manner or to improperly enroll or retain patients to earn even inadequate income.

A previous article discussed regulatory considerations pertaining to FMV pricing in more detail.1

Conclusion

A properly constructed study budget should have the following properties:

  • Incorporates the principles of coherence, consistency and transparency.
  • Takes into account current and expected market conditions.
  • Reflects the expected value of each site’s contribution to the study.
  • Provides adequate but not excessive compensation for patient enrollment, retention and support.

The use of FMV concepts in clinical research has been limited by a lack of clarity as to the value that a site brings to a study. As a result, study sponsors and sites often use cost as a crutch to measure value. Basing study budgets on cost rather than value sends the cockeyed message that study sponsors value high cost more than high value. As discussed above, high cost does not mean high value, nor does low cost mean low value. For example, a site with a large population of qualified patients should be able to achieve a given enrollment target at a lower cost than a site with only a few qualified patients. The value is in the enrolled patients, not in the effort to enroll them. Taken to its logical conclusion, cost-based FMV exposes its own fallacy: a site that expends great effort but enrolls no patients at all would deserve an infinite per-patient fee.

FMV is not just about pricing. It is also about efficient resource allocation and communicating what customers value, so suppliers can optimize their services accordingly. For example, the rapid development of COVID-19 vaccines was possible only because study sponsors paid a premium for rapid, consistent, and high-volume performance.

The Site Council Value Calculator facilitates price negotiations based on value, not cost, with manifold important implications. It clarifies study sponsor and CRO priorities and provides a framework for site selection, site improvement programs, and other purposes.

The Site Council Value Calculator will evolve under scrutiny by a wide range of industry participants and with testing in actual use. The author invites readers to suggest clarifications, revisions and other improvements.

Appendix. How markets work

A properly functioning market operates on three fundamental concepts: cost, value and price. Suppliers compete on price, value and other factors. When supply is limited, customers compete on price and other factors for the suppliers that offer the best value.

To stay in business, suppliers generally must sell at prices higher than their costs, and customers generally must buy at prices lower than the value they receive. A transaction results when the two parties agree on a price somewhere between those two numbers. Suppliers care about their costs, and they should also care about value because it sets a ceiling on price. Customers care about value, and they should also care about cost because it sets a floor on price.

Cost becomes a tricky concept when, as is usually the case, it has fixed and variable components. For example, if a supplier’s costs are 100% fixed, the marginal cost of one more unit of product is zero—an impractical price in the long run. In the short term, it can sell below its average cost, but it cannot do so indefinitely.

If a customer offers a price below cost, suppliers can consider opportunity cost, i.e., would another customer pay a higher price for this unit of product? In general, a supplier should seldom sell below its average cost and never below its variable (“marginal”) cost unless it is confident of a bigger payout down the road.
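
These floor-and-ceiling rules can be summarized in a short sketch; the numbers and thresholds below are illustrative assumptions, not guidance for any particular negotiation.

```python
# Illustrative pricing bounds: a viable price sits between the supplier's cost
# (floor) and the customer's value (ceiling). Selling below average cost is
# sustainable only briefly; selling below variable (marginal) cost loses money
# on every unit. All numbers are hypothetical.

def classify_offer(offer: float, variable_cost: float,
                   average_cost: float, customer_value: float) -> str:
    if offer > customer_value:
        return "customer should decline: price exceeds the value received"
    if offer < variable_cost:
        return "supplier should decline: below marginal cost on every unit"
    if offer < average_cost:
        return "supplier can accept only short-term, or for a later payoff"
    return "viable for both parties"

print(classify_offer(offer=900, variable_cost=600, average_cost=1_000, customer_value=1_500))
# -> supplier can accept only short-term, or for a later payoff
```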

Cost and value considerations can be complicated. In clinical research, for example, hospitals often conduct studies for reasons other than revenue, such as patient care, marketing and physician interest. The author is aware of a case in which a hospital paid the study sponsor millions of dollars for the right to conduct the initial study on a revolutionary medical device.

A customer’s attention should be on the value it gets from a product—that is what it is paying for. However, it cannot completely ignore cost because cost may set a floor on price.

There may be indirect considerations. For example, a high price may motivate exceptional performance by sellers, so they can make more highly profitable sales later. A low price may bring to mind the saying, “penny wise, pound foolish.”

Except in a perfect commodity market, different customers need different products with different value propositions, and different suppliers offer different products with different value propositions. A dynamic sorting and re-sorting process occurs as suppliers and customers find the business partners that best meet their overall needs.

Based on this discussion of markets, we can reach the following conclusions about non-commodity markets:

  • Different sellers offer their own product features (e.g., price vs. quality), which may vary depending on the circumstances.
  • Different buyers have their own product feature priorities, which may vary depending on the circumstances.
  • Different sellers and different buyers have their own financial imperatives.
  • Sellers should have good reasons to charge less than their expectation of value and very good reasons to charge less than their expectation of cost.
  • Buyers should have good reasons to pay more than their expectation of value and very good reasons to pay more than they can afford.
  • Within these limits and considering other opportunities, the parties negotiate for their respective advantages.
  • If the parties cannot come to terms, they do not do business together.
  • If a buyer or seller cannot find counterparts that meet their financial and other imperatives, they leave the market.
  • Both parties optimize by finding business partners that best help them achieve their business goals.
  • Finding the best business partners requires a shared understanding of value priorities and capabilities.
  • When sellers clearly understand a buyer’s value priorities, they can tune their performance and focus their capability improvements accordingly.
  • More value is delivered at less cost when the seller’s value capabilities and the buyer’s value priorities match.
  • Sellers who cannot afford to supply the product features wanted by buyers will not survive in the market.
  • Buyers who cannot afford the product features they need cannot survive in the market.

About the author

Norman M. Goldfarb is executive director of the Site Council and executive director of the Clinical Research Interoperability Standards Initiative (CRISI). Previously, he was chief collaboration officer of WCG Clinical, founded and led the MAGI conferences, and published the Journal of Clinical Research Best Practices.

References

  1. “When FMVs Collide: Coming to Terms with Fair Market Value,” Norman M. Goldfarb, Applied Clinical Trials, January 2024, https://www.appliedclinicaltrialsonline.com/view/when-fmvs-collide-coming-to-terms-with-fair-market-value
  2. “Fair Market Value Conundrum: Solutions for Sponsors and Sites,” Andrew Snyder, Applied Clinical Trials, 2014, http://www.appliedclinicaltrialsonline.com/fair-market-value-conundrum-solutions-sponsors-and-sites
  3. “FMV and the Market Failure in Clinical Research,” Norman M. Goldfarb, Journal of Clinical Research Best Practices, July 2016, https://www.elimarsystems.com/Documents/FMV_Market_Failure.pdf
  4. “Physician-Investigator Compensation,” Suzanne Rose, Journal of Clinical Research Best Practices, December 2017, https://www.elimarsystems.com/Documents/FMV_Investigator_Compensation.pdf
  5. The White Guide, Rady A. Johnson and Douglas M. Lankler, Pfizer, 2020, https://www.elimarsystems.com/Documents/FMV_Pfizer_2020White_Guide.pdf
  6. “What is Fair Market Value?” Norman M. Goldfarb, Journal of Clinical Research Best Practices, December 2017, https://www.elimarsystems.com/Documents/FMV_What_FMV.pdf
  7. “When Can a Study Sponsor Pay Different Prices to Different Sites and Not Violate Fair-Market-Value Principles?” Norman M. Goldfarb, Journal of Clinical Research Best Practices, February 2020, https://www.elimarsystems.com/Documents/FMV_What_FMV.pdf
  8. “Why Fair Market Value Is Not One Number,” Norman M. Goldfarb, Journal of Clinical Research Best Practices, January 2019, https://www.elimarsystems.com/Documents/FMV_Physician.pdf
  9. “Oil Price Charts,” Oilprice.com, https://oilprice.com/oil-price-charts/#prices
  10. “Salary in USA 2024,” Talent.com, https://www.talent.com/salary
  11. “Legal and Ethical Considerations for Offering Clinical Trial Recruitment Payments and Enrollment Incentives,” Anna Zhao, Food & Drug Law Institute, undated, https://www.fdli.org/2024/04/legal-and-ethical-considerations-for-offering-clinical-trial-recruitment-payments-and-enrollment-incentives/
  12. “Clinical Fair Market Value: Why it's essential and what methodology to use,” Casey Armstrong, IQVIA blog, 2020, https://www.iqvia.com/blogs/2020/06/clinical-fair-market-value-why-its-essential-and-what-methodology-to-use