Collecting too many metrics can lead to misapplication and misinterpretation.
In the pharmaceutical industry, costs are an increasingly scrutinized component of the clinical development process. Companies are looking to increase profits by getting drugs to market faster and to bolster margins by conducting trials with fewer and fewer resources. Along these lines, many clinical operations groups are collecting metrics as fast as they can in order to get their bearings and improve performance. However, most groups do not have a true understanding of the "why" behind the metrics, and as a result end up collecting far too much data and subsequently misapplying or misinterpreting the resulting information—when they aren't too busy drowning in their own data.

Stepping back, it becomes apparent that, on the whole, an R&D organization's approach to metrics shouldn't differ much from that of any other department, or that of any other industry. In fact, many valuable lessons can be learned by reviewing cases where the use of metrics went horribly wrong elsewhere. By analyzing from the top down why metrics are important and how they should be used to maximize the impact on a company's—not a department's—bottom line, clinical operations and other groups within the industry can save much time and effort while making a difference where it counts: their company's bank account.
In his book "Pharmaceutical Metrics: Measuring and Improving R&D Performance," David Zuckerman discusses his three principles of measurement: (1) metrics measure progress toward our goals; (2) metrics help us prevent failure; and (3) metrics get us out of ruts. These principles handily describe on a tactical level why metrics exist in an R&D environment. Before we get to that tactical level, however, an understanding of the more strategic needs behind the use of metrics must be in place. The first question to ask is, "Why do we need to collect metrics to begin with?" The answer will be rooted not at the departmental level, or even the divisional level—it will ultimately be embedded at the corporate level. Since the overwhelming majority of pharmaceutical and biotech companies are publicly traded (or wish to be), it follows that, like any company, they share the goal of serving shareholders first and foremost.

One issue that clouds this seemingly straightforward proposition is the nature of the industry itself. Many people who work in the industry—particularly those whose jobs are closely tied to clinical research—view the idea of monetary profit as their company's primary goal as distasteful and contrary to the more idealistic notion of helping their fellow man. As a result, many process improvement efforts (and related metrics) are justified not by solid business cases but by qualifications such as "this will make our lives easier." The problem with this approach is that even if everyone agrees that their lives will be made easier, no one seems to know whether the company will actually benefit financially. Helping people and benefiting monetarily are not mutually exclusive; just because money is being made doesn't mean the intentions are any less benevolent.
The financial benefits of an improvement project are not necessarily easy to figure out. For example, many such projects, and the metrics surrounding them, revolve around cycle-time reduction, which raises a few issues. If you do in fact reduce cycle times in one process, does that gain propagate all the way through to releasing the drug to market? According to business management guru Eli Goldratt, "Throughput is the rate at which the system generates money through sales."1 In other words, will the improvement really allow the company to increase throughput and start earning revenue sooner? Even if it does, is it possible to truly estimate the financial impact of that improvement given all of the variables involved in getting a drug approved years down the road? The financial impact of an improvement project centered on operational efficiencies is simpler to estimate—it usually isn't connected to future revenues from drug sales, but rather to reduced operating expenditures and/or increased throughput in the here and now. Even so, it can be difficult to agree on what the net impact of a change might be, or even how to measure it accurately.
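To make this reasoning concrete, consider a rough back-of-the-envelope calculation. The sketch below is illustrative only—the revenue figure, discount rate, and timelines are hypothetical assumptions, not industry benchmarks—but it shows why a cycle-time gain is only worth something if it propagates through to the launch date.

```python
# A rough estimate of what a cycle-time reduction is worth, assuming the
# saved days actually propagate through to an earlier market launch.
# All figures are hypothetical assumptions, not industry benchmarks.

annual_revenue = 500_000_000   # assumed annual revenue once the drug is on the market ($)
days_saved = 30                # assumed cycle-time reduction on the critical path
discount_rate = 0.10           # assumed annual cost of capital
years_to_launch = 5            # assumed time remaining until approval

daily_revenue = annual_revenue / 365
# Revenue pulled forward by the earlier launch, discounted back to today.
present_value = (daily_revenue * days_saved) / (1 + discount_rate) ** years_to_launch

print(f"Present value of {days_saved} days saved: ${present_value:,.0f}")
# If the same 30 days are saved in a study that is not on the critical
# path, the launch date does not move and this value falls toward zero.
```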
In order, then, to maintain alignment with the "ultimate" metric—shareholder value—senior management of pharmaceutical and biotech companies must ensure that all projects, improvement related or otherwise, will ultimately benefit the bottom line over the long run. Considering that the cost of bringing a single drug to market can easily reach $800 million or more over 10-plus years, it is imperative that a sponsor organization have all the metrics it needs to maximize the return on this enormous investment. Any metrics collected for the sake of improvement efforts must satisfy the criterion of adding shareholder value, with the possible exception of projects that are part of a larger strategic initiative and do not have shorter-term payoffs that can be directly measured. Similarly, even operational metrics should align with the corporate—not departmental—bottom line as much as possible. Only cycle times that have a direct impact on last subject out, for example, should be monitored and managed. Many sponsor companies focus on the date of first subject screened or first site initiated as "key" milestones, but isn't it the last subject screened and the last site initiated that really drive the timelines for the rest of the study? And even then, how important (or more specifically, how financially important) is it to worry about these things for a non-critical-path study? We all want our trials to run smoothly and quickly, but the return on investment of making this happen for each and every study needs to be considered if companies are to maximize value for their shareholders.
Many companies still do not seem to derive the benefit from metrics that they should. As mentioned above, a large part of the problem is that they just don't "get" why metrics should be collected in the first place. Looking more closely, there are certain pitfalls that are common throughout the industry, which companies and departments continually tumble into. By being aware of what they are, operations groups will be better able to navigate around them.
The biggest mistake companies make is that they tend to make things far more complicated than they need to be. Specific examples abound, but the problems can be boiled down to three general categories.
Collecting too much data. Overcollection makes it hard to focus and act on the metrics that matter. Albert Einstein once said, "Everything should be made as simple as possible, but not simpler." Following this advice, the first requirement of metrics, as mentioned above, is that all measures should be "actionable." If you're not taking an action (or refraining from one) based on a metric, you don't need it. Moreover, those actions should drive positive changes at some level of the organization. Having too many metrics causes confusion and obscures any clarity of purpose. Employees are charged not only with carrying out practices aimed at improving metrics but also with reporting progress back so executives can see whether those programs are working. If metrics are too complex, the handoffs from goal to implementation to feedback can easily be mishandled. Keep it simple: if an objective can't be expressed in three to five key thoughts, it's probably too complicated. Simplifying drives performance by enabling comprehension.
Companies must also be fully aware of the Data Rich but Information Poor (DRIP) principle. In the 1990s, GM followed several hundred metrics on a monthly basis. "We were measuring everything," according to Jay Wilber, director of Quality Programs at GM. GM executives went through an extensive analysis of the metrics, keeping only those that offered meaningful and actionable information; the manufacturing division eventually narrowed its scorecard down to 30 to 50 metrics.
In another example, the firm Bain & Co. at one time had a large client with revenues on the order of $20 billion. The client's CEO received 6,000 metrics and was somehow expected to create a mental picture of the health of the company from them. Much time was wasted in debating what all the numbers meant. The metrics were later condensed to a single-page dashboard of 25 key performance indicators for the CEO to focus on. In addition, each executive team member received a personal one-page dashboard tailored to their area of oversight.
Do any of these examples ring a bell? While these may be extreme cases, it's not unusual for a clinical operations group to use dozens upon dozens of charts, graphs, and tables to manage its business when a fraction of that number would do.
Poor/misaligned data or lagging metrics. Each company is different (products, market position, vision, etc.) and must decide for itself which metrics, based on which data, will provide a road map to success. For example, Continental Airlines was so focused on cost cutting in the mid-1990s, after emerging from bankruptcy, that it rewarded pilots for saving fuel. Many pilots flew slower, ran behind schedule, and skimped on air conditioning, driving valuable business customers into the arms of competitors. Clearly, Continental's business performance doesn't center on saving fuel; it centers on filling as many planes with passengers as it possibly can. This was a costly lesson for the airline, and one that seems to be repeated on different scales across pharmaceutical R&D organizations. How many clinical operations groups focus on "small picture" metrics such as first subject screened and departmental budget targets? The moral of the Continental story is clear: when it comes to metrics, focus on using the right data by keeping the big picture in mind.
Ideal metrics should provide leading indicators, pair financial and non-financial indicators, and display central tendencies alongside a companion measure of variability. Leading indicators help personnel understand where the business is going (the way a fuel gauge tells you when you'll run out of gas well before you actually do), not just where it's been. In clinical operations, for example, too many reports simply recap cycle times for completed milestones, but during a study, what is the point of knowing only what has already happened? Ideally, reports would center on what needs to happen in order to hit agreed-upon targets. So instead of (or in addition to) cycle times for completed milestones, reports should provide cycle times in the form of "X days and counting" for open items, with alarms for when the days in progress pass a certain point. This focuses users on what needs to be done rather than on what has already happened.
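As a simple illustration of what such a leading-indicator report might look like, the sketch below flags open milestones by days elapsed against a target. The milestone names, start dates, and targets are hypothetical assumptions; a real implementation would pull them from the sponsor's clinical trial management system.

```python
from datetime import date

# A minimal "X days and counting" report for open milestones, with an alarm
# once elapsed days pass the agreed target. The milestone names, start
# dates, and targets below are hypothetical assumptions.
open_milestones = {
    "last site initiated": (date(2024, 1, 15), 45),   # (start date, target days)
    "last subject screened": (date(2024, 2, 1), 90),
}

today = date.today()
for milestone, (started, target_days) in open_milestones.items():
    elapsed = (today - started).days
    status = "ALARM: past target" if elapsed > target_days else "on track"
    print(f"{milestone}: {elapsed} days and counting (target {target_days}) - {status}")
```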
Pairing financial (bottom-line) and non-financial indicators is important in order to ensure that a company's action, based on a particular metric, is (literally) paying off. It is clear from the earlier example that Continental Airlines failed to do this, and it unintentionally sacrificed significant revenue and business-traveler market share in an effort to cut costs. It certainly wouldn't hurt if clinical operations groups took this approach when implementing process improvement initiatives. How else will they (or management) know if it was really worth it?
If proper sources of data are not used, then having "ideal" metrics in place will not help very much. Completeness and accuracy of data are obviously crucial; equally crucial is where the data comes from. To highlight this last point, a company by the name of InterFirst Mortgage was getting an inaccurate read on "turn time" (the time its staff took to close a loan) because it asked its sales force, not its customers (the mortgage brokers who ultimately created value for the company), for the information. The company later discovered through an online feedback tool that its turn time was as good as, if not better than, its competition's. By adjusting its sources of data and getting a more accurate read from its valued customers, the company avoided focusing on what was really a non-issue and instead directed its efforts at improving efficiencies elsewhere. In another example, the firm eePulse once consulted with a software company that had measured the impact of a merger on employee morale and concluded that it was a smooth transition and a major success in the eyes of employees. Upon closer examination, however, it turned out the post-merger data came from workers who joined the company after the merger; most of the employees present during the merger had left. Regardless of what is being measured, it is apparent that without authoritative and appropriate data sources (especially given the increased use of outsourcing these days), the interpretation of results can easily be compromised.
Focusing on short-sighted metrics. This practice can produce satisfactory results in the near term but ultimately guide the company into severe problems later on. Metrics should in some way address the big picture. Many companies focus on short-term metrics to please someone else (such as Wall Street or a parent company), much to the detriment of the future health of the company. Instead of just saying "our hands are tied, that's what the Street wants," companies should sit down with analysts and get them on the right track. Similarly, clinical operations people should sit down with upper management to discuss why blowing past this year's budget will yield benefits for years to come. A different twist on this tale is the struggle between corporate and departmental performance goals. Maximizing the performance of a department or division is often done at the expense of corporate financial goals. A department at a mid-size pharmaceutical company, this author was once told, used a large number of contractors despite the fact that they were generally about 50 percent more expensive than fully loaded full-time employees (for whom there was more than enough work for years to come). Why? The simple reason: hiring permanent employees came out of the departmental budget, whereas contractors were paid out of the corporate budget. In short, for the department to stay under budget, it had to cost the company more money.
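The arithmetic of that trap is easy to sketch. In the figures below, only the roughly 50 percent contractor premium comes from the example above; the salary, headcount, and duration are hypothetical assumptions.

```python
# Hypothetical illustration of the departmental-vs-corporate budget trap.
# Only the roughly 50 percent contractor premium comes from the example
# above; the salary, headcount, and duration are assumptions.
fte_cost = 200_000                 # assumed fully loaded annual cost of an employee ($)
contractor_cost = fte_cost * 1.5   # "about 50 percent more expensive"
headcount = 10
years = 3

extra_corporate_spend = (contractor_cost - fte_cost) * headcount * years
print(f"Extra cost to the company over {years} years: ${extra_corporate_spend:,.0f}")
# None of this shows up in the departmental budget, so the department looks
# disciplined while the corporation pays $3 million more.
```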
In another example of short-sighted metrics, a major North American cell phone company tracked a key performance indicator (KPI) called "customer churn," which stood at an undesirably high 3 percent. The company spent huge amounts on customer retention through a combination of promotional pricing and marketing activities. However, when the company took a closer look, it found that 75 percent of its customers were unprofitable or only marginally profitable. Once it discovered this, it refocused its marketing resources on the 25 percent of customers it really wanted to keep, and on attracting more like them (but only after wasting large sums on pointless efforts). This brings to mind the old marketing axiom, "there are some customers you just don't want." In the world of clinical operations, one can argue that a parallel (of sorts) exists when dealing with investigator sites…but that is another article unto itself.
In another example from the cell phone industry, many companies were going out of their way to improve on the primary Wall Street metric of new subscribers (which turned into net new subscribers, then cost of acquiring customers, then net additions of profitable customers, etc.). As a result, early on many companies fell into the trap of pumping up new subscribers (through promotions, discounts, etc.) to the disadvantage of their own profitability and cash reserves. Many of these companies, not surprisingly, are now out of business.
It is obvious from these numerous examples that companies need to keep their "eyes on the prize" when defining metrics and setting goals. If it is not clear that what has been set in place will help to incent and guide employees to corporate (not departmental) success over the coming years (not months), then the company should head back to the drawing board.
Now that we know the common mistakes (collecting too much data, using poor or misaligned data, and being short-sighted), we can work on avoiding them through streamlining, proper alignment and use of leading indicators, and a focus on the big picture. How can we apply this knowledge in a clinical operations framework? By looking at four broad categories of clinical operations metrics—quality (Table 1), delivery (Table 2), cycle time (Table 3), and cost (Table 4)—we can translate the lessons learned from GM and the cell phone companies into specific and relevant clinical development examples.
Table 1. Examples of clinical operations quality metrics.
Quality is always a challenge to define in the context of clinical trial conduct; ask ten different people and you'll get ten different answers. Regardless, applying the three avoidance techniques shows there are ways to at least approach the topic thoughtfully.
Table 2. Examples of clinical operations delivery metrics.
Delivery refers to producing the correct output, which in the case of clinical trials can be taken to mean that sites are enrolling a sufficient number of subjects and producing good, clean data as directed by the protocol.
Table 3. Examples of clinical operations cycle-time metrics.
Cycle-time metrics are often the simplest to measure, and as a result many organizations track them down to the most detailed milestone level without really getting much out of it. Many reports are generated that never get used and that obscure the few that are of value.
Table 4. Examples of clinical operations cost metrics.
Cost metrics can be the most difficult to collect, but at the same time are often the best and most direct method of measuring value. By looking at cost metrics relative to the other categories—quality, delivery, and cycle time—the organization can get a comprehensive and crystal-clear picture of the effectiveness and efficiency of its operations.
The fundamental purpose of performance measurement is to encourage behavior that achieves the goals of the organization. Especially in a people-driven area like clinical operations, one can never ignore the power of incentives or the damage that can be wrought by fear. The Continental Airlines fuel-savings incentive was a good display of what not to do. The company later decided that on-time departures were a priority, so scheduling and operating department employees were given a $65 bonus every month that the airline finished in the top five in on-time performance. Employees therefore had an incentive to make the corporate goal a reality. The key, of course, is to avoid "window dressing," where employees circumvent the system by putting up impressive incentive-related numbers at the expense of other areas (as in the fuel example). This is why a scorecard approach is valuable: it is extremely difficult to "fool" every dimension of a scorecard. At GM, there is a rewards program for employees who make suggestions that result in cost savings: employees receive 20 percent of the savings after implementation, up to a maximum bonus of $20,000. The results were remarkable (though not remarkable enough, given the company's later declaration of bankruptcy)—in 2002, the company saved $397 million and paid $60 million to employees. This shows what a dramatic impact motivated, experienced employees can have on profitability. The point here is that incentives are powerful and, when used properly, win-win. The caveat is that these incentives must affect the metrics that align with the corporation's market and financial goals.
Albert Einstein said, "Not everything that can be counted counts, and not everything that counts can be counted." As previously discussed, clinical operations groups need to cut down on the quantity of metrics collected and concentrate on their quality and relevance to corporate objectives. The life sciences industry is a gratifying one to be in, as its work can have a truly significant impact on people's lives, but it is still a for-profit industry. Any project—be it a drug program or a process improvement initiative—should be justified with a favorable business case. This is not always easy in an R&D environment, where the consequences of actions are often not directly observable due to the long lag between investment and return.
In the end, it's not just about choosing the right individual metrics, but doing so in a way that provides a complete view of the organization's performance relative to its goals, without overwhelming people or giving them incentive to act in a manner counter to the company's best interest. By following the basic principles outlined above and keeping the bottom line in mind, a company's metrics can be both simpler and ultimately more rewarding.
Eric Lake is a partner at Pharmica Consulting, e-mail: eric.lake@pharmicaconsulting.com
1. Eliyahu M. Goldratt, The Goal (North River Press, 1986).
2. C.H. Loch, U.A. Tapper, "Implementing a Strategy-Driven Performance Measurement System for an Applied Research Group," The Journal of Product Innovation Management, 19, 185-198 (2002).