Applied Clinical Trials
The very first article ACT ever published shows how many changes the CRO industry has seen in 13 years.
Foreword
The following article was published in ACT's May 1992 inaugural issue. The author discussed the dilemma pharmaceutical companies faced with regard to outsourcing: legal and regulatory assaults if they kept clinical trials in-house, and major concerns about data quality, timeliness, overestimation of patient recruitment, on-time delivery of research reports, and cost overruns if they outsourced. The author's solutions included better communication of what the sponsor wants and expects, and improved applications of computer technology. It has been 12 years since the article was published, and, as readers will find out, much has changed. -The Editors
Clearly, major pharmaceutical companies would conduct all trials in-house if it were feasible. Some pharmaceutical companies use hybrid systems, retaining certain elements in-house while contracting out others. This most likely puts senior management of such firms in a bit of a quandary. FDA is taking an increasingly skeptical and adversarial stance regarding clinical trials and studies submitted by pharmaceutical manufacturers, regardless of how they are conducted. This attitude, coupled with the increasingly litigious nature of our society, makes it likely that FDA will attempt to "reform" the new drug approval process by imposing further time-consuming burdens on manufacturers. A likely avenue for such new regulation may lie in federal theories of so-called organizational conflict of interest.
Thus, the horns of an apparent dilemma for pharmaceutical manufacturers: contract out your clinical trials and suffer deficiencies in data quality and timeliness, or conduct the trials in-house and be subject to legal and regulatory attacks by interest groups and official regulatory bodies.
Contract research organizations must deliver the quality and reliability required by manufacturers on a timely basis and at a reasonable cost. This article summarizes the most commonly voiced complaints about their services and attempts to analyze the sources of such complaints and suggest possible remedies.
Common problems
A number of articles and surveys have addressed concerns of pharmaceutical company research staff with the performance of their outside contract researchers.1 Based on both a review of some of the current literature and the author's experience, such concerns resolve themselves into four rather broad categories: credibility, responsiveness, quality of product, and cost.
Credibility. A consistent refrain in criticisms of contract research organizations relates to the companies' tendencies toward what might charitably be described as excessive optimism. Sponsors complain that contractors
There are other complaints, but this list is sufficiently illustrative. Critics of a forgiving turn of mind ascribe such statements to "an excess of enthusiasm" on the part of the contract researchers. Others, of a more ill-tempered disposition, blame them on "sales hype" or worse. The essence of the problem, however, is that pharmaceutical manufacturers and CROs enter a trial or other clinical study with undue expectations of likely performance. Even if the study is eventually successful, the two parties will nonetheless be disappointed that it did not succeed as planned. Thereafter, the manufacturer will be skeptical of contractors offering research services.
Responsiveness. Once a study has begun, there is often the feeling that the contractor will plow ahead without regard to the sponsor's concerns. Sponsors fear that the contractor will
Of all of these, the last may be the most critical. The relationship between the sponsor and the contract researchers must be based on mutual trust, as well as the sponsor's confidence in the researcher's ability to perform.
Quality of product. This is the area in which contract research is viewed with the greatest suspicion by sponsors. Quality problems run the gamut from the delivery of sloppy, unclear, and technically inferior research reports to severe protocol violations.
Sponsors complain of contractors who
These examples are just a few of the many mentioned in various articles and surveys.1 The upshot of such activities is the marked degradation of the quality and usability of the clinical trial data. This may require the sponsor to conduct the trial a second or third time, delay the submission of an NDA, and lose time for marketing a new formulation, with the attendant consequences for pricing, revenues, and profitability.
Cost. This is the least understood and most contentious of the problems presented here. Under the best of circumstances, developing full, accurate, fair, and reasonable pricing is difficult. In contract research, it may be more difficult than usual because there is a multiplicity of organizational actors (the research contractor, clinical centers or hospitals, laboratories, data analysis centers, and so forth), all with their own cost and price structures. It is difficult to gain the cooperation required to determine all the costs and a fair price for conducting a trial under a given protocol. Indeed, some of the collaborating organizations may be quite competitive and seek to extirpate the participation of the other cooperating organizations over the course of the study. These problems may cause the contractor to
Such problems may be unavoidable by the very nature of the so-called conventional contract research process. Nonetheless, a contractor has an affirmative obligation to present the most realistic pricing feasible. To do otherwise is to fail to exercise the due diligence required of any professional.
Sources of problems
The litany of sins set forth above could lead one to conclude that contract researchers are perhaps on a par with journalists, members of Congress, and others whom Twain once referred to as the "native American criminal class." It is not my intent to give or leave that impression. Following are some potential causes for the difficulties outlined above, focusing particularly on three areas: lack of consonance of organizational purposes, lack of management authority, and complexity of communications. These are not the sole sources of difficulty; rather, they are those to which I ascribe the greatest importance.
Lack of consonance of organizational purposes. Contract research involves a multiplicity of independent organizations. Few of these organizations exist for the sole purpose of conducting clinical trials. For example, the clinical sites generally have multiple missions organized around the provision of care. Whatever they may consider their driving goals, they are not exclusively or primarily research organizations in most instances. Data analysis centers are often computer service bureaus, some more specialized in health care data reduction and tabulation than others. Pathology clinics and other laboratories are generally commercial ventures. Although they may appreciate the need for exquisite precision in the analysis of research specimens, it is unlikely that they will direct any special attention to the research studies if there is no contractual requirement to do so and no provision for enforcement.
In addition to these disjunctions of purpose, consider that supporting clinical trials and studies is an adjunct activity for most of the organizational participants. Moreover, because these are autonomous organizations, it is difficult for a contract researcher, as an intermediary for a sponsor, to impose any realistically enforceable singleness of purpose on this multifarious gaggle of collaborators.
Some of the consequences of this diversity include
Lack of management authority. The contract researcher lacks the power to enforce procedural dicta across the collaborating organizations. The ability of a contractor to discipline personnel for material breaches or errors in protocol execution, for example, extends solely to his or her own staff. If a collaborating organization performs poorly, the contractor has three types of remedies available:
Clearly, these range from the trivial to the draconian. A far more focused approach would be desirable.
Complexity of communications. The diverse group of organizations that collaborate on a clinical trial, whether implicitly or explicitly, constitutes a communications network. Each potential pairing of two autonomous organizations constitutes a "communications link." In any such network, the possibility for miscommunication is proportional to the complexity of the net, expressed as the number of potential links, and the totals escalate very rapidly. Thus, for a trial that includes the sponsoring organization, a CRO, six clinical sites, a pathology laboratory, and a data analysis center (10 organizations in total), there are potentially 45 communications links, each of which adds a potential for miscommunication. If five clinical sites are added to this trial, the number of possible communications links grows to 105. That is, a 50% increase in the number of involved organizations increases network complexity by roughly 133%, more than doubling the number of links. This is why attempts to speed trials by major expansion of the number of clinical sites and investigators often end in grief.
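The figures above follow directly from counting the unordered pairs of organizations; writing L(n) as shorthand for the number of potential links among n participants, the arithmetic can be checked as follows:

\[
L(n) = \binom{n}{2} = \frac{n(n-1)}{2}, \qquad
L(10) = \frac{10 \times 9}{2} = 45, \qquad
L(15) = \frac{15 \times 14}{2} = 105.
\]

Because L(n) grows with the square of n, each additional participant adds more new links than the one before it, which is why the totals escalate so rapidly.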
Pragmatically, not all the potential "links" are necessarily realized in particular trials. For one thing, not all clinical sites are necessarily aware of the other sites involved. This is most often true when there are more than 10 sites, widely geographically dispersed. Nonetheless, the more actors involved, the greater will be the difficulties in clearly and effectively communicating trial purposes and requirements and maintaining appropriate adherence to protocol requirements.
Perhaps of equal importance to the size of the communications net in a trial is the rather common lack of explicit communication rules and procedures provided by study coordinators to each participating organization. In physical computer networks, such rules constitute the communications discipline of the net and are enforced both by the hardware configuration and the network operating system-without overt operator intervention. Unfortunately, there is no equivalent of network software for clinical trial groups.
This potentially dysfunctional complexity may be as much a source of the difficulties experienced in conducting trials as any other cause. So, the question to be addressed is: What solutions may be feasible?
A consideration of solutions
The objective of this article is not to chastise the sinner but to seek means of redemption; that is, to suggest ways in which clinical trials can be improved, particularly when conducted on a contract basis.
In-house trials. A solution obvious to some would be for pharmaceutical manufacturers to conduct all trials in-house. There are, however, two major disadvantages to doing this. The first is resources. The number of compounds being developed for which trials are needed is clearly outstripping the capacity of any one firm's resources. Perhaps of greater long-term importance is the growing hostility of FDA to free-market pharmaceutical manufacturers. For the foreseeable future, relations between FDA and companies submitting NDAs are likely to become more adversarial and contentious. The use of contract researchers seems to enhance FDA's perception of the objectivity of a trial. I believe, however, that all manufacturers should conduct at least some trials in-house, perhaps through the vehicle of a wholly owned subsidiary that retains a distinct corporate identity. This would allow development of baseline comparisons of product quality, operational efficiency, and cost, which could be used to evaluate extramural contract research. Such an entity could also provide a venue for testing compounds about which the developer wished to maintain a high degree of confidentiality.
Contract language. Another potential solution to the problems experienced with contract research lies in the legalistic approach. That is, pharmaceutical manufacturers should use rather explicit detail in contract terms and conditions to enforce certain performance standards on contract researchers. Doing so could have certain salubrious outcomes. It might well eliminate from consideration those contract researchers unwilling to apply the standards the sponsor feels are necessary. More stringent contracts might also clarify expectations and better define requirements among the parties involved.
Unfortunately, more complex contracts may at some point prove counterproductive. A telling example is found in federal government contracting. The laws and regulations enforced by federal agencies in their contracting practices are extremely detailed and one-sidedly onerous. This attempt to obtain quality by contract, however, is not particularly productive. The government experiences as many contractual problems as any private-sector organization, or more.
Computer technology. Still another solution is the application of improved computer technology. This holds some real promise, particularly in the light of the growing power of microprocessors and the expansion of open systems architectures. Such approaches, at least in part, address the communications issues alluded to earlier. The disadvantages lie primarily in the implementation. Particularly in very large firms, there may be information services or records management departments that are wedded to obsolescent technology or at least resistant to the technology most effective for the types of problems faced in the contract research arena. Additionally, there is the problem of disseminating the desired technology through the various collaborating research organizations. Each may have distinct views as to the types of computer systems they can or will use. Moreover, if they are involved in multiple studies, which is a common situation, they may be besieged by demands to install and implement this or that system as a condition of a contract or grant.
Organizational change at the CRO. The most attractive solution, particularly because it rests in the hands of the contract researcher, is to markedly improve overall organization and management of the CRO itself. An explicit implementation of the principles of W. Edwards Deming, the leading proponent of statistical quality control, would be felicitous. Some of the specific actions that might fall within this rubric are
A number of CROs have implemented some of the approaches suggested above, though usually in piecemeal fashion. What is required is a comprehensive approach and strategy for improved performance. Karl E. Peace presented an interesting look at how such a research organization might be configured.2 In particular, strong emphasis on quality control and improved, automated data management are key elements of improvement.
There are no panaceas, but improved organization and management efforts on the part of contract researchers themselves will go far to reduce the most obvious difficulties.
References
1. Contract Research Organization Quality Assessment Study (Decision Research Corporation, April 1991).
2. Karl E. Peace, "TMO: The Trial Management Organization-A New System for Reducing the Time for Clinical Trials," Drug Info. J., 24, 257-264 (1990).