Making electronic CRFs friendly to both investigators and reviewers is a challenge.
In bygone days, when I was doing my PhD research, my advisor often reminded me that no experiment was complete until the results had been published or otherwise reported. There is no more important place for the reporting of results than in clinical development. No matter how good the data, a clinical trial must be written up and submitted to the FDA or other regulatory authorities to be considered in support of the approval of new drugs, biologics, or medical devices.
A fundamental difference between the review processes for scientific publications and regulatory submissions significantly affects the management of primary data. In all scientific and clinical research, the researcher is expected to collect and retain the primary data from which all results and conclusions are drawn. When a paper describing a scientific investigation is submitted to a journal, the reviewers typically examine derived data, summary tables, and analyses to determine whether the conclusions are justified by the data. Unless there is an investigation of impropriety, the primary data are typically never examined and are assumed to be accurately reported.
In contrast, regulatory reviewers of clinical trial data don't make the same assumption. The stakes are too high: tens of thousands of lives hang in the balance, and a number of factors create the potential for fraud or unintentionally inaccurate reporting of clinical trial data. Like journal reviewers, regulatory reviewers determine whether conclusions are justified by the data. Beyond this, however, they need to be sure that the summary data are properly derived from the listed data, that the listed data accurately reflect the case report form (CRF) data, and finally, that the CRF data accurately represent the investigator's observations. Often this is accomplished by field investigations and audits. Regulatory reviewers also usually want to see the actual data from cases that are critical to decision-making: deaths, dropouts, unexpected serious adverse events, and others (depending on the indication and class of drug). CRFs from these subjects and others must accompany most New Drug Applications (NDAs) and Biologics License Applications (BLAs). In the past, all this happened only on paper. But a new paradigm is quickly emerging: the electronic submission of data from electronic clinical trials.
The CANDA era
The FDA has worked with electronic submissions for more than a decade. In the early 1990s, the Computer-Assisted New Drug Application (CANDA) was seen as a way to give FDA reviewers rapid access to reports and data together, in a format that allowed efficient, high-quality analysis of data. Unfortunately, the CANDA era led to a proliferation of unique and proprietary formats, most of which required a stand-alone desktop computer on the desk of each regulatory reviewer. A whole variety of CANDA strategies emerged, from simple to complex. Each CANDA required a reviewer to learn a new system for accessing the data, a daunting task that few reviewers had time for. There were no standards for the structure of a CANDA and no common software platform or file format for the data. The results were mixed: many reviewers and sponsors were delighted with the efficient review that CANDAs provided, but others were unwilling to train on and use multiple different systems, sometimes simultaneously. The FDA soon called a halt to the unstructured CANDA era. But this was certainly not the end of the submission of electronic data.
The development of standards
In 1997, the FDA revealed the beginnings of a new method of electronic submission. The increasing volume of NDAs and the need for expedited review caused by the 1992 Prescription Drug User Fee Act (PDUFA) initiatives demanded that the FDA develop an approach for the efficient review of electronic data. The FDA was looking for a way to deal with the accumulating volumes of paper in its file rooms and the logistical problem of distributing sections of regulatory submissions to appropriate reviewers. By means of a series of guidance documents, the agency intended to carefully define the structure and technology that was acceptable for electronic submissions. In this way, starting with data listings and CRFs, the FDA could ensure a consistent set of electronic submission documents and reviewers could be comfortable that any electronically submitted data would be viewable in a familiar format.
Shortly after the first guidance documents were issued, electronic submission of NDA documents became an emerging standard for many pharmaceutical sponsors, eliminating the need for manual printing, duplication, pagination, and other processes. CRFs required for submission could easily be scanned into PDF form and submitted in the structure and style that the guidance suggests. However, the recent emergence of widespread electronic data capture, and specifically Internet clinical trials, adds a new twist to the task of electronic submissions.
CRFs and the Internet
At first glance, the paper CRF and the electronic CRF have much in common. Both typically consist of questions and answers through which subject data is collected and entered into a database. From here, their form and function rapidly diverge. The paper CRF usually has an audit trail that is visible directly on the CRFs. Changes to individual fields are indicated with a single-line cross out, with the changed data appended, along with the signature or initials of the person making the change, a date, and perhaps a reason for change. Comments are often written in the margin of these CRFs. Data clarification forms are appended to the CRF and contain the questions and responses that generated the change. This time-honored format creates a complete case record with audit trail that is familiar to regulatory reviewers and investigators.
An electronic CRF may appear somewhat similar to the paper CRF, but it actually represents a view, or report, of a relational database. For example, the Internet electronic CRF assembles data from multiple tables into a single Web page, yet that page doesn't exist anywhere in electronic form except on the screen. In addition, audit trails and comments are not found in the margins of an electronic CRF; they are viewable through links from the CRF and are themselves simply user-friendly representations of data tables. Finally, the databases containing the electronic CRFs also contain metadata: data about the data. Metadata may include who entered the data, exactly when, and so on. These metadata can be used to understand and ensure the quality of the actual CRF data, and must be made available to demonstrate the attributability, legibility, contemporaneousness, originality, and accuracy of the data.
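The relationship between the CRF "page" and the underlying tables can be sketched in a few lines. The schema below is invented purely for illustration (real EDC systems use their own, vendor-specific schemas): a field's current value lives in one table, its audit trail, with who/when/why metadata, in another, and the reviewer-facing view is simply a join of the two.

```python
import sqlite3

# Hypothetical, minimal schema for illustration only; any real
# electronic data capture system will differ.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE crf_field (
    subject_id TEXT, field_name TEXT, value TEXT
);
CREATE TABLE audit_trail (
    subject_id TEXT, field_name TEXT,
    old_value TEXT, new_value TEXT,
    changed_by TEXT, changed_at TEXT, reason TEXT
);
""")

# One field's current value, plus the correction that produced it.
cur.execute("INSERT INTO crf_field VALUES ('S001', 'systolic_bp', '128')")
cur.execute("""INSERT INTO audit_trail VALUES
    ('S001', 'systolic_bp', '182', '128',
     'Dr. Example', '1999-06-01T10:30:00', 'transcription error')""")

# The "CRF page" a reviewer sees is just a query joining current values
# with their change history; it exists nowhere except as this view.
rows = cur.execute("""
    SELECT f.field_name, f.value, a.old_value, a.changed_by, a.reason
    FROM crf_field f
    LEFT JOIN audit_trail a
      ON f.subject_id = a.subject_id AND f.field_name = a.field_name
    WHERE f.subject_id = 'S001'
""").fetchall()
print(rows)
```

Note how the `changed_by`, `changed_at`, and `reason` columns carry exactly the metadata that, on paper, would be initials, a date, and a margin note next to a single-line cross-out.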
Designing electronic CRFs
When electronic CRFs are submitted as part of an NDA, the unique attributes of the CRFs must be married to the CRF submission guidelines and philosophy of the FDA. Reviewers have conflicting needs when reviewing electronic CRFs. On the one hand, they would prefer that the submitted CRFs closely resemble the actual electronic CRF that the investigator saw when it was completed. It is easy to understand why this would be the case. Subtle differences in the design of a CRF can greatly influence the answers obtained. Optimally, the CRF should be displayed with the questions, controls (text boxes, radio buttons, pull-down lists, and so on), and answer choices that the investigator viewed. The questions must be assembled and displayed as the investigator viewed them. In addition, the audit trails, comments, electronic signatures, and possibly even data clarifications must be viewable along with the CRF itself.
On the other hand, the FDA does not want the CRFs to be overly complicated for the reviewer to navigate. With the experience of the CANDA era in mind, the FDA is trying to ensure that reviewers are not besieged with dozens of different electronic CRFs, each from a different electronic data capture system, that would require learning new navigation, icons, and functionality with each submission. Optimally, the electronic CRF should be as easy to navigate as paper for the regulatory reviewer. Important CRF items such as audit trails and comments must be viewable through obvious means, not through sophisticated functionality.
Another important consideration is that of printing the electronic CRFs. While some FDA reviewers may be comfortable reviewing electronically, many are more interested in printing the CRFs and audit trails and reviewing them off-line. This requirement creates special challenges for the design of an electronic submission of electronic CRFs. Many of the features of electronic CRFs work very well on the computer screen, but may be difficult or impossible to print while still maintaining their context. Anyone who has tried to print out a standard Web page understands that what you see is not necessarily what you get in your printout. In addition, the design decisions for electronic CRFs and paper CRFs may differ, so that a printout of a single subject's CRFs may be hundreds of pages. Somewhere along the line, the designer of an electronic CRF submission must decide whether to optimize the system for printing or for viewing on line.
Have no doubt, electronic data capture and electronic submissions are here to stay. These systems promise to speed the review of clinical data, and to increase the quality and veracity of clinical trial data. As with most technology, the successful deployment of these advances requires careful consideration of the desired outcome and the needs of the people involved in the process. In the long run, the benefits will accrue to both the regulators and the regulated.