How these tools can help expand capacity while maintaining compliance.
Conducting a clinical trial requires multiple inputs of both data and documents. Each partner, vendor, site, or committee must exchange documents and data with sponsors. In some cases, standards exist for document transfers, such as the trial master file (TMF) reference model. Unfortunately, each partner may have slight variations in its list of artifacts and sub-artifacts. Artifacts are the documents expected to be found in the TMF; sub-artifacts are company-specific or other possible variations of an artifact. The exchange of data for clinical trial management systems (CTMS) is even more challenging due to the lack of standards or common agreements on which data are required or optional.
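To make the artifact and sub-artifact relationship concrete, it can be represented as simple reference data, as in the minimal Python sketch below. The artifact and sub-artifact names here are illustrative assumptions, not taken from the reference model itself.

```python
# Hypothetical reference data: TMF artifacts with the partner-specific
# sub-artifact (naming) variations a sponsor might expect to receive.
tmf_artifact_map = {
    "Trial Master File Plan": [
        "TMF Plan",
        "Trial Master File Index",   # e.g., a CRO's naming convention
        "TMF Management Plan",
    ],
    "Monitoring Visit Report": [
        "Interim Monitoring Visit Report",
        "Site Visit Report",
    ],
}

def lookup_artifact(partner_title):
    """Return the sponsor's artifact name for a partner-supplied title."""
    for artifact, variants in tmf_artifact_map.items():
        if partner_title == artifact or partner_title in variants:
            return artifact
    return None  # unknown title: a candidate for AI-assisted matching
```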
Traditionally, document and data exchange has occurred through custom integrations or batch loading that still require significant, error-prone manual intervention. These solutions demand time and resources to define requirements, conduct validation, and maintain support. They are costly and can easily break when documents, data, or metadata aren’t clearly understood in advance and vary, even slightly, from the assumed standards. This situation creates a connectivity and capacity challenge for clinical teams, who are under pressure to receive and process documents and data as quickly as possible to maintain compliance and to make them accessible to the broader organization.
The situation is similar to the problem of connecting multiple input devices and cables to a computer (see Figure 1 below). The computer industry has addressed this problem by creating a vast array of adapters and port configurations. The biopharmaceutical industry has similarly used data warehouses and data lakes to tackle these problems. However, not every company has the technical or financial resources to build and maintain these more complex solutions.
The efficient exchange of documents and data continues to be an elusive goal in clinical operations. This business problem demands more than automation; it also requires artificial intelligence (AI) to address the decisions made throughout the intake process. The problem grows as more vendors, partners, and even patients are involved in the exchange process. Each exchange transaction often must be mapped, validated, and monitored for quality and compliance, and any change in configuration or process requires this work to be repeated. These activities are time-consuming and typically require some level of subject matter expert (SME) intervention along the way.
The technology solution requires a flexibility in document and data exchange that can only be achieved with AI and machine learning (ML). With machine learning, users specify a set of correct data or documents that becomes the baseline knowledge. The system then analyzes another sample of data or documents and attempts to predict the results, comparing the predicted values to the baseline knowledge. The closer a prediction is to the baseline knowledge, the greater the confidence in that prediction.
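As a rough illustration of this train-then-predict loop, the sketch below uses Python and scikit-learn to learn artifact labels from a baseline set of documents and report a confidence score for a new one. The sample texts, artifact labels, and library choice are assumptions for illustration, not a prescribed implementation.

```python
# A minimal sketch of the baseline-and-predict loop described above,
# using scikit-learn; sample texts and artifact labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Baseline knowledge: documents already filed under the correct artifact.
baseline_texts = [
    "Plan describing the structure, indexing, and QC of the trial master file",
    "Signed amendment to the clinical study protocol",
    "Log of signatures for the investigator and delegated site staff",
]
baseline_artifacts = [
    "Trial Master File Plan",
    "Protocol Amendment",
    "Site Signature Sheet",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(baseline_texts, baseline_artifacts)

# A new, unlabeled document: predict its artifact and the confidence.
new_doc = "Index describing the contents of the trial master file"
probabilities = model.predict_proba([new_doc])[0]
best = probabilities.argmax()
print(model.classes_[best], round(float(probabilities[best]), 2))
```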
Once the system is trained and confidence levels are established, the work transitions to managing the exceptions, rather than the full set of documents or data. This is a powerful value proposition for clinical teams who may be faced with bringing in large document collections as a result of merger and acquisition (M&A) or partnering activities.
If needed, the system can easily be retrained using a new sample of data and documents. As humans confirm or correct the predicted values, the system learns and increases its ability to provide accurate predictions. Much of the manual quality control (QC) work is automated and becomes reportable, so companies can reduce their 100% QC activities and begin to take a more risk-based approach.
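Continuing the hypothetical classifier above, exception management and retraining might look like the following; the 0.80 confidence threshold and the review stub are illustrative assumptions, not prescribed values.

```python
# Continues the classifier sketch above; threshold and review stub
# are assumptions for illustration.
CONFIDENCE_THRESHOLD = 0.80

def request_human_review(text):
    # Placeholder: in practice this would queue the document for an SME.
    return None

def classify_or_escalate(text):
    probabilities = model.predict_proba([text])[0]
    best = probabilities.argmax()
    if probabilities[best] >= CONFIDENCE_THRESHOLD:
        return model.classes_[best]       # auto-file; no human touch needed
    return request_human_review(text)     # exception: route to an SME

def retrain(confirmed_texts, confirmed_artifacts):
    # SME-confirmed or corrected examples are folded back into the
    # baseline so predictions become more accurate over time.
    baseline_texts.extend(confirmed_texts)
    baseline_artifacts.extend(confirmed_artifacts)
    model.fit(baseline_texts, baseline_artifacts)
```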
AI is particularly useful for clinical document exchange because documents may contain unstructured data. Context, synonyms, and fuzzy logic can all be used to enhance document identification and indexing. AI can also identify minor variations in similar documents, check whether a document is in the correct orientation, and confirm that a signature is present when required.
Let’s look at a practical example from the TMF reference model (see Figure 2 below). Suppose that a contract research organization (CRO) submits a document titled “Trial Master File Index” based on its own naming convention. Using AI, the sponsor’s TMF system would recognize this content as belonging to the Trial Master File Plan artifact, using the context within the document to enhance the predictive assignment of the correct artifact. In a traditional manual indexing scenario, the sponsor’s document processor may not know where to file the document received from the CRO and would need additional time and direction from a colleague to determine the correct filing location.
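One simple way to see how fuzzy matching can bridge naming differences like this is Python’s standard difflib, sketched below. The abbreviated artifact list is hypothetical, and a production system would weigh document context as well as the title.

```python
# Fuzzy-match an incoming CRO document title against a shortened,
# hypothetical list of TMF reference model artifact names.
from difflib import SequenceMatcher

tmf_artifacts = ["Trial Master File Plan", "Trial Management Plan", "Protocol"]
incoming_title = "Trial Master File Index"

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

best_match = max(tmf_artifacts, key=lambda name: similarity(incoming_title, name))
print(best_match)  # "Trial Master File Plan" scores highest on shared text
```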
Similarly, study site data provided by a CRO might use the study name, full country name, and a site number, while the sponsor uses the protocol number, ISO country code, and site number as primary data, concatenating these values to create a unique ID for the site (see Figure 3 below). AI can transform and enhance the data instantaneously, which avoids the need for a person to look up the data and eliminates the potential for transcription errors.
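A hedged sketch of that transformation is shown below; the field names, lookup tables, and ID format are illustrative assumptions, since Figure 3 defines the actual mapping.

```python
# Transform a CRO site record into the sponsor's unique site ID.
# The lookup tables and the "<protocol>-<ISO country>-<site>" format are
# illustrative; real mappings come from the sponsor's master data.
STUDY_TO_PROTOCOL = {"Sunrise Study": "ABC-123"}
COUNTRY_TO_ISO = {"United States": "US", "Germany": "DE"}

def build_site_id(cro_record: dict) -> str:
    protocol = STUDY_TO_PROTOCOL[cro_record["study_name"]]
    iso_code = COUNTRY_TO_ISO[cro_record["country"]]
    return f"{protocol}-{iso_code}-{cro_record['site_number']}"

print(build_site_id({
    "study_name": "Sunrise Study",
    "country": "United States",
    "site_number": "1001",
}))  # -> ABC-123-US-1001
```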
Automated workflows can be used to send the documents and data from the source system to the target system, and an analytics tool can then be applied to identify trends and outliers. Cost-effective, user-friendly tools such as Microsoft Power Apps and Power BI can be used to deliver or improve this type of automation.
This approach allows flexibility for organizations to work with different partners and vendors. The technology can be configured to provide real-time or near real-time transfers. Visualization of trends can significantly reduce the cycle time from issue identification to closure. Leveraging these types of novel technologies can improve sponsor oversight and avoid the traditional burdens that occur at study closure or archival. These tools can also be used for data quality assessments to support M&A activities.
Finally, the tools can be used to export documents, metadata, and audit trail data efficiently. In many cases, maintaining document hierarchy and relationships is critical to preserving the value of the exported documents.
New user-friendly tools that leverage AI can greatly facilitate the transfer of documents and data. This is a cost-effective solution to a long-standing problem. Besides providing the flexibility to meet changing business needs and collaborations, these tools can improve compliance and inspection readiness.
It may be time to assess your current tools and business processes used for document and data transfers. If your organization doesn’t currently have access to these types of tools, developing a business case with an ROI calculation will help to educate your management team on the business problem and the opportunity that AI presents.
Michael Agard, Team Leader, Clinical Consulting, NNIT Inc.