Vendor oversight in clinical trials is rarely discussed, despite a disconnect in how it is carried out across departments. Celeste Gonzalez of Boston Scientific shares her perspective on the subject.
Cohesively and collaboratively managing vendor oversight across multiple departments is a topic that is not discussed very often within a biopharmaceutical or medical device enterprise. Additionally, there seems to be a disconnect in developing robust vendor oversight analytics, assigning cross-departmental vendor oversight responsibilities and accountabilities, and conducting thorough vendor evaluations. At ExL’s 7th Clinical Quality Oversight Forum, I met Celeste Gonzalez, who is seasoned in clinical quality and has interesting perspectives to share on the matter. In this interview, Celeste Gonzalez elaborates on her experiences with vendor oversight.
Disclaimer: The opinions, assumptions, and comments in this article are solely those of Celeste Gonzalez and do not reflect the official policy or position of Boston Scientific. The examples in this article are illustrative only and should not be applied directly to real-world cases.

Moe Alsumidaie: What are the biggest challenges with clinical trial vendor oversight?
Celeste Gonzalez: Getting everyone with an interest in that vendor to the table, be it the clinical operations team, the budgetary team, or the oversight team. When they come to the table, they need to know where the risks lie for a particular study or therapeutic area when dealing with that vendor, discuss which risks the team has identified and how they will identify new ones moving forward, and decide how to oversee and mitigate those risks when working with a vendor they are either already using or want to use.
Oftentimes, vendors are repeatedly contracted and traditional oversight plans are reused, despite technological advances in clinical operations. For example, while risk-based monitoring (RBM) was being rolled out from a data analytics perspective, there were still contracts and scopes of work specifying a very traditional monitoring model, and oversight of the data analytics and technological systems being incorporated was not even considered and, therefore, not performed.
MA: Can you elaborate on the siloing of departments during the vendor evaluation and contracting process?

CG: This phenomenon is common in our industry. When clinical operations is comfortable with a certain vendor, they make all the moves as if they are going to move forward with the contract, but they have not incorporated any quality assurance (QA) activities to confirm that the vendor is already on the approved vendor list or goes through a QA process to get on it. They are not moving in unison: the people on the operations team are often working separately from the people in compliance, QA, and sourcing/contracting.
What has to happen is that all those players come together and talk about what they want in a vendor; then they can scan the spectrum of options to see which vendors might meet their specifications within their budgetary constraints. Some vendors are either best in class or provide such an esoteric service that teams are forced to use that sole vendor. In such cases, they need to call in a subject matter expert to evaluate that vendor and to confirm with QA that it is operating under good clinical or laboratory practices, so they can be assured the vendor can deliver.
MA: Which departments should be held responsible and accountable for measuring vendor oversight and addressing vendor oversight problems?

CG: It depends on the company and its culture. Sometimes the project manager within clinical operations is held responsible for vendor oversight; other times, a dedicated group does nothing but vendor oversight. Vendor oversight can take many forms, including a performance perspective (such as key performance indicators), a quality perspective, and other non-performance and regulatory compliance indicators.
The bottom line is that once responsibility is assigned, accountability has to go along with it. It is advisable to plan oversight activities that identify a potential problem area before it occurs or is discovered, and to confirm that the sponsor company has a way to measure its vendor’s performance. Most companies today do have key performance indicators; however, they are not as thorough with key quality indicators, and that poses difficulties at the time of a regulatory authority inspection. Hence, before assigning responsibility and accountability, oversight needs to be spelled out: specifically, what will be looked at, by whom, and at what time points.
At the end of the day, when a vendor comes due for a budget renewal or something similar, all the parties with responsibilities and accountabilities must come together to share information about that vendor, and I don't think that is happening on a regular basis across the industry.
MA: How do you balance study risks with incentives for vendors?

CG: This can be done in several ways, but common approaches include risk-sharing and performance models, where a vendor is reimbursed a reasonable amount for providing services but receives a bonus for overperformance and a cut for underperformance. For example, if you are using a vendor for recruitment, and they say they have access to a database of 500 potential subjects with a specific condition and can recruit 20% of those 500 within three months, then you write a very tight contract around those parameters, with a bonus if they exceed them and a reduction if they don't.
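As a rough illustration of the kind of performance model Gonzalez describes, the sketch below computes an adjusted fee for a hypothetical recruitment vendor. The base fee, bonus and penalty rates, and function names are assumptions chosen for the example, not terms from any actual contract.

```python
# Illustrative sketch of a performance-based payment adjustment for a
# recruitment vendor. The fee, thresholds, and bonus/penalty rates are
# hypothetical assumptions, not terms from a real contract.

def adjusted_payment(base_fee: float,
                     target_enrolled: int,
                     actual_enrolled: int,
                     bonus_rate: float = 0.10,
                     penalty_rate: float = 0.10) -> float:
    """Return the fee after applying a bonus for overperformance
    or a reduction for underperformance against the enrollment target."""
    if actual_enrolled >= target_enrolled:
        return base_fee * (1 + bonus_rate)
    # Scale the penalty by how far the vendor fell short of the target.
    shortfall = (target_enrolled - actual_enrolled) / target_enrolled
    return base_fee * (1 - penalty_rate * shortfall)

# Example from the interview: a database of 500 potential subjects and a
# commitment to recruit 20% of them (100 subjects) within three months.
target = int(500 * 0.20)                        # 100 subjects
print(adjusted_payment(250_000, target, 110))   # exceeded target -> bonus
print(adjusted_payment(250_000, target, 80))    # fell short -> reduced fee
```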
MA: How can teams set up good quality analytics for vendor oversight?

CG: I truly believe that building good quality oversight analytics starts with the Risk Assessment and Categorization Tool (RACT) available from TransCelerate. That assessment can be done by study and therapeutic area, and it should hit all the touch points the vendor is involved with. Without a risk assessment, a team is limited in how effectively it can measure, manage, and mitigate study risks.
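To make the idea concrete, here is a minimal sketch of how a team might score and rank vendor-related risks captured in a risk assessment. The categories, 1-to-3 scales, and scoring formula are illustrative assumptions and a deliberate simplification; they are not the actual TransCelerate RACT.

```python
# Minimal illustrative sketch of risk scoring for vendor oversight analytics.
# The categories, scales, and formula are assumptions for illustration only;
# they simplify, and do not reproduce, the TransCelerate RACT.

from dataclasses import dataclass

@dataclass
class Risk:
    category: str        # e.g., "Data Management", "Recruitment", "Lab Services"
    probability: int     # 1 (low) to 3 (high)
    impact: int          # 1 (low) to 3 (high)
    detectability: int   # 1 (easy to detect) to 3 (hard to detect)

    @property
    def score(self) -> int:
        # Higher scores indicate higher-priority risks.
        return self.probability * self.impact * self.detectability

risks = [
    Risk("Recruitment", probability=3, impact=2, detectability=1),
    Risk("Data Management", probability=2, impact=3, detectability=3),
    Risk("Lab Services", probability=1, impact=3, detectability=2),
]

# Rank vendor touch points so oversight effort follows the biggest risks.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.category}: score {r.score}")
```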