DIY EDC Evaluation

Article

Applied Clinical Trials, February 1, 2013
Volume 22, Issue 2

Options have emerged that make DIY EDC technology more accessible to smaller organizations.

Until relatively recently, taking your company's clinical research into the realm of electronic data capture (EDC) meant engaging a full-service vendor, with a price tag generally starting at $175,000 to $200,000 for a 250-patient, 50-site trial over 24 to 36 months, and climbing with the complexity of the project. For smaller-scale research this was often simply not economically feasible: much the same pricing might be quoted for a 50-patient, five-site trial.

Of course, some of the larger EDC companies have offered the option to "do it yourself" (DIY) in an attempt to decrease costs, but these were often less expensive only if you were doing large volumes of research and could afford to train and retain skilled "build" specialists on staff. The net result was that smaller companies either remained in a paper world, with all its attendant woes, or were compelled by cost-containment pressures to select markedly sub-par EDC systems almost as problematic as the old paper model.

Of late, however, perhaps over the past three to four years, viable DIY EDC options have begun to emerge that make what has become commonplace technology in larger organizations more economically accessible to smaller enterprises. Finding, assessing, and selecting a DIY EDC system, though, is by no means as simple as it might at first appear. Having lived through this exact process very recently, I share those experiences here in the hope they may prove useful to others in similar positions.

The scenario

The company for whom I was evaluating systems, a small venture-capital-backed medical device company, had previously conducted a number of studies on paper, using a well-known and relatively inexpensive CDMS as the backend. The largest and most recent of these studies required a considerable amount of data cleaning after the enrollment period closed, largely because errors had been introduced by data acquisition that was dynamic and not checked at the point of capture. The burden of cleaning up this rather complex trial dataset was very significant, and contributed in no small part to the decision to investigate DIY EDC systems.

Given the upcoming projects, which ranged from moderate to quite small in scope, full-service EDC was not seen as a viable option. With my background in EDC/CTMS system design and implementation, I was asked to source, evaluate, and recommend a DIY option. Cost was not a primary consideration, but neither was it immaterial, as one might expect in a small organization.

With years of experience and more than a few systems under my belt, I foresaw little difficulty in conducting this project: get a few demos, jot down a few notes, make a recommendation, and sign contracts. Much to my surprise and chagrin, my assumptions were proved wrong almost immediately.


Challenges encountered

It became obvious, fairly quickly, that problems with the initial methodology as conceived were legion. These included:

  • The seemingly simple task of developing a vendor list was not easy.

  • Scheduling and sitting through system demonstrations is a time-consuming, laborious process.

  • The disparity between systems and capabilities makes non-systematic comparison inherently an "apples to oranges" exercise and hence invalid.

  • The variation in pricing models made cross-comparisons difficult (this may be a deliberate ploy on the part of the vendors).

  • Specialist knowledge is required to get past minutiae and understand true capabilities (or lack thereof).

  • Sales reps have a habit of showing you what is great about their system and glossing over its weaknesses; this can be hard to detect without an understanding of what particular system characteristics actually mean in the real world, especially outside of a structured demonstration.

Many, perhaps even all, of the issues listed above exist for a company of any size. In a small company however, the problems, or their impact, become exacerbated, mostly due to far more limited resources in terms of personnel, diversity of expertise/experience, and time.

Developing a list of candidates for assessment was addressed by looking at resources such as ClinPage (Who's in the news? Who are the advertisers?), lists of conference exhibitors (SCDM, Partnerships in Clinical Trials, DIA, etc.), and, most usefully, the Applied Clinical Trials Annual Buyer's Guide. Using search engines such as Google, Yahoo, and Bing proved less useful, with an awful lot of CROs and non-clinical-research organizations showing up in the results.

Other online resources include the various groups on LinkedIn that focus on EDC and CTMS; like ClinPage, these are useful for uncovering vendors that might be suitable, though caution should be exercised with the postings and discussions, as many contributors are vendor employees. There are a couple of website resources that list multiple vendors with product profiles, but these tend to be derived largely from the vendors themselves rather than any third-party review, and are out of date in many cases.

As for scheduling, making effective use of the most precious non-renewable resource there is, time, is critical. Quick Internet reviews of corporate websites and industry articles were used to pare the candidate list down to 25 or so. Scheduling demos and reviewing that many would take months, so further filtering was conducted using simple criteria:

  • Are the companies still in business? More than a few have become defunct or been acquired over recent months and years.

  • Are they in danger of going out of business? Have they been delisted from the stock exchange? Have they sustained losses? Have they been shedding staff? Are they on the market for sale?

  • Do they actually have a DIY product? Many don't offer this (or don't have a good one) but would love to try to sell you a full-service offering.

  • Are they a CRO that happens to have an EDC system, or a technology company? Sometimes it makes sense to use a CRO with an EDC system, but for most DIY solutions it is probably best to stick with pure technology companies (they have an incentive to make their product good so you don't call them, whereas hourly-billing companies love phone calls).

  • What do your colleagues think? What have you heard around town? What does your CRO (if you have one) think? Utilize your network to seek out colleagues who have used different systems and get their unbiased opinions on ease of doing business with the vendor, ease of use of the product, problems, issues, etc. Your CRO can be invaluable at this point too. These criteria should be used to exclude vendors from consideration, not to select them; your needs are different from anyone else's.

  • Is their base technology from this century or the last? This can be hard to detect without specialized knowledge.

Using the simple criteria above, a bit of digging on the Internet, and some time on the phone, a list of seven was formed. One, on subsequent examination, did not truly have a DIY option and tried to attract customers with vague generalities before presenting its full-service product as a viable alternative. Six companies remained.

To deal with the disparity between systems and the variation in pricing, standardization of data was required. Failing to standardize, given the variation in system capabilities, look, feel, and even sales representative expertise, leaves you vulnerable to primacy and recency effects (the first system seen, or the most recent, seems best; usually the latter, because it is very hard to organize non-standardized data and make valid comparisons, so the most recent thing is best remembered).

To ensure standardization of evaluative data, an 87-point assessment tool was developed that covered the capabilities important to the company's particular research needs, as well as fundamental system characteristics. This approach meant that "canned" presentations and demonstrations did not meet my needs, which caused some sales representatives difficulty. In addition, two "model" projects were presented to potential vendors, and pricing for those, and only those, was sought.

In the first case, the assessment tool covered the areas listed in Table 1. The assessment grid was made available to vendors before the demo took place, and they were asked to fill out responses insofar as they were able. When the responses were received, any answer that was clear was scored as follows:

  • 0 = feature or characteristic not present

  • 1 = feature or characteristic present but difficult to use, poorly implemented, limited functionality etc.

  • 2 = feature or characteristic present in good usable form/implementation

  • 3 = feature or characteristic present in such a novel form that it enhances productivity or system desirability dramatically

The top score of "3" was awarded only twice out of over 600 total responses, and where another product had the same utility or outstanding characteristic, both scored "2." The assessment tool then guided the subsequent demonstration, during which any remaining questions or unclear responses were scored. Only after the assessment tool had been completed were any other features the sales representative wanted to highlight reviewed, for information only. It is important to note that "that feature is coming" or "it is in our pipeline" counted as zero; updates to complex software systems often take far longer than expected, and in many cases prove impossible once attempts to program them begin.
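For those who prefer to tabulate such a grid in code rather than a spreadsheet, the aggregation itself is trivial. The sketch below is purely illustrative; the vendor names, assessment areas, and scores are hypothetical placeholders, not the actual assessment data. It simply sums the 0 to 3 scores per vendor and per area.

```python
# Illustrative aggregation of 0-3 assessment-grid scores.
# Vendor names, areas, and scores are hypothetical, not the article's data.
from collections import defaultdict

# Each entry: (vendor, assessment area, score), with score in {0, 1, 2, 3}
responses = [
    ("Vendor X", "Data/build features", 2),
    ("Vendor X", "Reporting", 1),
    ("Vendor Y", "Data/build features", 3),
    ("Vendor Y", "Reporting", 0),
]

totals = defaultdict(int)        # overall score per vendor
by_area = defaultdict(int)       # score per (vendor, area) pair

for vendor, area, score in responses:
    totals[vendor] += score
    by_area[(vendor, area)] += score

for vendor, total in sorted(totals.items()):
    print(f"{vendor}: total {total}")
for (vendor, area), score in sorted(by_area.items()):
    print(f"{vendor} / {area}: {score}")
```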

Apples-to-apples pricing was obtained through the use of strict models, as seen in Table 2. Some characteristics of the models, such as the number of unique CRFs and data fields, were deliberately kept constant in order to determine whether any economies of scale became evident at higher volumes of data and greater duration. All data fields were assumed to have at least one edit check or conditional action associated with them. Importantly, these models were treated as "standalone" projects; that is, any license fees or training that would not necessarily be charged across multiple trials were added in. Overall, this methodology allowed direct comparison of pricing across vendors.
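As a worked illustration of the kind of comparison the models enable, the sketch below computes the Model 2 to Model 1 price multiple used later as an indicator of economic scalability, along with a per-patient cost at the larger scale. The model parameters mirror those described in the text, but the vendor names and quoted prices are invented for illustration only.

```python
# Hypothetical pricing comparison across the two standard models.
# Model parameters follow the article; vendor quotes are invented.

models = {
    "Model 1": {"sites": 5,  "patients": 50,  "months": 12, "arms": 1},
    "Model 2": {"sites": 50, "patients": 250, "months": 36, "arms": 2},
}

# Standalone quotes per vendor (license, training, etc. all included)
quotes = {
    "Vendor X": {"Model 1": 20_000, "Model 2": 70_000},
    "Vendor Y": {"Model 1": 12_000, "Model 2": 180_000},
}

for vendor, q in quotes.items():
    multiple = q["Model 2"] / q["Model 1"]
    per_patient = q["Model 2"] / models["Model 2"]["patients"]
    print(f"{vendor}: Model 2 = {multiple:.1f}x Model 1, "
          f"${per_patient:,.0f} per patient at the larger scale")
```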

Dealing with the technical side of specialist knowledge and sales reps (essentially, making yourself an informed consumer) is more problematic. If you are a highly experienced EDC user with knowledge of multiple different systems, then no outside assistance may be needed. Often, in a small-business environment used to running paper trials, that expertise is simply not available. In that case, soliciting help and feedback from your CRO may be invaluable. It may also be wise to consider retaining an independent consultant, but be warned: many "consultants" are actually resellers or quasi-distributors of a single EDC system and will be inherently biased, although reseller consultants may be of some value in implementing the chosen system and assisting with process re-engineering. A few key points to look out for are described later in this article, but they do not by any means form a comprehensive listing.


Standardized assessment process

Surprisingly, scores were fairly even across the board, with one outlier scoring markedly lower than the rest of the population. Aggregate grid scores by vendor can be seen in Figure 1. The data in tabular form, against the maximum possible scores by area and in total, can be seen in Table 3.

It is clear from the data in Table 3 that despite Vendor F being superior overall, they lead in only one category (data/build features) and lag the pack in some others. To determine the meaningfulness of the scoring data, the scores were analyzed to determine the standard deviation (Excel 2003 STDEVP function), which yielded the results shown in Table 4. Vendor C was removed from the calculations (see discussion).

On the assumption that a score deviating from the median value by more than one standard deviation was meaningful (or at least "noteworthy," given the sample size), "above par" and "below par" performances were calculated for each vendor.
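A minimal sketch of that flagging step is shown below, using Python's statistics module in place of Excel's STDEVP (both compute the population standard deviation). The per-area scores here are invented placeholders rather than the values behind Table 3.

```python
# Flag vendor scores more than one population standard deviation from the
# median, per assessment area. All scores below are hypothetical.
from statistics import median, pstdev

area_scores = {
    # area: {vendor: aggregate score in that area}
    "Data/build features": {"A": 14, "B": 16, "D": 12, "E": 15, "F": 19},
    "Reporting":           {"A": 10, "B": 8,  "D": 11, "E": 12, "F": 9},
}

for area, scores in area_scores.items():
    med = median(scores.values())
    sd = pstdev(scores.values())   # population SD, as with Excel's STDEVP
    for vendor, score in scores.items():
        if score > med + sd:
            print(f"{area}: Vendor {vendor} above par ({score})")
        elif score < med - sd:
            print(f"{area}: Vendor {vendor} below par ({score})")
```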

Findings of the pricing models

Pricing results were interesting, to say the least. As can be seen in Figure 2, wide variation exists between vendors; pricing was not obtained from Vendor C, as they were removed from consideration early (see discussion).

Vendors, in general, were reluctant to provide pricing, and in some cases it took a number of communications to obtain this important component. Model 1 results, for the five-site, 50-patient, 12-month, single-arm trial, were largely as expected, with the exception of Vendor F, whose very low price prompted a phone call in which it was confirmed as accurate. Vendor A did not provide pricing for Model 1 despite being asked to do so on a number of occasions.

Model 2 results were the most surprising and unexpected. While one expects some cost increase for a longer, larger, two-arm trial (50 sites, 250 patients, 36 months), the 18x multiple for Vendor F was startling (confirmed by e-mail as accurate). Vendor B gave a multiple of 4.4x, while Vendors D and E were just under 3x. The objective of the two models was not only to standardize for comparability, but also to give an indication of the economic scalability of a DIY option. In this case, Vendor F is clearly poorly scalable, with Vendors B, D, and E progressively more so.

It is important to note, however, that the Model 2 pricing revealed even more important information than economic scalability: the pricing given called into question the entire premise that DIY is a cheaper option than full-service EDC. Other full-service research conducted, but not presented here, indicates that a fully custom-built-for-the-project EDC system can be had for Model 2 starting at around $160,000 to about $220,000. Given that in the DIY model it is incumbent upon the sponsor to provide all the project management, CRF/study building, testing, and site training (that is, a personnel overhead that needs to be financed), the prospective purchaser must seriously consider exactly how much research they need to conduct, and of what type; assuming that the Vendor B and D products will be an economic solution for moderate-to-larger trials is simply not justified.


Discussion

In this particular instance, Vendor E is numerically superior and highly competitive on pricing, with moderate scalability. Vendor C, by contrast, is markedly below par across the board; this vendor also failed a basic security test: after logging out of the website, clicking the browser's "back" button allowed not only viewing of the last CRF used, but also manipulation and saving of that data and further use of the system, without re-entering a user name or password. Vendor C was excluded from all further analyses once this basic failure was discovered.
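Failures of this kind are, at root, session-invalidation defects, and they are easy to screen for during an evaluation. The sketch below is a generic illustration only; the base URL, paths, form fields, and credentials are hypothetical, not any vendor's actual interface. The point is simply that once you have logged out, a request replaying the old session cookie should be refused rather than served an editable CRF.

```python
# Generic check that an EDC web session is actually invalidated on logout.
# The base URL, paths, and form fields are hypothetical placeholders.
import requests

BASE = "https://edc.example.com"

session = requests.Session()
session.post(f"{BASE}/login", data={"user": "tester", "password": "secret"})

# A protected CRF page should be reachable while logged in.
before = session.get(f"{BASE}/crf/123")
print("logged-in fetch:", before.status_code)

# Log out, then replay the same request with the old session cookies.
session.post(f"{BASE}/logout")
after = session.get(f"{BASE}/crf/123", allow_redirects=False)

# A well-behaved system redirects to login or returns 401/403 here;
# it should never serve the protected, editable CRF again.
if after.status_code in (301, 302, 303, 307, 401, 403):
    print("session invalidated after logout")
else:
    print("WARNING: protected page still served after logout")
```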

It would seem, then, that the choice is clear, but a flaw in the methodology became apparent during analysis. The flaw was one of weighting, or rather the absence of it. When Vendor C is removed from the dataset, variability across the remaining vendors is very low: the scores are 90, 94, 81, 89, and 96 (Vendors A, B, D, E, and F), for a standard deviation of 5.79. This excludes Vendor D as probably inferior and leaves the remainder essentially even in overall score. How, then, to decide among the remaining candidates?

In the assessment process, the primary effort was on standardizing the data by executing as objective a process as possible. Weighting is, by its very nature, subjective. It could be argued that a form of weighting is already present, in that the areas of interest do not have an equal number of questions, and the questions chosen represent some subjective selection in the first instance. Nonetheless, it emerged that the questions themselves had different inherent values that would, and indeed should, influence the results; some questions were simply more important than others and should have been weighted as such.

For example, one vendor who scored well cannot support any Internet browser other than Microsoft Internet Explorer (IE); this would effectively exclude from any trial built on that system users of Apple-based systems (the Safari browser bundled with the OS), as well as those precluded by their IT departments from using IE for security or other reasons. Another vendor had restrictions on what browsers other than IE and Firefox could do: Chrome, Safari, and Opera had limited functionality. Such restrictions could effectively hamstring any clinical trial conducted where IE or Firefox is not used, which may be the case at a lot of sites, especially academic ones. Browser agnosticism is very important; browser usage is increasingly diverse, and remaining focused on one or two products may exclude large segments of potential investigative sites. Thus, client software agnosticism, in this case browser type, should probably carry more weight in the ranking. Figure 3 represents current worldwide browser usage.

Assessment questions were also posed with respect to known issues, in this case the use of Java. Unfortunately, Java applets carry major security risks, and many institutions disallow Java use completely. Another general area of concern is ease of use for the sites: investigators are well known to strongly desire signing off multiple forms with one electronic signature, and not having this feature threatens their willingness to participate in a trial.

Some systems share a common software code base; that is, anyone using the system shares fundamental application code with other customers. This is not a bad thing in and of itself, but when the vendor changes the code base whether you like it or not (a forced upgrade schedule, in effect), changes can be introduced that require immediate SOP revisions, new site training, and/or new staff training, and may have a severe impact on productivity. In the vendor list examined above, two employed a shared code base with forced upgrades (B and D), one had a partially shared code base and was prepared to negotiate code-change release schedules (F), and the rest provided a unique instance of the code to the sponsor, who could determine whether or not to upgrade. Much of the time a forced upgrade schedule is not problematic, but it can, and does, have ramifications at times that you would probably rather avoid, so weighting of this component may be appropriate.

Other vendors will not allow "sandbox" use of their software; that is, the ability to use a fully featured system, without obligation, in order to evaluate usability with your staff and others at your own pace, without vendor representatives present. This is a peculiar notion (after all, you are going to spend many thousands of dollars, and even a car dealer will let you test drive a vehicle that costs far less) that limits your ability to assess the system to what the sales representative is willing to show you in a highly sanitized "demo" environment. For a DIY system, I would class the availability of a "sandbox" as extremely important; not, perhaps, at the "must have" level, but close.

All of the above points, and many more besides, formed part of the assessment tool. Capabilities vary wildly across systems, and it is almost impossible to find one that does everything you would like. Very careful consideration must therefore be given to what is an absolute requirement, a very important requirement, a desirable requirement, an undesirable feature, or a must-not-have feature or characteristic, with the responses weighted accordingly. Also consider carefully what weight price carries. Is it acceptable to pay twice as much for a system that meets one more important requirement? How about two? Three? Given that very few systems will meet all your requirements, after weighting, a conjoint analysis may have to be performed to assess which system truly is "best" for your particular needs and circumstances.
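Short of a full conjoint analysis, one simple way to fold weights and price into a single comparison is a weighted-sum "value" score. The sketch below is purely illustrative: the requirement categories, weights, scores, and prices are invented, and the metric (weighted feature points per $10,000 of price) is just one of many possible trade-off formulations.

```python
# Illustrative weighted trade-off between requirement coverage and price.
# Weights, scores, and prices are invented for demonstration only.

weights = {"must_have": 5, "very_important": 3, "desirable": 1}

vendors = {
    "Vendor X": {"must_have": 10, "very_important": 14, "desirable": 20,
                 "price": 60_000},
    "Vendor Y": {"must_have": 12, "very_important": 11, "desirable": 25,
                 "price": 95_000},
}

def value_score(profile):
    """Weighted feature points per $10,000 of quoted price (toy metric)."""
    feature_points = sum(weights[k] * profile[k] for k in weights)
    return feature_points / (profile["price"] / 10_000)

for name, profile in vendors.items():
    print(f"{name}: {value_score(profile):.2f} weighted points per $10k")
```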

While much of the above is applicable no matter what environment you find yourself in, there are a few more considerations for the smaller enterprise. For example, the company's existing CDMS had inventory and site-payment capabilities. In a small-company environment, selecting a system without those features means personnel must perform the function manually, or extra must be spent to have data from the EDC system populate the CDMS for processing. That additional burden in people-hours or money may present problems that are much more easily absorbed by a large organization. Additionally, if the skill set required to build studies in a particular product is highly specialized, with a steep learning curve, then the departure of one staff member in a small department will have a far greater impact than in a larger one; no matter how powerful the product, it may make you inherently vulnerable to staff turnover.

Conclusion

Finding and assessing an electronic data capture system is a task that must be approached methodically, in a structured manner. EDC systems vary so much in capability that the only valid way to judge suitability is to standardize the method of assessment. This is especially true for the smaller life sciences company, which may not have in-house expertise to rely upon; seeking CRO or consultant help may be invaluable. Enabling cross-vendor pricing comparison through the use of models can reveal critically important information on product scalability and economic feasibility for different project types. Finally, careful consideration of your company's specific needs may lead to weighting of particular components or dimensions of DIY systems, which can help differentiate vendors when a clear "winner" is not present.

Timothy Pratt, PhD, is a Healthcare & LifeSciences Solution Design Consultant at PowerObjects, e-mail: prattusa@gmail.com.
