Access to all available medical evidence can profoundly improve treatment effectiveness.
Recently, I've been privileged to participate in a workshop led by the Institute of Medicine as part of a roundtable sponsored by the Office of the National Coordinator to discuss the "Electronic Infrastructure for the Learning Healthcare System" (LHS). The project has a long-range horizon—the "system after next" that would be in place by 2020. This was the second in a series, exploring three priority focus areas highlighted at an earlier meeting, namely: engaging patients and the public, promoting technical advances and innovation, and fostering stewardship and governance.
Having the opportunity to engage in a forum comprising so many experts and thought leaders in health informatics, technology, patient advocacy, and health policy reminded me of John F. Kennedy's famous quip at a White House dinner honoring Nobel laureates: "the most extraordinary collection of talent, of human knowledge, that has ever been gathered together at the White House, with the possible exception of when Thomas Jefferson dined alone."
Indeed, Jefferson himself later made a spiritual appearance in one participant's summary comments, cited as having set a noteworthy precedent by successfully marrying philosophy to architecture in founding the University of Virginia.
This engrossing and thought-provoking event began with the premise of defining a health environment that can ensure we fully capture and capitalize on all of the information resources relevant to our health and healthcare, wherever and whenever we need them. That is quite a stretch from today's predominantly paper-based world, but a vision we hope will ride the wave of the transition to electronic health records in the near future.
When first exposed to this extraordinary mission, I imagined the combined sense of thrill and terror probably felt by the first group of NASA scientists in 1961 after hearing JFK challenge the nation to achieve "the goal, before this decade is out, of landing a man on the moon and returning him safely to the earth." The goal of creating this learning healthcare system involves no less of a technical challenge, but a far steeper sociopolitical climb. Like those anxiously energized scientists, many at the workshop must have been wondering "how do we get there?" and "where do we start?"
The focus of this most recent workshop meeting was to define the technical, governance, and social infrastructure of the year 2020 that would support the LHS, which immediately brings to mind the futility of predicting what technology will be like in a decade. Consider, for example, how unlikely it would have been, at various points in time, to accurately anticipate the impact of the technology game-changers that followed.
So, thinking ahead to 2020, let's begin with the notion that technology will probably be unlike anything we can imagine. Yet given the recent historical pace of technology advancement, it's hard to imagine that it will be the ultimate obstacle. But we can certainly predict that the need to manage the cost and effectiveness of our health care system will still be a critical concern in 2020, and we can expect that considerable human, political, and economic barriers will still be in the way.
Among the technology planners, there was a rapid convergence toward an infrastructure for the LHS based on ultra-large-scale (ULS) systems, a concept defined by a team at the Carnegie Mellon University Software Engineering Institute as part of a federally funded study describing how to approach future massive, decentralized systems based on a set of core principles. ULS systems are described by the study lead, Linda Northrop, Director of the Research, Technology, and System Solutions Program at the Software Engineering Institute, as "interdependent webs of software-intensive systems, people, policies, cultures, and economics ... systems at Internet scale."
Applying ULS principles implies a tectonic cultural as well as technical shift in how we design software; when applied to the LHS it might involve building a permissive, loosely-coupled, all-inclusive enabling architecture based upon what Chris Chute described as a "parsimonious core" of standard data elements and design characteristics.
This will be a leviathan of a system: a complex, distributed, adaptive ecosystem beyond anything most of us can imagine, one that will affect the quality and length of life for the entire population of our nation, and perhaps much of the rest of the globe as well. But for many of the workshop participants (who may not have been previously exposed to ULS, and admittedly may have had widely divergent notions of what it means), the key insight seemed to be a "less is more" architectural approach: design a system inclusive enough to easily accommodate the multitude of legacy systems that will still exist, and one that encourages widespread adoption through the least burdensome approach.
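As a rough illustration only (the field names and structure below are invented for this column, not drawn from any proposed LHS or ULS specification), a "parsimonious core" might amount to little more than a handful of universally required data elements, with everything else passed through untouched so that legacy systems can participate without restructuring themselves:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict

# Hypothetical sketch: a "parsimonious core" record standardizes only a few
# universally needed fields; everything else rides along as an uninterpreted
# payload that legacy systems can populate however they already do.
@dataclass
class CoreObservation:
    patient_id: str            # pseudonymous identifier, resolved by a trust service
    concept_code: str          # coded concept, e.g., a SNOMED CT or LOINC code
    value: Any                 # measured value, finding, or coded result
    effective_time: datetime   # when the observation applies
    source_system: str         # which contributing system supplied the record
    extensions: Dict[str, Any] = field(default_factory=dict)  # non-core data, passed through untouched

# A legacy lab feed could contribute without restructuring its own data model:
obs = CoreObservation(
    patient_id="px-001",
    concept_code="LOINC:4548-4",      # hemoglobin A1c
    value=6.9,
    effective_time=datetime(2020, 3, 14),
    source_system="legacy-lis-07",
    extensions={"local_units": "%", "ordering_dept": "endocrinology"},
)
```

Anything the core does not standardize simply travels along in the extensions payload, which is what would make such an approach permissive enough to encourage widespread adoption.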
Of course, the ULS model today consists only of a set of principles intended to guide a research agenda rather than a defined blueprint, and its Wikipedia entry cautions that ULS systems have been defined informally as "systems that are currently impossible to build due to limitations in the fields of software design and systems engineering." Here, again, we must depend on that 10-year window for technology to catch up, but in the meantime we need to start describing a road map immediately, and look for opportunities to move along the path step by step.
Interestingly, a less prescriptive ULS architecture could be somewhat at odds with the tightly defined architectural approach employed by a benchmark example such as the National Cancer Institute's caBIG project, one of the most ambitious and far-reaching system initiatives to date and one that has already delivered many innovative capabilities. Since the ULS strategy, and the way it would apply to a proposed LHS, has not yet been sufficiently fleshed out, it's not yet clear which approach would better achieve semantic interoperability: a tightly defined set of architectural, data, and content standards requiring strict conformance (though potentially at the cost of less widespread adoption), or a much broader approach with fewer restrictions that encourages wider participation but leaves a lot of things to sort themselves out much farther downstream.
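To make that tradeoff concrete, here is a deliberately simplified sketch of the two conformance philosophies at the point of data intake. The field names and synonym table are hypothetical, invented for this column rather than taken from caBIG or any actual standard:

```python
# Hypothetical contrast between strict and permissive conformance at intake.
REQUIRED_FIELDS = {"patient_id", "concept_code", "value", "effective_time"}

def accept_strict(record: dict) -> dict:
    """Tightly defined approach: reject anything that does not conform exactly."""
    missing = REQUIRED_FIELDS - record.keys()
    unknown = record.keys() - REQUIRED_FIELDS
    if missing or unknown:
        raise ValueError(f"non-conformant record: missing={missing}, unknown={unknown}")
    return record

# Invented synonym map standing in for downstream terminology services.
SYNONYMS = {"pt_id": "patient_id", "code": "concept_code",
            "obs_value": "value", "obs_time": "effective_time"}

def accept_permissive(record: dict) -> dict:
    """Looser approach: map what you can now, keep the rest for later reconciliation."""
    mapped = {SYNONYMS.get(k, k): v for k, v in record.items()}
    core = {k: v for k, v in mapped.items() if k in REQUIRED_FIELDS}
    leftovers = {k: v for k, v in mapped.items() if k not in REQUIRED_FIELDS}
    core["extensions"] = leftovers   # deferred: someone must still make sense of these downstream
    return core
```

The strict version guarantees consistent meaning at the point of entry but turns contributors away; the permissive one welcomes nearly everyone but defers the semantic reconciliation to someone downstream, which is precisely the tension described above.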
It's important to remember that it's not just a matter of getting data—though what a mountain of data this would be—we also need to ensure sufficient quality and trust so we can understand and use the data consistently and make critical life-altering decisions based upon that knowledge.
Which brings us back to the original question we share with those early NASA scientists: "Where and how exactly do we start?" Well, one way to start is by simply imagining stories, visualizing the world as we'd like it to be in 2020.
For example, one such story that resonated with me from the first workshop was described as a large-scale, real-time rolling clinical trial: when a patient is diagnosed, their physician presents them with a range of treatment options and the associated probabilities of cure versus risk. Once the patient and their doctor choose what looks like the best option, the patient is enrolled in a virtual clinical trial that tracks the progress of their treatment against others with the same condition right through to outcome. This information in turn would feed back into the learning system to adjust the probabilities for the next patient's treatment scenario, and so on.
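As a purely illustrative sketch (the treatment arms, counts, and the simple Beta-Binomial update below are invented for this column; a real learning system would need to adjust for risk factors, confounding, and far more), that feedback loop might look something like this in miniature:

```python
# Toy sketch of the "rolling clinical trial" feedback loop using a Beta-Binomial update.
from dataclasses import dataclass

@dataclass
class TreatmentArm:
    name: str
    successes: int = 1   # Beta(1, 1) prior pseudo-counts: no opinion before outcomes arrive
    failures: int = 1

    def record_outcome(self, success: bool) -> None:
        # Each completed patient journey feeds back into the learning system.
        if success:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def estimated_cure_rate(self) -> float:
        # Posterior mean shown to the next patient and physician weighing options.
        return self.successes / (self.successes + self.failures)

arms = [TreatmentArm("option A"), TreatmentArm("option B")]
arms[0].record_outcome(True)
arms[1].record_outcome(False)
for arm in arms:
    print(f"{arm.name}: estimated cure rate {arm.estimated_cure_rate:.2f}")
```

Each completed outcome nudges the estimates that the next patient and physician will see, which is the essence of the rolling trial.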
In such a world, there are no precise boundaries between patient care and clinical research. Patient data is collected once, in a common format that can be used equally for patient care, administration, and research. Health information is easily available to researchers, and research information is readily available to healthcare providers and to patients, who are eager to share their own information in the hope that it will help improve the health of others. The learning system is always conducting research, readily adapting to new knowledge, and the entire population can benefit from the cumulative and accumulating wealth of information generated by healthcare. In such a world, there is also a blurring between primary and secondary uses of data, since all uses are primary in the mind of the user.
Of course, such a future world must be based on a great deal of trust, and we have quite a way to go in that area before we can hope to let the technology take us the rest of the way home. But we can start by envisioning more of the stories. Try to visualize what it would be like to have such a system available in 2020, and how it could improve the lives of so many. Then think about what we can begin to do now to help make it happen.
Wayne R. Kubick is Senior Vice President and Chief Quality Officer at Lincoln Technologies, Inc., a Phase Forward company based in Waltham, MA. He can be reached at [email protected]
1. Software Engineering Institute, "Ultra-Large-Scale Systems: The Software Challenge of the Future," June 2006, http://www.sei.cmu.edu/library/abstracts/books/0978695607.cfm.
2. Linda Northrop, Ultra-Large-Scale Systems Presentation, Slide 19, http://www.sei.cmu.edu/library/assets/northropkeynote.pdf.
3. Wikipedia, "Ultra-Large-Scale Systems," http://en.wikipedia.org/wiki/Ultra-Large-Scale_Systems.