The ABCDEs of Technology Adoption | Human-Centered Change and Innovation
GUEST POST from Arlen Meyers, M.D.
Every day, doctors must decide whether or not to adopt a new technology and add it to their clinical armamentarium, either replacing or supplementing what they already do. In doing so, they run the risk of making a Type 1 or a Type 2 adoption error.
Epistemology is a branch of philosophy generally concerned with the nature of knowledge. It asks questions such as ‘How do we know?’ and ‘What is meaningful knowledge?’. Understanding what it means to have knowledge in a particular area—and the contexts and warrants that shape knowledge—has been a fundamental quest for centuries.
In Plato’s Theaetetus, knowledge is defined as the intersection of truth and belief, where knowledge cannot be claimed if something is true but not believed or believed but not true. Using an example from neonatal intensive care, this paper adapts Plato’s definition of the concept ‘knowledge’ and applies it to the field of quality improvement in order to explore and understand where current tensions may lie for both practitioners and decision makers. To increase the uptake of effective interventions, not only does there need to be scientific evidence, there also needs to be an understanding of how people’s beliefs are changed in order to increase adoption more rapidly.
Only 18% of clinical recommendations are evidence-based. There are significant variations in care from one doctor to the next. Physicians practicing in the same geographic area (and even health system) often provide vastly different levels of care during identical clinical situations, including some concerning variations, according to a new analysis.
Clinical and policy experts assessed care strategies used by more than 8,500 doctors across five municipal areas in the U.S., keying in on whether they utilized well-established, evidence-backed guidelines. They found significant differences between physicians, including some working in the same specialty and hospital.
The study results were published Jan. 28 in JAMA Health Forum.
One practice difference the authors found surprising was in arthroscopic knee surgery rates. In these cases, the top 20% of surgeons performed surgery on 2%-3% of their patients, while the bottom 20% chose this invasive option for 26%-31% of patients with the same condition being treated in the same city.
The question is why?
There’s an old joke that there are two ways everyone sees the world: those that see it as a 2×2 matrix and those that don’t.
A Type 1 error is a “false positive”: the practitioner uses or does something that is not justified by the evidence. A Type 2 error, on the other hand, is a “false negative”: the practitioner rejects or fails to do something that represents best-evidence practice.
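The 2×2 framing above can be made concrete with a minimal sketch. This is purely illustrative (the function name and labels are mine, not the author's); it just classifies an adoption decision against the state of the evidence:

```python
def adoption_error(evidence_supports: bool, doctor_adopts: bool) -> str:
    """Classify an adoption decision against the evidence (the 2x2 matrix)."""
    if doctor_adopts and not evidence_supports:
        # Adopting something the evidence does not justify
        return "Type 1 error (false positive)"
    if not doctor_adopts and evidence_supports:
        # Rejecting a best-evidence practice
        return "Type 2 error (false negative)"
    return "No adoption error"

# Walk the four cells of the 2x2 matrix:
for evidence in (True, False):
    for adopts in (True, False):
        print(f"evidence={evidence}, adopts={adopts} -> "
              f"{adoption_error(evidence, adopts)}")
```

The two off-diagonal cells are where the trouble lives: adopting without evidence, and failing to adopt despite it.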
The most recent example is the campaign to get doctors to stop prescribing low value interventions and tests. The Choosing Wisely campaign, which launched five years ago, hasn’t curbed the widespread use of low-value services even as physicians and health systems make big investments in the effort, a new report found.
The analysis, released in Health Affairs, said a decrease in unnecessary healthcare services “appear to be slow in moving” since the campaign was formed in 2012. The report found that recent research shows only small decreases in care for certain low-value services and even increases for some low-value services.
The reasons why American doctors keep doing expensive procedures that don’t work are many. The proportion of medical procedures unsupported by evidence may be nearly half. In addition, misuse of cannabis, supplements, nutraceuticals and vitamins is rampant.
Evidence-based practice is held as the gold standard in patient care, yet research suggests it takes hospitals and clinics about 17 years to adopt a practice or treatment after the first systematic evidence shows it helps patients. Here are some ways to speed the adoption of evidence-based care.
Unfortunately, there are many reasons why there are barriers to adoption and penetration of new technologies that can result in these errors. I call them the ABCDEs of technology adoption:
Attitudes: While the evidence may point one way, there is an attitude about whether the evidence pertains to a particular patient, or a general bias against “cookbook medicine”.
Biased Behavior: We’re all creatures of habit, and habits are hard to change. Particularly for surgeons, the switching costs of adopting a new technology and running the risk of exposure to complications, lawsuits and hassles simply aren’t worth the effort. Doctors also suffer from confirmation bias, thinking that what they do works, so why change?
Here are the most common psychological biases. Here are many more.
Why do you use or buy what you do? Here is an introduction to behavioral economics.
Cognition: Doctors may be unaware of a changing standard, guideline or recommendation, given the enormous amount of information produced on a daily basis, or might have an incomplete understanding of the literature. Some may simply feel the guidelines are wrong or do not apply to a particular patient or clinical situation and just reject them outright. In addition, cognitive biases and personality traits (aversion to risk or ambiguity) may lead to diagnostic inaccuracies and medical errors resulting in mismanagement or inadequate utilization of resources. Overconfidence, the anchoring effect, information and availability bias, and tolerance to risk may be associated with diagnostic inaccuracies or suboptimal management.
Denial: Doctors sometimes deny that their results are suboptimal and in need of improvement, based on “the last case”. More commonly, they are unwilling or unable to track short term and long term outcomes to see if their results conform to standards.
Emotions: Perhaps the strongest motivator. Fear of reprisals or malpractice suits; greed driving the use of inappropriate technologies that generate revenue; the need for peer acceptance, to “do what everyone else is doing”; or ego driving the opposite need, to be on the cutting edge, win the medical technology arms race, or create a perceived competitive marketing advantage. In other words, peer pressure and social contagion are as present in medicine as anywhere else. “Let’s do this test, just in case” is a frequent refrain from both patients and doctors, when in fact the results of the treatment or test will have little or no impact on the outcome. It is driven by a combination of fear, moral hazard and bias.
These “unnecessary” barriers, which vary from complicated funding structure to emotional attitudes towards healthcare, have resulted in the uneven advancement of medical technologies – to the detriment of patients in different sectors.
Economics: What is the opportunity cost of my time and expertise, and what is the best way for me to optimize it? What are the incentives to change what I’m doing?
The past 600 years of human history help explain why humans often oppose new technologies and why that pattern of opposition continues to this day. Calestous Juma, a professor in Harvard University’s Kennedy School of Government, explores this phenomenon in his latest book, “Innovation and Its Enemies: Why People Resist New Technologies.”
Changing patient behavior has been described as the “next frontier”. To make that happen, we will need to change doctor behavior as well. Some interventions work, but passive interventions don’t.
Here are some suggestions:
The job doctors want virtual care technologists to do is to give them a QWILT: quality, workflow efficiencies, income, liability protection, and more time to spend with patients (face to face, since, in most instances, that’s how they get paid). Increasingly, they also want to spend more time “off the clock”, instead of being overburdened with EMR pajama time and answering non-urgent emails or patient portal messages.
While monetary incentives and behavioral “nudges” both have their strengths, neither of them is sufficient to reliably change clinician behavior and improve the quality of their care. Sometimes nudging helps. Organizational culture, while diverse and complex, provides another important lens to understand why clinicians are practicing in a certain way and to put forth more comprehensive, long-term solutions.
The public shares some culpability. Americans often seem to prefer more care rather than less. But a lot of it still comes from physicians, and from our inability to stop when the evidence tells us to. Professional organizations and others that issue such guidelines also seem better at telling physicians about new practices than about abandoning old ones.
Medicine has a lot to learn from the consumer products industry when it comes to using the power of brands to change behavior. Some are using personal information technologies to give bespoke information to individual patients, much like Amazon suggesting what books to buy based on your preferences. We need to do the same thing for doctors.
Like most consumer electronics customers, doctors will almost always get more joy from technology the longer they wait for it to mature. Cutting-edge gadgets can invoke awe and temptation, but being an early adopter involves risk, and the downsides usually outweigh the benefits.
There are many barriers to the adoption and penetration of medical technologies. The history of medicine is full of examples, like the stethoscope, that took decades before they were widely adopted. Hopefully, with additional insights and data, it won’t take us that long.
Image credits: Pixabay, ResearchGate.net, Kris Martin