Clinical Decision Support: Driving the Last Mile


Thank you and I just wanted to say thanks
so much, Scott, for joining us today. If you don’t know Scott, he’s one of the smartest
people in healthcare, but he’s also one of the best souls. It’s an honor for me to share this time with
him. We have a lot to do here today, so we’re going
to fly through these quickly, friends. Hang on. First thing we’re going to do is collect a
little bit of data about the audience and the audience’s perception of decision support
at their organization. Take a look at that poll and Brooke will collect
some data and we’ll share it back. We’re going to go ahead and launch this poll. We’d like to know on a scale of one to five,
rate your organization’s clinical decision support effectiveness. Your options are, one, not effective at all;
two, somewhat effective; three, moderately effective; four, very effective; and five
extremely effective. We’ll give you a few seconds to get your answers
in. We’ve got some answers coming in right now. We’ll give you another few seconds and then
we’ll go ahead and close that poll. Okay, looks like our answers are dying down. We’ll go ahead and close that poll and share
the results. All right, looks like we had 12% say not effective
at all, 29% said somewhat effective, 38% said moderately effective, 14% said very effective,
and 8% said extremely effective. Wow. Pretty bell shape-ish but I’m impressed that
there’s 8% at the extremely effective. We want to hear from you. Scott, what do you think about that data? No, I agree. I think there’s opportunity for a significant
improvement to get more people up to extremely effective and very effective. Yeah, very interesting data there. During the Q&A, please everybody speak up
and share more about your perception. All right. I think we have one more, right? We’re going to go ahead and launch our next
poll question. On this question, we would like to know, in
your opinion, what is the greatest barrier to better clinical decision support? Your options are: technology of EHRs, uncertainties in evidence-based medicine, clinician cultural resistance, fundamentally poor data quality in healthcare, or other. We’ll give you a few seconds to answer that
question. We’ve got lots of votes coming in. Give you another couple seconds. It looks like it’s leveling off. We’ll go ahead and close that poll and share
the results. All right, we had 20% said technology of EHRs
is the greatest barrier, 12% said uncertainties in evidence-based medicine, 31% said clinician
cultural resistance, 30% said fundamentally poor data quality in healthcare, and 6% said
other. What do you think about that one, Dale? Very interesting. I think I align with the feelings of the audience here, and the 30% for poor data quality in healthcare plays well into what we’re
going to discuss here today. Yeah, two very close answers. Scott, any thoughts there from you? No, I agree. The findings are not that surprising and I
think we’re going to address a number of the barriers today. Yeah, I have a personal bias here; I personally believe the uncertainties in evidence-based medicine play a bigger role, but let’s dig in. Okay. All right. Here we go. I want to take a minute before we dive into
things just to talk about solving problems and building systems because lately I’ve seen
some misses in the approach to solving problems and building systems. This is kind of the teacher in me coming out
here. What I emphasize when I do teach classes is
spend lots of time getting the concepts right. Then, explore options for implementation and
I’m not talking about requirements, I’m talking about your concepts: concepts of operations, your conceptual economic model, your decision-making models. There are usually only a few good concepts
that you can apply towards solving a problem, but there are usually lots of options for
implementation. What I quite often see is people moving too
quickly into implementation on shaky concepts and if you don’t nail the concepts, your implementation
will forever be underperforming or fail. Here’s the example that I like to use in this
context. Historically, the conceptual center of EHR
design was the encounter, but if you could roll the clock back, it should have been the
patient, that should have been the conceptual design of the EHR and that conceptual miss
early on has dogged all of us for years, forcing all sorts of workarounds in software, data, and workflow, including in our ability to implement decision support. Think about that. I’m happy to address further questions about what I mean by this movement from concept to implementation, but I’d encourage everyone to be very deliberate about that thinking. The other concept relates to teaching and forming the
human mind, I’m no cognitive scientist but I’ve been around for a while and I’ve listened
to a lot of cognitive scientists and what I’ve concluded is that the human mind works
like a filing system. Give that human mind general file folders,
then fill those file folders with specifics. In the military, we call this gen spec learning. That’s the reason that you can take 18-year-old
kids and teach them how to run multimillion-dollar, highly complex weapons and aerospace systems with very few mistakes. What you’ll see today is a reflection of this
philosophy and that is, I’m going to talk about some concepts that I’ve spent a lot
of time thinking about as it relates to healthcare decision support and decision support in general. Then, Scott is going to provide very interesting,
fascinating specifics about the implementation of those concepts. Okay? Let’s roll along here. The agenda, 20 minutes from me talking about
concepts, current and future state of data in US healthcare; 20 minutes for Scott, the
opportunities and potential for better decision support, examples of decision support, really
specific, great examples, and then we’ll have time for Q&A. Back to concepts. This was my life prior to healthcare. I started in healthcare formally as an employee
of Intermountain Healthcare in 1997, dabbled in healthcare before that, but most of my
formative years in decision support and analytics came from nuclear warfare planning and execution. This is the underground command center at
Offutt Air Force Base in Omaha, where I worked the first part of my career. The second part of my career, I moved from
the ground up into airborne command centers in these doomsday planes, very data intense,
very time and life critical decisions. It also formed for me not just the technology
of decision support but the psychology of decision support. You can imagine the sensitivities associated
with the psychology of decision support as it related to nuclear warfare and nuclear
weapons. It was incredibly complicated and incredibly
informative for me early in my life to learn from all of that. All of that influenced my approach to healthcare
data and decision making. Since 1997, I’ve been trying to bridge that
gap between what I learned in military data and technology and psychology and bring that
to the benefit of healthcare. Just one more sort of human-interest image, sort of creepy also: this is a splashdown of inert warheads at the Kwajalein Atoll. I worked on the launch control program for that Peacekeeper intercontinental ballistic missile as a software engineer after I got out of the Air Force. You can imagine the amount of telemetry that
was involved in these test flights. The Peacekeeper missile had 10 nuclear warheads on it, and I think this slide actually shows only nine, but this was 4,000 miles away from its launch at Vandenberg Air Force Base, bathed in sensors and data, and we would
create what we would call a data profile for every flight and that would drive the kind
of sensors that we would use. That would drive the data collection and the analysis afterwards. Again, a big influence on my life came from
aerospace and the telemetry of nuclear weapon systems. I would caution everyone to employ decision
support cautiously. Going back to my earlier biased question or
biased observation that healthcare data quality is pretty poor, but I’m also biased to believe
that the evidence-based medicine that we have in healthcare is actually pretty shaky as
well. There’s quite a bit of error in randomized
clinical trials. Just be cautious. I’m not saying don’t implement data-driven
decision support. Just be cautious about declaring what you
think is the truth. This curve actually represents what I think
was a pretty significant leap beginning in 2008 under the HITECH Act when we, as a country,
committed to the deployment of EHRs. That raised our data quality considerably. It didn’t reach perfection but it raised our
data quality and over time there will be similar jumps. I’m not quite sure where they’re going to
come from in terms of data quality and data breadth in healthcare. I’m hopeful that bio-integrated sensors play
a part in that. I’m hopeful that we’ll get better at utilizing
data that’s already in existence and making more sense of it as it relates to health and
life sciences and in healthcare but there’s no doubt that the quality, the breadth and
depth of data in healthcare and life sciences will continue to go up. I also like to emphasize that you should choose
your use cases carefully. The incredibly unfortunate disasters associated with the Boeing 737 MAX are a good example of that. That was the over-application of decision
support and software to fix what was fundamentally a hardware problem in the design of that aircraft. I think there’s room for more discussion about how that applies to healthcare, but just be cautious about when and how you choose use cases for decision support and make sure that you’re doing the right thing with decision support, not actually trying to cover up what is a more fundamental problem. In the case of the Boeing situation, it was
an aircraft design issue. Okay. Concepts and frameworks in healthcare for
decision support, with great friends and colleagues of mine in Canada, we published a paper a
few years ago called A Model for Developing Clinical Analytics Capacity: Closing the Loops
on Outcomes to Optimize Quality. Conceptually, what I would suggest is that
decision making in healthcare occurs in three closed loops. The top loop is the decisions that we make
around populations. The middle loop is decisions we make about
protocols, and the final loop is the decisions we make about patients. Moving from decisions that affect millions
of people in a population health or public health setting, down to subsets of people in general and the protocols we should be using to treat patients of this type, and then down to the very tailored and personalized care that we deliver to individual patients. I think we’re getting better as an industry
at populations and protocols in terms of our decision support but the last mile is still
at the patient level and being very specific. That’s one of the reasons I’m really excited
that Scott is here today to talk about what Stanson Health is doing to get to that last
mile. By the way, I want to mention that we have
no financial relationship with Stanson Health at all. Scott is just a good friend and I believe
in what Stanson Health is doing. We don’t benefit by advertising Stanson Health. We just think it’s a good idea to bring up
good ideas and good people and highlight those to the industry. I just want to make that very clear. A little more on this. For populations, again, we’re dealing with years and decades as the mean time to improvement; the span of the population affected is millions down to several hundred thousand; your analytic consumers tend to be boards of directors and executive leadership, driving strategic plans and policy. Then, you start narrowing that down to protocols. Weeks and months is the time frame for decision
making and improvement, subsets of patients down to the hundreds and thousands. You’re delivering analytic and decision support
to care improvement teams, clinical service lines that are defining and helping refine protocols, and researchers. Then, finally, down to specifics: the patients. Minutes, hours, and seconds make up the decision-making timeframe there. Individual patients are the span of those affected
and you’re delivering this to physicians and patients at the point of care. This is a model that, again, I published originally,
oh gosh, back in 2011 I think, maybe 2010 with a friend Dennis Prady in Canada in our
attempt to describe the progression of analytics adoption and decision support in the industry. There’s plenty of detail behind this. You can go Google it and look into it. I would ask you to rate where your
organization is and maybe let’s talk about that in the Q&A. I would caution that when we apply this to
the industry, when we apply this to clients, what I see quite often is a love affair with
the higher levels and achievement and activity in the higher levels of this model, but organizations
are not solving the fundamental infrastructure and foundations beneath those higher levels. Those higher levels, levels six, seven, and eight, are all very appealing, very attractive. We all want to be associated with that, but
the reality is if you don’t take care of levels one, two, three and four, you’re going to
spin your wheels. You’ll never be completely effective at those
upper levels. In fact, what I see more and more, thanks to the overwhelming number of quality measures and value-based contracts we have, is that the oxygen is literally being sucked out of the room. No one has the time nor the resources to address
those higher levels because it’s all being spent on level three and level four and by
the way, inefficiently at level three and level four. We should all work more closely together to
minimize the impact that those level three and level four reports are having on the industry
so that we can all move up faster to the more differentiating and value-based areas. One of the patterns, one of the concepts, that we should all be creating in healthcare is the notion of the digital twin. The three patterns that we are slowly stitching together at Health Catalyst are the three in the circles indicated on this slide. Patients like me is one form of that pattern;
who were treated like this, is another form of that pattern; and had outcomes like this. The first circle is actually relatively easy
to create. That’s the fundamental… People are calling it the digital phenotype
now, patients like this, patients like me. Who were treated like this is a little more
complicated pattern to add to that, but it’s just rounding out the notion of the digital
twin, right? It’s just more attributes on the digital twin. Then finally, outcomes is the last piece of
that. Our data quality in that outcome space is
very poor right now, but we’re chipping away at the other two and I would suggest that
we start thinking more about patterns in AI than we do in predictions. We’re sort of obsessed right now with predictions
in healthcare and I think we should be spending more time on patterns that expose information
to the human brain that we otherwise wouldn’t see, the patterns in data that we otherwise
couldn’t see as a human. Fewer predictions, more patterns is what I would suggest.
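To make the patients-like-me pattern a bit more concrete, here is a minimal sketch of one way to surface the most similar patients for a clinician to inspect, rather than emitting a prediction. The feature columns, the cohort values, and the distance measure are illustrative assumptions for this sketch, not Health Catalyst’s actual method.

```python
import numpy as np

# Minimal "patients like me" sketch: surface the most similar patients in a
# small cohort for a clinician to inspect, rather than emitting a prediction.
# The feature columns (age, BMI, HbA1c, systolic BP) and all values below are
# illustrative placeholders, not real patient data.
cohort = np.array([
    [67, 31.2, 8.1, 142],   # patient A
    [54, 24.0, 5.4, 118],   # patient B
    [71, 29.8, 7.9, 150],   # patient C
    [66, 30.5, 8.3, 139],   # patient D
])
index_patient = np.array([65, 30.0, 8.0, 140])

# Standardize each feature so no single unit dominates the distance.
mu, sigma = cohort.mean(axis=0), cohort.std(axis=0)
z_cohort = (cohort - mu) / sigma
z_index = (index_patient - mu) / sigma

# Euclidean distance in standardized feature space; smaller means more similar.
distances = np.linalg.norm(z_cohort - z_index, axis=1)
ranked = np.argsort(distances)  # indices of "patients like me", closest first
print(ranked, distances[ranked])
```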
That’s the transition to this slide: predictions without interventions are a liability to the decision maker, not an asset. Let’s all of us relax our attention
and our obsession with predictions, unless we can tie those to interventions and start
thinking more about pattern recognition, the power of AI to reveal patterns in data that
the human brain would not otherwise be able to see. All right. What are the things that we need to digitize
in healthcare based on aerospace and automotive role models? The aerospace and automotive arenas have been highly digitized since the 1990s, when they set out to do exactly that: you have to digitize your assets and you have to digitize your operations. In airplanes, it’s digitizing the aircraft,
air traffic control, baggage handling, ticketing, maintenance, manufacturing. For us, it’s the patient registration, scheduling,
encounters, diagnosis, orders, billings and claims. How do we continue pushing that along? Well, data volume is key. There was a paper that was published out of
IEEE and it concluded that invariably simple AI models and a lot of data trump more elaborate
models based on less data. What I’m saying here, and what is validated by the authors in this IEEE study, is that data volume is critically important and we actually have
very little data volume in healthcare. If we’re going to reach highly accurate, precise
decisions about patients, we have to have more granular data. Vehicle health monitoring is another example
of a very data-intense environment, right? Tesla collects one million miles of driving
data for every 10 hours of driving, 25 gigabytes of data per hour per car. Elon is all about the digitization of those assets and what it can do to extend the life, the quality, and the safety of the products that they produce in the automotive industry. Another example that I use, from space operations,
is the data that’s collected as a consequence of satellites. I worked in the space operations field when
I was in the military and NSA. We were dealing with overhead assets all the
time. The type of data associated with those assets
is very similar to healthcare. It’s highly dimensional. It’s multi-modal, heterogeneous. It has a temporal dependence. In other words, the time of day or the clock at which you collect data matters to how you analyze the data, and missing data means something in those contexts. The message here is we could learn a lot from
aerospace and automotive engineers in the further digitization and decision support
of healthcare. The aircraft industry set out in the early
nineties to collect a lot of data very purposefully, and now we see 10,000 times more data being collected in the airline industry than in the early 2000s, five to eight terabytes
of data per flight. Now, let’s contrast that against the current
state in healthcare for data and decision support. I would suggest that this is my life on the
left and this is actually healthcare’s digital view of my life on the right. It’s highly pixelated, and the only way we can improve that is to increase the data that we collect about me and my life and round that out. I’ve had to calculate this data a number of
times in determining storage requirements. We only collect about a hundred megabytes of data per patient per year, whereas cars collect 30 terabytes per eight hours and planes five to eight terabytes per four-hour flight in those very data-intense environments. The human health data ecosystem, this is a
cartoon that I believe we have to round out as an industry and at each of our healthcare
organizations. If we truly, truly want to understand that
patient at the center, we have to collect data in each one of the bubbles surrounding
that patient. We’re only collecting data on patients right
now that are seeking care. We’re not collecting any data on patients
who are not seeking care. Let’s all put these bubbles on a timeline
for strategic data acquisition and map out a plan for the industry over the next 5 to
10 years to increase the precision of our understanding of patients. A great paper from the folks at Toronto about
the quality of data and the challenges that the quality of data in healthcare presents
to AI and this quote is very telling, “Diseases in EHRs are poorly labeled, conditions can
encompass multiple underlying endotypes and healthy individuals are underrepresented. This article serves as a primer to illuminate
these challenges and highlights opportunities for members of the machine learning community
to contribute to health care.” Again, we should take advantage of the data
but be cautious about the interpretation of the data. Another interesting thing here is that in
a typical note, 18% of the text was manually entered, 46% copied, and 36% imported. There’s this tendency to believe that there’s all this knowledge tied up in text notes that we should be taking advantage of, but in reality, there are a lot of data quality problems in those text notes as well. Should we take advantage of the data? Sure, but should we be cautious about the conclusions
from that data? Absolutely. Yet another, this came out in JAMA not too
long ago actually. This was in September. They put observers in the clinic to see if
what was discussed with the patient in the clinic actually made it into the EHR. The conclusion of the study is as follows:
38% of the review of systems data were confirmed, which, put another way, means that roughly 61.5% of the time the EHR data did not actually reflect what happened in the review of systems. Same kind of thing with physical exams: 47%
of the time the EHR did not actually reflect what happened as a consequence of the physical
exam. This was, again, verified by human observers
watching and comparing the two environments. Be cautious. As if we needed one more, here’s another study, from The BMJ, that concluded
that 49% of randomized clinical trials were deemed high risk for wrong conclusions because
of missing or poor measurement of outcomes data. We clearly have to get better at collecting
outcomes data. If we believe that randomized clinical trials
are always accurate, there’s probably some falsehood in that belief. Future state, you know what, friends, this
is going to take too long. This is a statement I’ll send out later. It’s in the slides, but this statement hangs
on my office wall and it’s what we envision as the conversation between a physician and
their patient. We’re slowly chipping away at this. Scott is going to go into the details about
it. What I would suggest is that by 2030, every citizen should possess at least 10,000 times more data than exists in 2020, borrowing from the progress of the aerospace industry, coupled with analytics and AI to support their health optimization. I would suggest that the next head of Health and Human Services should make this a national goal. In the US, that means going from a hundred megabytes of data per year per patient to one terabyte. It’s really not that grand a vision. We could actually, hopefully, collect more, but at least get to that point.
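As a quick check of the arithmetic behind that target, here is a minimal sketch; the 100-megabyte baseline and the 10,000-times factor come from the talk above, while the decimal unit conversion is an assumption for illustration.

```python
# Back-of-the-envelope check of the 2030 data target described above.
# Assumes decimal units (1 TB = 1,000,000 MB), purely for illustration.
current_mb_per_patient_year = 100      # roughly what we collect per patient per year today
growth_factor = 10_000                 # "10,000 times more data"
target_mb = current_mb_per_patient_year * growth_factor
target_tb = target_mb / 1_000_000      # convert megabytes to terabytes
print(f"Target: {target_mb:,} MB = {target_tb:g} TB per patient per year")
```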
This is a great summary of work coming out of Northwestern and John Rogers, the world of sensors, these bio-integrated sensors. Again, I’ll let you Google that and look into it more, but it’s fascinating work and I believe a great area in which we’ll start to see more clinically validated sensors coming into the decision support environment. Microns-thin, one-inch pliable sensors with
integrated Bluetooth antennas, CPUs, physiologic monitors and a wireless power system all embedded
in these thin, I wouldn’t even call them wafers, they’re more like skins. I believe that there’s room for us to create
a new type of skillset, somewhat related to informatics but a little different and that
is what I call a digititian and this digititian would sit between the patient and the physician
and the care team and these digititians would develop digital profiles of patient types. For example, a diabetic patient should have
a different sort of digital profile, a different data package, than an L4-L5 fusion patient, for example. You want to know different things about that patient type than you do about this patient type. It’s the digititian’s job to define those data profiles for different patient types. Then, work with the patient, work with the
care team to collect that data, analyze that data and highlight it for action amongst all
the parties involved. Okay. In closing, I just suggest that we should
be humble in healthcare. Look for role models, borrow concepts, hire engineers from military, aerospace, and automotive. The volume and the quality of healthcare data
is lower than what the hype would lead you to believe. Just be aware of that. The good news is we’ve got much left to achieve
and transformation is truly ahead of us. Phew. That was a mouthful. Thanks everyone for tolerating all of the
slides. Now, to the really interesting stuff with
Scott. Okay. Thank you very much, Dale. You are a great speaker, a visionary in healthcare
and analytics and have been very influential as I tried to learn more about analytics and
decision support. Let’s talk about where we are today, the opportunity
ahead of us. The Institute of Medicine, renamed the National
Academy of Medicine, has said that there’s a 17-year gap between the discovery of important
potentially lifesaving information and widespread translation into practice. What can we do to minimize that gap? A couple of studies in the New England Journal of Medicine discovered that patients are treated with care consistent with the scientific evidence about 50% of the time. The study was first performed in adults and
the results were replicated in children. A recent study showed that Medicare beneficiaries
who are treated by female physicians have lower mortality rates than Medicare beneficiaries
treated by male physicians. The question is, what is actionable out of that? Not all Medicare beneficiaries could switch from male physicians to female physicians. What do we do about that? In the discussion section, people hypothesize that it might be because female physicians’ practice is more consistent with evidence-based medicine than male physicians’. Studies in JAMA show that about a third
of all healthcare costs are waste, 10% of healthcare is overtreatment where the harm
exceeds the benefit, is there an opportunity to improve affordability? If we cut through all the numbers, if we believe
the published information during the hour that Dale and I are spending with you today,
there will be about 28 deaths in the United States because of medical errors and $22 million
spent on medical overtreatment and clearly we can do better than that. The good news is we’ve invested heavily in
medical research and there are tremendous benefits as a result of that investment, NIH,
other medical research, and the output of that research is a lot of articles, a lot
of information. 6,000 articles published every day, an article
every 30 seconds, 75,000 lab tests and it’s been said that the doubling time of medical
information in 1950 was about 50 years. This year, in 2020, people estimate that the
doubling time of medical information will be 73 days. Clearly, no matter how smart any healthcare
provider is, they will not be able to keep up with all of the information that could
potentially benefit patients but the cloud can store voluminous amounts of information
that could be potentially helpful and improve care for individual patients. If we look at the published literature, we
find that there is an approach that is proven to be effective for improving patient care. Dr. Kawamoto of University of Utah looked
at all the studies that had been previously published. When interventions worked to improve quality
of care, what did the investigators do? When interventions failed to improve quality
of care, what did the investigators do? In the systematic review, meta-analysis, meta-regression,
Dr. Kawamoto found and published that if you automatically provide decision support as part of the workflow, you are 112 times more likely to improve care than if you do not. If we think about this in the context of a
molecule, if a drug came along that improves survival for patients with heart failure,
that was 112 times more effective than drugs that have been in the market and was safe
and affordable, that drug would be a blockbuster. I’m having trouble advancing the slides. Did they lock up? It did. My apologies. Okay. No worries. We can switch back. We can switch and run the slides from here,
Scott. Yeah, we’ll have Dale run the slides for you. Okay, thank you, Dale. Yup, no worries. Okay. Let me jump in. Let’s look at current generation clinical
decision support using native rules engines. Here’s an example in front of you that looks
at Choosing Wisely recommendations and when vitamin D should not be ordered as part of
a population initiative. This is not in the cloud. We find recommendations and supporting information
from the scientific societies about when a test should not be ordered. We also have patient information available
so that patients could understand what could potentially help them and harm them. Next slide. You see or I’m sorry if you go back one. Thank you very much, Brooke. Here’s another example of using a native rule
engine and this is a study or an intervention from a prestigious health system in Pennsylvania,
which implemented four clinical decision support alerts for high-cost labs. Many times patients are in the hospital and
perhaps progressing and we order serologies or tests that may not be available for a week
or two and they’re shipped to send-out labs. The patient is discharged, doing quite well,
back at work before we get the results. Therefore, the results do not impact that
patient’s care. This particular health system found by implementing
these four alerts, they estimate that they’re going to save over $400,000 per year. Next slide please. Dale shared some examples from the airline
industry and let me just share one more. Last night, I flew home and I looked at some
of the design features. When I’m flying, the airline industry makes
it really easy for me to turn on a lamp at night for me to read. I just push a button that’s directly overhead. However, I assume they make it very difficult
for me to open the exit door, midflight. Then, finally when I’m in the airplane bathroom
and the first thing I want to do is turn on the light and to turn on the light, I lock
the door. That’s a design feature where they want to
make sure people lock the door. This is deliberate, intentional, making it
easier to do the right thing, harder to do the wrong thing. Next slide please. We can design order sets and preference lists
to promote the right thing and discourage the wrong thing. Now, order sets were created to prevent underuse; they’re checklists. But a side effect that has been discovered of order sets and preference lists is that they can inadvertently promote overuse. An example, and this actually comes from Cedars-Sinai, where a group removed cardiac monitoring orders from order sets that the cardiologists
felt should not contain cardiac monitoring. Let’s say general admission order sets for
internal medicine or general surgery admission order sets. One could still order cardiac monitoring, and they also implemented the American Heart Association telemetry guidelines. As a result of this initiative, there were
savings of approximately $3.7 million a year, what the administrators called hard green-dollar savings. They looked at quality, and there was no change in mortality, use of rapid response teams, or code blues: significant reductions in cost while maintaining a high level of quality. Next slide please. Another example, opioids. We have a problem, a significant problem,
a tragic problem with overuse of opioids. Our biggest concern is perhaps overuse, which is a much bigger concern than underuse. Many organizations are looking at their order
sets and preference lists to remove opioids when perhaps one could still order an opioid,
but we don’t necessarily want to encourage its use as part of a checklist. Next slide please. Then finally, as we discussed earlier, accelerating
evidence or the translation of evidence into practice, a recent study showed that antidepressants
and antipsychotics with high anticholinergic properties can increase the risk of dementia
by 50%, 50% when taken over a three-year period. We see some of these antidepressants and antipsychotics
on order sets and preference lists when there are antidepressants and antipsychotics with
fewer anticholinergic effects, and therefore this is an opportunity to reduce preventable dementia, an opportunity to improve the health of communities and reduce the burden and morbidity associated with dementia. Next slide please. One can look and try to model, using the clinical
epidemiology literature, the benefits of using decision support to reduce overuse. Benzodiazepines, drugs like Ativan and Ambien, in elderly individuals are associated with falls, dementia, and hip fracture, and by reducing their use through reminders, we can determine how much we’ve reduced the frequency of benzodiazepine usage and we can model the projected impact on outcomes. I found this type of modeling very useful, for example, when I would present to the board of directors at Cedars-Sinai about the potential impact of this work on the quality of care. Next slide please.
Medical feedback. Peer comparison feedback has been shown in randomized controlled clinical trials, not randomizing patients but randomizing physicians. When you provide peer comparison feedback,
you significantly reduce overuse. This example is overuse of antibiotics for upper respiratory tract infections. Peer comparison feedback drove antibiotic prescribing for upper respiratory tract infections down to 3.7%. Next slide please. Here’s an example of an organization in Southern
California where a physician ordered Lyme disease tests in six patients over a three-day
period. You can see the incidence of Lyme disease
in the map of the United States. You do not see or you rarely see Lyme disease
in patients who live in Southern California and who do not leave Southern California. You are more likely to be bitten by a shark or
struck by lightning than to live in Southern California and contract a confirmed case of
Lyme disease if you do not travel to an area where Lyme disease is endemic. Next slide please. Fascinating. Another example from the same organization is about potentially inappropriate pap smears, or cervical cytology screening, in women between the ages of 30 and 65. This was in an IPA of 47 obstetricians where, of the 47, if the cases were evenly distributed, you’d expect each physician to contribute about 2% of the inappropriate pap smears, but in this case, one of the 47 physicians contributed 62% of the potentially inappropriate pap smears. Interesting information to provide feedback to this particular gynecologist.
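As a rough illustration of that peer-comparison arithmetic, here is a minimal sketch. The per-physician counts below are hypothetical; only the 47-physician IPA and the idea of comparing each physician’s observed share to the uniform expectation come from the example above.

```python
# Peer-comparison sketch: each physician's share of potentially inappropriate
# tests versus the share expected if cases were evenly distributed. The counts
# below are made-up placeholders; only the 47-physician IPA and the idea of
# comparing observed to expected shares come from the example above.
counts = {"physician_01": 31, "physician_02": 2, "physician_03": 1}  # ...and so on, 47 physicians total
n_physicians = 47
total = sum(counts.values())
expected_share = 1 / n_physicians  # about 2.1% each if evenly distributed

for physician, n in counts.items():
    share = n / total
    flag = "  <-- outlier worth peer feedback" if share > 5 * expected_share else ""
    print(f"{physician}: {share:.0%} of flagged tests (expected ~{expected_share:.0%}){flag}")
```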
Next slide please. Let’s talk about the future and CDS in the
cloud, and this is what gets me really excited about the potential to reduce alert fatigue and increase the impact, the improvements in quality, safety, and affordability of care. Next slide please. Dale shared some examples outside the healthcare
industry, including automobiles. If you think about autonomous vehicles or
self-driving vehicles, it’s really a chassis with an engine and a steering wheel, and you can turn right and you can turn left, you can accelerate and you can brake, and it reacts to sensor data that provides decision support. Where are the other cars? What do the stoplights show? Where is the stop sign, and stop signs will have different levels of clarity. Is there a dog walking in the street? Did a ball roll into the street and a child
might follow? Using this type of sensor information, how
can we improve healthcare? Next slide please. Pattern recognition is the sweet spot for
artificial intelligence and artificial intelligence is performing extremely well. I could go through many examples. I won’t. One example is retinal fundus photographs
looking for diabetic retinopathy, where artificial intelligence does as well as ophthalmologists. Next slide please. As we think about pattern recognition, reading
slides, the job of pathologists, is a type of pattern recognition. Another study demonstrated that if we look for lymph node metastases in women with breast cancer, artificial intelligence does as well as excellent pathologists. Next slide please. Dale mentioned this earlier and the data that’s
potentially available to us could be better. He talked about how data is incomplete, fragmented,
sometimes erroneous, but if you look at the clinically rich, clinically valuable information, it’s been said that about a third of it is captured in a structured format, or discrete data, and two thirds of it in unstructured text. It’s like reading a book. Look at the right side, where two thirds of the words are crossed out. Sometimes you’re going to understand the context,
but many times you won’t. I believe more effective clinical decision
support will require reviewing unstructured data using NLP, machine learning, and AI, and
we’re at the beginning of this process now. Next slide please. Take an Intermountain Low Back Pain Guideline. I think, Dale, you mentioned you started your
healthcare career at Intermountain. This is for imaging patients with low back
pain and doing advanced imaging studies. In order to determine whether it’s appropriate or inappropriate, we have to know whether the duration of low back pain is greater than or equal to three months, and we’re not going to find that as a discrete data element. We need to look at the free text information in the EHR, and whether the patient failed conservative treatment such as physical therapy. Well, physical therapy is often performed
close to a patient’s home. That’s going to require, in many cases, review
of the free text. A guideline like this is not that helpful
unless one can look at the unstructured free text information. Next slide please. We need to look at what facts are available
to us that we can extract, and when they’re not available and we’re looking at free text, we need to, to the best of our ability, and we do not get to 100% accuracy, try to infer
what we think the facts are based on the data that’s available to us in the unstructured
information. Next slide please. An example would be a 65-year-old male patient with worsening low back pain and sciatica. He has tried physical therapy for three weeks and reports no pain radiating to the left hip. We need to parse the information, try to understand it to the best of our ability, and put it in different categories. PT at the beginning of the sentence means patient; PT in the middle of the sentence means physical therapy. We really need the context, and we have to understand how certain we are that the information is actually correct. Next slide please.
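Here is a minimal sketch of the kind of context-sensitive parsing Scott describes, using a simple rule for the PT abbreviation. This is an illustrative toy, not Stanson’s NLP pipeline; a production system would use trained clinical NLP models and attach a confidence to each extracted fact.

```python
import re

# Toy disambiguation of the abbreviation "PT": at the start of a sentence it
# typically means "patient"; elsewhere it usually means "physical therapy".
# This is an illustrative rule only, not a production clinical NLP pipeline.
def expand_pt(note: str) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", note)
    expanded = []
    for sentence in sentences:
        sentence = re.sub(r"^PT\b", "Patient", sentence)            # sentence-initial PT -> patient
        sentence = re.sub(r"\bPT\b", "physical therapy", sentence)  # PT elsewhere -> physical therapy
        expanded.append(sentence)
    return " ".join(expanded)

note = "PT is a 65-year-old male with worsening low back pain. He has tried PT for three weeks."
print(expand_pt(note))
```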
I’m encouraged about the potential of AI for a lot of reasons. I think we’re at the beginning. I think in our lifetimes it will have tremendous
benefit. I think it will contribute to better patient
care. I actually looked in Google and right or wrong,
I read that the average two-year-old toddler is approximately as intelligent as a smart
dog. Here on this screen we have what I believe
or could potentially be a very smart dog, certainly a very cute dog and a very cute
toddler. Now, what’s different here is that dog has
probably achieved its entire upside in terms of intelligence. This girl, in addition to being very cute,
also is at the beginning and has tremendous upside. She could discover a cure to cancer, enable
us to put a person on Mars or become president of the United States. The human brain, in many cases, there isn’t
tremendous more upside from seeing more cases. There are some, but AI with additional training,
potentially has much greater upside. Next slide please. There are limitations. If you look at AI on the left side, you can
see other Labradoodles and fried chicken and it turns out AI struggles to differentiate
Labradoodles from fried chicken. On the right side, you see Chihuahuas and
blueberry muffins. A lot still needs to be worked out. AI has trouble differentiating blueberry muffins
from Chihuahuas. A lot of work underway on topics like these
so that AI will be better in 2021 than it was in 2020. Next slide please. As we evolve and I think Dale shared this,
that we can expect the inputs or the sensors not only to include variables that we reviewed
today, lab images but over time, patient preferences, social determinants of care, genetic information,
proteomic information, microbiome information, precision medicine that will enable us to
provide even better guidance on what will help patients and what will not. Next slide please. There’s a tremendous effort underway to make
the EHR more usable using voice recognition and a number of companies are working on ambient
listening devices and virtual assistants to listen in on provider-patient conversations,
structure the information and enable the information to be used to provide guidance to not only
the healthcare provider, remember to order a cholesterol level or you might want to consider
an ACE inhibitor for Mr. Jones who happens to have hypertension and diabetes, but also
later to provide guidance to the patient at home. “Mr. Jones, it might be a good idea for you
to go out and walk a couple of miles today given that you have hypertension and you have
not walked in a week.” Next slide please. To finish up, I am quite optimistic that through
advances in technology, data science, decision support, we’re going to be able to solve a
number of the problems that I started with. Final slide, please. As you might have gathered, this is a football
player and my guess is I’m one of very few UCLA football fans on this webinar today. Many of you may not know who this is, but
this is Derrick Coleman, and Derrick Coleman actually is deaf and his passion is football. He tried out for his high school football team and everyone told him, “You’re deaf. You can’t play football.” He was an offensive player; he couldn’t hear the play called in the huddle, couldn’t hear the defensive players’ footsteps, couldn’t hear the play called dead. He was the star of his high school football
team. Went on to the UCLA football team where he
was also one of the stars of the football team. Then, people said, “Well, that’s okay. That’s high school and college. You cannot play in the NFL because you’re
deaf.” He ended up playing for the Seattle Seahawks
and he’s wearing what I would guess none of us are wearing on a ring finger, a Super Bowl
ring. He was interviewed later and he was asked, “Derrick Coleman, your whole life you were told you can’t play football because you’re deaf. What did you think when everyone told you
this and now you’re wearing a Super Bowl ring?” He smiled and he said, “Yes, they did say
that, but I couldn’t hear them.” One of the things that I liked about Derrick Coleman is that he would go to schools with children who are deaf and say, “What is your passion? Mine was football. I played in the Super Bowl. What are your dreams? Pursue your dreams.” I think that would be incredibly inspiring to
children who are deaf. The reason I bring this up, a lot of us talk
about, myself included, the challenges in healthcare, how they’re seemingly insurmountable,
but I’m absolutely convinced that over time we’re going to get better. We’re going to reduce the number of deaths
from medical errors. We’re going to improve the affordability of
healthcare and healthcare will transition into the kind of system that we’re proud of
in the future. Thank you for your time. Great. Thank you, Scott. That was awesome. Fascinating as I knew it would be. Brooke, what’s next? All right, we have one more poll question
before we dive into the Q&A. Let me go ahead and launch that. While today’s webinar was focused on the important role clinical decision support can play in improving the quality, safety, and value of care, some of you may want to learn more about the
work that Health Catalyst is doing in this space or maybe you’d like to learn about other
products and professional services. If you’d like to learn more, please answer
this poll question. We’ll go ahead and leave that poll open for
a moment as we begin the Q&A. We should probably give a plug to Stanson
Health too. Yup. Stanson Health is absolutely worth paying
attention to, friends, in terms of their contributions to better decision support,
reducing waste, all of the things that Scott talked about. We didn’t put a poll question for Stanson
Health but you can find them on the web for sure and Scott’s the CEO. All righty. Let’s go to some questions. We have time. Scott and I both said that we could stay over
allotted time. We’re happy to stick around as long as there’s
an active group of questions and participants. Let me pull up the panel here to read these
questions. Some of these, like the poll respondents; Brooke, you shared those. Yeah, we had around 300 on it at peak, yeah. 300? About 330 people. Okay, yeah. It’s a pretty good sample size, right. Rob Dolan Meyer asks, “Isn’t poor clinical
data quality largely due to horrible EHR user interfaces?” I’m not so sure I would agree with that. I mean, user interfaces on the EHRs aren’t
great, but I don’t think they are the sole reason that the data quality coming out of
those is poor. Scott, what are your thoughts there? I completely agree. I think the reasons why data quality is not
what it could be are multifactorial and it’s certainly well beyond user interfaces of some
of the available EHRs. Yeah, I mean, we’ve kind of forced the EHRs
to collect data all about encounters for billing and it gets back to that fundamental conceptual
miss we should have designed the EHR around the patient and we’re still kind of stuck
in that paradigm, but we’re getting out of it. I mean, I give credit to the EHR vendors. They’re moving towards a patient-centered
design. Let’s see here, D. Sams asked, “Where would
you see pharmacists playing in the role of digitization?” Scott, I think I want to give you that question
because I haven’t given enough thought to it. Yeah, digitization and clinical decision support
is a team sport. It’s not an individual sport. Pharmacists have an incredible wealth of knowledge
as we think about medications and appropriate use of medications and I think pharmacists
play an absolutely critical role in our journey to digitize and gain greater benefit from
the digitization of medicine. Yeah, now that I’ve thought about it a little
bit more, I should call out Stan Pestotnik, a pharmacist at Health Catalyst who was also a colleague of mine at Intermountain Healthcare. He’s one of the best, most knowledgeable people
around clinical decision support in the country, probably in the world actually. We see the benefit of Stan’s knowledge as
a pharmacist as it relates to the decision support products that we have around patient
safety surveillance. If we follow the notion that gee, at least
we should do no harm to patients, it should be safe care even if it’s not entirely correct
care, it should be safe, the work that Stan and his team are doing and his pharmacy background
relative to adverse events is just fundamentally important. Yeah, lots of room for pharmacists. Okay, next question. What strategies do you find work best to foster
buy-in from clinical staff for enhancing data quality at point of entry in the EMR? Let me take a shot at that and then Scott,
I’d love to hear your thoughts. I had to go through this a fair amount at
Northwestern and also again in the Cayman Islands when I was operationally responsible
for decision support and healthcare systems. When I landed at Northwestern, we were using
Epic and Cerner as little more than word processors, with very little or almost no order entry outside of general internal medicine, didn’t have any order sets, didn’t have
any kind of BPAs or anything like that. I just said, “Look, physicians,” it was actually
the beginning of Meaningful Use. I’m kind of almost embarrassed to say that
because Meaningful Use took off in a way that I never intended but in 2006, thanks to Sarah
Miller and others on my team there, we put together a dashboard around the way Cerner
and Epic were being used by our clinicians. It was the beginning of a Meaningful Use Dashboard
essentially. We gave that data back to the physicians and
we said, “Look, if you use these EHRs for more than just a clinical text notes word
processor, you can get more value out of this very expensive investment you’ve made and
we can help improve the efficiency of your work. We can improve care.” We gave them utilization data about Cerner
and Epic and I just kind of let them adjust to it and then we track the number of pharmacy
pads that were being used and bought and things like that to track against the adoption of
order entry. I think, I’ve said several times in the last
few months that I’m half technologist and half psychologist when it comes to the use
of data. If you don’t appeal to clinicians’ sense of mastery, autonomy, and purpose, per Daniel Pink, if you don’t help them see how better data
quality helps that clinician’s sense of mastery, autonomy and purpose, they’re unlikely to
engage in better data quality improvement. In fact, I think, one of the things that we’re
doing wrong in that regard is that we are over measuring physicians right now. If you’re going to be over measured and your
autonomy is going to be stripped away from you as a clinical decision maker, do you really
think you’re going to be passionate about improving the data quality that goes into
it? Probably not. It’s a punitive environment right now. There’s a few thoughts. Scott, what do you think? Dale, I agree with all of your comments and
also I think it’s key we need to respect the physicians and the other healthcare providers,
their time and in many cases we need to meet the providers where they are. It’s sometimes not reasonable to ask healthcare
providers to significantly change their workflow. Let me give you an example of that. If they have documented information that’s
clinically relevant in unstructured format or in a text note, I think it’s unreasonable
in many cases to ask them to document it a second time as a discrete data element. I think over time, we have to get better at
interpreting the information that they’ve already provided us, even though that will
require perhaps more advanced technology than we’ve used in the past. I think the quality of the data will get better
over time. Yeah, and I just encourage, one last thought
on that, the soft side of data is as important as the technology data, the human side of
data and how you engage literally at the human level so that clinicians especially feel like
you’re empowering them with data rather than stripping them of their time and autonomy
is just fundamentally critical. Be a psychologist as well as a technologist. From Tammy Gray for Dale, “What if patients
don’t want you to use their one terabyte of data for anything other than provider reference
for their own health history? How will they opt out?” Well, that’s a great point, Tammy. I mean, what I really want to champion before
I go over the hill in healthcare, is I want patients to control 100% of their data. I want vendors, I want the healthcare system
to move away from this sense that we own and control the data. I think that’s completely inappropriate. I think overtime, kind of following the GD,
the General Guidelines out for data production out of Europe, I think that’s what the US
is going to end up looking more and more like. My goal in the future is that and we’ll have
to be careful because not all patients have the cognitive skills, the capabilities to
actually engage and manage their own data, which again, one of the reasons that I advocate
this role of a digititian is to play that role for patients who are not capable of doing
that, to properly own and control their data. Then, I want clinicians to have the ability
to shop that data around to healthcare systems that provide the best care and also vendors
like Health Catalyst and Stanson that can provide direct-to-patient analytics and AI
based upon the data that patients’ control. I actually want to get away from the notion
of opt in and opt out. I think patients should own their data from
the beginning and it’s weather and the opt in, opt out is share in or share out and it’s
the patient making that decision with the care provider and vendors. Let’s go to… There’s one specifically here that I’d love
Scott to address from Dr. Becket Mahnke up at MultiCare. Becket says, “Great talk. I’d love to hear more about CVS governance
as each of us has a different threshold for risk and when to intervene even if the CDS
is perfect.” Scott, what are your thoughts on CDS governance? Yes. I think having a process for CDS governance
is absolutely critical to the success of CDS. I think it means bringing together a broad group of stakeholders, depending on the particular organization: physicians from throughout the organization, and in pluralistic organizations like Cedars, for example, that would include faculty physicians, the medical group, the IPA, private practice, nurses, pharmacists, other healthcare providers, informaticians, social workers, and in some cases case managers. It means having a governance process to review all of the CDS that someone has proposed for implementation and to review it at regular intervals, because all of the information is perishable, to determine when it should be updated and, when there are significant changes in guidelines or new studies published in the literature, to determine whether those should change the CDS. The CDS governance group should get feedback
on a continuous basis on how well the CDS is working. For example, how many times does it fire? How many times does it need to fire before a provider changes an order? What is the impact on patient outcomes?
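As a rough sketch of the kind of feedback loop described here, this is one way to compute fire counts and an approximate number of fires per changed order from an alert log. The field names, alert names, and sample log are hypothetical assumptions for illustration, not any vendor’s actual schema.

```python
from collections import defaultdict

# Sketch of the kind of feedback a CDS governance group might review: how often
# each alert fires and roughly how many fires it takes before a provider changes
# an order. The log format and alert names are hypothetical assumptions.
alert_log = [
    ("lyme_serology", False), ("lyme_serology", True), ("lyme_serology", False),
    ("vitamin_d_screen", False), ("vitamin_d_screen", False), ("vitamin_d_screen", True),
]

fires = defaultdict(int)
accepted = defaultdict(int)
for alert_id, changed_order in alert_log:
    fires[alert_id] += 1
    accepted[alert_id] += changed_order  # True counts as 1

for alert_id in fires:
    rate = accepted[alert_id] / fires[alert_id]
    fires_per_change = fires[alert_id] / accepted[alert_id] if accepted[alert_id] else float("inf")
    print(f"{alert_id}: fired {fires[alert_id]}x, acceptance {rate:.0%}, "
          f"~{fires_per_change:.0f} fires per changed order")
```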
I think providing an informed CDS group with this type of information to add new CDS, modify existing CDS, or remove parts of the CDS is essential if one’s efforts in CDS are to be effective. Yeah. I’ll share a little bit about how we govern
CDS at Intermountain. Intermountain had a pretty rich history going
back to the ’60s with clinical decision support. I’m not… I’ll share the observations. I don’t know if there’s a role model in here
or not, but just an observation. Most of those clinical decision support modules
that were developed and maintained at Intermountain started out as informatics graduate program projects. A master’s student or PhD student would work
with the likes of Homer Warner, Al Pryor, Reed Gardner, Stan Huff, others and they would
decide, pick out a good viable target for clinical decision support and then that grad
student would work with advisers and other members of the staff to build the modules
out and things like that. The governance started kind of at that level
sort of grassroots, graduate school level. When I had the great fortune of sitting in
the seat of Chair of Medical Informatics at LDS Hospital, Reed Gardner and I worked together
and we said, “You know what, let’s put a little bit more structure around this because we
actually had some adverse events occur as a consequence of mistakes in the CDS.” We said, “Look, if nothing else, let’s put together a software decision support
oversight committee and let’s put a little bit of structure around the way the rules
for decision support are developed, who’s involved in the definition of those rules,
what kind of rigor is around the software programming associated with those?” It was kind of an interesting way for me to
take some of my software safety work that came from space and defense and apply some
of that to what we were doing at Intermountain but I would say in conclusion, I think there’s
a lot of room for all of us to determine what are the best practices for clinical decision
support and how do you combine enough grassroots innovation with a limited but safe amount
of oversight from the top down and who should be on those committees and how often should
you meet and to what level should those committees review code and processes and things like
that to make sure the CDS is effective. Then, even above that, how do you choose which
topics to implement in CDS and how do you estimate the costs and the support and all
that. Okay. Oh my gosh, there’s a lot of good questions. Scott, how much more time can you spend, friend? As much as you can, Dale. I’m fine. Really? Okay. I can go over for a while. These are good questions. From Tom Kampfrath, “What is your expectation
for vendors of laboratory tests? How do you think they can assist with improved
clinical decision making vendors of laboratory tests?” Scott, does that immediately… I don’t… Thomas, I’m not exactly sure I understand
your question. Maybe Scott, do you have thoughts there? Yeah, I think, as we think about laboratory
tests and let’s see, there are new laboratory tests that are available. Some of the tests, in the future, we’re going
to see more laboratory tests for molecular diagnostics, genomics, proteomics, microbiome
information, and I think collecting evidence or publishing the evidence of when the laboratory
tests are effective and can benefit patients and when they may not benefit patients, which
patients would benefit from having a particular laboratory test I think is important not only
because of the expense of potential laboratory tests, but because all laboratory tests, as I’m aware and have experienced, can produce false positives that cause diagnostic misadventures as you’re trying to work up the false positive result. Providing that kind of evidence through research and other investigation enables CDS to be developed to guide the appropriate use of laboratory tests. Yeah, thank you, Scott. Let me jump down here kind of related to a
question about clinical decision support and governance, “How long should a CDS be tested
before use? What is the standard?” Scott, you go first. It’s a good question. I’d love to hear it from the audience and
learn from you. I’m not sure that there is a standard. It also probably depends on the consequences and the potential benefits of the CDS, but I think testing CDS is important and the analogy
I use is that if you think about a molecule, let’s say we were to develop a new molecule
that we wanted to treat heart failure patients with, we would do in vitro testing first in
a test tube before we tried it in humans, in vivo testing, then it would be evaluated
by the FDA, but I think that enough data needs to be gathered, often by testing CDS silently
or through other approaches to make sure that the potential benefits of the CDS merit turning
it on and that there aren’t going to be potentially unintended consequences. I think it’ll depend on the CDS, but I think the question is an excellent one. It needs to be tested first, because without proper testing of CDS there are going to be many examples, and Dale used one example with the 737 MAX, where CDS provides no benefit or is potentially worse. Yeah, clearly it wasn’t tested enough.
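To make the silent-testing idea concrete, here is a minimal sketch of what running a CDS rule in “silent mode” might look like. The rule, thresholds, and data feed are hypothetical illustrations, not any vendor’s actual implementation.

```python
# A minimal sketch of "silent mode" CDS testing. The rule, fields, and
# thresholds below are hypothetical and only illustrate the idea of logging
# would-be alerts without interrupting clinicians.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Encounter:
    patient_id: str
    creatinine: float      # mg/dL
    ordered_drug: str

def renal_dose_rule(enc: Encounter) -> bool:
    """Example rule: flag a (hypothetical) nephrotoxic drug order when creatinine is elevated."""
    return enc.ordered_drug == "drug_x" and enc.creatinine > 2.0

def run_silently(rule: Callable[[Encounter], bool],
                 encounters: List[Encounter]) -> List[Encounter]:
    """Evaluate the rule but only log would-be alerts; never show anything to the clinician."""
    would_have_fired = [enc for enc in encounters if rule(enc)]
    print(f"Silent mode: rule would have fired on {len(would_have_fired)} "
          f"of {len(encounters)} encounters")
    return would_have_fired   # reviewed by the CDS governance group before go-live
```

Data gathered this way can then be reviewed for benefit and unintended consequences before the rule is ever turned on.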
The way that we originated CDS at Intermountain and Northwestern was essentially from randomized clinical trials, kind of laying the foundation
that would then justify the development of a CDS, right? We’d either conduct our own randomized clinical
trial internally or we’d look at literature and we’d say, “You know what, this is probably
a good opportunity for us to embed this sort of decision support. We think there’s enough value in terms of
either dollars, patient safety, or patient benefit to implementing this RCT finding that was published in the literature and codifying it in the software decision support.” There’s a timeline associated with that. Again, not to say that that’s the way to go
about it, but traditionally it bubbles up from randomized clinical trials. Then, there’s a development effort, the traditional
software development effort that says what kind of design reviews do you go through? What kind of code reviews do you go through? What kind of a failsafe and other tests do
you go through in the software? One of the things that I brought with me from
the military and space and defense sector was a very rigorous approach to software engineering
in that regard. What I didn’t bring with me was much understanding
of clinical decision support. I think over time, what you’ll see is those
two worlds coming together, where you sit down as a governance body, you identify new
literature, new RCTs that should be implemented at your organization. Then, you sit down with your software developers
or your vendors and you talk about, “Okay, what are the three loops of decision making
we want to implement according to this evidence-based care? What are we going to do at the population
level? What are we going to do with the protocol
level? Finally, how are we going to deliver this
right at the point of care between the patient and the physician? Okay. Golly, there’s a lot of good questions here. Let me see here. Let’s go to this one. Kathy Hoke asks, kind of as a follow-on, “For dosing
some drugs, don’t you need the results of certain laboratory tests? How can these test results be incorporated
into the patient profile?” Well, for the most part, they are, Kathy, right? That’s the good news. We do have a pretty good closed-loop process
now in Electronic Health Records where the order that’s initiated from the clinician
goes out to the lab, the lab result comes back and is filed back in the patient chart. Scott, any exceptions to that that you want
to note or any flaws in what I just described? No, I agree. I think that is a very important feature of
some of the Electronic Health Records. Yeah. Okay. Here’s a question. Peter asks, “My question, how to start softly? Is there an open-source, sandbox-like platform or tool one can experiment with, for working with data, integrating, and mocking up workflows?” Scott, you probably have seen this with Stanson,
I would imagine with the EHR vendors, especially Epic? Yes. I apologize. I’m not aware of a sandbox that is available. Dale, I’ll defer to you on this one. Okay. Okay. Sorry, friend. Yeah, Epic has the App Orchard and they’re
building out frameworks for developers to test various apps in that sandbox environment. I’m not aware of a similar kind of thing from Cerner, Athena, or Meditech. I will say, as an operational CIO, we had test environments that we could
use internally at Intermountain, Northwestern, Cayman Islands, et cetera. A lot of organizations have their internal
test environments around EHRs where these systems can be developed and tested before
they go live. I’m not aware of a very robust environment
where a completely independent software developer could work in a sandbox environment. Okay. Paul Salmon asks, “Can you comment on the
lack of patient-clinician relationship tracking in EHRs limiting the analytics and quality
reporting capabilities?” I’ll take a shot at that, Scott. Well, I mean, if you’re talking about attribution
in sort of a value-based care setting and accountable care, we have fairly robust, I
would say not perfect, algorithms in the industry for patient-clinician attribution in that
context. Frankly, it’s more driven around quality measure
performance and value-based care reimbursement models than it is around a kind of a clinical
best of care relationship. There are those attribution models out there. Through analytics, you can stitch together
the patient journey from the data in the EHR as they move between clinicians and encounters. I mean, I don’t see a giant lack of or inability
to track patient-clinician relationships in EHRs that stands in the way of our analytics
or quality reporting. I mean, again, there’s imperfections in the
attribution models and things like that but generally speaking, I think we’ve got our
hands around that problem. Scott, do you have any response to that? No, I completely agree with you, Dale. There are a number of attribution formulas and attribution models out there. To your point, not perfect. They’re going to be wrong some of the time and we just have to be aware of that, but at the same time, if they’re correct a large percentage of the time, we still are able to get meaningful data in most cases.
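For readers curious what an attribution model can look like in practice, here is a minimal sketch of a plurality-of-visits approach, one common but imperfect method; the data and field names are purely illustrative.

```python
# A minimal sketch of plurality-of-visits patient-clinician attribution.
# The encounter data and identifiers are made up for illustration.
from collections import Counter
from typing import Dict, List, Tuple

# (patient_id, clinician_id) pairs, one per ambulatory encounter in the lookback period
Encounter = Tuple[str, str]

def attribute_patients(encounters: List[Encounter]) -> Dict[str, str]:
    """Attribute each patient to the clinician who saw them most often."""
    visits_by_patient: Dict[str, Counter] = {}
    for patient_id, clinician_id in encounters:
        visits_by_patient.setdefault(patient_id, Counter())[clinician_id] += 1
    # most_common(1) returns the clinician with the plurality of visits
    return {patient: counts.most_common(1)[0][0]
            for patient, counts in visits_by_patient.items()}

encounters = [("p1", "dr_smith"), ("p1", "dr_smith"), ("p1", "dr_lee"), ("p2", "dr_lee")]
print(attribute_patients(encounters))   # {'p1': 'dr_smith', 'p2': 'dr_lee'}
```

Real attribution formulas add tie-breakers, specialty weighting, and lookback rules, which is exactly where the imperfections Scott mentions come in.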
Yeah. Here’s one. Both Scott and I just came back from the JP Morgan conference in San Francisco, which has a big dimension of
pharma and biotech. This question comes from Jean Marie,
“How can pharma, if at all, support improving data quality, CDS, and patient health?” I’ll take a very quick shot at that and then
I’m sure you’ll have better thoughts than I will on this, Scott. There clearly has to be a better relationship
between pharma and providers sitting at the table together and determining the best decision
support that’s tailored to the needs of individual patients. There’s still too much separation between
those two worlds. Having a more collaborative environment in
which members of pharma are actually sitting in meetings with care delivery organizations, helping define, implement, and tailor CDS,
I think, is critically important. I can’t think of a time when I’ve ever had
a member of the pharmaceutical industry sitting in a CDS decision-making and governance
body at Intermountain, Northwestern or the Cayman Islands. One of the things that I opine about quite
a lot is, of course, that there’s a lot of clinical trials data that, I think, is underutilized in better understanding patient safety and patient care: what really went on in that clinical trial,
and how do I take those learnings, those analytical learnings from the clinical trial data and
inform what I do in care delivery. There’s too big a gap between those two worlds. The other thing, this is something I’ve been
focusing on here lately is that if you look at the entire supply chain of pharmaceutical drugs, it’s a bell-shaped supply chain. By that, I mean, there’s an RCT and that RCT sort of determines in general what’s the best dosing and route and period of dosing associated
with patients of a certain type with a certain condition. On the rare occasion that I need to take medicine,
I’m always kind of surprised that my dosing, the frequency of the dosing and my dosing
doesn’t vary much from… I weigh 165 pounds. I am a white male of 60 years old and that
same prescription is being handed out to a very different patient type than I am. The ability to move pharma from sort of this
bell-shaped supply chain, where they produce and then manufacture drugs to support a very bell-shaped curve, toward a more personalized curve, even by simple demographics about patient types, not to mention pharmacogenetics. We’ve got to figure out how to affect that supply
chain because it starts all the way up in the RCT and it goes right into the manufacturing
process. The ability to actually deliver specific products
to patients from the pharma supply chain, even if we knew how, is constrained. I’ll stop there. Scott, what do you think? What can pharma do in this space? Yeah, I think pharma can be very helpful
in terms of funding and contributing evidence on the appropriate use of medications, information,
scientific evidence on efficacy, effectiveness, safety of medications that can be reviewed
and evaluated by teams of people who are creating clinical decision support to provide real
time education to individuals who are thinking, “Gee, would this drug potentially help this patient, or would drug A or drug B be better for this patient?” Providing that information, I think, will be very valuable in informing the future creation of and updates to clinical decision support. Yeah. Great. Here’s one that’ll give you a chance to talk
a little bit about Stanson Health. This comes from Omar Shakir, “What is Stanson
Health’s focus now and what kind of partnerships would you be looking to do post premier acquisition?” Got it. Well, Stanson’s mission has been to inform
clinical decisions at the point of care to bring about measurable improvements in quality,
safety and affordability of care. That was Stanson’s focus at the beginning. It is today and it’s really getting more sophisticated
about using voluminous amounts of information that’s available in the Electronic Health
Record to guide decision making or provide evidence that the provider will evaluate in
some cases in discussion with the patient to determine what will lead to the best care. That’s improving the value of care across
the continuum. What will lead to the best quality at the
lowest cost? It’s automating a prior authorization. It’s providing CDS for the appropriateness
of imaging studies. That’s what Stanson does. That was the goal and the journey before it was acquired by Premier, and it remains the goal and our objective after the acquisition by Premier. Great. Thanks, Scott. Great work. Thank you. We still have 137 people, 135 people on the
call. If you can stay, I can stay a little longer,
Scott. I certainly can. Great questions. Okay. “Hi Dale, Scott,” this comes from… I’m not sure who this comes from, an email
address, “Hi Dale, Scott. One of the major concerns is the flu. Is there a way AI can help?” Well yeah, we can certainly spot flu outbreaks
with AI. That’s classic pattern recognition and there’s
all sorts of apps and organizations already doing that. Intermountain is a great example of that. I bet Cedars probably has its own tools for
seeing those kinds of outbreaks. If you really roll it back, the problem we
faced with the flu now is the mutations of the virus. We can’t keep up with the mutations. As soon as you think you’ve got a vaccine
that’s going to work, a new variant emerges. I’m careful not to opine too much on this, not being a deep expert in the space, but I do see some progress in the pharma industry
around using AI to predict mutations and then what that might mean to the development of
not just one or two vaccines, but a whole bandolier of vaccines to address the various
mutations. But then of course you need molecular tests
and data to understand which type of mutation a patient has. Those are my thoughts. It’s biological warfare with the virus that’s
the challenge. AI can definitely play a role and it’s going
to keep getting better. Scott, what do you think? I agree. I think for helping to understand the influenza
outbreak, AI can be very helpful. I saw data a long time ago about how Google
searches of symptoms of influenza can correlate with influenza outbreak and can be predictive
of it before we get back flu viral cultures that confirm it. Actually, the internet can be very helpful
but also as we talked about earlier, having a decision support model to identify patients
who will potentially benefit from the influenza vaccine, and hopefully minimize the impact of the outbreak, or, as in this year when the match between the vaccine and the virus is not as good as it could be, particularly among children, alert both providers and patients about that. Yeah. I don’t know if the relationship still exists,
but for years, Walmart has monitored sales of over-the-counter cold and flu medications. This goes back to 1999. I had a relationship with the folks that ran
the data warehouse at Walmart and had the opportunity to see one of their visualizations
that showed the outbreak of the flu in real time moving down the East Coast then jumping
over to the West Coast and moving to the middle of the country. The CDC at one time had a pretty substantial
relationship with Walmart in that regard. I don’t know if they still do or not, but it certainly tracked the outbreaks. The problem is responding and intervening, right? Like I said, predictions are one thing. Interventions are another.
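As a rough illustration of that kind of syndromic surveillance, here is a minimal sketch that flags anomalously high days in over-the-counter cold-and-flu sales against a rolling baseline; the data, window, and threshold are made-up assumptions, not Walmart’s or the CDC’s actual method.

```python
# A minimal sketch of syndromic surveillance on over-the-counter cold-and-flu
# sales: flag days whose sales exceed a rolling baseline by several standard
# deviations. All numbers here are illustrative.
import statistics
from typing import List

def flag_outbreak_days(daily_sales: List[float], window: int = 14,
                       z_threshold: float = 3.0) -> List[int]:
    """Return indices of days whose sales are anomalously high versus the prior window."""
    flagged = []
    for i in range(window, len(daily_sales)):
        baseline = daily_sales[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0   # guard against a flat baseline
        if (daily_sales[i] - mean) / stdev > z_threshold:
            flagged.append(i)
    return flagged

sales = [100, 98, 103, 101, 99, 102, 100, 97, 104, 100, 101, 99, 103, 100, 250]
print(flag_outbreak_days(sales))   # [14] -> the spike day
```

Detection like this is the easy part; as Dale says, the hard part is the intervention that follows.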
Okay. Let’s see here. Rob Dolan Meyer asked another question: “If EHR UIs are so good,” and I don’t think either one of us said EHR UIs are so good, Rob. Let’s just clarify that right now. “Why do more and more doctors depend on scribes
even though the documentation process is often highly repetitive, but the systems human…” Yeah. Well, yeah. I’m not defending the UI of EHRs. The question was what role does the UI play
in data quality? Again, most of the data collection that UIs
in EHRs were designed to support was about billing. I will say this, I think in our pursuit of
digitizing the patient, in digitizing the encounter, we are turning clinicians into
digital samplers with clicks of the mouse. Again, is it the fault of the EHR or is it
the fault of the way we engage the EHR in data collection? The reality is we have to stop putting the
burden of digitizing the patient on the mouse clicks of the clinician. That’s what we have to stop doing. Again, in part, what I’m suggesting, maybe
you could call it a scribe, but this digitization role that I advocate is a person. Rather than the physician sitting there collecting
these data points, I want an engineer with a digital background responsible for working
with the patient, working with the physician to define the data elements that are required
to specifically address this patient’s care and that becomes the data profile that you
pursue and you don’t put that on the mouse clicks of the physician to do any more. Any more thoughts on that, Scott? Yeah. Scribes can be very helpful in certain cases,
but they also come with a downside and that is certain patients when they’re discussing
sensitive subjects, may not want a scribe or someone they don’t know in the room to
listen to those discussions. Then also, scribes, albeit they can be helpful, come at a cost, and right now as a society
we’re working very hard to address the affordability of care, although I think the same sensitivity
issues arise. I think what the greatest investments will
be in what I described earlier, the virtual scribe or the virtual assistant, the Alexa- or Google Home-type device that will perform a similar function and enable providers to spend more time talking to patients and really understanding patients’ concerns and less
time documenting. There are a lot of efforts underway by EHR vendors to
make this a reality. Yup. I’ll give a plug to a company that I met with
out in San Francisco, JP Morgan this week. It’s a company called Evidation. It’s kind of an odd spelling. I’ll spell it for everyone, E-V-I-D-A-T-I-O-N,
Evidation. They’re doing a really good job of passively
collecting data that’s already being collected through a smartphone and then making sense
of that clinically and behaviorally for the patient. The patient actually provides almost no data
entry themselves. They look at the way that the patient is interacting
with the apps and the OS on their phone and then from that, they have some interesting
conclusions about the health and the mental, physical, spiritual health of the patients. Fascinating work, fascinating work. In fact, it looks like the work that I was
involved in back in the space and defense sector. They have 40 to 50 data streams that they
pull in and then they correlate events on those data streams. It looks just like it. I suggested they go down and talk to the folks at the Jet Propulsion Laboratory because what they’re doing and what a place like JPL does when it tests a rocket engine involves exactly the same kinds of data streams and concepts. That’s the sort of thing, I think, that will eventually relieve the clicks that physicians are currently obligated to perform in current US healthcare.
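As a rough illustration of correlating passively collected streams (not Evidation’s actual method), here is a minimal sketch using two hypothetical daily streams pulled from a phone.

```python
# A minimal, illustrative sketch of correlating two passively collected data
# streams; the streams, values, and health interpretation are hypothetical.
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    denom = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / denom if denom else 0.0

# Daily values collected passively from a phone: step counts and hours of sleep
daily_steps = [8200, 7900, 3100, 2800, 9000, 8700, 2500]
daily_sleep_hours = [7.5, 7.0, 4.5, 5.0, 7.8, 7.2, 4.0]

# A strong correlation across low-activity, low-sleep stretches might be the kind
# of passive signal worth surfacing to the patient or their clinician.
print(round(pearson(daily_steps, daily_sleep_hours), 2))
```

Scale that idea up to 40 or 50 streams and you get something much closer to the telemetry-correlation work described here.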
Now, I’m not quite sure of the clinical and the total value of companies like Evidation yet, but conceptually I think it’s fascinating and would encourage everyone to study up on them. Okay. Gosh, I think we still have 110 people on. Here’s one from Danielle Carnice. Let’s see here, “What’s your feeling about
the importance of organizational culture in changing care delivery practices and what
advice do you have on how to do that effectively?” Well, Scott hit on it. You have to make it easy to do the right thing,
first of all, and make it hard to do the wrong thing. I get… I know the cultural change is important, but
I think sometimes we tend to blame culture for systemic problems that reside actually
above the culture. The culture resists as a consequence of the
ecosystem we put it in and if you look at the stress and the pressure, for example,
that we’re putting on physicians right now. Then, you come in with the notion that it’s
culture that has to change, I think you’re going to see resistance. I think you’re going to feel resistance. We have to take some of the pressure and the
dissatisfaction and the bureaucracy of the ecosystem off of clinicians. I think the culture then will sort of emerge
from that but until we do that, I think what we describe as resistance from clinicians
is actually just a symptom of a root cause of the greater system. When you’re engaging with data and things
like that, just make sure that you’re not doing anything more to continue to burn physicians
out and thus create more resistance to cultural change. Again, study behavioral psychology and there’s
a great author, Tali Sharot, T-A-L-I S-H-A-R-O-T. She’s an Israeli researcher in the space
of cognitive decision making and sort of the oddities of human decision making and change
management. If you want to be a change agent in healthcare,
it can’t come from a methodology. It can’t come from some cookbook approach. It has to be a very tailored, specific, individualized
approach and you have to do something to take the pressure off of clinicians and not blame
it on their culture. Okay. I’ll quit preaching a little bit. Scott, what do you think about that? I agree, Dale. As it relates to clinical decision support,
what we’ve found effective, and just the approach, at Cedars-Sinai: certainly there was the CDS governance, and physicians, other clinicians, and the CDS governance team will be involved to guide and determine what CDS is used and not used. In almost all cases, there’ll be evidence to suggest whatever is being proposed will lead to better patient care, because the North Star is really the patient and, for CDS, this is about improving patient care. The CDS would include only those things that are important and meaningful to patient care, and would not include things that are not helpful for patient care or that do not make care more convenient for providers so that they’re more efficient in their care. Then finally, doing our best to collect data to determine whether the clinical decision support was helpful, and providing that information back to physicians and other healthcare providers, because you’d go into some things saying,
“Gee, I think this is going to improve patient care,” but we’ll know after we collect data
and if it does, that’s great. We hope that we continue it and in virtually
all cases it is continued but if it’s not improving patient care, then you want to improve
it or you want to stop it. Yeah, exactly. Okay. Priyanka asks, “How do you try to align fragmented
institutions that hold pieces of patient data, including how vendors store it? How do you align fragmented institutions?” Well, I mean, this is sort of the space that
Health Catalyst operates in. I mean, this is the whole reason that we exist
is because of this fragmentation of not just institutions, but where patients seek care
and how to make that all into a coherent picture of that patient’s journey through the healthcare
system. We have the technology for that. Health Catalyst is clearly very good at pulling
together fragmented data from all sorts of institutions. The challenge that I think we’re overcoming
in healthcare institutionally is there’s a growing dissatisfaction with fragmented care
in a region, especially. Even if you’re moving between, say, the University
of Utah Health System and Intermountain Healthcare, the attempts of health information exchanges
and others to share data across those institutions are technically getting better. We’re still a long way away, but we’re getting
better at it. Now, there’s an expectation among care providers
in those regions that they’re obligated for the benefit of patients to participate. I think, again, at the ecosystem level of
US healthcare, institutions are being held to a higher standard now than they used to
be around sharing data between those organizations. The biggest problem is still probably the
lack of infrastructure and sort of design in the interoperability space, moving data
from point A to point B and making it useful to patients and clinicians. I don’t see as much resistance culturally
anymore to this as I do see technical barriers at the interoperability layer, at the transaction
level. Backing up to what Health Catalyst does very
well, we can pull data in from any institution that’s willing to participate in a data sharing
environment and we can make analytic sense of that and feed that back to those institutions,
feed it back to patients and to clinicians. Do you have anything further to add to that,
Scott? I do not, Dale. Okay. Here’s one for you, “Regarding data quality
improvement incentives, what economic incentives exist or might be brought into ecosystem for
providers to pay more attention to data quality at source?” What do you think about that, Scott, economic
incentives to improve data quality? Great question. I’ve never heard that asked before, but I
love the question. I’m not familiar with any organization providing
economic incentives for data quality. However, there are economic incentives for
demonstrating high quality. When we’re unable to demonstrate high quality,
that might be because the data quality is poor or it might be because the quality of care itself is poor. That’s an indirect economic incentive for
providing high data quality. I am familiar with organizations that do provide
economic incentives for accurate HCC coding, so that might be the one exception to what
we’ve talked about. But to really measure a provider’s quality,
it requires accurate data and that might be a sufficient incentive for certain providers
to really focus on data quality improvement initiatives. Yeah, there are mixed studies on whether providing economic incentives for data quality works. There have been a few attempts at that. One thing that I did, kind of unusual, I think,
when I was at Intermountain and Northwestern is if you think about the data quality journey
in healthcare, it starts with the proper identification of the patient, the patient’s identity at
the registration function. If you get the patient identity wrong, you create a duplicate, or especially duplicates, at patient registration, or you otherwise add wrong data to that patient’s registration, and it cascades into all sorts of downstream data quality problems as well. Out of my CIO budget, I would, and we could
run reports to identify the registrars who had the lowest duplication rates, for example. I’d hold some money out every year and I would
give a Christmas bonus to the registrars who had the lowest number of duplicates created that year. It was kind of fun.
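As an illustration of the kind of report Dale describes, here is a minimal sketch that computes a duplicate-registration rate per registrar; the record format and matching key are hypothetical.

```python
# A minimal sketch of a per-registrar duplicate-registration report.
# The matching key (e.g., name + DOB + sex) and the data are illustrative only.
from collections import defaultdict
from typing import Dict, List, Set, Tuple

Registration = Tuple[str, str]   # (registrar_id, patient_matching_key)

def duplicate_rates(registrations: List[Registration],
                    known_patients: Set[str]) -> Dict[str, float]:
    """Fraction of each registrar's registrations that duplicate an existing patient."""
    total: Dict[str, int] = defaultdict(int)
    dupes: Dict[str, int] = defaultdict(int)
    seen = set(known_patients)
    for registrar, key in registrations:
        total[registrar] += 1
        if key in seen:
            dupes[registrar] += 1      # this registration created a duplicate record
        else:
            seen.add(key)
    return {r: dupes[r] / total[r] for r in total}

regs = [("alice", "p1"), ("alice", "p1"), ("bob", "p2")]   # alice re-registers p1
print(duplicate_rates(regs, known_patients=set()))          # {'alice': 0.5, 'bob': 0.0}
```

A report like this is enough to rank registrars by duplicate rate and recognize the best performers, which is all the incentive program really needed.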
What it really did is it just reinforced for them the importance of their job in data quality. It kind of raised their sense of belonging
and contribution to the company and it gave them a little bit of money to feel good about
what they were doing. I don’t know if the money really mattered
that much. I think what mattered more was they felt like
they were actually part of something very important and we were helping them master
their skill. Okay. We’re down to 87 people. I say when we get to 75 shall we shut it off
friends? What do you think? Sure. We’re going to get kicked out of this room
pretty soon, I bet. Okay. Zachary Clare-Salzler asks, “Do you think
more data is necessarily better? If data quality in healthcare is currently
so poor, why wouldn’t more data just produce more bad data?” Well, that’s a great question. Generally speaking, you’re right. I mean, you don’t want more bad data. What you want is more good data. Our current data quality in healthcare is
kind of poor, again, because we’ve always been so encounter-based around what we document. Also, we’re putting more and more burdens
on clinicians that are already overtaxed. The other thing is we collect categorical
data about patients. We call it structured and discrete, like an
ICD or a CPT or even national drug code in an order or something like that. Those are not discrete data elements like
an engineer would describe discrete data elements. A discrete data element is something like
a lab result where you’re measuring hemoglobin A1C levels, for example. That’s a discrete computable lab result from
an engineering perspective. But an ICD, a CPT, those are categorical data
elements. They’re computable, they’re structured, but
they’re categorical. They’re subjective. An ICD is really our interpretation of a patient’s condition, and then we’re assigning that to some general category. What we’ve got to get away from, and what we’ve got to start to apply, is this mentality that electrical engineers and aerospace engineers have: you want less categorical data and more truly discrete, computable data that comes off of sensors and originates from something other than a human being clicking on a box.
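To illustrate the distinction Dale is drawing, here is a small, made-up example contrasting categorical codes with truly discrete, computable measurements.

```python
# Illustrative only: categorical codes versus discrete, computable measurements.
from collections import Counter

icd10_codes = ["E11.9", "I10", "E11.9"]   # categorical: you can count them,
                                          # but arithmetic on them is meaningless
hba1c_results = [6.8, 7.4, 9.1]           # discrete, computable: percent HbA1c

# Categorical data supports frequencies and groupings...
print(Counter(icd10_codes))               # Counter({'E11.9': 2, 'I10': 1})

# ...while discrete measurements support real computation: means, trends, thresholds.
print(sum(hba1c_results) / len(hba1c_results))    # mean HbA1c
print([value > 9.0 for value in hba1c_results])   # poor-control flag per result
```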
Okay. Generally speaking, across analytics, statistics, and AI, more data generally ends up being better. It’s kind of interesting: even with more bad data, you can draw better conclusions than you can from small datasets of bad data. There’s some irony there. More data is generally better. Anyway, Scott, any other thoughts there? I do not have other thoughts. Okay. Let’s see here. Here’s one, Al Strohmetz asks, “Does Health
Catalyst have an ROI model in days? We’re starting small CDS AI projects.” ROI model in days, I guess this is asking
time to value. How many days would it take to see value from
a CDS maybe? That actually depends on the use case and
the patient type that you’re addressing. For example, if you’re implementing a CDS
to support low back pain management, like the example that Scott gave and you implement
that CDS to suggest that MRIs should not be ordered immediately after the patient reports a complaint, and that you just go through some other lower-cost treatment options first, you can see financial benefit from that right
away. If you’re talking about CDS that is intended
to affect the incidence of CHF in a population, you may not see that for 5 or 10 years. The temporal dimension of value associated
with CDS depends highly on the nature of the use case that you embrace. The best organizations actually plan for this, and I would
hold up Geisinger as an example of this. I worked with Geisinger a few years ago, several
years ago, more like 10 years ago about building out a timeline for analytic decision making. Not really clinical decision making at the
point of care, but at the protocol and the population level saying let’s track things
that we can monitor over three years, one year, six months, three months and two weeks. We literally picked the analytic decision
support projects according to what we thought we could watch and monitor along those timeframes. One of the reasons we did that was to sort
of acknowledge that we needed near term victories with analytics, but we also had to engage
in long-term population health. It was as much a psychology strategy for near
term victories around analytics as it was anything else but it also rolled up to great
clinical value as well. Okay. Well, we hit our lower threshold. We’re down to 75. I think we’re going to drop off now. Everyone, thank you so much for staying with
us and thanks for the awesome questions. Scott, I can’t tell you how much I appreciate
you as a friend and a colleague and you’re a role model human being in every way. Thank you.
