In this episode, Pete Durlach, the Chief Strategy Officer at Nuance, joins Renee at The Table. They discuss what true AI is, qualities of the most successful health systems, and augmenting instead of replacing or substituting human labor.
As the head of corporate strategy, Peter is responsible for advancing Nuance’s overall strategic direction and portfolio in line with emerging trends across our key vertical markets: healthcare, financial services, telecommunications, retail, and government. Prior to Nuance, Peter served as founder and president of Unveil Technologies, Inc. and was a founder of Articulate Systems. He also served as an entrepreneur in residence at the University of Pittsburgh Medical Center.
Renee DeSilva 0:07
Welcome back to The Academy Table. I’m Renee DeSilva, CEO of The Academy and your host. In this episode, I welcome Pete Durlach, the Chief Strategy Officer at Nuance. Pete is a true expert on all things AI, both in and out of healthcare, and it was a pleasure to learn so much from him on where AI is today and where we’re headed in the coming decades. Here are my takeaways from our conversation.
First, Pete started with a useful level set on what true AI is, and the maturity curves within the clinical, financial, and administrative areas. The goal underpinning much of AI today is to improve human experience and productivity by drawing patterns from data that help execute tasks in new contexts.
Next, in Pete’s eyes, the most successful health systems in this space share a few characteristics. On the leadership side, they have context-specific internal leaders who champion the solution. They are laser-focused on the metrics they’re trying to improve. And they select meaningful projects with the appropriate C-suite buy-in.
And finally, Pete is a firm believer in AI as augmenting, not replacing or substituting, human labor. In the back half of our conversation, listen for Pete’s thoughts on the future potential of the human-machine dyad in clinical applications, creating actionable datasets for clinicians, and improving remote patient monitoring programs, among others. So with that, let’s head to the table.
Good morning, Pete. Welcome to the table.
Pete Durlach 2:02
Hi, Rene. Thanks so much. Great to be with you today. Thank you so much for the invitation.
Renee DeSilva 2:06
You as well. So I’m struck by your healthcare career path and how broadly it is anchored, really fundamentally, on technology. But before we get into that, tell me a little bit about some of the early forces that shaped your career interests.
Pete Durlach 2:23
Yeah, that’s sort of interesting. So I got into computers sort of serendipitously. My dad ran a research group at MIT, in the EE department, on speech and language, mainly on speech and hearing. I kind of grew up running around there at times, and my brother and I happened to learn how to do programming then. So I actually started really early in my life. I was into video games for a while, and I kind of grew out of that, but I did some programming early on, on these old PDP-11 and VAX machines. So I got into computers very early in my career. And then on the healthcare side, I got into it in sort of two places. Number one, I was searching for what to do with my life, and I wanted to really be involved in building businesses. And I found an early-stage startup a long time ago that I eventually helped get into the healthcare business. And I found it to be just sort of a magical place. This was in the late 80s, early 90s, and it combined my interest in technology and computers with the ability to really have an impact on people’s lives. I can give more color on that later, but those are the two critical backgrounds for me, the technology side and how I eventually got into the healthcare business.
Renee DeSilva 3:39
So on a personal note, I’m going to take this away as good news for me. My son right now is very hard to pull away from video games, so your career path tells me there’s some hope for my 14-year-old son.
Pete Durlach 3:52
Yes. The bad news, Renee, is the video games when I was growing up were not nearly as cool as the ones today. They’re very enticing today. There was definitely no 3D. I had Pong and a couple of things, Pac-Man I think I can remember. So hopefully he’ll have the same ability to grow out of them. But it was definitely not as interesting back then, for sure.
Renee DeSilva 4:15
Indeed, indeed. Well, I know that despite your current focus in healthcare, you have broader industry awareness and knowledge through your role at Nuance. And so I’d love to talk about that a little bit. How do you think about channeling what you’re seeing in other industries to help maybe accelerate how we as a healthcare industry think about all of these forces animating around technology and AI?
Pete Durlach 4:38
Yeah, it’s interesting. I spent a number of years outside of healthcare across a number of different industries, financial services, telco, retail. I was in the call center and contact center business for a while, and I was in the retail technology business a long time ago. But I think the part that’s most relevant to healthcare is the context in our business. And I would just say, I think there are lots of things to learn from outside of healthcare, and a lot of things that other industries actually could learn from healthcare. But I’d say the biggest thing is really to focus on business outcomes. I learned that early on in these other spaces, and healthcare feels it today. You have limited resources, limited time, lots of distractions, so figuring out where you should spend your time and how to make the most impact is really about being rigorous in focusing on problems that are meaningful enough and can be solved. And healthcare obviously has a lot of confounding issues around clinical care and other areas that are not as relevant in other industries, which are sort of more hard and cold around the business. But at the end of the day, this focus on business problems that can be solved and what outcomes you can generate, I think, is really the biggest learning, and I think it’ll have a lot to do with the rest of our conversation today about AI.
Renee DeSilva 5:55
I think that’s right. When I spend time with our members, many of these folks would describe their early innings with AI as both fruitful and frustrating. And so maybe with that as a kickoff point, talk a little bit about how you would describe the landscape of AI in healthcare today.
Pete Durlach 6:16
Yeah, it’s a great question. I would say it’s sort of all over the map, as many new technologies are through similar periods of time. You’ve got examples of highly performing AI solutions that are at scale, driving meaningful outcomes, all the way to very early-stage, exploratory things. And part of the reason for that is AI is a very general category. Lots of people misuse the terminology to cover lots of things that many folks in the technology space wouldn’t consider AI. And then, because it’s of course a platform technology, the use cases are all over the place. So it’s hard to put AI in a single bucket, because it’s like saying, well, what can the internet be used for? It can be used for lots of things. In the early days, email was the killer app for the internet, and a lot of other stuff on the web was very immature. Over time, email has stayed very important, but obviously many, many other amazing applications have come. So you have this really wide array of cases. And again, not to be myopic, but the original use case of AI in healthcare was in speech recognition. You and I joked a little before about being in the space for a long time. Speech was the initial application of AI in healthcare, and the reason is the big problem it was focused on: burnout and dictation and transcription costs. So at one end of the spectrum, you have clinical documentation by voice, which happens to be an area we’re focused on, and it’s an extremely mature technology. It’s not perfect, and there’s lots of evolution happening, but 60 to 70% of all clinicians in the US use AI-based speech technology day to day in their work. And at the other extreme, you have very cutting-edge stuff, which we’ll get into later, that is really still in the very early stages of use.
And there’s everything in between, from things that are incredibly powerful and produce real outcomes to things that are just not ready for primetime.
Renee DeSilva 8:17
So before we go into this maturity curve, I think you raised a good point that even the vocabulary and nomenclature are often confusing. If you could anchor on a definition of AI today, what would your personal definition be?
Pete Durlach 8:34
Many people have various definitions. The one I would really use is: it’s technology that uses data to determine patterns, which the technology can then use to execute tasks much like a human would. There’s lots of technology that’s called AI which falls into categories where it’s very rule-based, where you’re specifically defining, well, if someone does this, do that, and you’re deterministically just trying to describe what the flow of the technology should do. True AI is all data-driven. As you may have heard from people quite well known in the industry, they talk about data as the new source code. So if the technology is not built by taking examples of data that you can extract patterns from, which then try to do something a human would do or help make a human more productive, I would not call it AI. It’s really this data-driven technology that’s doing pattern matching that we in the industry think of as AI.
Renee DeSilva 9:36
So let’s stay on that, playing it back: data as source code, with the ability to extract patterns that make the human experience more productive. Let’s take that lens, then, and think about some of the use cases, whether they be clinical, financial, or administrative. You talked about a pretty mature space as it relates to speech recognition. How would you then articulate where we are on some of these other dimensions in terms of maturity on the common use cases?
Pete Durlach 10:04
Yeah, so let me start by trying to put in sort of a categorization. There are some exceptions to this, but in general this is where the maturity curve is. AI, again, since it’s effectively a horizontal platform that can be applied to different problem statements or use cases, can interweave, as you mentioned, clinical, administrative, or financial use cases. The most mature generally fall into the categories where you’re not providing guidance to clinicians about how to take care of the patient; providing that guidance tends to be less mature because the bar is much higher there for the technology, and I’ll get to that in a moment. So the places you tend to see the most maturity are in augmenting humans doing tasks that are required in healthcare but are not directly, in most cases, patient-facing, with a few exceptions which I’ll get to in a minute. In the quote-unquote administrative area, again, some of these use cases overlap between clinical, administrative, and financial, so they’re not always clean. Things around documentation, not just because I work for Nuance but because traditionally this has been the leading use case, are a very mature set of solutions. There’s some new emerging technology in the space, but from a generic point of view, things around taking what clinicians want to document about patient care and putting it into the chart are quite mature. There are use cases on the revenue cycle which are also getting quite mature. This is technology using patterns to help with things around payment integrity, denials management, real-time claims adjudication, prior auth. Some real, amazing emerging technology is happening in that space, not as far along as the speech components, but going very well.
And part of the reason is you don’t have to be 100% perfect to really provide value. There’s so much human labor in those processes that anywhere you can take out human labor, not necessarily replacing people, but automating a bunch of the tasks, the people you do have can focus on the more complex things. So the financial and administrative really overlap. Another area, which is patient-facing and relatively mature, not as mature as the speech component but doing well, is the whole digital front door area. This is around virtual assistants and what’s called omnichannel patient experience: things like voice-enabled IVR in the patient access center, chatbots, automated email and text. These communication capabilities are pretty mature, and not just in healthcare; one of the big applications of AI outside of healthcare is in these digital contact centers with omnichannel infrastructures. And the place that’s least mature generically, although there are examples where this is not true, is the hardcore diagnostic-facing areas, where you’re really trying to help clinicians make clinical decisions. The reason those are the least mature is the bar for performance is the highest, right? If you make mistakes there, two things happen. One, really bad things can happen to people’s health if someone follows the wrong advice. And two, clinicians have to deeply trust the stuff to let it help them make a clinical decision. If you’re documenting something by speech and it makes one or two errors out of 100, that’s okay. If you’re doing denials management and it helps you improve your throughput on claims by 60%, but 10% of the time you still have to put humans in, that’s okay, because it’s all better. But on the clinical side, the bar is much higher for performance. So that is the earliest part of the curve on maturity.
Obviously, there’s also the whole FDA angle in terms of what’s going to be regulated on the diagnostic side at the end of the day. So if you look at it across a spectrum, the pure hardcore diagnostic-support clinical is the least mature; then in the middle, I would say, is more of the administrative around revenue cycle and other things related to that; then the next step would be the patient-facing consumer experience; and at the far end would be these clinical documentation use cases. Again, there are other examples that sort of cross those, but those are the big buckets.
Renee DeSilva 14:18
Now that’s interesting. So that’s the lens of the technology use cases. Let’s take that same question, but let me ask it slightly differently. We see that there is a subset of leading health systems that are advancing faster than the market on embedding the various use cases that you just mentioned into their processes. Talk a little bit about what you see in those health systems. What characteristics, what behaviors, what processes have they hardwired that you think have allowed them to advance faster than what you might see in the broader market?
Pete Durlach 14:55
Yeah, it’s a great question. As you know, and all of your members know, if you took the same technology from a commercial vendor and deployed it at different sites, the effectiveness can be all over the map, even though the technology is the same. Really, one of the key differences that we’ve seen, and this is not just about AI, is that the structure and leadership at the provider organization makes a huge difference. So the characteristics we see around applications in general, and certainly around AI, are, first, they have to have real champions that have credibility with the users who are going to be using the technology. This is no earth-shattering comment, I don’t think, but it’s amazing the difference you see. You’ve got to have people. For example, with our speech products, if you don’t have the CIO and the clinical leadership bought in, who can help manage that and help message it into the clinical world with their docs, the difference in performance and uptake is going to be dramatic across the spectrum. On the revenue cycle side, same thing: if you’re trying to reimagine how AI-powered clinical documentation and clinical documentation improvement happen, your revenue cycle folks and your CDI team need to have strong internal leaders around that. So that’s, I think, job number one. Job number two, which is highly correlated to that: you have to know what metrics you’re trying to move. Again, pretty obvious, but you’d be surprised, or maybe not, at the variance in organizations in how clear they are on what success looks like. Some organizations are maniacal about what exact metrics they’re going to move, who’s responsible for doing that, who’s the champion, and then also holding the commercial vendor accountable.
And as they get more and more fuzzy, the alignment internally at the provider and with the vendor starts to get diffuse, and that’s never a good sign for a successful deployment. And three, again these are all connected, is pick problems that really matter. Because if you don’t, every health system is overwhelmed with many projects and too much going on, and if it’s not a meaningful problem that the C-suite cares about, you also get this problem where things get diffused, because in every project there are ups and downs. We’ve all been around long enough; sometimes projects go perfectly from beginning to end, but generally there are ups and downs. Sometimes the commercial vendor has to improve something, sometimes the provider does, and often it’s a combination. So having that focus, a big problem to solve, clear metrics, clear accountable owners on the inside paired with clear accountable owners on the outside, that to us looks like the best recipe for getting value from the dollars expended.
Renee DeSilva 17:45
That’s great. So let’s go to your third point around picking a meaty problem with a real sense of urgency to address. Let’s talk about workforce for a moment, which is coming up in all of our conversations. The demand-supply challenge of finding enough workers in the future starts to feel a little bit intractable for many organizations. And so we often talk about the promise of AI as it relates to getting greater efficiency from your team members. I’m curious here. We’ve been spending some time thinking about this notion of human-machine dyad models of care. So it’s not necessarily that implementing an AI solution would reduce headcount or replace jobs, but rather these human-machine dyads. How do you think about the potential for that?
Pete Durlach 18:37
Yeah, that’s exactly how we think about it. We talk about this not as artificial intelligence but as augmented intelligence, which is exactly that same idea, Renee. Although there are some things that can be automated, the primary focus is how you provide support to the relevant human who’s doing the task so they become more efficient and effective, whether it’s a doc, a nurse, someone in the revenue cycle, or someone else in the organization. So again, this staffing problem, the workforce, we hear that as a top issue, as you do, from every client. And the good news is that, as I mentioned earlier, some of the AI that’s most mature actually goes right at the workforce problem. Clearly the number one area where it’s most advanced, back to my earlier example, is on the clinician burnout side. Everybody listening to this is well aware of that problem. So we and others are providing technology that dramatically improves, it doesn’t solve the problem holistically or completely by any stretch, but dramatically improves efficiency and quality of life. With our Dragon Medical and DAX products, for example, you’ll often see two to three hours of savings per day per physician, a reduction in after-hours “pajama time,” more time with patients, et cetera. Again, we can’t solve problems of pay or control over what kind of clinical care they give and all the problems that affect burnout, but as you know, clinical documentation and administrative overload has consistently been shown to be the number one cause of burnout, so there’s some real value that can be delivered there. Nursing, obviously, is another huge priority on the staffing side. There’s tons of interest in using similar technology to attack some of the nursing problem; obviously the workflows are quite different, and it’s a little less mature than the physician-facing solutions.
But again, it’s about augmenting the time nurses spend doing this other work so they can spend more time taking care of the patient, especially with the overload they all have. And then if you move toward the back of the house, or the back office, quote-unquote, as I referred to earlier, there’s a lot of technology that augments people in coding, CDI, utilization management, and care management. There’s a whole set of technologies I kind of lump into revenue cycle, care management, and utilization management as one master bucket, where technology is really starting to help take out parts of the steps. Is the note complete? Do I have to look through the chart completely to generate the appropriate code to put on the claim? Do I have to go through a chart to pull all the information manually to justify the prior auth and the medical necessity? Do I have to go through the chart to manually abstract the data to put in the tumor registry for oncology screening? There’s a whole set of use cases here where you’re not getting rid of the CDI person, you’re not getting rid of the coder, you’re not getting rid of the UM specialist, you’re not getting rid of the quality abstractor, but you’re starting to chip away at 20, 30, 40% of the manual effort they need to do, which can now be picked up by technology, so that you can either handle a wider variety of work or not bury the people you do have, so you’re not losing them. So that’s actually fascinating. I’m actually optimistic that the staffing problem is one that intersects relatively well with the maturity of the technology.
The last one I’ll mention real quick, as I noted earlier, is the patient access center, or the patient support center. Every client can’t staff their access center: booking appointments, changing and rescheduling, patient support for patient portal activations, password resets, getting ready for a virtual care visit. No one can keep up; they’re all understaffed. And so with technology around modernizing the digital front door, AI technology that does this omnichannel interaction, we have data today that you can reduce between 30 and 50% of the labor time to handle these common interactions, whether they come over the voice channel or through a web chat or SMS. So there’s a significant reduction in some of these staffing burdens that can be achieved across these use cases.
Renee DeSilva 23:01
Yeah. What I think will be really interesting to track, and it’s something we’re spending time on with a few of our executive groups right now, is what does all of that imply about the future of work? Right? If you think about when this is fully realized, the nature of the work changes pretty fundamentally, even the way that you might think about staffing functions. I was doing some work with Kaveh Safavi at Accenture, and they’ve got a model where they basically say an org in the future might have five full-time employees, three gig workers, and two highly specialized people that float across a pool of organizations. The nature of the work changes when you start to think about all of this being fully realized. And so it’ll be interesting and exciting, I think, to track how all that plays out.
Pete Durlach 23:45
Yeah, just to add to that, the way we think of it, and I know others do too, is that AI produces a virtual employee, in a way. So when you think about the staffing model, just the way you talk about full-time, remote, and gig, you also think about AI as a “human.” It’s not really equivalent, but effectively you’ve got a series of virtual people that can do certain things, and in a way that’s how you build the staffing model. In the contact center you see this all the time: based on throughput of interactions and calls, you try to staff for peak time and for what percentage can be handled by technology. It’s exactly the same with physicians, where you want to think about how many patients you can see. If you’re using one of our products or others and you can increase throughput by 20 to 30%, that increases time. That time obviously should be given back to the physician for their quality of life, and maybe a percentage of it goes to seeing more patients. For a given patient who’s on a capitated model, maybe you spend more time on closing care gaps that drive better care and also provide better risk-adjusted performance on the revenue side. So you really think about this as a virtual person, and much like we talked about, you then allow the humans to work at the top of their license, and I use that in a broader sense, not just in the clinical sense. These automated things are doing the more routine, more repetitive work that takes a lot of time, so that the people can do the stuff that’s more complicated and, quote-unquote, more value-added. So I think this really fits into this idea of how you think about work and how you think of your employee pool in a broader way that includes the technology component.
Renee DeSilva 25:27
That’s right, that’s right. It goes back to that definition of human-machine dyads. And as you were chatting through that and laying out some of the metrics that move as you focus here, I wonder, is that the crux of how you personally think about measuring ROI on initiatives like this? Do you go back to your earlier framework around being maniacal about the metrics you’re moving, and it’s through watching those metrics move that you get to the ROI of some of these investments? Or do you think about it differently?
Pete Durlach 25:56
Oh, 100%. Otherwise, it’s really hard to know: are you doing a good job or not? And no health system has extra money flying around to just throw at stuff, so how are they going to make decisions? It’s an interesting story. When myself and our CTO, Joe Petro, came to Nuance at pretty much the same time back in 2006, we were sort of a classic tech company. You’d show up and people would want to show you demos of stuff, and the demos would be really cool. But you’d say to them, okay, so what business outcomes does that drive for a client? And people would kind of look at their shoes. I’m making a joke, but effectively the point is it wasn’t really in the culture of the place, and I don’t mean that negatively; a lot of technology companies are like that. So we, along with others, started this journey to turn Nuance into a company where, if you can’t explain the outcome a client cares about and is willing to pay for, number one, we probably shouldn’t be putting money into building it, and number two, how do you talk to a client if you can’t talk to them about the outcomes they care about? When I’m with C-suite people, most of them don’t want to just talk about a technology for technology’s sake. They’re underwater. They want to talk about: how is this going to solve my burnout? How is it going to help revenue recovery and financial integrity? Can I drive better patient care? Am I meeting a regulatory requirement? Am I cutting costs? If it’s not in one of those few buckets, they don’t have time. So this idea about outcomes really does two things. One is to align the commercial vendor with the customer around things that matter, so everyone’s pointing at solving the right problems.
And two, clearly, if you don’t do that, from a commercial point of view you’re probably not going to sell anything, because there’s no justification internally. And honestly, you don’t want clients buying stuff from you when they don’t know what the metrics are, because at the end of the day someone’s going to say, why am I spending time on this? And so it’s really an amazingly clarifying, crystallizing forcing function for both us and the client, and for the alignment. I’ve found it to be one of the simplest and most powerful things we’ve ever done. We’re not perfect at it, but I also find the clients really appreciate when you talk to them in this language. Because what it also allows you to do, Renee, and I’ve told our people internally this, is that the best thing you can do with a client is go to them and tell them that one of your products is not right for them, right? Because it’s not going to move the outcome in a way that they’re expecting or they’re ready for. Your job isn’t to sell the product; your job is to deliver an outcome for the client, because they’re trying to deliver care to their constituency. So it has so many values: culture, alignment, success, fun, everything. It’s a really amazing thing, actually.
Renee DeSilva 29:01
I want to keep pushing on this a bit. So let’s look a little more to the future. Part of The Academy’s intent in co-launching the AI Collaborative with Nuance and Microsoft was to accelerate progress and to almost dwell on the art of the possible, in terms of where we see the landscape shifting if we fast-forward 10+ years. So when you are wearing your strategy hat, not dwelling in the current state, and not that we ever get predictions right, what do you think success looks like in 10 years or 20 years as all of this unfolds so quickly?
Pete Durlach 29:40
Yeah, well, I will say, having become part of Microsoft, we’ve got even more appreciation for this. The sophisticated AI technology is advancing at an unbelievable speed. You’ll hear about this in the press around things like large models, transformer models, GPT-3, BERT, OpenAI, these buzzwords around some of these large models that are built on billions of what are called parameters, which are tied to how much data you put in and how many characteristics they have to describe the patterns I mentioned earlier. And the rate of improvement is off the charts right now. There’s a guy named Peter Lee who runs Microsoft Research, a good friend of ours, who says there’s no end in sight currently on the rate of improvement. If you look at the performance metrics of these models, they’re just getting better and better. One of the things that’s gotten a lot of press lately is these things like DALL·E, where you can type text and it builds a sort of realistic image of something from just a textual description; that’s an example. Another is language translation, where you can speak in one language and have it pretty accurately come out in a different language. We’re doing some of this technology in our Dragon Ambient Experience product. So the tech is, I don’t know if it’s on a Moore’s Law trajectory, but it’s sort of a version of that in terms of improvement. In the labs today, there’s really stunning stuff happening, including, by the way, models training themselves. One of the big problems with AI is you have to build these training sets, and you have to annotate them to teach the machine the patterns. And the new technology is starting to build its own training sets automatically. Actually, the hard part of building AI is building the clean data in the training sets, not building the model.
Once you have the training set, it’s all about the quality of the data and how much data you have. So if the machines can start to build their own training sets, which they are, our expectation is that the curve is going to continue to accelerate. We’re incredibly optimistic, understanding that the maturity curve I mentioned earlier for different use cases is going to continue, and that this stuff is really going to accelerate. This is one of the big reasons, among others, that we wanted to form this collaborative at this time. I’ll pick one example that’s sort of a hybrid: it’s real today, but I think it’s going to accelerate dramatically. As you probably know, in the diagnostic imaging world there are these very rich images with amazing amounts of data inside. We have this idea of ambient AI, which I think you’re familiar with: the ability to listen, see, and hear information from sensors ambiently, so that we can provide intelligence for clinicians without the clinician always having to ask for it. The best example of that, for us, is a solution we call the Dragon Ambient Experience, which records, with patient consent, these interactions between physicians and patients, and then automatically creates a draft note without the physician having to manually dictate everything. Think of that as ambient from a sensory point of view: we’re listening to and watching what’s happening in the encounter. There’s another type of ambient technology that’s rapidly expanding, call it somatic, which is looking inside the body. You have all these images taken every day throughout the country and the world: CT, MR, PET. Only a small fraction of that information can be acted on, because a lot of it can’t be seen by the human eye; it’s volumetric.
So AI is now starting to bring these insights to bear. Roughly 80% of all clinical encounters have a medical imaging intervention done early in the pathway, and 23,000-plus diseases are partially dependent on information from an image. So the role of imaging, along with lab data, is critical to the diagnostic pathway in healthcare. You’re seeing an explosion of AI models that look at the pixels to bring this data to bear. And it can surface lots of information: how to think about a disease, better patient screening, better diagnostic decisions, finding patients that maybe had a stroke earlier than you might have known and putting that case at the top of the work list for radiologists and neuroradiologists to look at. This stuff is starting to come to fruition right now as we speak. But over the next few years, I think it’s going to explode because of these large models that can do incredible diagnostic interpretation of pixel data. Once you unleash that, there are just amazing applications that will come, and I think you’ll start to see this in a bunch of other clinical areas. One I mentioned is remote patient monitoring, or hospital-at-home. One of the big moves to support these non-traditional care settings is to monitor the patient at home with connected devices. One of the problems with that, besides getting the device hooked up and making sure the patient uses it, is that you’ve got all this data coming at you. The last thing the doc needs is 7,000 more data points showing up in the EHR when they have no time to even review their inbox today. They’re just not going to use it. So there’s another big explosion coming.
How do you use these large AI models to take all the clinical data coming from the remote patient monitoring devices, match it with the data in the EHR, and figure out which data combinations justify an alert: one that’s clinically relevant, that the doctor is going to pay attention to, and that’s significant enough clinically that they should. This is basically a giant data matching problem. If you had examples you could use to teach a machine that says, when you see a blood pressure of this, a weight gain of that, and the patient is this age with this condition, look out for Y. In another 5-10 years, I think you’re going to see these massive AI models doing filtering on remote patient monitoring to separate the signal from the noise, so that only the really important stuff gets through to the clinicians and doesn’t bury them in stuff they just don’t have time to look at.
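The filtering idea Pete describes, combining a device reading with EHR context and only surfacing clinically significant combinations, can be sketched in its simplest rule-based form. This is purely an illustration: the thresholds, field names, and the single rule below are hypothetical, not drawn from any real clinical system, and a production version would use learned models rather than hand-written rules.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    """EHR context for the patient (illustrative fields only)."""
    age: int
    conditions: set  # e.g. {"CHF", "hypertension"}

@dataclass
class Reading:
    """One remote-monitoring data point from connected home devices."""
    systolic_bp: int        # mmHg, from a home blood-pressure cuff
    weight_gain_lbs: float  # change vs. a 7-day baseline, from a connected scale

def chf_fluid_overload_rule(patient, reading):
    # Hypothetical rule: rapid weight gain plus elevated blood pressure
    # in an older CHF patient may warrant a clinician-facing alert.
    return ("CHF" in patient.conditions
            and patient.age >= 65
            and reading.weight_gain_lbs >= 3.0
            and reading.systolic_bp >= 150)

RULES = [chf_fluid_overload_rule]

def clinically_relevant_alerts(patient, readings):
    """Filter the raw RPM stream down to readings that match a rule,
    so only the signal (not the noise) reaches the clinician's inbox."""
    return [r for r in readings if any(rule(patient, r) for rule in RULES)]

patient = Patient(age=72, conditions={"CHF"})
readings = [
    Reading(systolic_bp=128, weight_gain_lbs=0.5),  # routine, filtered out
    Reading(systolic_bp=162, weight_gain_lbs=4.2),  # matches the rule
]
alerts = clinically_relevant_alerts(patient, readings)
```

The point of the sketch is the shape of the problem, not the rule itself: each alert decision joins a device reading against patient context, and the large models Pete anticipates would effectively learn many such combinations from examples instead of having them hand-coded.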
Renee DeSilva 35:52
Yeah, that’s powerful. It also gets back to the earlier thread in our conversation around the uncharted territory of really helping physicians diagnose more effectively. And as you lay that out, what I hope is realized through that, too, is an experience for patients with far less friction in it: you can cut through the noise, get them to answers more quickly, and give them a glide path for navigating all the different experiences around that.
Pete Durlach 36:21
Yes, for sure. I think there are lots of examples on the patient side, too, where AI is going to bring value. The challenge there, again, gets to this idea of augmented versus autonomous: how much information can you give right back to the patient without a physician, nurse, or NP looking at it first? That’s where you get into the delicate ground of what’s called autonomous AI, where it’s maybe not making a diagnostic decision, but it’s providing information to the patient. And of course, when you provide information to the patient, that’s going to affect what decisions they make. Where’s the line between triage, which is not FDA-cleared, and diagnosis, which is? I think that’s still to be determined as these systems evolve. There are clearly some things where you can collect information about how you’re feeling, not provide any feedback, and pass that information to the clinical team. But the real value is going to come from combining this data and then augmenting the human pathway around the clinical decisions they make. That’s again where, as I mentioned, you go directly to the clinical decision-making of the physician and augment them, versus trying to do this in an autonomous way, which the technology is definitely not ready for today.
Renee DeSilva 37:39
Let me ask you one final question on this before I do a wrap-up question with you. You just mentioned this notion of figuring out where the line is, and how we navigate these things as they move forward. Maybe you could add a thought on the role of board members in health systems: how they can be thoughtful about this, govern appropriately, and think about all of the potential, and the potential risks, associated with this in our health systems. Is there a piece of advice that you’d give from the perspective of a board member?
Pete Durlach 38:19
Well, I’ll try to answer your question directly, but also provide a slightly orthogonal answer. If I were on a health system board today and we were talking about AI, I’d want the board to know a couple of things. One, back to your initial questions: there’s a lot of hype and noise in the industry about AI. People sort of know it’s there and that it’s important, but they don’t really know what it is, and their eyes probably glaze over. I think showing the set of use-case buckets that I talked about earlier, the maturity curve, would be critical for the board, because talking about AI as one general thing is, to me, meaningless. Most board members don’t just want a technical explanation of what AI is. They want to understand what the value is for the health system: where should we invest, where should we not, what are the concerns we need to be aware of, and what’s the value for our patients and our community, which I think is how most board members on a health system think. So show the maturity curve, show examples of use cases, and say: here are examples where we have a lot of proof points that AI can be very effective, we should invest here, and here’s the value statement. What’s the ROI to the health system? What’s the value to my employees? What’s the value to my patients here versus over there? And then, you can’t see my hands, but you go to the less mature side: a lot of interesting value here, but less mature; we’re going to try some things here; this is closer to being diagnostic-supporting; we’re going to explore more here. That’s the way I would talk about it to the board members.
Having that framework also means that every time you go back to the board and talk about what you’re investing in, your progress, and where you stumbled, you come back to that framework. So you say: we invested here in bucket A, which is, let’s say, the administrative bucket; we did this, and look at the value. Over here, we did two pilots in clinical decision support using AI; this one worked really well, this one didn’t because the docs didn’t buy it, and here’s what we’re going to try next. I think that’s a really powerful way to talk about it. And again, it’s also why, from an organizational point of view, you’ve got to make sure the structure is such that you may have someone responsible, quote unquote, for AI, but so much of it is tied to specific use cases that intersect with specific operational terrain in the organization. If you keep it only at the high-level AI level, I’m not sure how valuable that is, unless you also have the clinical and operational leaders highly connected, because the use cases are generally going to apply to specific things. There is no such thing today as 100% enterprise-wide AI.
Renee DeSilva 41:17
That’s right, and I think that’s a great point. What I like about your framework, too, is that on the parts of the maturity curve that are more settled, there’s probably less of a risk lens the board has to apply, versus things that are still nascent and evolving. So I think that’s very good advice. Pete, one more thing I want to cover; I’d be remiss in not mentioning it. Talk a little bit about Microsoft and Nuance joining forces, which was announced earlier this year. Tell us a little bit about that story.
Pete Durlach 41:46
Yeah, I’ll give you the short version; maybe I’ll give you a longer one over drinks. The short version is that Microsoft has been looking for a long time to become more relevant and meaningful in healthcare. As you know, they, like many of the cloud vendors, have had some stumbles in their distant past. They were looking for ways, in addition to the organic work they were doing, to become more impactful, more connected, more knowledgeable, and to do it in a way that supported their open ecosystem environment, where they don’t compete with health systems or with other major ISVs like EHRs. And to do it in a very complementary way with another partner that was also very serious about making sure health systems see them as a trusted partner for their data, not doing anything nefarious with patient data or anything else. From Nuance’s perspective, we weren’t for sale, but we had a long-term relationship with Microsoft around Azure, and we worked together on our Dragon Ambient Experience product. For us, the ability to leverage the unbelievable scale and technology innovation that Microsoft has in cloud and AI, and putting those two things together, really felt like a good fit. You have this large, incredibly advanced horizontal technology platform company, which believes in empowering the ecosystem and treating data as it should be treated, together with Nuance, a highly verticalized healthcare-centric company with applications that have meaningful impact in the workflow for clinicians, nurses, coders, CDI, etc. So the basic thesis was to put this horizontal powerhouse together with Nuance’s vertical expertise, and bring them together to see if we can accelerate innovation around AI and cloud to drive more and more impact, or, in the language I used earlier today, these business outcomes: how do we improve patient care, financial integrity, clinical quality, and patient experience, which are the four big buckets of outcomes.
How do we combine that and as I mentioned earlier, like these large AI models that Microsoft is building that nuance would never have the funding for, we’re looking to, like deploy those inside of our products, and with our third party, so it’s really a coming together of a very complementary sort of culture and mindset and technology base, to try to do more for the benefit of our clients and the patients they serve. In a nutshell, that’s really the story.
Renee DeSilva 44:15
It’s fascinating. Going back to some of the work The Academy is trying to activate around the Microsoft and Nuance partnership, and the way we activate that through the collaborative we recently announced: you said it really well in terms of both the horizontal view and the vertical view, and how that allows us to leapfrog and drive advancement that can truly be different in kind. So I look forward to capturing that secret sauce of the large-scale technology powerhouse that we all know Microsoft to be, with the Nuance activation running through it. I think there’s a ton of potential and promise in how that plays out.
Pete Durlach 44:58
Yes, we do too. And we’ve already started to see that: even though we’ve only been one company for a few months, there’s been some real good progress already made. So we’re optimistic. Obviously, the proof will be in the pudding, but we feel really good at this point.
Renee DeSilva 45:10
Absolutely. And maybe just to sum that up, Pete: I think we talked a lot about making AI real, and getting to the use cases today that drive both a better experience for caregivers and providers, and patient outcomes that we’re all proud of. I think that’s the real impetus behind the work. So we look forward to bringing that to life together over the next several years.
Pete Durlach 45:36
Yeah, we’re super excited, and thank you for the opportunity. I’ll just wrap with one thing that a number of health systems have said to us, which we’ve adopted as our own. At the end of the day, what’s Nuance and Microsoft’s job in healthcare, if you really boil it down? We work with health plans and life science companies, but focusing on the core collaborative and Academy base of providers, it’s to help our clients become the best place to give care, and the best place to get care. I love that.
Renee DeSilva 46:09
Maybe a final question for you, Pete, one that I ask all of my podcast guests, so it’s kind of a cheat if you’ve listened to any previous episodes. Part of the vision for this was to think through how to curate ideal conversations; so much of the energy we all get from life is just breaking bread with people and having really good conversations that drive your thinking forward. So if you could invite two people to continue a conversation like the one we had today, who would they be, and why?
Pete Durlach 46:39
Yeah, it’s a phenomenal question, and I went around a couple of different areas. But given the audience that I know listens to these, I wanted to pick two of their peers who we deal with quite a bit, who are extremely sharp, but who are also very connected to the real world. One is a gentleman named Hal Baker, who I think you know. He’s VP and CIO at WellSpan Health. I’ve known Hal for a long time; he’s become a good friend of mine. Hal is one of the few: he’s a practicing family doc, extremely deep in technology, and incredibly focused on business issues. He’s got this sort of magic three-way: he’s clinically competent because he’s a doc; he’s technology literate, beyond literate, but he’s not a guy who just likes tech. He’s like, if it doesn’t solve a problem for me, and I don’t believe in it, I ain’t gonna waste my time, and I’m gonna hold everybody to that. So he’s got this amazing combination of talents to navigate those three worlds. And he’s also worked with his peers to push the technology and become an early adopter in many cases at WellSpan. So he’s definitely one person I would pick. Similar but slightly different is a guy named Dr. Keith Dreyer, who’s the Chief Data Science Officer, Vice Chair of Informatics, and a radiologist at Mass General Brigham. Very similar to Hal, but, I would say, a different type of doc: probably a little deeper in his specific technical domain on the imaging side, but also very focused on the business problem, and deeply connected to a specialized area in healthcare that’s so critical to where AI is going. It’s not just the imaging side: as chief data science officer, it’s digital pathology and genomics tied with EHR data. How do you bring this multi-omics data together, which is one of the big exciting areas in healthcare? And then how do you build and deploy these AI models into the workflow at scale, which is what MGB spends a lot of time doing.
So he’s got the clinical background and the technical background, he worries about the business problem, and he’s also super deep in how these models are built and how to think about deploying them enterprise-wide. At MGB they’re not only consuming AI from the outside; they have, I think, 15,000 researchers building hundreds of AI models themselves. So they have a different, enhanced perspective on AI, not only from a commercial point of view but from a research point of view, and on how to build an infrastructure that enables them to deploy it at scale. There are lots of other people I would pick, too, but since you held me to two, those are the two C-level members of HMA that I think would have a lot of relevance and credibility with your audience.
Renee DeSilva 49:33
Well, I can see the three of you really geeking out on this content, given some of the evolving areas you were mentioning; some of it sounded a little bit like the Jetsons did when I was growing up. So I think that would be a fun conversation, and one that we could probably make happen. Thank you, Pete. It’s been a pleasure to have you join us today, and I really do appreciate your guidance and insight on such an important topic for all of us in healthcare.
Pete Durlach 49:56
Well, thank you, Renee. Thanks very much for the time; I very much appreciate it.
Renee DeSilva 50:00
Thanks again for joining me at the table. The Table is a podcast produced by the Health Management Academy. Make sure you catch future episodes by visiting our website, TheAcademyTable.com, or by subscribing on the podcast platform of your choice, and if you have suggestions for topics or guests, I’d love to hear from you. Please drop me a note at firstname.lastname@example.org. I look forward to talking with you soon.