
Hospitals In Focus

The Future of Medicine: AI’s Role in Health Care

Artificial intelligence (AI) is dominating headlines and conversations, from how it will change our day-to-day routines to the debate on how far regulation of the ever-changing technology should go. AI’s impact on health care is profound, promising advancements in diagnostics, treatment plans, and patient care, but also raising questions about privacy, bias, and the role of human oversight. 

Our guest, Michael Schlosser, MD, MBA, Senior Vice President, Care Transformation and Innovation at HCA Healthcare, is a leading expert in AI applications within the health sector. In this episode, he and Chip delve into the multifaceted world of AI. Dr. Schlosser’s insights will guide us through the complexities of integrating AI into medical practices, highlighting both the transformative benefits and the critical safeguards needed to ensure ethical and effective use. 

Topics discussed include:  

  • Defining AI – whether we should anticipate a better future or be worried  
  • Leveraging AI – use cases in health care  
  • A human-centric approach – understanding risks and ways to mitigate harm and bias 
  • The federal government – finding the sweet spot for regulation  
  • Future of AI – the benefits of incorporating AI into health care for providers and caregivers 

Dr. Michael Schlosser (00:04):

I think generative AI is going to alleviate some of our burden and make doing our jobs easier and even a better experience. But the reason why I say it’s both is that there is a lot of risk. I actually believe that what makes AI powerful is also what makes it risky.

Speaker 2 (00:28):

Welcome to Hospitals In Focus from the Federation of American Hospitals. Here’s your host, Chip Kahn.

Chip Kahn (00:38):

Thanks for joining us today. We so appreciate your listening. Artificial intelligence, or AI, is the trending topic across the U.S. economy today, with businesses looking to incorporate AI tools to improve operations. They’re also adapting the new technology to achieve better results for consumers in industries from manufacturing to agriculture, and especially healthcare. However, one man’s solution may be a threat to others; AI doesn’t come without risks. Here to talk about the technology and how it can impact patient care is an IT expert, a physician, and someone already adopting AI in the real world: Dr. Michael Schlosser, the Senior Vice President of Care Transformation and Innovation for HCA Healthcare. Mike, thank you for joining us today.

Dr. Michael Schlosser (01:31):

Thanks, Chip. It’s great to be here with you.

Chip Kahn (01:32):

Just to get started, Mike, the term artificial intelligence encompasses everything from theoretical concepts to practical tools in use today. For purposes of our discussion, can you set some parameters? Give us a practical definition of what AI is so that our audience can understand the context that we’re addressing today.

Dr. Michael Schlosser (01:58):

Sure. Happy to, Chip. And you’re correct, artificial intelligence can have sort of a wide definition and broad understanding of what it means. I think a good walking-around definition that I like is: it refers to the development of computer systems that can perform tasks that typically require human intelligence. So think of things like learning from experiences, recognizing patterns, understanding language, solving complex problems, and even decision-making. Things that we normally would attribute to human intelligence, the ability to bring all of these different inputs and pieces together. The ability to create a computer system that can do that same kind of work is generally what’s referred to as artificial intelligence.

(02:44):

If you go just one step deeper than that, it really can span everything from really strong, what we call heuristic models, which are just rules-based, really big if-then-statement-based models, a kind of AI that’s been around for 70 years, all the way to the most modern, which are the generative AI models. And these are really big neural networks that can take in massive amounts of data, learn the patterns in those data, and then use that to turn around and actually generate novel content. And then there’s sort of everything in between.
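
To make the rules-based end of that spectrum concrete, here is a minimal sketch, not from the episode, of the kind of if-then heuristic described; the vital-sign thresholds and function name are illustrative assumptions.

```python
# A minimal, illustrative sketch of the "heuristic" end of the AI spectrum:
# a hand-written if-then rule of the kind that has driven clinical alerts
# for decades. Thresholds and names here are hypothetical.

def early_warning_screen(heart_rate: float, temp_c: float, resp_rate: float) -> bool:
    """Flag a patient for human review when simple rule-based criteria are met."""
    abnormal = 0
    if heart_rate > 90:
        abnormal += 1
    if temp_c > 38.0 or temp_c < 36.0:
        abnormal += 1
    if resp_rate > 20:
        abnormal += 1
    # Rule: two or more abnormal vitals trigger the flag. Nothing is learned
    # from data; a generative model, by contrast, infers patterns on its own.
    return abnormal >= 2

print(early_warning_screen(heart_rate=104, temp_c=37.2, resp_rate=24))  # True
```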

Chip Kahn (03:20):

So what’s the potential for this generative AI? Can we anticipate a better future or should we be worried?

Dr. Michael Schlosser (03:28):

Well, I’ll give you maybe an unsatisfactory answer there, Chip, which is both. I think that the promise of generative AI is amazing. I’ll be honest, I’m a believer. I’m buying the hype. So I’m definitely on the side of a better future, in that I think the technology is going to enable us to free up the humans from doing mundane and administrative tasks, from things that have become burdensome to our lives. It’s going to make us more productive, but more productive in a way that’s easy, as opposed to productivity tools in the past, which just pushed us to go faster. I think generative AI is going to alleviate some of our burden and make doing our jobs easier and even a better experience. But the reason why I say it’s both is that there is a lot of risk. I actually believe that what makes AI powerful is also what makes it risky.

(04:23):

It’s actually the same thing. These tools are powerful because they are able to take in information, learn patterns, and then create content on their own. That’s an incredibly powerful skill, and it’s what makes them so useful in doing things like alleviating administrative tasks. But the inherent nature of that approach also means that they can do things like hallucinate, make up information, or produce erroneous results. We’ve probably all heard of examples of large language models producing fake citations when asked to provide references. So really, those are two sides of the same coin, where what makes it so powerful is the same thing that can make it risky. But if we learn to manage it, and we learn to use it, and we put control systems around it, I think that better future is absolutely the one we’re going to find.

Chip Kahn (05:17):

Sort of continuing on these advantages and risks, in terms of the specifics of healthcare and healthcare in America, what are the advantages or possibilities, and then what are the risks?

Dr. Michael Schlosser (05:31):

Yeah. So, Chip, I’m going to go a little bit into our strategy and how we think about leveraging these kinds of AI tools, because I think it’s helpful context for the question you’ve asked. And the reason why is that I think the number one creator of risk, or mitigator of risk, is what use cases you choose to apply AI to. So we’ve taken the position within HCA Healthcare that there’s an enormous opportunity to apply all sorts of AI, but generative AI in particular, to eliminating administrative burden for our care teams and our caregivers. We sometimes refer to this as removing the friction from care. Anyone who’s in the healthcare space knows that over the last several decades, the administrative part of healthcare has expanded. The regulatory compliance requirements, the documentation requirements, registries, I mean, it just goes on and on.

(06:28):

There’s an enormous amount of administrative work we have to do that, to a large extent, sits next to the clinical care. It doesn’t really drive the clinical care. There’s hope that some of that work, like the documentation burden we’ve created, will eventually produce data that brings insights and better care back to the caregiver, but that largely hasn’t been realized. And so a lot of these extra steps we’ve created are just that: extra steps the clinicians have to do. So we see an enormous opportunity to use AI to eliminate that friction, to allow caregivers to operate at the top of their license and do the things that are most valuable to them and their patients. I also believe that those are lower-risk use cases, because most of them still keep the human in the loop. It still keeps the physician or the nurse or the pharmacist, or whoever it might be that’s using that AI-driven workflow, in between the AI and the patient or the eventual outcome.

(07:34):

And so it allows the system to help them make faster, better, more efficient, more consistent decisions, but it doesn’t replace them or subvert their clinical judgment or their thinking. So to us, that’s a pretty sizable win-win. I also think that the more we clean up that administrative burden, the more we allow those really intelligent caregivers to spend their time doing things like critical thinking and spending time with patients, which I think ultimately will improve the quality of care. On the other end of the spectrum are use cases where we’re asking the AI to directly make clinical decisions, where we’re training models to be the clinician or to provide the decision support.

(08:16):

And to me, there’s a lot of promise there down the road, but there’s a lot of risk. And we’ve seen examples of this. I’m not going to call anybody specific out, but we’ve seen examples where, when AI has been given the responsibility to make critical decisions about healthcare delivery, things like bias and other errors can creep in, and that can end up having a negative impact on patients. And so our thinking on this is: control the risk by selecting the right use cases, those use cases that have high value and high impact on the current biggest challenges in healthcare delivery, but at the same time are not in that high-risk category where the errors these types of AI are prone to could find their way all the way to a patient and adversely affect their outcome.

Chip Kahn (09:01):

So if I understand your strategy then, to do the work to reduce this administrative burden, I assume that it includes issues like coding and other kinds of matters. Who’s looking over the shoulder of the AI? How do you operationalize that to make sure these systems are doing a better job? But clearly, as you say, even there, they’re not flawless. What’s the process for making sure that they’re getting it right?

Dr. Michael Schlosser (09:32):

Sure. So there’s a whole lot there, and let me see if I can break it down into a couple of pieces. Again, the first part of the way we manage that risk is by designing the workflow in a way that keeps that human in the loop. So I’ll use documentation as an example. There’s a ton of promise, and a lot of hype, around AI becoming a companion to clinicians in generating all the required documentation that goes into the EHR, or really into other locations as well. And we’re working on this: putting an AI on a phone, allowing that phone, with the permission of the patient, to listen to an interaction between a doctor and a patient, for example, capturing that ambiently, and then using AI and natural language processing to turn it into the structured documentation that has to go back into the electronic health record.

(10:26):

But you have to go a step further than just that concept of ambient documentation. You’ve got to build a workflow that allows the clinician to seamlessly oversee the AI in that process, so that it’s very easy for them to see, evaluate, and edit what the AI has done, and then provide feedback, or through the workflow generate feedback, so that those of us who are overseeing this work and designing these models can be constantly evaluating whether the model is doing the right job, making it better, ideally, but also catching problems if the models start to drift and the outcomes start to change. So workflow is one big piece of the answer to that question. You can’t just sprinkle the AI on top of how we’re currently doing things. Sometimes you have to pretty significantly change the workflow, hopefully for the better, in order for the AI to become an intelligent part of the process.
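
Here is a rough sketch of that human-in-the-loop pattern, where the clinician always sits between the AI draft and the record, and every edit feeds a monitoring loop. All names and fields are hypothetical, and the draft generation itself (the ambient speech-to-text and language model) is out of scope.

```python
# Hypothetical sketch of a human-in-the-loop documentation workflow:
# the AI drafts, the clinician reviews and edits, and the edits are
# logged as feedback for ongoing model evaluation.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class NoteReview:
    draft: str      # what the AI produced
    final: str      # what the clinician signed
    clinician_id: str
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def was_edited(self) -> bool:
        return self.draft.strip() != self.final.strip()

feedback_log: list[NoteReview] = []

def submit_for_review(draft: str, clinician_id: str, edited: str) -> str:
    """The clinician always sits between the AI draft and the record."""
    review = NoteReview(draft=draft, final=edited, clinician_id=clinician_id)
    feedback_log.append(review)   # feeds ongoing model evaluation
    return review.final           # only the signed version reaches the EHR

note = submit_for_review(
    draft="Patient reports mild headache x2 days...",
    clinician_id="dr_042",
    edited="Patient reports moderate headache x3 days...",
)
edit_rate = sum(r.was_edited for r in feedback_log) / len(feedback_log)
print(f"Edit rate so far: {edit_rate:.0%}")  # a crude early signal of drift
```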

(11:22):

The second is something we refer to as responsible AI, and that’s our overarching governance structure, where anytime we think about deploying an AI use case, we have a set of questions we can ask of that use case and of its sponsor to help understand all the various types of risk that use case could create. Are there risks that your data is biased? Are there risks that there could be overconfidence in the output, and we might not oversee it with the scrutiny it deserves? Is there a risk that the model, once trained, could inadvertently release private information, or that someone could back private information out of the model if given access to it? We have about 43 total questions like those that we would ask of the sponsor and of the model before we would deploy it.
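
As a sketch of how such an intake questionnaire might be represented, here are the three example questions paraphrased in code. The actual instrument of about 43 questions is not public, so this structure is an assumption.

```python
# Hypothetical representation of a "responsible AI" pre-deployment screen.
# The three questions paraphrase the ones mentioned in the episode.

RISK_QUESTIONS = {
    "data_bias": "Is there a risk the training data is biased?",
    "overconfidence": "Could users over-trust the output and under-scrutinize it?",
    "privacy_leakage": "Could the trained model release or reveal private information?",
}

def screen_use_case(answers: dict[str, bool]) -> list[str]:
    """Return the risk areas a sponsor flagged; each flagged area would
    require a documented mitigation before the use case is deployed."""
    return [key for key, flagged in answers.items() if flagged]

flagged = screen_use_case(
    {"data_bias": True, "overconfidence": False, "privacy_leakage": True}
)
print(flagged)  # ['data_bias', 'privacy_leakage'] -> mitigation plans needed
```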

(12:15):

And that helps us understand what mitigating strategies we have to put into place. A big piece of any mitigating strategy with AI is ongoing monitoring. You’ve got to have systems to monitor the output and the success of the model in perpetuity. This isn’t a piece of software where, once it’s built, it will always provide the same result every single time, so you can test it and, if it works, you’re off and running. The nature of these models is that they can change over time. The data that’s driving them changes, the environment they’re operating in changes, and all of that can impact the outcome. And so you have to build processes to monitor these things in real time and provide that continuous feedback loop to successfully deploy them. So those are just two examples of how we think about solving problems while mitigating risk.
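
A minimal sketch of that kind of ongoing monitoring, assuming a simple rolling comparison against a baseline metric; the metric, window size, and tolerance are all illustrative choices, not anything specified in the episode.

```python
# Hypothetical drift monitor: compare recent model performance against a
# baseline measured at deployment, and alert when it degrades past a tolerance.

from collections import deque

class DriftMonitor:
    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 500):
        self.baseline = baseline            # accuracy measured at deployment
        self.tolerance = tolerance          # allowed absolute degradation
        self.recent = deque(maxlen=window)  # rolling window of outcomes

    def record(self, correct: bool) -> None:
        self.recent.append(1.0 if correct else 0.0)

    def drifted(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough data to judge yet
        current = sum(self.recent) / len(self.recent)
        return (self.baseline - current) > self.tolerance

monitor = DriftMonitor(baseline=0.92)
# In production, record() would be fed by the clinician feedback loop,
# and drifted() would alert the team that owns the model.
```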

Chip Kahn (13:03):

Looking forward, because, at some point, I think you will want to get into the clinical side. I mean, clearly, I see your priority is to adopt and adapt to places where it can have the most impact the soonest, and as you say, sort of a safer impact or at least a less risky impact. But at some point, you will get to the clinical side. And I want to ask about decision support. But before I get there, one of the things that goes on in hospitals, particularly in ICUs, but really on every floor is continuous monitoring of patients.

(13:40):

Let’s just take the ICU, for example. I mean, it’s physically impossible, even for an Einstein, to keep track of all the information that’s being processed on a patient, with all the different devices that are keeping track of that patient in an ICU. And basically, now we really depend on very experienced nurses and specialized physicians to make sure that ICU care is safe and effective. Does generative AI offer something there that’s really going to make a difference, in terms of being able to take continuous monitoring and turn it into some kind of reading of trends that could really elevate the care in those kinds of environments? And how far off is that?

Dr. Michael Schlosser (14:22):

Well, it’s a great question. And I’ll answer the last part first: it’s not far off. The big problem in an ICU, or a labor and delivery unit, or wherever you have the situation you described, with tons and tons of streaming data, is actually the data itself. And so what we are working on is how we capture all of that in real time, how we bring it into a data lake structure, how we organize it, and how we manage it, because it’s an enormous amount of data. So that’s the problem we’re working through. It’s not a problem that doesn’t already have solutions. I mean, the likes of Google and others we’re working with can absolutely manage this kind of data. It’s just something we haven’t necessarily built systems within healthcare to do well.

(15:09):

But once we get the data all living together in harmony in our systems, AI models that can train on that kind of data exist today. Open-source foundation models that we could fine-tune to study that kind of data absolutely already exist. And the exciting thing is being able to look at what those experienced nurses are looking at. This, to me, is a really fun part of AI: going to the end users and saying, what are you paying attention to? What are the four or five things that you are always watching? And then using that as the starting point to figure out what we train the AI on. It’s usually not just one source of data. It’s usually, I’m looking at this monitor and this specific metric on the monitor, but then I’m also paying attention to how the patient is moving or not moving. And then I’m also watching their urine output.

(16:03):

They’re triangulating multiple different sources of information. And so then we have to go and say, okay, where can we find the data to represent those sources of information? How do we bring that together? And how do we train a model to potentially replicate the thinking those experienced nurses or clinicians are providing? That’s a problem we are already working on. So it’s here now; it’s probably going to be a few years before you see real commercial production use of AI in that space. But for example, we’re working in the… I mentioned labor and delivery because that’s an area we’re actually working in. The clinicians there pushed us into the decision support space. They said we need better decision support when it comes to fetal heart tracings; the way we currently do that, there’s too much ambiguity. And so they’ve been pushing us toward creating some AI models that can do exactly what we were just talking about.
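
A sketch of that triangulation idea, assuming pandas and time-indexed streams: several signals a nurse watches are resampled onto one clock so a single model can see them together. The stream names, columns, and five-minute grid are illustrative assumptions.

```python
# Hypothetical fusion of the streams a nurse triangulates (a monitor metric,
# patient movement, urine output) onto a common clock for model training.
# Each input is assumed to be a pandas Series with a DatetimeIndex.

import pandas as pd

def fuse_streams(vitals: pd.Series, movement: pd.Series, urine_ml: pd.Series) -> pd.DataFrame:
    """Resample each time-indexed stream to a shared 5-minute grid and
    join them into one feature table a model could train on."""
    frame = pd.DataFrame({
        "heart_rate": vitals.resample("5min").mean(),
        "movement_score": movement.resample("5min").mean(),
        "urine_output_ml": urine_ml.resample("5min").sum(),
    })
    return frame.ffill()  # carry slower signals forward between readings

# features = fuse_streams(hr_stream, movement_stream, urine_stream)
# model.fit(features, labels)  # e.g., a classifier for early deterioration
```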

(16:57):

We’re excited about the clinical space, don’t get me wrong. And I would actually say that when I’m improving documentation, when I’m improving throughput in a hospital, when I’m improving the care team’s access to the right information, that is all clinical. That’s helping them do their job of delivering care better, and we would measure the impact in terms of clinical outcomes. I mean, I know what you’re getting at, which is: when do you get to that decision support piece, where the AI is actually helping support the clinicians with decisions? I think that’s coming very soon.

Chip Kahn (17:32):

So we’ve got a lot of possibilities here, but clearly, there’s a public interest in regulating this too, because there are a lot of systems out there developing, and oversight is so important. You testified before Congress about this, and I know you have views as to the role that law and regulation should take. Could you give us some sense right now as to what you think the sweet spot of federal and state regulation ought to be in this area to ensure that patients are safe?

Dr. Michael Schlosser (18:08):

Let me be clear on one thing: I do think this space needs regulation. I don’t think there’s any question about that. We can’t allow a wild west of AI in healthcare. That won’t lead us to a good place; it would probably move us toward the concerning future rather than the exciting one. What I mentioned when I testified in front of Congress, and I’ll repeat with you today, is that I think we need to be a little bit patient. We need to learn a little more about these systems, how they’re actually going to get used, what the risks are, and how to mitigate those risks, before we jump to conclusions and jump to regulations. So we’ve had some really good conversations with various government entities around starting with standards, and even standard language for how we define and describe these various types of tools and the systems they’re embedded in.

(19:02):

Start with the basics. Let’s all have a common nomenclature for how we talk about this, so that when we do start putting together structures and guidance and eventually regulations, we’re all working from the same place. We’ve also had a lot of good conversations with the FDA. I think the FDA is being very thoughtful here about how they evolve their thinking around the medical device space. And clearly, that’ll be their space to continue to regulate: the AI that fits the definition of a medical device. But we’ve also had some good conversations with them and others about all the AI use cases that aren’t medical devices, where the AI isn’t directly being used to care for patients but is being used for healthcare delivery more broadly. And that’s a space that’s sort of greenfield right now. There’s not one agency or one approach.

(19:48):

And our advice has been, and continues to be, let’s work together. Let’s take this one step at a time. Let’s not jump to a conclusion, because too much regulation or bad regulation could absolutely undermine the potentially huge positive impact that AI could have on healthcare, just as not regulating it could. And so I’ve been very impressed so far with both the members of Congress we’ve had the chance to interact with and the various executive branch agencies. They’re listening to advice from the private sector. They’re interested in partnering, with a lot of public-private partnerships being established. And they’re interested in taking this systematic, one-step-at-a-time approach to walking toward the right set of regulations. And I think that first step we’ve recommended is: why don’t we put guidance together on what a good, responsible use-of-AI program would look like? How do we recommend that private entities self-govern to ensure they’re making good decisions?

(20:52):

And through that, and through studying that, we can learn what an overarching federal approach, for example, could be to putting that kind of regulatory structure in place. So that’s been our advice: yes, we agree we need to regulate this in a way that keeps patients and caregivers and health systems safe, but let’s not rush. The reality is that while people may be using co-pilot tools to help them draft emails, and that might be very prevalent right now, AI isn’t making healthcare decisions across all of our hospitals today. We have a little bit of space here to be thoughtful and make good decisions.

Chip Kahn (21:31):

Sort of coming to the end here, you’ve really articulated a lot of potential for change: first in making the caregiver more available to provide care, and then ultimately in giving the caregiver more tools for providing that care. Have we missed anything? Is there something else you think we should cover, just so we’ve thoroughly given the audience the healthcare potential of AI?

Dr. Michael Schlosser (22:00):

Well, maybe a good way to sum this up would be to pull up and think about the big picture. Because I do think, if you look 5 or 10 years down the road, we have a pretty exciting vision for what healthcare delivery could look like in the AI era. You can’t really separate AI and data; they’re sort of both the tool, if you will. But it’s a vision where the AI systems and the underlying data make the delivery of healthcare highly efficient and really effective, reduce unnecessary variation, and allow the caregivers to focus at the top of their license on the things they trained to do. When you put that picture together, it’s a pretty exciting future. I think it’s one people would really want to be part of, either as a patient, to the extent you ever want to be a patient. I mean, most of the time you don’t.

(22:55):

But when you have to be, I think it would be a very different experience than people have right now, where things would be much more seamless and much easier to interact with. And the same thing on the care team side, where it would bring the joy back to delivering healthcare, because the systems would be surrounding you and enabling you to do your job, to do what you’re passionate about and what you came to healthcare for in the first place. So to sum up where we started: I’m incredibly bullish. I’m incredibly excited. I think with the right effort and the right focus and the right governance, we can really make this a defining moment for healthcare, where we pivot away from technology layering on top of the healthcare system to technology really enabling it. And it’s an exciting time, I think, in my lifetime in healthcare.

Chip Kahn (23:45):

Mike, this was such an informative discussion. And generative AI is moving so fast. I hope maybe we could get together a year, a year and a half from now, and see where we are at that point. I just want to thank you for being here today.

Dr. Michael Schlosser (24:00):

Well, I appreciate the opportunity, Chip. It’s always great to talk to you. And yeah, with the speed that AI is moving at, it’ll probably be very different in a year and a half.

Chip Kahn (24:07):

Thank you so much.

Dr. Michael Schlosser (24:08):

Take care.

Speaker 2 (24:12):

Thanks for listening to Hospitals In Focus from the Federation of American Hospitals. Learn more at fah.org. Follow the Federation on social media @FAHhospitals, and follow Chip @chipkahn. Please rate, review, and subscribe to Hospitals In Focus. Join us next time for more in-depth conversations with healthcare leaders.

Michael Schlosser, MD, MBA is Senior Vice President, Care Transformation and Innovation for HCA Healthcare. Reporting directly to the CEO of HCA, he is responsible for leading care delivery innovation and transformation for the enterprise. His department’s vision is to design, develop, integrate, implement, and optimize technology and processes that drive care delivery, with the common goal of improving the experience and outcomes for HCA Healthcare’s leaders, care teams, and patients. As part of this strategy, he leads the implementation and optimization of HCA Healthcare’s electronic health record systems, the data science and data strategy teams, and the enterprise Responsible AI program.

Prior to this role, he served as group Chief Medical Officer, leading the clinical operations for 100 HCA hospitals and overseeing quality, patient outcomes, and clinical strategy. He previously served as the chief medical officer for Healthtrust.

Dr. Schlosser is a neurosurgeon and completed his residency and fellowship at Johns Hopkins, has served as a medical officer with the FDA, and holds a degree in chemical engineering from MIT and an MBA from Vanderbilt.