Healthcare’s next big challenge: Closing the promise-to-achievement gap in AI
The hard work begins by reframing the conversation.
When it comes to implementing artificial intelligence, we see a gap in the healthcare industry: There’s a lot of belief in AI’s promise, and yet few organizations feel they have a concrete strategy to achieve their workforce goals.
I’ve spoken with leaders across healthcare, and the sense of being in uncharted territory is pervasive. For years, we have promised that technology would make doctors’ and nurses’ lives easier, starting with electronic medical records (EMRs). To put it mildly, the reality did not live up to that promise.
Healthcare professionals are also, by nature, wary of the “next big thing” promising a smarter solution (think IBM Watson). In almost every previous technological “revolution,” machines meant to simplify work instead added administrative burden and left caregivers feeling less connected to their patients. So, with AI, we need to be clear, practical, and honest with the workforce.
What’s encouraging, though, is the number of healthcare workers — especially nurses — who want to learn how to work with AI. They’re eager to understand how it fits into their roles and how it can help them. For talent leaders, that eagerness represents a huge opportunity to guide the workforce strategically and with intention, focusing on what AI can realistically deliver and how it can make their work lives easier (and healthcare better in the process).
We tackled how to close the promise-to-achievement gap head-on during a recent Guild webinar, and the conversation yielded a number of recommendations for leaders looking to help their workforce thrive in the AI era. This work starts by changing how we view AI. Only when we shift our mindsets can we focus on AI literacy, establish trust and transparency, and create greater equity — and do it all ethically.
Reframe the conversation.
One essential piece of this puzzle is ensuring that the workforce sees the value of AI. People need to know how AI will make their jobs better — but also how it connects to why they entered healthcare in the first place. If we don’t get that right, and if we can’t communicate the why effectively, then we’re all going to struggle.
Mark Smith, leader of insights and workforce optimization at Providence, offered a helpful concept to drive this conversation: “augmented assistance.” He described AI as a tool that should free up time for healthcare professionals, allowing them to focus on what they do best: building genuine human connections and caring for patients.
“I’m really trying hard not to talk about AI replacing roles,” Smith said, emphasizing that healthcare’s true value lies in the human relationships between caregivers and patients. According to Smith, AI should act more as a silent partner — an “augmented assistant” to make them more efficient and, at the same time, more human.
One example of this is managing and streamlining administrative tasks. HCA Healthcare’s Hillary Miller pointed out how generative AI technologies, such as chatbots and note-taking software, help reduce “brain load” by assisting with documentation and data synthesis. In this way, AI doesn’t replace healthcare workers but rather alleviates time-consuming and often-onerous administrative tasks, giving clinicians more time to spend with patients.
“AI will not replace your job. A human who understands how to use AI will replace your job.”
- Steve Klasko
former CEO, Jefferson Health
This approach reflects an important shift in perspective: Rather than seeing AI as a tool to fill gaps or save costs, healthcare talent leaders should instead recognize it as a means of enhancing the workforce’s human qualities.
Step up AI literacy training.
It’s our responsibility as leaders to invest in building AI literacy within our teams and to position AI as a support tool rather than a replacement technology. We must communicate transparently and involve our teams every step of the way. If we can do this thoughtfully, I’m confident that we’ll unlock AI’s full potential for the workforce and, most importantly, for the patients.
According to Miller, fostering AI literacy means breaking down complex concepts and highlighting the technology’s everyday applications.
“Many healthcare workers have already encountered AI in various forms,” Miller pointed out, “so we must show them how to build on what they know.”
Steve Klasko, former CEO of Jefferson Health, emphasized the importance of training that goes beyond technical skills. He recommended that organizations focus on educating employees about AI’s broader impact on patient care. Klasko advised leaders to avoid overemphasizing algorithms and analytics, instead focusing on skills that enable workers to collaborate with AI, such as critical thinking and creativity. Robots and humans will create better healthcare together once we learn how to work with one another — but that’s not as easy as it sounds. Remember, it took many years of “inter-professional education” just to get doctors and nurses to work together as better teams.
Prioritize the two ‘T’s: transparency and trust.
Healthcare professionals are often wary of AI, particularly if they see it as a form of surveillance. Smith stressed the importance of transparency in any AI initiative, explaining that a lack of openness can lead to distrust and resistance. To mitigate this, he recommended involving end-users in the AI development process and maintaining open lines of communication. By engaging clinicians and staff from the outset, organizations can foster a sense of ownership and alignment.
Miller emphasized the role of culture in successful AI adoption. “AI won’t change your culture, but it can reflect and amplify what’s already there,” she said. For AI to become an effective tool in healthcare, leaders must ensure their organizational culture promotes collaboration and trust.
Address the two ‘E’s: ethics and equity.
Klasko cautioned that AI’s benefits might not be evenly distributed. Without deliberate planning, the technology could end up reinforcing existing disparities, benefiting only the most affluent patients and organizations. He urged leaders to integrate ethical considerations into AI development and deployment from day one. This means designing tools that address diverse healthcare needs and ensuring that underserved communities have access to AI’s potential advantages. The health gap may even widen if we don’t recognize a third E: education. We cannot simply assume that healthcare workers and leaders will know how to use the technology without it. Similarly, if the healthcare industry concentrates only on people with Apple Watches, Klasko said, we may fall into the trap of having technology make the “wealthy healthier.”
Building a human-centered AI culture
While these recommendations may be doable in the short term, there’s a much bigger challenge: For AI to truly succeed in healthcare, leaders must build a culture that prioritizes the human experience. To be sure, this requires all of the aforementioned: transparency, collaboration, and a commitment to helping staff feel empowered rather than displaced. Leaders should approach AI as a tool to be used alongside human intuition and expertise, enabling healthcare professionals to offer compassionate, effective care.
By viewing AI as a supportive partner, organizations can foster a culture that respects the value of human connection while leveraging technology to enhance efficiency and care quality. As Klasko put it, “AI will not replace your job. A human who understands how to use AI will replace your job.” For healthcare, this means preparing staff not just to work with AI, but to lead with it — an approach that ultimately benefits providers, patients, and the healthcare system as a whole.