October 18, 2024

Why Healthcare Isn’t Ready for AI Agents—Yet

Don't feel like reading? No worries, you can also listen to the AI-created podcast version of this post while you touch some grass.

A few months ago, Hippocratic AI launched its AI Agents, designed for tasks like pre-operative preparation, wellness coaching, post-discharge planning, and more.

I truly believe AI Agents have the potential to outperform clinicians in certain areas. Hippocratic AI is putting in the work to gather safety data and show that these agents can actually deliver results. But adoption is still going to be an uphill battle.

On the tech side, we’ve got what we need. AI Agents can, in theory, help keep patients healthier than many clinicians can right now. And that’s not because clinicians lack skill—far from it. It’s because of the realities of healthcare today: clinician shortages, burnout, and the constant pressure of seeing too many patients every day. AI doesn’t get tired, and it can scale up effortlessly.

But tech isn’t the only piece of the puzzle. From a societal standpoint, we’re just not there yet. The idea of letting AI take charge of critical healthcare decisions is a huge leap. And without the right regulations, accountability, and a shift in public mindset, it’s going to be tough for AI Agents to gain widespread acceptance—even if the tech is ready.

Put simply: if AI makes an error, it’s the doctor who goes to jail, not the engineer. Until we have clear rules outlining who’s responsible when something goes wrong, doctors will always assume the liability falls on them.

This hesitation shouldn’t come as a surprise. If we look at politics, we see a similar dynamic at play with the Overton Window—the range of ideas the public finds acceptable at any given time. AI Agents might be technically feasible, but socially, they sit outside that window right now. Like any major change, the idea of AI in healthcare needs time to enter mainstream discourse and gain public trust before it can be fully embraced.

A recent Bain & Company study supports this. On the patient side, many people are more comfortable with AI reading their radiology scans than with AI handling something as simple as answering the phone at their doctor’s office. Surprisingly, even AI-driven note-taking—a relatively low-risk task—barely earns patient approval.

And it’s not just patients; providers, too, are hesitant. While physicians and administrators appreciate AI’s potential to reduce administrative burdens, they worry about how it could disrupt the patient-clinician relationship.

AI Actions Are the Right Next Step for Healthcare

Does this mean we won’t see AI in clinical practice at all? Absolutely not. While investors are pouring millions into AI agents every week, I believe the solution lies in a phased approach—one where providers gradually get more comfortable with AI over time, rather than aiming for a big bang transformation right away.

The reality is that building trust and navigating the complexities of healthcare requires a steady, step-by-step integration. AI can play a pivotal role, but it has to be introduced in a way that enhances, rather than disrupts, the existing patient-care team dynamic.

At Awell, we’ve embraced this mindset. We recently launched AI-powered actions within our platform, which let care teams keep full control over the IFTTT (If This, Then That) logic of their care flows. Care flows don’t become a black box: AI is integrated strategically to handle specific tasks that would normally require human intervention, but it doesn’t take over. It steps in where it can make a real difference, while care teams remain in the driver’s seat, ensuring transparency and oversight.
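To make that concrete, here’s a minimal, hypothetical sketch in TypeScript—the names and data shapes are illustrative, not our actual SDK—of what IFTTT-style routing around an AI action can look like. The model fills in data, but the “if this, then that” branching is declared by the care team and stays readable and auditable.

```typescript
// Hypothetical sketch (not Awell's actual SDK): an IFTTT-style care flow step
// where an AI action produces a result and explicit rules decide what happens next.

type StepResult = { stepId: string; output: Record<string, unknown>; confidence: number };

interface Rule {
  // "If this..." — a plain predicate the care team can read and audit
  when: (result: StepResult) => boolean;
  // "...then that" — the next step in the flow, never chosen by the model itself
  then: string;
}

// Rules are declared by the care team, so the flow stays transparent:
// the AI fills in data, but routing decisions remain explicit and reviewable.
const rules: Rule[] = [
  { when: (r) => r.confidence < 0.8, then: "queue_for_clinician_review" },
  { when: (r) => r.confidence >= 0.8, then: "send_patient_confirmation" },
];

function nextStep(result: StepResult): string {
  const match = rules.find((rule) => rule.when(result));
  // Fall back to human review if no rule matches — a conservative default.
  return match ? match.then : "queue_for_clinician_review";
}

// Example: an AI action returned extracted data with moderate confidence.
console.log(
  nextStep({ stepId: "extract_medication", output: { name: "Metformin" }, confidence: 0.72 })
); // -> "queue_for_clinician_review"
```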

For example, one of the AI Actions we recently launched is our Medication Extractor AI Action. Patients can simply take a picture of their medication, and the AI will fill in the name, dosage, and other details—a task that would typically take a human several minutes to complete. A care team member can then quickly review and approve the information, keeping the process efficient but still under human oversight.
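A rough sketch of that human-in-the-loop pattern—again with purely illustrative names rather than the actual Medication Extractor API—might look like this: the AI only ever produces a draft, and nothing is recorded until a care team member approves it.

```typescript
// Hypothetical sketch of the human-in-the-loop pattern described above
// (illustrative names only — not Awell's actual Medication Extractor action).

interface MedicationDraft {
  name: string;
  dosage: string;
  frequency?: string;
  status: "pending_review" | "approved" | "rejected";
}

// Assume some AI extraction service returns structured fields from a photo;
// it's stubbed here so the example stays self-contained.
async function extractMedication(_photo: Uint8Array): Promise<MedicationDraft> {
  // In practice this would call a vision/LLM extraction service.
  return { name: "Lisinopril", dosage: "10 mg", frequency: "once daily", status: "pending_review" };
}

// The AI only produces a draft; a care team member makes the final call.
function approve(draft: MedicationDraft, reviewer: string): MedicationDraft {
  console.log(`${reviewer} approved ${draft.name} ${draft.dosage}`);
  return { ...draft, status: "approved" };
}

async function run() {
  const draft = await extractMedication(new Uint8Array());
  const record = approve(draft, "nurse.jane");
  console.log(record.status); // -> "approved"
}

run();
```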

Looking ahead, we’re continuing to develop AI-powered actions that our customers can seamlessly integrate into their care flows. From summarizing forms and care flows to categorizing messages and generating personalized ones, these actions are designed to take repetitive tasks off care teams’ plates while maintaining transparency and oversight. We believe these incremental AI actions will help ease the transition, allowing providers to adopt AI gradually and build trust as the technology evolves.

At the same time, we remain bullish on the future of autonomous AI agents and are already implementing them at scale. For instance, we’re partnering with a Voice AI company whose agent engages with over 120,000 patients each month, handling tasks like post-discharge follow-ups and weekly check-ins. While the AI handles the conversations, Awell’s orchestration engine controls the next steps using IFTTT logic.
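Conceptually, the hand-off looks something like the hypothetical sketch below: the voice agent returns a structured outcome from the call, and the orchestration layer—not the model—decides what happens next.

```typescript
// Hypothetical sketch of how a completed voice AI conversation might feed back
// into IFTTT-style orchestration (names are illustrative, not an actual integration).

type CallOutcome = {
  patientId: string;
  reachedPatient: boolean;
  reportedSymptoms: string[];
};

// The voice agent handles the conversation; the orchestration layer owns the next step.
function decideNextStep(outcome: CallOutcome): string {
  if (!outcome.reachedPatient) return "retry_call_tomorrow";
  if (outcome.reportedSymptoms.length > 0) return "escalate_to_care_team";
  return "schedule_next_weekly_checkin";
}

const outcome: CallOutcome = {
  patientId: "pt_123",
  reachedPatient: true,
  reportedSymptoms: ["shortness of breath"],
};

console.log(decideNextStep(outcome)); // -> "escalate_to_care_team"
```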

This is just the beginning—we’re excited to push the boundaries alongside innovative organizations like Hippocratic AI and our forward-thinking customers. Together, we’re shaping the future of healthcare, making AI a powerful tool for better care.

PS: Call +12133204783 if you want to speak to a CareOps sales rep!
