
Hallucination, Harm & Hype: Can LLMs Be Trusted in Healthcare?

In this thought-provoking panel hosted by Airmeet, leading voices from AI, healthcare, and policy explore how large language models (LLMs) can—and can't—safely transform clinical practice.

Moderated by Dan Roth (Chief AI Scientist, Oracle), the discussion features:
• Duncan Eddy (Stanford Center for AI Safety)
• Maneesh Goyal (COO, Mayo Clinic Platform)
• Gagan Bansal (Microsoft Research)
• Anil Jain (Chief Innovation Officer, Innovaccer)
• Soroush Saghafian (Professor, Harvard University)

Topics include:
• Reducing hallucinations in LLMs used in high-stakes environments
• Designing guardrails, agents, and evaluation frameworks for healthcare AI
• Regulatory and ethical frameworks for safe deployment
• Trust, transparency, and explainability in human-AI collaboration
• Scaling innovation without compromising safety or public trust

This webinar is essential viewing for healthcare professionals, AI developers, policymakers, and anyone invested in the future of responsible AI in medicine.

📅 Recorded: June 2, 2025
🎥 Watch now and join the conversation.
