
Stanford research shows agentic LLMs can safely draft hospital discharge summaries, reducing physician burnout with minimal risk of patient harm.
Key Details
- Stanford study assessed AI-generated hospital discharge summaries over a 10-week period in 2025.
- Physicians incorporated AI content in 57% of 384 discharge events; 219 AI summaries were accepted.
- Physician reviewers rated 98% of AI-generated notes as having a low or extremely low likelihood of harm.
- 88% of unedited summaries were rated as having no harm potential; only one summary was considered likely to cause moderate harm.
- Most common issues were omissions (25 cases) and inaccuracies (20 cases); hallucinations were flagged only twice (2%).
- Physician burnout scores dropped from 1.75 to 1.20 (on a 0–4 scale) after LLM implementation.
Why It Matters
This study provides evidence supporting the integration of LLMs into clinical practice with manageable safety risks, especially given the significant improvement in physician wellbeing outcomes. Findings highlight cognitive offloading as a potentially greater benefit than raw efficiency, with implications for AI rollout in radiology and related documentation-heavy fields.

Source
HealthExec
Related News

• Radiology Business
AI Model Outperforms Radiologists in Early Pancreatic Cancer Detection
REMOD, a new AI model, detects pancreatic cancer on CT scans much earlier and more accurately than radiologists.

• Radiology Business
AI Model Identifies Colorectal Cancer on Routine Noncontrast CT Scans
Researchers introduce the COCA AI tool to detect colorectal cancer opportunistically on routine noncontrast CT scans.

• AuntMinnie
ChatGPT Demonstrates High Diagnostic Agreement on FDG-PET/CT Reports
ChatGPT-4o and ChatGPT-5 matched or surpassed nuclear medicine experts in diagnosing neurodegenerative diseases from textual FDG-PET/CT scan descriptions.