RadAI Slice Newsletter: Weekly Updates in Radiology AI
Good morning. AI-driven CT analysis of lymph node extranodal extension (ENE) improved risk prediction in 1,733 head and neck cancer patients. I was struck by how this multicenter study shows real-world AI utility for complex, high-impact cancer decisions. Incorporating automated ENE quantification improved risk stratification and survival prediction, showing clear added value over current staging. This feels like a practical leap: imaging AI reshaping how radiologists contribute to clinical decision-making and multidisciplinary care. How ready are you to add AI-derived risk metrics to your cancer reports?
Here's what you need to know about Radiology AI last week:
- AI boosts head and neck cancer CT risk prediction in multicenter study
- LLMs flag high-priority radiology reports with strong test accuracy
- Multimodal ML improves 5-year mortality prediction after PCI
- Automated radiomics, text, and image fusion improves ESCC chemoimmunotherapy response prediction

Plus: 1 newly released dataset, 3 FDA-cleared devices & 4 new papers.
🔬 AI boosts head and neck cancer CT risk prediction in multicenter study

RadAI Slice: AI-based ENE detection on CT improved survival risk models in oropharyngeal carcinoma.

The details:
- Multicenter study of 1,733 patients across three institutions
- Automated lymph node ENE detection via deep learning from CT scans
- Independently associated with distant control (HR 1.44) and overall survival (HR 1.30)
- AI-ENE improved risk-grouping C-indices over traditional staging
- Larger benefits observed in HPV-negative patients
Key takeaway: This large, externally validated AI tool moves complex lymph node interpretation from pathology to pre-treatment imaging, enabling more personalized, accurate risk stratification and possibly altering radiation and surgical planning workflows.
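Since the headline metric here is the concordance index (C-index), a minimal pure-Python sketch of how it is computed for survival data may help; the follow-up times, event flags, and risk scores below are invented for illustration, not taken from the study.

```python
def c_index(times, events, risks):
    """Harrell's C-index: fraction of comparable patient pairs whose
    predicted risks agree with the observed survival ordering.
    A pair (i, j) is comparable when the patient with the earlier
    follow-up time actually had an event (censored pairs are skipped)."""
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # i must have an observed event and an earlier time than j
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

times  = [5, 12, 20, 30]       # months of follow-up (toy values)
events = [1, 1, 0, 0]          # 1 = death observed, 0 = censored
risks  = [0.9, 0.6, 0.7, 0.2]  # model-predicted risk scores

print(round(c_index(times, events, risks), 2))  # → 0.8
```

A C-index of 0.5 is chance-level ordering and 1.0 is perfect, which is why even modest C-index gains over traditional staging are clinically meaningful.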
🚨 LLMs flag high-priority radiology reports with strong test accuracy

RadAI Slice: Fine-tuned LLMs accurately detected urgent findings in radiology reports from routine practice.

The details:
- Tested on 176 reports with balanced critical vs. non-critical cases
- Best LLM achieved ROC-AUC 0.968, PR-AUC 0.962, F1 0.916
- Inputs used: radiology findings and referring department
- No extra benefit from adding the pre-exam diagnosis
- Could support faster communication of urgent results
Key takeaway: Deploying LLMs to identify actionable findings may optimize radiologist workflow, reduce error risk, and speed up escalation of care, especially as imaging volumes and reporting demands increase.
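For readers less familiar with the metrics quoted above, here is a minimal pure-Python sketch of how ROC-AUC and F1 are computed for a binary "critical report" classifier; the labels and scores are toy values, not the study's model outputs.

```python
def roc_auc(labels, scores):
    """ROC-AUC as the probability that a randomly chosen positive
    outranks a randomly chosen negative (Mann-Whitney formulation)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(labels, scores, threshold=0.5):
    """F1: harmonic mean of precision and recall at a fixed threshold."""
    preds = [int(s >= threshold) for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    return 2 * tp / (2 * tp + fp + fn)

labels = [1, 1, 1, 0, 0, 0]                # 1 = critical finding
scores = [0.9, 0.8, 0.3, 0.35, 0.2, 0.1]   # toy model probabilities

print(round(roc_auc(labels, scores), 3))   # → 0.889
print(round(f1(labels, scores), 3))        # → 0.8
```

Note that ROC-AUC is threshold-free (it scores the ranking), while F1 depends on the operating threshold chosen for escalation, which matters when deciding how aggressively to flag reports.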
🧬 Multimodal ML improves 5-year mortality prediction after PCI

RadAI Slice: A large-scale, real-world study shows patient outcomes are better predicted by models integrating imaging, text, and EMR data.

The details:
- 10,353-patient cohort; 5-year all-cause mortality endpoint
- Model uses CT angiography video, procedural text (BioBERT), and EMR fields
- Trimodal AUC-ROC of 0.814, superior to single-source models
- Explains predictions with SHAP for greater transparency
Key takeaway: Multimodal ML leveraging CT images, reports, and structured data is practical in large cohorts and may advance individualized cardiac risk models beyond traditional scores.
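A hedged sketch of the late-fusion idea behind trimodal models like this one: per-modality feature vectors are concatenated into one representation before a single risk model scores the patient. All names, feature values, and weights below are hypothetical illustrations, not the study's actual architecture.

```python
import math

def fuse(imaging, text, emr):
    """Late fusion by concatenation: one combined feature vector."""
    return imaging + text + emr

def risk_score(features, weights, bias=0.0):
    """Logistic risk: sigmoid of a weighted sum, standing in for the
    trained trimodal model head."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

imaging = [0.7, 0.1]    # e.g. pooled CT-angiography video features (toy)
text    = [0.3]         # e.g. one BioBERT report-embedding dimension (toy)
emr     = [64.0 / 100]  # e.g. scaled age from structured EMR (toy)

x = fuse(imaging, text, emr)
w = [0.8, -0.5, 1.2, 0.9]  # illustrative "learned" weights

print(round(risk_score(x, w), 3))  # a probability in (0, 1)
```

One appeal of this design is that a linear fusion head keeps per-feature attributions (e.g. via SHAP) directly interpretable, which matches the study's emphasis on transparency.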
🧪 Automated radiomics, text, and image fusion improves ESCC chemoimmunotherapy response prediction

RadAI Slice: A validated AI fusion of CT and pathology slides offers accurate, transparent pCR prediction for ESCC patients pre-surgery.

The details:
- Three-center, 335-patient cohort treated with neoadjuvant chemoimmunotherapy
- Model fuses CT radiomics and H&E "pathomics" features
- AUC 0.97 in training; 0.78 and 0.76 in holdout and external test sets
- Model explains its reasoning both per-case and cohort-wide
Key takeaway: This clinically relevant, interpretable model supports decision-making about surgery vs. further surveillance in ESCC, showing the rising role of multimodal, explainable AI in oncologic radiology.
Pixel-Level Tear Meniscus Segmentation Dataset (TMH-MM) (2025-12-11)
- Modality: Ocular imaging (color, infrared)
- Focus: Lower eyelid, tear meniscus
- Task: Segmentation, quantification
- Size: 1,693 color + 1,739 infrared images; healthy patients; 5 Chinese centers
- Annotations: Pixel-level masks for tear meniscus and central pupillary area; reviewed by experts
- Institutions: Wenzhou Medical University, Zhejiang Normal University, et al.
- Availability:
- Highlight: First multicenter, multimodal, expert-verified, pixel-level dataset for tear meniscus segmentation in dry eye.
🏛️ FDA Clearances
- K253489 - Hyperfine Swoop Portable MRI receives 510(k) clearance as a point-of-care MR imaging solution, supporting rapid bedside neuroimaging in diverse practice settings.
- K251883 - MIM LesionID Pro receives FDA 510(k) clearance, supporting multi-modality lesion tracking and segmentation for radiology workflow efficiency.
- K252294 - Fetal EchoScan (v1.2) cleared as a CADe solution, analyzing medical images to help detect lesions suspicious for cancer.

Explore last week's 5 radiology AI FDA approvals.
📄 Fresh Papers
- doi:10.1016/j.jacr.2025.12.024 - GPT-4o offered automated, helpful feedback on 5,000 radiology resident breast imaging reports, aligning closely with attending radiologist consensus.
- doi:10.1093/bjr/tqaf309 - Meta-analysis of 14 models shows AI detects pneumoperitoneum on radiography with 83.6% sensitivity and 92.9% specificity, aiding emergency diagnosis.
- doi:10.1038/s41598-025-31967-2 - A federated learning AI combining RegNetZ and Swin-Transformer achieves 99% AUC for pancreatic cancer detection across multiple institutions and modalities.
- doi:10.64898/2025.12.21.25342791 - A deep learning framework using multimodal MRI predicted motor decline and subtypes in 268 Parkinson's patients, outperforming previous models.

Browse 148 new radiology AI studies from last week.
That's it for today! Before you go, we'd love to know what you thought of today's newsletter so we can keep improving the RadAI Slice experience for you.
📌 Quick favor: drag this into your Primary tab so you don't miss next week. Or just hit Reply with one thought. See you next week.
P.S. We keep building free tools to accelerate your radiology work. What's the most time-consuming pain point in your day that we should help speed up? Reply and share your take so we keep building around you.