RadAI Slice Newsletter | Weekly Updates in Radiology AI
Good morning, there. Gemini 3.0 delivered one of the largest jumps in multimodal reasoning seen in any general model, including tasks closely tied to radiology and structured medical imaging. Its gains on MMMU Pro and RadLE suggest that general models are starting to handle complex visual reasoning tasks previously out of reach. While not clinical tools, these improvements point to faster progress in radiology-linked AI capabilities, especially in spatial interpretation and case-level consistency. How soon do you think general models will begin to support real radiology workflows at a meaningful level?
Here's what you need to know about Radiology AI last week:
- Gemini 3.0 Shows Big Gains on Radiology-Linked Benchmarks
- 🩻 Real Case Testing: Gemini 3.0 vs GPT 5.1 Inside RadAIChat
- Multicenter 216-Patient MRI-Pathology Model Outperforms Clinical Oncologic Prediction
- AI-Optimized DEXA Predicts 10-Year Mortality and Hip Fracture Risk
Plus: 4 newly released datasets & 4 new papers.
📊 Gemini 3.0 Shows Big Gains on Radiology-Linked Benchmarks
RadAI Slice: Google released Gemini 3.0 with strong jumps in structured visual reasoning, including tasks closely related to radiology.
The details:
- MMMU Pro: 68 to 81 percent, surpassing GPT 5.1 at 76 percent
- RadLE: first general AI to exceed the trainee baseline at 51 percent
- Large improvements in multimodal spatial reasoning
- Still constrained by single-image-style benchmarks
Key takeaway: Gemini 3.0 delivers the clearest signal yet that general AI models are improving in structured image understanding, but medical-grade reliability still requires richer benchmarks that mirror real radiology practice. Read full article
🩻 Real Case Testing: Gemini 3.0 vs GPT 5.1 Inside RadAIChat
RadAI Slice: We ran live tests of Gemini 3.0 Pro, GPT 5.1, and Gemini 2.5 inside RadAIChat using real cases from RSNA datasets.
The details:
- Gemini 3.0 produced the most stable reasoning among the three
- Better localization compared with Gemini 2.5 and GPT 5.1
- All models still missed subtle or non-obvious pneumonia cases
- Early signal that structured image reasoning is improving across models
Key takeaway: Gemini 3.0 led our live testing on reasoning stability and localization, but every model still missed subtle findings. If you have ideas on what we should compare next, reply to this email or DM us on X, or try it yourself on RadAIChat. Compare your images
🩺 Multicenter 216-Patient MRI-Pathology Model Outperforms Clinical Oncologic Prediction
RadAI Slice: A multicenter study integrates deep MRI features, pathomics, and clinical data for long-term breast cancer survival prediction.
The details:
- 216 women, with multicenter training and test validation
- Multimodal model AUC (training/test): 0.89/0.82 at 5 years, 0.91/0.87 at 7 years
- Clinical-only models: AUC below 0.53 across timepoints
- Model surpassed classic ER/HER2/TNBC markers
Key takeaway: Combining imaging and pathology with deep learning drives superior risk stratification for oncologic outcomes, aiding personalized breast cancer management beyond standard markers.
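The core idea behind a multimodal model like this can be sketched as late fusion: each modality (MRI, pathomics, clinical) produces its own risk score, and the scores are combined. The function names, weights, and scores below are hypothetical illustrations, not the study's actual pipeline:

```python
# Minimal late-fusion sketch: combine per-modality risk scores into a
# single prediction with fixed weights. All numbers are hypothetical.

def fuse_risk(mri_score: float, pathomics_score: float,
              clinical_score: float,
              weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted average of per-modality risk scores, each in [0, 1]."""
    scores = (mri_score, pathomics_score, clinical_score)
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)

# Hypothetical patient: high imaging risk, moderate pathomics, low clinical.
risk = fuse_risk(0.8, 0.6, 0.3)
```

In practice the fusion weights would be learned end to end rather than fixed, but the sketch shows why a multimodal score can outperform any single-modality input.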
🦴 AI-Optimized DEXA Predicts 10-Year Mortality and Hip Fracture Risk
RadAI Slice: Self-supervised AI extracts robust long-term risk from standard DEXA scans for timely osteoporosis and mortality prediction.
The details:
- Trained on 85,461 DEXA scans, validated in 17,000+ exams
- Predicted 10-year mortality (AUROC 0.70) and hip fracture (AUROC 0.74)
- Requires no added imaging or clinical data
- No extra patient burden: uses routine scans
Key takeaway: Routine DEXA exams can become powerful, actionable risk tools with minimal workflow change, amplifying radiology's strategic value for primary prevention.
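An AUROC like the 0.74 reported above has a concrete meaning: it is the probability that a randomly chosen positive case (e.g. a patient who fractures) is ranked above a randomly chosen negative one. A self-contained sketch of that pairwise definition, on toy data rather than the study's:

```python
# Pairwise (Mann-Whitney) AUROC: the fraction of positive/negative pairs
# ranked correctly, counting ties as half a win. Toy data only.

def auroc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Four toy exams: two positives, two negatives.
value = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 3 of 4 pairs correct
```

So an AUROC of 0.74 means roughly three out of four such pairs are ordered correctly, well above the 0.5 of a random ranker.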
MU-Glioma Post (2025)
Modality: MRI | Focus: Brain, CNS | Task: Segmentation, Outcome prediction
Size: 594 scans, 203 patients
Annotations: Manual segmentations of 4 tumor subregions (enhancing tumor, necrotic core, FLAIR hyperintensity, resection cavity), refined by neuroradiologists; clinical and genetic data
Institutions: University of Missouri, Washington University in St. Louis et al.
Availability:
Highlight: Serial postoperative MRIs with expert-refined segmentations and detailed clinical/molecular data
BCBM-RadioGenomics (2025)
Modality: MRI | Focus: Brain, Breast (metastasis) | Task: Segmentation, Radiogenomics prediction
Size: 297 scans, 165 patients
Annotations: Lesion segmentations by experts, radiomic features, molecular/genetic status (ER/PR/HER2)
Institutions: University of Minnesota, Stanford University et al.
Availability:
Highlight: First open dataset focused on breast-to-brain metastasis with expert segmentations, radiomic and genetic data.
AirRC (2025)
Modality: CT | Focus: Lung, pulmonary vasculature | Task: Segmentation, classification
Size: 254 CT scans, 254 patients
Annotations: Expert 3D masks for pulmonary veins, arteries, airway lumen, airway wall
Institutions: Shanghai Jiao Tong University, Tongji University et al.
Availability:
Highlight: First public dataset with co-registered, expert-verified 3D masks for both pulmonary vessels (arteries/veins) and detailed airway (lumen/wall) on LUNA16 scans.
FQS (2025-11-11)
Modality: Fundus | Focus: Retina | Task: Image quality assessment, classification
Size: 2,246 images; ~2,246 patients
Annotations: Continuous mean opinion scores (0-100) + 3-level category labels ('Good', 'Usable', 'Reject') from ophthalmologists
Institutions: Tsinghua University, Shenzhen Eye Hospital
Availability:
Highlight: First public fundus quality dataset with both continuous scores and three-level labels, enabling precise image quality assessment.
📄 Fresh Papers
- doi:10.1186/s12880-025-01988-4 - CNN-GCN deep learning model (1,002 pts, 4 centers) predicts axillary lymph node metastasis in breast cancer from DCE-MRI; external test AUC 0.87.
- doi:10.1161/CIRCIMAGING.125.018443 - ML model using 48 CCTA plaque features (8,431 pts) improved 3.7-year major cardiac event prediction over clinical risk (AUC 0.90).
- doi:10.1038/s41598-025-25037-w - Deep learning/ViT model (n=396, multicenter, external cohort) produces 5-year/7-year OS nomograms for advanced cervical cancer from CT; C-indices 0.78/0.73.
- doi:10.1038/s41746-025-02085-0 - UltraFedFM, a federated 1M-image ultrasound foundation model (16 institutions, 9 countries), matches expert sonographer accuracy (AUROC 0.93; Dice 0.88).
Browse 169 new radiology AI studies from last week.
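Federated training of the kind UltraFedFM uses keeps images at each hospital and shares only model parameters; the aggregation step at the heart of it (FedAvg-style) is a sample-weighted average of client weights. A toy sketch with hypothetical two-parameter models and site sizes, not the paper's actual system:

```python
# FedAvg-style aggregation sketch: average client model parameters,
# weighted by each site's sample count. Toy values, purely illustrative.

def fedavg(client_params, client_sizes):
    """Sample-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(dim)]

# Three hypothetical sites with different data volumes.
global_params = fedavg(
    client_params=[[0.2, 1.0], [0.4, 0.0], [0.6, 2.0]],
    client_sizes=[100, 300, 600],
)
```

The weighting by site size is why a 16-institution setup can behave like training on the pooled million images without any image ever leaving a hospital.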
That's it for today! Before you go, we'd love to know what you thought of today's newsletter so we can improve the RadAI Slice experience for you.
👋 Quick favor: drag this into your Primary tab so you don’t miss next week. Or just hit Reply with one thought. See you next week.
P.S. We keep building free tools to accelerate your radiology work. What's the most time-consuming pain point in your day that we should help speed up? Reply and share your take so we keep building around you.