Stay Ahead of the Curve in Radiology AI.

RadAI Slice is your weekly intelligence briefing on the most critical developments at the intersection of radiology and artificial intelligence. Stop searching. Start leading.

Your Weekly Slice of Innovation

Each issue is precisely structured to give you exactly what you need. No fluff, just facts and forward-looking insights.

Recent Industry News

The Latest Research

FDA Approvals Database

From the Research Hub

MRI · Image Synthesis · Abdominal

Dynamic abdominal MRI image generation using cGANs: A generalized model for various breathing patterns with extensive evaluation.

Organ motion is a limiting factor during the treatment of abdominal tumors. During abdominal interventions, medical images are acquired to provide guidance; however, this increases operative time and radiation exposure. In this paper, conditional generative adversarial networks are implemented to generate dynamic magnetic resonance images using external abdominal motion as a surrogate signal. The generator was trained to account for breathing variability, and different models were investigated to improve motion quality. Additionally, objective and subjective studies were conducted to assess image and motion quality. The objective study included metrics such as the structural similarity index measure (SSIM) and mean absolute error (MAE). In the subjective study, 32 clinical experts evaluated the generated images by completing different tasks. The tasks involved identifying images and videos as real or fake via a questionnaire, allowing experts to assess realism in static images and dynamic sequences. The best-performing model achieved an SSIM of 0.73 ± 0.13, and the MAE was below 4.5 mm and 1.8 mm for the superior-inferior and anterior-posterior directions of motion, respectively. The proposed framework was compared to a related method that used convolutional neural networks combined with recurrent layers. In the subjective study, more than 50% of the generated images and dynamic sequences were classified as real in all but one task. Synthetic images have the potential to reduce the need for intraoperative image acquisition, decreasing time and radiation exposure. A video summary can be found in the supplementary material.

Cordón-Avila A, Ballı ÖF, Damme K, et al. · Computers in Biology and Medicine
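The abstract above reports SSIM and MAE as image-quality metrics for the generated MRI frames. As a rough illustration of what those metrics measure, here is a minimal NumPy sketch: a simplified single-window SSIM computed from global image statistics (published results typically use a sliding Gaussian window) and a per-pixel MAE. All variable names are illustrative, not from the paper.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM using global statistics over the whole image.
    (Reference implementations use a sliding window; this is a sketch.)"""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def mae(x, y):
    """Mean absolute error between two images."""
    return np.abs(x - y).mean()

# Toy example: compare a "synthetic" slice against a reference slice.
rng = np.random.default_rng(0)
real = rng.random((64, 64))
fake = np.clip(real + rng.normal(0.0, 0.05, real.shape), 0.0, 1.0)
print(global_ssim(real, fake), mae(real, fake))
```

An identical image pair yields SSIM = 1 and MAE = 0; the added noise pulls both away from their ideal values.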
Ultrasound · Classification · Breast

Efficient Ultrasound Breast Cancer Detection with DMFormer: A Dynamic Multiscale Fusion Transformer.

To develop an advanced deep learning model for accurate differentiation between benign and malignant masses in ultrasound breast cancer screening, addressing the challenges of noise, blur, and complex tissue structures in ultrasound imaging. We propose Dynamic Multiscale Fusion Transformer (DMFormer), a novel Transformer-based architecture featuring a dynamic multiscale feature fusion mechanism. The model integrates window attention for local feature interaction with grid attention for global context mixing, enabling comprehensive capture of both fine-grained tissue details and broader anatomical contexts. DMFormer was evaluated on two independent datasets and compared against state-of-the-art approaches, including convolutional neural networks, Transformer-based architectures, and hybrid models. The model achieved areas under the curve of 90.48% and 86.57% on the respective datasets, consistently outperforming all comparison models. DMFormer demonstrates superior performance in ultrasound breast cancer detection through its innovative dual-attention approach. The model's ability to effectively balance local and global feature processing while maintaining computational efficiency represents a significant advancement in medical image analysis. These results validate DMFormer's potential for enhancing the accuracy and reliability of breast cancer screening in clinical settings.

Guo L, Zhang H and Ma C · Ultrasound in Medicine & Biology
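The DMFormer abstract describes pairing window attention (local feature interaction) with grid attention (global context mixing). The toy NumPy sketch below shows the two token groupings behind that idea: window partitioning attends within contiguous patches, while grid partitioning attends across strided samples spanning the whole image. This is not the DMFormer implementation; shapes, names, and the single-head attention are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(tokens):
    # Plain scaled dot-product self-attention within each token group.
    d = tokens.shape[-1]
    scores = tokens @ tokens.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores) @ tokens

def window_partition(x, w):
    # (H, W, C) -> (num_windows, w*w, C): contiguous local neighbourhoods.
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, w * w, C)

def grid_partition(x, g):
    # (H, W, C) -> (num_cells, g*g, C): each cell samples the full image
    # at a fixed stride, giving every group a global footprint.
    H, W, C = x.shape
    x = x.reshape(g, H // g, g, W // g, C).transpose(1, 3, 0, 2, 4)
    return x.reshape(-1, g * g, C)

x = np.random.default_rng(0).random((8, 8, 16))
local_out = attend(window_partition(x, 4))   # fine-grained tissue detail
global_out = attend(grid_partition(x, 4))    # broader anatomical context
```

Both partitions keep attention cost quadratic only in the group size rather than in the full token count, which is the efficiency argument behind such dual-attention designs.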
Ultrasound · Detection · Other

Artificial Intelligence in Prenatal Ultrasound: A Systematic Review of Diagnostic Tools for Detecting Congenital Anomalies

Background: Artificial intelligence (AI) has shown promise in interpreting ultrasound imaging through flexible pattern recognition and algorithmic learning, but implementation in clinical practice remains limited. This study aimed to investigate the current application of AI in prenatal ultrasound to identify congenital anomalies, and to synthesise the challenges and opportunities for advancing AI-assisted ultrasound diagnosis. The analysis addresses the translation gap between AI performance metrics and practical implementation in prenatal care.

Methods: Systematic searches were conducted in eight electronic databases (CINAHL Plus, Ovid/EMBASE, Ovid/MEDLINE, ProQuest, PubMed, Scopus, Web of Science and Cochrane Library) and Google Scholar from inception to May 2025. Studies were included if they applied an AI-assisted ultrasound diagnostic tool to identify a congenital anomaly during pregnancy. The review adhered to PRISMA guidelines, and study quality was evaluated using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM).

Findings: Of 9,918 records, 224 were identified for full-text review and 20 met the inclusion criteria. The majority of studies (11/20, 55%) were conducted in China, and most were published after 2020 (16/20, 80%). All AI models were developed as assistive tools for anomaly detection or classification. Most models (85%) focused on single-organ systems: heart (35%), brain/cranial (30%), or facial features (20%), while three studies (15%) attempted multi-organ anomaly detection. Half of the included studies reported exceptionally high model performance, with both sensitivity and specificity exceeding 0.95 and AUC-ROC values ranging from 0.91 to 0.97. Most studies (75%) lacked external validation, with internal validation often limited to small training and testing datasets.

Interpretation: While AI applications in prenatal ultrasound show potential, current evidence indicates significant limitations in their practical implementation. Much work is required to optimise their application, including external validation of diagnostic models to establish real-world clinical utility. Future research should prioritise larger-scale multi-centre studies, multi-organ anomaly detection capabilities rather than the current single-organ focus, and robust evaluation of AI tools in real-world clinical settings.

Dunne J, Kumarasamy C, Belay DG, et al. · medRxiv

The Sharpest Insights, Effortlessly

Save Time

We scour dozens of sources so you don't have to. Get all the essential information in a 5-minute read.

Stay Informed

Never miss a critical update. Understand the trends shaping the future of your practice and research.

Gain an Edge

Be the first to know about the tools and technologies that matter, from clinical practice to academic research.

Ready to Sharpen Your Edge?

Join hundreds of your peers who rely on RadAI Slice. Get the essential weekly briefing that empowers you to navigate the future of radiology.

We respect your privacy. Unsubscribe at any time.