RadAI Slice is your weekly intelligence briefing on the most critical developments at the intersection of radiology and artificial intelligence. Stop searching. Start leading. Subscribe to join 11k+ radiology professionals tracking the future of imaging AI.
The latest developments in Radiology & AI.
Each issue is precisely structured to give you exactly what you need. No fluff, just facts and forward-looking insights.

Researchers have released LazySlide, an open-source tool leveraging AI for advanced, interoperable digital pathology image analysis.

Researchers demonstrate an AI-powered OCT system for objective, non-invasive wound healing assessment.

Researchers have developed an AI tool that accurately diagnoses advanced heart failure using cardiac ultrasound and patient health records.
This study developed a Generative Adversarial Network (GAN) to generate virtual T2 fat-suppressed (T2FS) sequences from standard T1- and T2-weighted images, with the clinical objective of reducing MRI scan time without compromising diagnostic value for spinal tumor assessment.

The retrospective study included 1,389 consecutive patients with spinal tumors from two institutions, divided into training (n = 1,026; 49.2 ± 16.4 years; 540 males), internal validation (n = 257; 48.2 ± 17.2 years; 140 males), and external test (n = 106; 52.8 ± 17.0 years; 59 males) sets. The model used T1- and T2-weighted images as input to generate T2FS images. Quantitative image fidelity was evaluated with mean squared error (MSE), structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR); the Dice similarity coefficient (DSC) assessed lesion segmentation, and signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) measured objective quality. Two experienced radiologists independently rated the images on a 5-point scale, evaluating overall quality, tumor detail preservation, fat suppression performance, and artifacts.

On the external test set, the model achieved an MSE of 0.0060 ± 0.0038, SSIM of 0.667 ± 0.120, and PSNR of 23.368 ± 4.298 dB. Real and synthetic images agreed strongly in lesion segmentation, with mean DSC of 0.820 ± 0.176 (internal) and 0.807 ± 0.188 (external). SNR and CNR were comparable between real and synthetic images in both datasets. Qualitative assessments indicated equivalent overall image quality and artifact levels. The authors conclude that the proposed GAN-based method generated diagnostically valuable virtual T2FS images.

Key findings:
1. The proposed deep learning model successfully generated virtual T2FS images from standard T1/T2 MRI, demonstrating favorable quantitative agreement (MSE 0.0060, SSIM 0.667).
2. Synthetic and real images demonstrated strong consistency in lesion segmentation (DSC 0.809-0.824) and comparable SNR/CNR values.
3. While synthetic images provided superior fat suppression, real images maintained slightly better tumor internal detail visualization.
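For readers unfamiliar with the fidelity metrics above, here is a minimal illustrative sketch (not the paper's code) of how MSE, PSNR, and the Dice coefficient could be computed for a real vs. synthetic image pair; the array names, shapes, and noise level are assumptions for the example.

```python
import numpy as np

def mse(real, synth):
    """Mean squared error between two images scaled to [0, 1]."""
    return float(np.mean((real - synth) ** 2))

def psnr(real, synth, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, max_val]."""
    return float(10 * np.log10(max_val ** 2 / mse(real, synth)))

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary lesion masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return float(2 * inter / (mask_a.sum() + mask_b.sum()))

# Toy data: a synthetic image simulated as a slightly noisy copy of the real one.
rng = np.random.default_rng(0)
real = rng.random((64, 64))
synth = np.clip(real + rng.normal(0, 0.05, real.shape), 0, 1)

print(round(psnr(real, synth), 1))      # higher dB = closer to the real image
print(dice(real > 0.5, synth > 0.5))    # 1.0 = perfect lesion overlap
```

In the study, the same comparisons were made per patient and then averaged, which is where the ± values in the reported MSE, PSNR, and DSC come from.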
Multi-modal medical image fusion aims to combine images from different modalities to leverage their complementary strengths and mitigate the limitations of individual imaging techniques. In recent years, deep learning-based approaches have become the dominant direction, surpassing traditional methods in this field. However, existing medical image fusion methods struggle to balance local feature extraction with global context representation, and to effectively capture the specificity and complementarity of different modalities. To overcome these limitations, the authors propose a Multi-Role Collaborative Experts Network, termed MRCE-Net, for multi-modal medical image fusion. Specifically, a dual-branch encoder extracts modality-specific features from each modality, integrating a window-based Transformer for local feature extraction with a global channel-based Transformer for capturing long-range contextual dependencies, effectively balancing both aspects. In addition, a Multi-Role Collaborative Experts fusion module enables specialized experts to jointly model distinct aspects of multi-modal features, with a particular focus on capturing both modality-specific characteristics and inter-modality complementarity. By exploiting the synergistic capabilities of the experts, the framework achieves more comprehensive feature representation and more accurate fusion results. Extensive experiments on a public multi-modal medical image fusion benchmark and an in-house brain anatomical and functional imaging dataset demonstrate that the method outperforms state-of-the-art approaches in both visual quality and quantitative performance. The source code will be made publicly available upon publication at https://github.com/Dpw506/MRCENet.
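As a rough illustration of the expert-fusion idea (not the MRCE-Net implementation, whose internals are not specified in this summary), a mixture-of-experts layer combines the outputs of several experts using a learned gate; all names and shapes below are invented for the toy example.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_fuse(feat_a, feat_b, expert_weights, gate_weights):
    """Fuse two modality feature vectors with gated experts (toy sketch).
    feat_a, feat_b: (d,) features from the two modalities.
    expert_weights: (n_experts, 2d, d) per-expert linear maps.
    gate_weights: (2d, n_experts) gating network.
    """
    x = np.concatenate([feat_a, feat_b])                    # (2d,) joint features
    expert_out = np.stack([x @ w for w in expert_weights])  # (n_experts, d)
    gate = softmax(x @ gate_weights)                        # (n_experts,) mixing weights
    return gate @ expert_out                                # (d,) fused feature

rng = np.random.default_rng(0)
d, n_experts = 4, 3
fused = moe_fuse(rng.random(d), rng.random(d),
                 rng.normal(size=(n_experts, 2 * d, d)),
                 rng.normal(size=(2 * d, n_experts)))
print(fused.shape)  # (4,)
```

In MRCE-Net the experts are reportedly specialized for different roles (modality-specific vs. complementary features) rather than being interchangeable linear maps as in this toy version.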
Background: A critical radiologist shortage exists in India, leading to delayed chest radiograph (CXR) interpretation and, in turn, to disease progression, higher morbidity, and mortality. Artificial intelligence-based CXR interpretation by the Lenek Intelligent Radiology Assistant (LIRA) is a promising solution. This study aims to establish the screening and triaging capabilities of LIRA by assessing its accuracy in detecting abnormalities and pathologies in CXRs from geographically diverse institutions.

Methods: We conducted a retrospective multi-source validation of the diagnostic accuracy of LIRA for the detection of general abnormalities, tuberculosis, consolidation, pleural effusion, pneumothorax, and cardiomegaly. De-identified chest radiographs were input into LIRA models, and the resulting interpretations were compared to the established ground-truth reports to calculate sensitivity, specificity, and AUROC with 95% CIs for individual pathologies across varying probability thresholds.

Results: LIRA demonstrated high sensitivity for general abnormality detection (AUROC 0.93-0.986, 84.4-97.1% sensitivity, 88.9-92.4% specificity) and tuberculosis triaging (Shenzhen & Montgomery: 88.5-89.7% sensitivity, 89.9-90.5% specificity; Jaypee: 98.7% sensitivity, 63.6% specificity). The model was also commendably accurate for consolidation (AUROC 0.884-0.895, 96.4-96.9% sensitivity, 70.8-77.1% specificity), pleural effusion (AUROC 0.942-0.967, 79.7-99.1% sensitivity, 81.2-87.7% specificity), pneumothorax (AUROC 0.87, 90.6-94.8% sensitivity, 79.5-82.7% specificity), and cardiomegaly (AUROC 0.883, 95.1% sensitivity, 81.6% specificity).

Conclusions: The diagnostic performance of LIRA was consistent across pathologies and across chest radiographs from diverse geographic locations, with particular strengths in abnormality detection and tuberculosis screening.
The risk-stratified triaging and high sensitivity of LIRA make it a reliable adjunct to address radiologist shortages, reduce turnaround times, and support India's tuberculosis elimination goals.
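The "varying probability thresholds" above reflect a standard screening trade-off: lowering the operating threshold raises sensitivity at the cost of specificity. A minimal sketch (not LIRA's pipeline; the scores and labels are invented):

```python
# Illustrative only: sensitivity and specificity of a classifier
# at a chosen probability threshold. Data below is made up.

def sensitivity_specificity(scores, labels, threshold):
    """labels: 1 = pathology present, 0 = absent; scores: model probabilities."""
    preds = [s >= threshold for s in scores]
    tp = sum(p and l == 1 for p, l in zip(preds, labels))
    fn = sum((not p) and l == 1 for p, l in zip(preds, labels))
    tn = sum((not p) and l == 0 for p, l in zip(preds, labels))
    fp = sum(p and l == 0 for p, l in zip(preds, labels))
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.92, 0.45, 0.35, 0.67, 0.10, 0.55, 0.05, 0.88]
labels = [1,    1,    0,    1,    0,    0,    0,    1   ]

# A lower threshold catches more true cases (higher sensitivity)
# but flags more healthy patients (lower specificity).
for t in (0.3, 0.6):
    sens, spec = sensitivity_specificity(scores, labels, t)
    print(f"threshold={t}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Sweeping the threshold over all values and plotting sensitivity against (1 - specificity) yields the ROC curve whose area is the AUROC reported for each pathology.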
DePuy Ireland UC
VELYS™ Hip Navigation is a system designed to assist clinicians during hip surgeries by processing radiological images to improve navigation and accuracy. This technology aids surgeons in planning and executing hip procedures more precisely, potentially improving patient outcomes.
Bunkerhill Health
Bunkerhill Contrast CAC is a computed tomography (CT)-based analysis tool for radiology use, assisting clinicians by quantifying coronary artery calcium (CAC) from contrast-enhanced cross-sectional images to evaluate patient conditions.
Bunkerhill Health
Bunkerhill Contrast AVC is a computed tomography (CT)-based analysis tool designed to assist clinicians by quantifying aortic valve calcification (AVC) from contrast-enhanced images, supporting the diagnosis and evaluation of various medical conditions.
We scour dozens of sources so you don't have to. Get all the essential information in a 5-minute read.
Never miss a critical update. Understand the trends shaping the future of your practice and research.
Be the first to know about the tools and technologies that matter, from clinical practice to academic research.
Subscribe to join 11k+ peers who rely on RadAI Slice. Get the essential weekly briefing that empowers you to navigate the future of radiology.
We respect your privacy. Unsubscribe at any time.