Page 7 of 99986 results

UltimateSynth: MRI Physics for Pan-Contrast AI

Adams, R., Huynh, K. M., Zhao, W., Hu, S., Lyu, W., Ahmad, S., Ma, D., Yap, P.-T.

biorxiv logopreprintAug 7 2025
Magnetic resonance imaging (MRI) is commonly used in healthcare for its ability to generate diverse tissue contrasts without ionizing radiation. However, this flexibility complicates downstream analysis, as computational tools are often tailored to specific types of MRI and lack generalizability across the full spectrum of scans used in healthcare. Here, we introduce a versatile framework for the development and validation of AI models that can robustly process and analyze the full spectrum of scans achievable with MRI, enabling model deployment across scanner models, scan sequences, and age groups. Core to our framework is UltimateSynth, a technology that combines tissue physiology and MR physics in synthesizing realistic images across a comprehensive range of meaningful contrasts. This pan-contrast capability bolsters the AI development life cycle through efficient data labeling, generalizable model training, and thorough performance benchmarking. We showcase the effectiveness of UltimateSynth by training an off-the-shelf U-Net to generalize anatomical segmentation across any MR contrast. The U-Net yields highly robust tissue volume estimates, with variability under 4% across 150,000 unique-contrast images, 3.8% across 2,000+ low-field 0.3T scans, and 3.5% across 8,000+ images spanning the human lifespan from ages 0 to 100.
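The physics underlying pan-contrast synthesis can be illustrated with the textbook steady-state spoiled gradient-echo (SPGR) signal equation, which maps tissue parameter maps (proton density, T1, T2*) and acquisition settings (TR, TE, flip angle) to image intensity. The sketch below is this generic model only, not the UltimateSynth implementation, and the tissue values are rough 3T approximations:

```python
import numpy as np

def spgr_signal(pd, t1, t2s, tr, te, flip_deg):
    """Steady-state spoiled gradient-echo (SPGR) signal.

    pd: proton density; t1, t2s: relaxation times (ms);
    tr, te: repetition/echo times (ms); flip_deg: flip angle (deg).
    """
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / t1)
    return pd * np.sin(a) * (1.0 - e1) / (1.0 - np.cos(a) * e1) * np.exp(-te / t2s)

# Rough 3T tissue parameters (ms) for white matter (WM) and CSF.
wm  = dict(pd=0.7, t1=800.0,  t2s=50.0)
csf = dict(pd=1.0, t1=4000.0, t2s=500.0)

# Short TR/TE, moderate flip -> T1-weighted: WM brighter than CSF.
t1w = {k: spgr_signal(tr=20, te=4, flip_deg=25, **v) for k, v in [("wm", wm), ("csf", csf)]}
# Long TR/TE -> T2*-weighted: contrast inverts, CSF brighter than WM.
t2w = {k: spgr_signal(tr=3000, te=80, flip_deg=90, **v) for k, v in [("wm", wm), ("csf", csf)]}
```

Sweeping TR, TE, and flip angle over their feasible ranges is what turns a single set of tissue maps into a continuum of contrasts.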

Robustness evaluation of an artificial intelligence-based automatic contouring software in daily routine practice.

Fontaine J, Suszko M, di Franco F, Leroux A, Bonnet E, Bosset M, Langrand-Escure J, Clippe S, Fleury B, Guy JB

pubmed logopapersAug 7 2025
AI-based automatic contouring streamlines radiotherapy by reducing contouring time but requires rigorous validation and ongoing daily monitoring. This study assessed how software updates affect contouring accuracy and examined how image quality variations influence AI performance. Two patient cohorts were analyzed. The software updates cohort (40 CT scans: 20 thorax, 10 pelvis, 10 H&N) compared six versions of the Limbus AI contouring software. The image quality cohort (20 patients: H&N, pelvis, brain, thorax) analyzed 12 reconstructions per patient using the Standard, iDose, and IMR algorithms, with simulated noise and spatial resolution (SR) degradations. AI performance was assessed using the volumetric Dice similarity coefficient (vDSC) and the 95% Hausdorff distance (HD95%), with Wilcoxon tests for significance. In the software updates cohort, vDSC improved for re-trained structures across versions (mean DSC ≥ 0.75), with breast contour vDSC decreasing by 1% between v1.5 and v1.8B3 (p > 0.05). Median HD95% values were consistently <4 mm, <5 mm, and <12 mm for H&N, pelvis, and thorax contours, respectively (p > 0.05). In the image quality cohort, no significant differences were observed between the Standard, iDose, and IMR algorithms. However, noise and SR degradation significantly reduced performance: the proportion of contours with vDSC ≥ 0.9 dropped from 89% at 2% noise to 30% at 20% noise, and from 87% to 70% as SR degradation increased (p < 0.001). AI contouring accuracy improved with software updates and was robust to minor reconstruction variations, but it was sensitive to noise and SR degradation. Continuous validation and quality control of AI-generated contours are essential. Future studies should include a broader range of anatomical regions and larger cohorts.
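Both evaluation metrics above can be computed directly from binary masks. A minimal NumPy sketch (brute-force pairwise distances for HD95; a production pipeline would presumably use dedicated tooling and real voxel spacing):

```python
import numpy as np

def vdsc(a, b):
    """Volumetric Dice similarity coefficient between boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=1.0):
    """95th-percentile symmetric Hausdorff distance between mask voxels."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    # all pairwise Euclidean distances, then nearest-neighbour per point
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return np.percentile(np.concatenate([d.min(axis=1), d.min(axis=0)]), 95)
```

Unlike the max-based Hausdorff distance, the 95th percentile discounts a few outlier surface points, which is why it is the common choice for contour QA.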

Enhancing Domain Generalization in Medical Image Segmentation With Global and Local Prompts.

Zhao C, Li X

pubmed logopapersAug 7 2025
Enhancing domain generalization (DG) is a crucial and compelling research pursuit within the field of medical image segmentation, owing to the inherent heterogeneity observed in medical images. The recent success of large-scale pre-trained vision models (PVMs), such as the Vision Transformer (ViT), inspires us to explore their application in this specific area. While a straightforward strategy involves fine-tuning the PVM using supervised signals from the source domains, this approach overlooks the domain shift issue and neglects the rich knowledge inherent in the instances themselves. To overcome these limitations, we introduce a novel framework enhanced by global and local prompts (GLPs). Specifically, to adapt PVMs to the medical DG scenario, we explicitly separate domain-shared and domain-specific knowledge in the form of GLPs. Furthermore, we develop an individualized domain adapter to intricately investigate the relationship between each target domain sample and the source domains. To harness the inherent knowledge within instances, we devise two innovative regularization terms from both the consistency and anatomy perspectives, encouraging the model to preserve instance discriminability and organ position invariance. Extensive experiments and in-depth discussions in both vanilla and semi-supervised DG scenarios across five diverse medical datasets consistently demonstrate the superior segmentation performance achieved by GLPs. Our code and datasets are publicly available at https://github.com/xmed-lab/GLP.

X-UNet: A novel global context-aware collaborative fusion U-shaped network with progressive feature fusion of codec for medical image segmentation.

Xu S, Chen Y, Zhang X, Sun F, Chen S, Ou Y, Luo C

pubmed logopapersAug 7 2025
Due to the inductive bias of convolutions, CNNs perform hierarchical feature extraction efficiently in the field of medical image segmentation. However, the local correlation assumption of this inductive bias limits the ability of convolutions to capture global information, which is why Transformer-based methods have surpassed CNNs in some segmentation tasks in recent years. Although combining CNNs with Transformers can address this problem, it also introduces considerable computational complexity and parameters. In addition, narrowing the encoder-decoder semantic gap for high-quality mask generation is a key challenge, addressed in recent works through feature aggregation from different skip connections; however, this often results in semantic mismatches and additional noise. In this paper, we propose a novel segmentation method, X-UNet, whose backbone employs the CFGC (Collaborative Fusion with Global Context-aware) module. The CFGC module enables multi-scale feature extraction and effective global context modeling. Simultaneously, we employ the CSPF (Cross Split-channel Progressive Fusion) module to progressively align and fuse features from corresponding encoder and decoder stages through channel-wise operations, offering a novel approach to feature integration. Experimental results demonstrate that X-UNet, with fewer computations and parameters, exhibits superior performance on various medical image datasets. The code and models are available at https://github.com/XSJ0410/X-UNet.

An evaluation of rectum contours generated by artificial intelligence automatic contouring software using geometry, dosimetry and predicted toxicity.

Mc Laughlin O, Gholami F, Osman S, O'Sullivan JM, McMahon SJ, Jain S, McGarry CK

pubmed logopapersAug 7 2025
Objective: This study assesses rectum contours generated using a commercial deep learning auto-contouring model and compares them to clinician contours using geometry, changes in dosimetry, and toxicity modelling.

Approach: This retrospective study involved 308 prostate cancer patients who were treated using 3D-conformal radiotherapy. Computed tomography images were input into Limbus Contour (v1.8.0b3) to generate auto-contour structures for each patient. Auto-contours were not edited after their generation. Rectum auto-contours were compared to clinician contours geometrically and dosimetrically. Dice similarity coefficient (DSC), mean Hausdorff distance (HD), and volume difference were assessed. Dose-volume histogram (DVH) constraints (V41%-V100%) were compared, and a Wilcoxon signed rank test was used to evaluate the statistical significance of differences. Toxicity modelling to compare contours was carried out using equivalent uniform dose (EUD) and the clinical factors of abdominal surgery and atrial fibrillation. Trained models were tested (80:20 split) on their prediction of grade 1 late rectal bleeding (n_total = 124) using the area under the receiver operating characteristic curve (AUC).

Main results: Median DSC (interquartile range, IQR) was 0.85 (0.09), median HD was 1.38 mm (0.60 mm), and median volume difference was -1.73 cc (14.58 cc). Median DVH differences between contours were small (<1.5%) for all constraints, although systematically larger than clinician contours (p < 0.05). However, an IQR of up to 8.0% was seen for individual patients across all dose constraints. Models using EUD alone derived from clinician or auto-contours had AUCs of 0.60 (0.10) and 0.60 (0.09), respectively. AUCs for models involving clinical factors and dosimetry were 0.65 (0.09) and 0.66 (0.09) when using clinician contours and auto-contours, respectively.

Significance: Although median DVH metrics were similar, the variation for individual patients highlights the importance of clinician review. Rectal bleeding prediction accuracy did not depend on the contour method for this cohort. The auto-contouring model used in this study shows promise in a supervised workflow.
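The EUD feature used in the toxicity models above typically follows Niemierko's generalized formulation, EUD = (Σᵢ vᵢ Dᵢᵃ)^(1/a), computed over a differential DVH. A sketch (the tissue parameter `a` here is illustrative only, not a value fitted in the study):

```python
import numpy as np

def generalized_eud(doses, volumes, a):
    """Niemierko's generalized EUD from a differential DVH.

    doses: dose per bin (Gy); volumes: volume per bin (normalized
    internally); a: tissue-specific parameter (a > 1 weights hot spots
    more heavily, as expected for serial-like organs such as rectum).
    """
    v = np.asarray(volumes, float)
    v = v / v.sum()
    return np.sum(v * np.asarray(doses, float) ** a) ** (1.0 / a)
```

For a uniform dose the EUD reduces to that dose; for an inhomogeneous distribution with large `a`, the EUD is pulled toward the maximum dose rather than the mean.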

MedCLIP-SAMv2: Towards universal text-driven medical image segmentation.

Koleilat T, Asgariandehkordi H, Rivaz H, Xiao Y

pubmed logopapersAug 7 2025
Segmentation of anatomical structures and pathologies in medical images is essential for modern disease diagnosis, clinical research, and treatment planning. While significant advancements have been made in deep learning-based segmentation techniques, many of these methods still suffer from limitations in data efficiency, generalizability, and interactivity. As a result, developing robust segmentation methods that require fewer labeled datasets remains a critical challenge in medical image analysis. Recently, the introduction of foundation models like CLIP and Segment-Anything-Model (SAM), with robust cross-domain representations, has paved the way for interactive and universal image segmentation. However, further exploration of these models for data-efficient segmentation in medical imaging is an active field of research. In this paper, we introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans using text prompts, in both zero-shot and weakly supervised settings. Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss, and leveraging the Multi-modal Information Bottleneck (M2IB) to create visual prompts for generating segmentation masks with SAM in the zero-shot setting. We also investigate using zero-shot segmentation labels in a weakly supervised paradigm to enhance segmentation quality further. Extensive validation across four diverse segmentation tasks and medical imaging modalities (breast tumor ultrasound, brain tumor MRI, lung X-ray, and lung CT) demonstrates the high accuracy of our proposed framework. Our code is available at https://github.com/HealthX-Lab/MedCLIP-SAMv2.
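For orientation, the DHN-NCE loss belongs to the family of symmetric image-text contrastive (InfoNCE-style) objectives used to fine-tune CLIP-like models. The baseline form of that family is sketched below in NumPy; this is only the generic starting point, not the paper's decoupled hard-negative variant:

```python
import numpy as np

def infonce(img_emb, txt_emb, tau=0.07):
    """Symmetric image-text InfoNCE over a batch of paired embeddings.

    Row i of img_emb and row i of txt_emb are a matched pair; all other
    rows serve as negatives. tau is the softmax temperature.
    """
    i = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    t = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = i @ t.T / tau          # cosine similarities, scaled
    n = logits.shape[0]

    def ce(lg):                     # cross-entropy with diagonal targets
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    return 0.5 * (ce(logits) + ce(logits.T))
```

Hard-negative and decoupled variants reweight or remove terms in the softmax denominator, but the matched-pair-versus-batch structure stays the same.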

Few-Shot Deployment of Pretrained MRI Transformers in Brain Imaging Tasks

Mengyu Li, Guoyao Shen, Chad W. Farris, Xin Zhang

arxiv logopreprintAug 7 2025
Machine learning using transformers has shown great potential in medical imaging, but its real-world applicability remains limited due to the scarcity of annotated data. In this study, we propose a practical framework for the few-shot deployment of pretrained MRI transformers in diverse brain imaging tasks. By utilizing the Masked Autoencoder (MAE) pretraining strategy on a large-scale, multi-cohort brain MRI dataset comprising over 31 million slices, we obtain highly transferable latent representations that generalize well across tasks and datasets. For high-level tasks such as classification, a frozen MAE encoder combined with a lightweight linear head achieves state-of-the-art accuracy in MRI sequence identification with minimal supervision. For low-level tasks such as segmentation, we propose MAE-FUnet, a hybrid architecture that fuses multiscale CNN features with pretrained MAE embeddings. This model consistently outperforms other strong baselines in both skull stripping and multi-class anatomical segmentation under data-limited conditions. With extensive quantitative and qualitative evaluations, our framework demonstrates efficiency, stability, and scalability, suggesting its suitability for low-resource clinical environments and broader neuroimaging applications.
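The "frozen encoder + lightweight linear head" recipe for the classification tasks can be illustrated end to end on toy data. Here a fixed random projection is purely a hypothetical stand-in for the pretrained MAE encoder, and the head is closed-form ridge regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "encoder": a fixed random projection stands in for the
# pretrained MAE encoder (hypothetical stand-in, never updated).
W_enc = rng.normal(size=(64, 16))

def encode(x):                       # x: (n, 64) flattened "slices"
    return np.tanh(x @ W_enc)

def fit_head(feats, labels, lam=1e-2):
    """Few-shot linear head: ridge regression on frozen features."""
    y = np.eye(labels.max() + 1)[labels]              # one-hot targets
    A = feats.T @ feats + lam * np.eye(feats.shape[1])
    return np.linalg.solve(A, feats.T @ y)            # (16, n_classes)

# Toy two-"sequence" classification problem with well-separated means.
x0 = rng.normal(0.0, 1.0, (20, 64))
x1 = rng.normal(3.0, 1.0, (20, 64))
X = np.vstack([x0, x1])
y = np.array([0] * 20 + [1] * 20)

W_head = fit_head(encode(X), y)
acc = (encode(X) @ W_head).argmax(axis=1) == y
acc = acc.mean()
```

The point of the recipe is that only the tiny head is fit, so a handful of labeled scans suffices while the encoder's representations stay intact.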

Deep Learning-Based Cascade 3D Kidney Segmentation Method.

Hao Z, Chapman BE

pubmed logopapersAug 7 2025
Renal tumors require early diagnosis and precise localization for effective treatment. This study aims to automate renal tumor analysis in abdominal CT images using a cascade 3D U-Net architecture for semantic kidney segmentation. To address challenges like edge detection and small object segmentation, the framework incorporates residual blocks to enhance convergence and efficiency. Comprehensive training configurations, preprocessing, and postprocessing strategies were employed to ensure accurate results. Tested on KiTS2019 data, the method ranked 23rd on the leaderboard (Nov 2024), demonstrating the enhanced cascade 3D U-Net's effectiveness in improving segmentation precision.
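The cascade idea (a coarse stage localizes the kidney, a fine stage segments only inside the resulting crop) can be sketched as the hand-off between the two stages; this is a generic illustration, not the authors' code:

```python
import numpy as np

def roi_from_coarse(mask, margin=2):
    """Bounding box (with safety margin) around a coarse binary prediction.

    Stage 1 of the cascade localizes the organ at low resolution;
    stage 2 then runs the fine 3D U-Net only inside this crop.
    """
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    return tuple(slice(a, b) for a, b in zip(lo, hi))

coarse = np.zeros((32, 32, 32), bool)
coarse[10:15, 12:20, 8:11] = True      # mock stage-1 output
roi = roi_from_coarse(coarse, margin=2)
crop = coarse[roi]                     # input volume for the fine stage
```

Cropping this way shrinks the fine stage's input dramatically, which is what makes small-object segmentation tractable at full resolution.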

Novel radiotherapy target definition using AI-driven predictions of glioblastoma recurrence from metabolic and diffusion MRI.

Tran N, Luks TL, Li Y, Jakary A, Ellison J, Liu B, Adegbite O, Nair D, Kakhandiki P, Molinaro AM, Villanueva-Meyer JE, Butowski N, Clarke JL, Chang SM, Braunstein SE, Morin O, Lin H, Lupo JM

pubmed logopapersAug 7 2025
The current standard-of-care (SOC) practice for defining the clinical target volume (CTV) for radiation therapy (RT) in patients with glioblastoma still employs an isotropic 1-2 cm expansion of the T2-hyperintensity lesion, without considering the heterogeneous infiltrative nature of these tumors. This study aims to improve RT CTV definition in patients with glioblastoma by incorporating biologically relevant metabolic and physiologic imaging acquired before RT along with a deep learning model that can predict regions of subsequent tumor progression by either the presence of contrast-enhancement or T2-hyperintensity. The results were compared against two standard CTV definitions. Our multi-parametric deep learning model significantly outperformed the uniform 2 cm expansion of the T2-lesion CTV in terms of specificity (0.89 ± 0.05 vs 0.79 ± 0.11; p = 0.004), while also achieving comparable sensitivity (0.92 ± 0.11 vs 0.95 ± 0.08; p = 0.10), sparing more normal brain. Model performance was significantly enhanced by incorporating lesion size-weighted loss functions during training and including metabolic images as inputs.
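The SOC baseline the model is compared against, an isotropic expansion of the T2 lesion, is just repeated binary dilation of the lesion mask. A pure-NumPy sketch (6-connected dilation, which yields an L1-ball margin, only an approximation of a true isotropic expansion):

```python
import numpy as np

def dilate_once(mask):
    """One voxel of 6-connected binary dilation (pure NumPy)."""
    out = mask.copy()
    for axis in range(mask.ndim):
        for shift in (1, -1):
            rolled = np.roll(mask, shift, axis=axis)
            idx = [slice(None)] * mask.ndim
            idx[axis] = 0 if shift == 1 else -1
            rolled[tuple(idx)] = False      # undo wrap-around
            out |= rolled
    return out

def uniform_expansion(lesion, margin_mm, voxel_mm):
    """SOC-style CTV: dilate the T2 lesion by margin_mm in every direction."""
    m = lesion.copy()
    for _ in range(int(round(margin_mm / voxel_mm))):
        m = dilate_once(m)
    return m

lesion = np.zeros((20, 20, 20), bool)
lesion[9:11, 9:11, 9:11] = True
ctv = uniform_expansion(lesion, margin_mm=20, voxel_mm=5)   # 2 cm margin
```

The study's sensitivity/specificity comparison then amounts to overlaying each candidate CTV on the voxels of subsequent progression.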

FedGIN: Federated Learning with Dynamic Global Intensity Non-linear Augmentation for Organ Segmentation using Multi-modal Images

Sachin Dudda Nagaraju, Ashkan Moradi, Bendik Skarre Abrahamsen, Mattijs Elschot

arxiv logopreprintAug 7 2025
Medical image segmentation plays a crucial role in AI-assisted diagnostics, surgical planning, and treatment monitoring. Accurate and robust segmentation models are essential for enabling reliable, data-driven clinical decision making across diverse imaging modalities. Given the inherent variability in image characteristics across modalities, a unified model capable of generalizing effectively to multiple modalities would be highly beneficial: it could streamline clinical workflows and reduce the need for modality-specific training. However, real-world deployment faces major challenges, including data scarcity, domain shift between modalities (e.g., CT vs. MRI), and privacy restrictions that prevent data sharing. To address these issues, we propose FedGIN, a Federated Learning (FL) framework that enables multimodal organ segmentation without sharing raw patient data. Our method integrates a lightweight Global Intensity Non-linear (GIN) augmentation module that harmonizes modality-specific intensity distributions during local training. We evaluated FedGIN in two scenarios: a limited dataset and a complete dataset. In the limited dataset scenario, the model was initially trained using only MRI data, and CT data was added to assess its performance improvements. In the complete dataset scenario, both MRI and CT data were fully utilized for training on all clients. In the limited-data scenario, FedGIN achieved a 12-18% improvement in 3D Dice scores on MRI test cases compared to FL without GIN and consistently outperformed local baselines. In the complete dataset scenario, FedGIN demonstrated near-centralized performance, with a 30% Dice score improvement over the MRI-only baseline and a 10% improvement over the CT-only baseline, highlighting its strong cross-modality generalization under privacy constraints.
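A minimal stand-in for the GIN idea: randomize each client's intensity mapping during local training so the learned features stop depending on modality-specific intensity profiles. The paper's module uses a shallow randomly-weighted network; this sketch substitutes a random gamma curve blended with the identity, so it only approximates the concept:

```python
import numpy as np

def gin_augment(img, rng, gamma_range=(0.5, 2.0)):
    """Random non-linear intensity remapping (simplified GIN stand-in).

    Normalizes to [0, 1], applies a randomly drawn gamma curve, and
    blends it with the original by a random factor, so each training
    step sees a different but order-preserving intensity "style".
    """
    x = (img - img.min()) / (img.max() - img.min() + 1e-8)
    gamma = rng.uniform(*gamma_range)
    lam = rng.uniform(0.0, 1.0)
    return lam * x ** gamma + (1 - lam) * x

rng = np.random.default_rng(42)
ct_like = rng.normal(0.0, 1.0, (8, 8))   # mock CT slice
aug = gin_augment(ct_like, rng)
```

Because the mapping is monotonic, anatomy (intensity ordering) is preserved while absolute intensities vary, which is the property that lets one model cover CT-like and MRI-like appearances.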
