
CineMA: A Foundation Model for Cine Cardiac MRI

Yunguan Fu, Weixi Yi, Charlotte Manisty, Anish N Bhuva, Thomas A Treibel, James C Moon, Matthew J Clarkson, Rhodri Huw Davies, Yipeng Hu

arXiv preprint · May 31, 2025
Cardiac magnetic resonance (CMR) is a key investigation in clinical cardiovascular medicine and has been used extensively in population research. However, extracting clinically important measurements such as ejection fraction for diagnosing cardiovascular diseases remains time-consuming and subjective. We developed CineMA, a foundation AI model automating these tasks with limited labels. CineMA is a self-supervised autoencoder model trained on 74,916 cine CMR studies to reconstruct images from masked inputs. After fine-tuning, it was evaluated across eight datasets on 23 tasks from four categories: ventricle and myocardium segmentation, left and right ventricle ejection fraction calculation, disease detection and classification, and landmark localisation. CineMA is the first foundation model for cine CMR to match or outperform convolutional neural networks (CNNs). CineMA demonstrated greater label efficiency than CNNs, achieving comparable or better performance with fewer annotations. This reduces the burden of clinician labelling and supports replacing task-specific training with fine-tuning foundation models in future cardiac imaging applications. Models and code for pre-training and fine-tuning are available at https://github.com/mathpluscode/CineMA, democratising access to high-performance models that otherwise require substantial computational resources, promoting reproducibility and accelerating clinical translation.
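
The pre-training recipe described here is masked-input reconstruction. As a rough illustration of that idea, a toy PyTorch sketch follows; the model, shapes, and mask ratio are all placeholders, and the sketch masks individual pixels where a real masked autoencoder masks larger patches. The actual CineMA architecture is in the linked repository.

```python
# Toy masked-reconstruction pre-training step, in the spirit of the
# abstract. NOT the CineMA model: a tiny conv autoencoder stands in for
# the real encoder-decoder, and pixel masking stands in for patch masking.
import torch
import torch.nn as nn

class TinyMaskedAutoencoder(nn.Module):
    """Minimal encoder-decoder that reconstructs masked cine frames."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(16, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def pretrain_step(model: nn.Module, frames: torch.Tensor, mask_ratio: float = 0.75):
    """Hide a random subset of the input and score only the hidden part."""
    mask = (torch.rand_like(frames) < mask_ratio).float()
    recon = model(frames * (1.0 - mask))                 # masked input
    return ((recon - frames) ** 2 * mask).sum() / mask.sum()

model = TinyMaskedAutoencoder()
frames = torch.randn(4, 1, 64, 64)   # synthetic stand-in for cine frames
loss = pretrain_step(model, frames)
loss.backward()
```

Averaging the loss only over masked voxels is what prevents the network from learning an identity mapping and forces it to model anatomical context, which is the property the fine-tuning tasks then exploit.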

Sparsity-Driven Parallel Imaging Consistency for Improved Self-Supervised MRI Reconstruction

Yaşar Utku Alçalar, Mehmet Akçakaya

arXiv preprint · May 30, 2025
Physics-driven deep learning (PD-DL) models have proven to be a powerful approach for improved reconstruction of rapid MRI scans. To train these models in scenarios where fully sampled reference data is unavailable, self-supervised learning has gained prominence. However, its application at high acceleration rates frequently introduces artifacts, compromising image fidelity. To mitigate this shortcoming, we propose a novel way to train PD-DL networks via carefully designed perturbations. In particular, we enhance the k-space masking idea of conventional self-supervised learning with a novel consistency term that assesses the model's ability to accurately predict the added perturbations in a sparse domain, leading to more reliable and artifact-free reconstructions. The results obtained from the fastMRI knee and brain datasets show that the proposed training strategy effectively reduces aliasing artifacts and mitigates noise amplification at high acceleration rates, outperforming state-of-the-art self-supervised methods both visually and quantitatively.
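
As the abstract describes it, the objective couples conventional k-space-masking self-supervision with a consistency term on an injected perturbation, evaluated in a sparse domain. A schematic of such a loss follows; recon_net and sparsifying_transform are hypothetical placeholders, the zero-filled "network" in the smoke test is trivial by design, and the paper's actual perturbation design will differ.

```python
# Schematic self-supervised loss: (1) SSDU-style masked k-space fidelity,
# (2) consistency between the network's response to a known perturbation
# and that perturbation itself, compared in a sparse domain.
import torch

def self_supervised_loss(recon_net, y, train_mask, loss_mask,
                         sparsifying_transform, perturb_scale=0.01):
    # (1) reconstruct from one k-space subset, score on the held-out subset
    x_hat = recon_net(y * train_mask)
    fidelity = torch.norm((torch.fft.fft2(x_hat) - y) * loss_mask)

    # (2) inject a known k-space perturbation; the network's response
    # should match it after the sparsifying transform
    delta = perturb_scale * torch.randn_like(y)
    x_pert = recon_net((y + delta) * train_mask)
    response = sparsifying_transform(x_pert - x_hat)
    target = sparsifying_transform(torch.fft.ifft2(delta))
    return fidelity + torch.norm(response - target)

# Smoke test with a zero-filled "reconstruction" and a magnitude stand-in
# for a real sparsifying transform (e.g., wavelets).
y = torch.randn(32, 32, dtype=torch.complex64)
m1 = (torch.rand(32, 32) < 0.6).to(y.dtype)
m2 = (1.0 - m1.real).to(y.dtype)
print(self_supervised_loss(torch.fft.ifft2, y, m1, m2, torch.abs))
```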

Multiclass ensemble framework for enhanced prostate gland segmentation: Integrating Self-ONN decoders with EfficientNet.

Islam Sumon MS, Chowdhury MEH, Bhuiyan EH, Rahman MS, Khan MM, Al-Hashimi I, Mushtak A, Zoghoul SB

PubMed paper · May 30, 2025
Digital pathology relies on the morphological architecture of prostate glands to recognize cancerous tissue. Prostate cancer (PCa) originates in the walnut-shaped prostate gland of the male reproductive system. Deep learning (DL) pipelines can assist in identifying these regions with advanced segmentation techniques that are effective in diagnosing and treating prostate diseases. This facilitates early detection, targeted biopsy, and accurate treatment planning, ensuring consistent, reproducible results while minimizing human error. Automated segmentation techniques trained on MRI datasets can aid in monitoring disease progression and support the development of patient-specific models for personalized medicine. In this study, we present multiclass segmentation models designed to localize the prostate gland and its zonal regions, specifically the peripheral zone (PZ), the transition zone (TZ), and the whole gland, by combining EfficientNetB4 encoders with Self-organized Operational Neural Network (Self-ONN)-based decoders. Traditional convolutional neural networks (CNNs) rely on linear neuron models, which limit their ability to capture the complex dynamics of biological neural systems. In contrast, Operational Neural Networks (ONNs), particularly Self-ONNs, address this limitation by incorporating nonlinear and adaptive operations at the neuron level. We evaluated various encoder-decoder configurations and identified that the combination of an EfficientNet-based encoder with a Self-ONN-based decoder yielded the best performance. To further enhance segmentation accuracy, we employed the STAPLE method to ensemble the top three performing models. Our approach was tested on the large-scale, recently updated PI-CAI Challenge dataset using 5-fold cross-validation, achieving Dice scores of 95.33% for the whole gland and 92.32% for the combined PZ and TZ regions. These advanced segmentation techniques significantly improve the quality of PCa diagnosis and treatment, contributing to better patient care and outcomes.
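
The neuron-level nonlinearity that separates Self-ONNs from CNNs can be written as a truncated Maclaurin series: each "convolution" is a sum of Q convolutions applied to x, x^2, ..., x^Q with independent kernels. A minimal PyTorch sketch, with the order Q and all shapes chosen arbitrarily rather than taken from the paper:

```python
# Minimal Self-ONN-style 2D layer: output = sum_q Conv_q(x**q), q = 1..Q.
# Illustrative only; the paper's decoder configuration may differ.
import torch
import torch.nn as nn

class SelfONNConv2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int, q_order: int = 3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
            for _ in range(q_order)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.tanh(x)  # bound inputs so the higher powers stay stable
        return sum(conv(x ** (q + 1)) for q, conv in enumerate(self.convs))

layer = SelfONNConv2d(3, 8, kernel_size=3)
out = layer(torch.randn(1, 3, 32, 32))   # -> torch.Size([1, 8, 32, 32])
```

With q_order=1 this collapses to an ordinary convolution, which makes the CNN comparison in the abstract concrete: the extra series terms are what give each neuron its adaptive nonlinearity.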

The Impact of Model-based Deep-learning Reconstruction Compared with that of Compressed Sensing-Sensitivity Encoding on the Image Quality and Precision of Cine Cardiac MR in Evaluating Left-ventricular Volume and Strain: A Study on Healthy Volunteers.

Tsuneta S, Aono S, Kimura R, Kwon J, Fujima N, Ishizaka K, Nishioka N, Yoneyama M, Kato F, Minowa K, Kudo K

PubMed paper · May 30, 2025
To evaluate the effect of model-based deep-learning reconstruction (DLR) compared with that of compressed sensing-sensitivity encoding (CS) on cine cardiac magnetic resonance (CMR). Cine CMR images of 10 healthy volunteers were obtained with reduction factors of 2, 4, 6, and 8 and reconstructed using CS and DLR. Visual image quality scores assessed sharpness, image noise, and artifacts. Left-ventricular (LV) end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), and ejection fraction (EF) were manually measured. LV global circumferential strain (GCS) was measured automatically using dedicated software. The precision of the EDV, ESV, SV, EF, and GCS measurements was compared between CS and DLR using Bland-Altman analysis, with fully sampled data as the gold standard. Compared with CS, DLR significantly improved image quality at reduction factors of 6 and 8. The precision of the EDV and ESV measurements at a reduction factor of 8, and of the GCS measurements at reduction factors of 6 and 8, improved with DLR relative to CS, whereas the precision of the SV and EF measurements did not differ between DLR and CS. The effect of DLR on cine CMR's image quality and precision in evaluating quantitative volume and strain was equal or superior to that of CS. DLR may replace CS for cine CMR.
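
Precision here is judged with Bland-Altman analysis against the fully sampled reconstructions. For readers who want the mechanics, a minimal NumPy sketch of bias and 95% limits of agreement follows; the EDV numbers are synthetic, not study data.

```python
# Bland-Altman bias and 95% limits of agreement between a measurement
# and a gold standard (here: fully sampled reconstruction).
import numpy as np

def bland_altman(reference: np.ndarray, measured: np.ndarray):
    diff = measured - reference
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# e.g., EDV (mL): full sampling vs. a reduction-factor-8 reconstruction
edv_full = np.array([140.0, 152.0, 133.0, 160.0, 145.0])
edv_r8   = np.array([138.0, 155.0, 130.0, 163.0, 147.0])
bias, (lo, hi) = bland_altman(edv_full, edv_r8)
print(f"bias = {bias:.1f} mL, limits of agreement = ({lo:.1f}, {hi:.1f}) mL")
```

Narrower limits of agreement for DLR than for CS at a given reduction factor is what "improved precision" means in the results above.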

Using AI to triage patients without clinically significant prostate cancer using biparametric MRI and PSA.

Grabke EP, Heming CAM, Hadari A, Finelli A, Ghai S, Lajkosz K, Taati B, Haider MA

PubMed paper · May 30, 2025
To train and evaluate the performance of a machine learning triaging tool that identifies MRIs negative for clinically significant prostate cancer, and to compare this against non-MRI models. 2895 MRIs were collected from two sources (1630 internal, 1265 public) in this retrospective study. The risk models compared were: Prostate Cancer Prevention Trial Risk Calculator 2.0, Prostate Biopsy Collaborative Group Calculator, PSA density, U-Net segmentation, and U-Net combined with clinical parameters. The reference standard was histopathology or negative follow-up. Performance metrics were calculated by simulating a triaging workflow, compared with a radiologist interpreting all exams, on a test set of 465 patients. Sensitivity and specificity differences were assessed using the McNemar test. Differences in PPV and NPV were assessed using the Leisenring, Alonzo, and Pepe generalized score statistic. Equivalence-test p-values were adjusted within each measure using Benjamini-Hochberg correction. Triaging using U-Net with clinical parameters reduced radiologist workload by 12.5%, with a sensitivity decrease from 93% to 90% (p = 0.023) and a specificity increase from 39% to 47% (p < 0.001). This simulated workload reduction was greater than triaging with risk calculators (3.2% and 1.3%, p < 0.001) and comparable to PSA density (8.4%, p = 0.071) and U-Net alone (11.6%, p = 0.762). Both U-Net triaging strategies increased PPV (+2.8%, p = 0.005, clinical; +2.2%, p = 0.020, non-clinical), unlike the non-U-Net strategies (p > 0.05). NPV remained equivalent for all scenarios (p > 0.05). Clinically informed U-Net triaging correctly ruled out 20 (13.4%) radiologist false positives (12 PI-RADS 3, 8 PI-RADS 4). Of the eight (3.6%) false negatives, two were misclassified by the radiologist. No misclassified case was interpreted as PI-RADS 5. Prostate MRI triaging using machine learning could reduce radiologist workload by 12.5% with a 3% sensitivity decrease and an 8% specificity increase, outperforming triaging using non-imaging-based risk models. Further prospective validation is required.
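
The workload and accuracy figures come from simulating a triage workflow: exams the model calls negative are ruled out up front, the radiologist reads the rest, and sensitivity and specificity are recomputed for the combined pipeline. A toy sketch of that bookkeeping on synthetic labels (none of the rates below are from the study):

```python
# Simulated triage: model-negative exams skip the radiologist and are
# finalized as negative; all other exams get the radiologist's call.
import numpy as np

def simulate_triage(model_neg, radiologist_pos, truth):
    final_pos = ~model_neg & radiologist_pos
    workload_reduction = model_neg.mean()        # fraction not read
    sensitivity = final_pos[truth].mean()
    specificity = (~final_pos[~truth]).mean()
    return workload_reduction, sensitivity, specificity

rng = np.random.default_rng(0)
truth = rng.random(465) < 0.4                      # made-up csPCa prevalence
radiologist_pos = truth ^ (rng.random(465) < 0.1)  # imperfect reader
model_neg = ~truth & (rng.random(465) < 0.3)       # model rules out some negatives
print(simulate_triage(model_neg, radiologist_pos, truth))
```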

Edge Computing for Physics-Driven AI in Computational MRI: A Feasibility Study

Yaşar Utku Alçalar, Yu Cao, Mehmet Akçakaya

arXiv preprint · May 30, 2025
Physics-driven artificial intelligence (PD-AI) reconstruction methods have emerged as the state-of-the-art for accelerating MRI scans, enabling higher spatial and temporal resolutions. However, the high resolution of these scans generates massive data volumes, leading to challenges in transmission, storage, and real-time processing. This is particularly pronounced in functional MRI, where hundreds of volumetric acquisitions further exacerbate these demands. Edge computing with FPGAs presents a promising solution for enabling PD-AI reconstruction near the MRI sensors, reducing data transfer and storage bottlenecks. However, this requires optimizing PD-AI models for hardware efficiency through quantization, and bypassing traditional FFT-based approaches, whose computational demands can be limiting. In this work, we propose a novel PD-AI computational MRI approach optimized for FPGA-based edge computing devices, leveraging 8-bit complex data quantization and eliminating redundant FFT/IFFT operations. Our results show that this strategy improves computational efficiency while maintaining reconstruction quality comparable to conventional PD-AI methods, and outperforms standard clinical methods. Our approach presents an opportunity for high-resolution MRI reconstruction on resource-constrained devices, highlighting its potential for real-world deployment.
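
One concrete ingredient is 8-bit quantization of complex-valued data. A minimal NumPy sketch of symmetric int8 quantization applied to the real and imaginary parts with a shared scale follows; this is a generic scheme for illustration, and the paper's FPGA-side number format may differ.

```python
# Symmetric int8 quantization of complex k-space: real and imaginary
# channels share one scale factor; dequantization reverses it.
import numpy as np

def quantize_complex_int8(x: np.ndarray):
    scale = np.max(np.abs([x.real, x.imag])) / 127.0
    q_re = np.clip(np.round(x.real / scale), -128, 127).astype(np.int8)
    q_im = np.clip(np.round(x.imag / scale), -128, 127).astype(np.int8)
    return q_re, q_im, scale

def dequantize_complex_int8(q_re, q_im, scale):
    return (q_re.astype(np.float32) + 1j * q_im.astype(np.float32)) * scale

kspace = (np.random.randn(64, 64) + 1j * np.random.randn(64, 64)).astype(np.complex64)
q_re, q_im, scale = quantize_complex_int8(kspace)
err = np.abs(dequantize_complex_int8(q_re, q_im, scale) - kspace).max()
print(f"max round-trip error: {err:.4g}")   # bounded by ~scale/2 per channel
```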

Beyond the LUMIR challenge: The pathway to foundational registration models

Junyu Chen, Shuwen Wei, Joel Honkamaa, Pekka Marttinen, Hang Zhang, Min Liu, Yichao Zhou, Zuopeng Tan, Zhuoyuan Wang, Yi Wang, Hongchao Zhou, Shunbo Hu, Yi Zhang, Qian Tao, Lukas Förner, Thomas Wendler, Bailiang Jian, Benedikt Wiestler, Tim Hable, Jin Kim, Dan Ruan, Frederic Madesta, Thilo Sentker, Wiebke Heyer, Lianrui Zuo, Yuwei Dai, Jing Wu, Jerry L. Prince, Harrison Bai, Yong Du, Yihao Liu, Alessa Hering, Reuben Dorent, Lasse Hansen, Mattias P. Heinrich, Aaron Carass

arXiv preprint · May 30, 2025
Medical image challenges have played a transformative role in advancing the field, catalyzing algorithmic innovation and establishing new performance standards across diverse clinical applications. Image registration, a foundational task in neuroimaging pipelines, has similarly benefited from the Learn2Reg initiative. Building on this foundation, we introduce the Large-scale Unsupervised Brain MRI Image Registration (LUMIR) challenge, a next-generation benchmark designed to assess and advance unsupervised brain MRI registration. Distinct from prior challenges that leveraged anatomical label maps for supervision, LUMIR removes this dependency by providing over 4,000 preprocessed T1-weighted brain MRIs for training without any label maps, encouraging biologically plausible deformation modeling through self-supervision. In addition to evaluating performance on 590 held-out test subjects, LUMIR introduces a rigorous suite of zero-shot generalization tasks, spanning out-of-domain imaging modalities (e.g., FLAIR, T2-weighted, T2*-weighted), disease populations (e.g., Alzheimer's disease), acquisition protocols (e.g., 9.4T MRI), and species (e.g., macaque brains). A total of 1,158 subjects and over 4,000 image pairs were included for evaluation. Performance was assessed using both segmentation-based metrics (Dice coefficient, 95th percentile Hausdorff distance) and landmark-based registration accuracy (target registration error). Across both in-domain and zero-shot tasks, deep learning-based methods consistently achieved state-of-the-art accuracy while producing anatomically plausible deformation fields. The top-performing deep learning-based models demonstrated diffeomorphic properties and inverse consistency, outperforming several leading optimization-based methods, and showing strong robustness to most domain shifts, the exception being a drop in performance on out-of-domain contrasts.
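
Among the metrics listed, target registration error (TRE) is the landmark-based one: the distance between corresponding landmarks after applying the estimated deformation. A minimal sketch, assuming landmark arrays in voxel coordinates and a spacing argument to express the result in millimetres (the challenge's official evaluation code is the authority here):

```python
# Mean target registration error over N corresponding 3D landmarks.
import numpy as np

def target_registration_error(fixed_pts, warped_moving_pts, spacing=(1.0, 1.0, 1.0)):
    """fixed_pts, warped_moving_pts: (N, 3) voxel coordinates."""
    diff = (np.asarray(fixed_pts) - np.asarray(warped_moving_pts)) * np.asarray(spacing)
    return np.linalg.norm(diff, axis=1).mean()

fixed  = np.random.rand(20, 3) * 100.0
warped = fixed + np.random.randn(20, 3)   # ~1-voxel residual misalignment
print(f"TRE = {target_registration_error(fixed, warped):.2f} mm")
```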

Deep learning-driven modality imputation and subregion segmentation to enhance high-grade glioma grading.

Yu J, Liu Q, Xu C, Zhou Q, Xu J, Zhu L, Chen C, Zhou Y, Xiao B, Zheng L, Zhou X, Zhang F, Ye Y, Mi H, Zhang D, Yang L, Wu Z, Wang J, Chen M, Zhou Z, Wang H, Wang VY, Wang E, Xu D

PubMed paper · May 30, 2025
This study aims to develop a deep learning framework that leverages modality imputation and subregion segmentation to improve grading accuracy in high-grade gliomas. A retrospective analysis was conducted using data from 1,251 patients in the BraTS2021 dataset as the main cohort and 181 clinical cases collected at a medical center between April 2013 and June 2018 (mean age, 51 ± 17 years; 104 males) as the external test set. We propose a PatchGAN-based modality imputation network with an Aggregated Residual Transformer (ART) module that combines Transformer self-attention and CNN feature extraction via residual links, paired with a U-Net variant for segmentation. Generative accuracy was assessed using PSNR and SSIM for modality conversions, while segmentation performance was measured with DSC and HD95 across the necrotic core (NCR), edema (ED), and enhancing tumor (ET) regions. Senior radiologists conducted a comprehensive Likert-based assessment, with diagnostic accuracy evaluated by AUC. Statistical analysis was performed using the Wilcoxon signed-rank test and the DeLong test. The best source-target modality pairs for imputation were T1 to T1ce and T1ce to T2 (p < 0.001). In subregion segmentation, the overall DSC was 0.878 and HD95 was 19.491, with the ET region showing the highest segmentation accuracy (DSC: 0.877, HD95: 12.149). Clinical validation revealed an improvement in grading accuracy by the senior radiologist, with the AUC increasing from 0.718 to 0.913 (p < 0.001) when the combined imputation and segmentation models were used. The proposed deep learning framework improves high-grade glioma grading through modality imputation and segmentation, aiding the senior radiologist and offering potential to advance clinical decision-making.
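
Of the generative metrics, PSNR has a simple closed form, 10·log10(data_range²/MSE), sketched below on synthetic images; SSIM, also reported, is available off the shelf as skimage.metrics.structural_similarity.

```python
# PSNR between a target modality and its imputed counterpart.
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, data_range: float = 1.0):
    mse = np.mean((reference - generated) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

t1ce     = np.random.rand(128, 128)   # synthetic stand-in for a real T1ce slice
t1ce_hat = np.clip(t1ce + 0.02 * np.random.randn(128, 128), 0.0, 1.0)
print(f"PSNR = {psnr(t1ce, t1ce_hat):.1f} dB")
```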

Bidirectional Projection-Based Multi-Modal Fusion Transformer for Early Detection of Cerebral Palsy in Infants.

Qi K, Huang T, Jin C, Yang Y, Ying S, Sun J, Yang J

PubMed paper · May 30, 2025
Periventricular white matter injury (PWMI) is the most frequent magnetic resonance imaging (MRI) finding in infants with cerebral palsy (CP). We aim to detect CP and identify subtle, sparse PWMI lesions in infants under two years of age with immature brain structures. Because the responsible lesions are located within five target regions, we first construct a multi-modal dataset of 243 cases that includes mask annotations of the five target regions delineating anatomical structures on T1-weighted imaging (T1WI) images, masks for lesions on T2-weighted imaging (T2WI) images, and categories (CP or non-CP). We then develop a bidirectional projection-based multi-modal fusion transformer (BiP-MFT), incorporating a Bidirectional Projection Fusion Module (BPFM) that integrates features between the five target regions on T1WI images and the lesions on T2WI images. Our BiP-MFT achieves a subject-level classification accuracy of 0.90, specificity of 0.87, and sensitivity of 0.94. It surpasses the best results of nine comparative methods, with improvements of 0.10, 0.08, and 0.09 in classification accuracy, specificity, and sensitivity, respectively. Our BPFM outperforms eight compared feature-fusion strategies using Transformer and U-Net backbones on our dataset. Ablation studies on the dataset annotations and model components justify the effectiveness of our annotation method and the rationale of the model design. The proposed dataset and code are available at https://github.com/Kai-Qi/BiP-MFT.
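
The abstract does not spell out the BPFM internals, so the sketch below is only one plausible reading of "bidirectional projection fusion": cross-attention run in both directions between the T1WI-region and T2WI-lesion token streams, then merged. All dimensions are invented; the authors' actual module is in their repository.

```python
# Generic bidirectional cross-attention fusion of two modality streams.
import torch
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.t1_queries_t2 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t2_queries_t1 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, t1_tokens, t2_tokens):
        a, _ = self.t1_queries_t2(t1_tokens, t2_tokens, t2_tokens)
        b, _ = self.t2_queries_t1(t2_tokens, t1_tokens, t1_tokens)
        pooled = torch.cat([a.mean(dim=1), b.mean(dim=1)], dim=-1)
        return self.merge(pooled)   # (batch, dim) fused representation

fusion = BidirectionalFusion()
out = fusion(torch.randn(2, 10, 64), torch.randn(2, 12, 64))  # -> (2, 64)
```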

A Study on Predicting the Efficacy of Posterior Lumbar Interbody Fusion Surgery Using a Deep Learning Radiomics Model.

Fang L, Pan Y, Zheng H, Li F, Zhang W, Liu J, Zhou Q

PubMed paper · May 30, 2025
This study seeks to develop a combined model integrating clinical data, radiomics, and deep learning (DL) for predicting the efficacy of posterior lumbar interbody fusion (PLIF) surgery. A retrospective review was conducted on 461 patients who underwent PLIF for degenerative lumbar diseases. These patients were partitioned into a training set (n=368) and a test set (n=93) in an 8:2 ratio. Clinical, radiomics, and DL models were constructed based on logistic regression and random forest, and a combined model was established by integrating these three models. All radiomics and DL features were extracted from sagittal T2-weighted images using 3D Slicer software. The least absolute shrinkage and selection operator (LASSO) method selected the optimal radiomics and DL features to build the models. In addition to analyzing the original region of interest (ROI), we also applied different degrees of mask expansion to the ROI to determine the optimal ROI. Model performance was evaluated using the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC); differences in AUC were compared with the DeLong test. Among the clinical characteristics, patient age, body weight, and preoperative intervertebral distance at the surgical segment were risk factors affecting the fusion outcome. The radiomics model based on MRI with a 10 mm expanded mask showed excellent performance (training set AUC=0.814, 95% CI: [0.761-0.866]; test set AUC=0.749, 95% CI: [0.631-0.866]). Among all single models, the DL model had the best diagnostic prediction performance, with AUC values of 0.995 (95% CI: [0.991-0.999]) for the training set and 0.803 (95% CI: [0.705-0.902]) for the test set. Compared with all single models, the combined model of clinical, radiomics, and DL features had the best diagnostic prediction performance, with AUC values of 0.993 (95% CI: [0.987-0.999]) for the training set and 0.866 (95% CI: [0.778-0.955]) for the test set. The proposed clinical feature-deep learning radiomics model can effectively predict the postoperative efficacy of PLIF surgery and has good clinical applicability.
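
The feature-selection step is LASSO. A minimal scikit-learn sketch follows, using an L1-penalized logistic regression as a stand-in for the study's exact pipeline, with a synthetic feature matrix sized like the training split (368 cases):

```python
# LASSO-style feature selection: the L1 penalty drives most coefficients
# to zero; the surviving features feed the downstream model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.standard_normal((368, 100))   # 368 cases x 100 radiomics/DL features
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(368)) > 0  # toy labels

X_std = StandardScaler().fit_transform(X)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_[0])
print(f"kept {selected.size} of {X.shape[1]} features:", selected[:10])
```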