
Brightness-Invariant Tracking Estimation in Tagged MRI

Zhangxing Bian, Shuwen Wei, Xiao Liang, Yuan-Chiao Lu, Samuel W. Remedios, Fangxu Xing, Jonghye Woo, Dzung L. Pham, Aaron Carass, Philip V. Bayly, Jiachen Zhuo, Ahmed Alshareef, Jerry L. Prince

arXiv preprint, May 23, 2025
Magnetic resonance (MR) tagging is an imaging technique for noninvasively tracking tissue motion in vivo by creating a visible pattern of magnetization saturation (tags) that deforms with the tissue. Due to longitudinal relaxation and progression to steady-state, the tag and tissue brightness change over time, which makes tracking with optical flow methods error-prone. Although Fourier methods can alleviate these problems, they are also sensitive to brightness changes as well as spectral spreading due to motion. To address these problems, we introduce the brightness-invariant tracking estimation (BRITE) technique for tagged MRI. BRITE disentangles the anatomy from the tag pattern in the observed tagged image sequence and simultaneously estimates the Lagrangian motion. The inherent ill-posedness of this problem is addressed by leveraging the expressive power of denoising diffusion probabilistic models to represent the probabilistic distribution of the underlying anatomy and the flexibility of physics-informed neural networks to estimate biologically-plausible motion. A set of tagged MR images of a gel phantom was acquired with various tag periods and imaging flip angles to demonstrate the impact of brightness variations and to validate our method. The results show that BRITE achieves more accurate motion and strain estimates as compared to other state-of-the-art methods, while also being resistant to tag fading.
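The failure mode described above, intensity-based metrics that confuse tag fading with motion, can be illustrated with a toy 1-D comparison of sum-of-squared-differences against normalized cross-correlation. This is only a sketch of the underlying problem: BRITE itself uses diffusion priors and physics-informed networks, not NCC.

```python
import numpy as np

# A 1-D "tag profile" and the same profile after tag fading:
# brightness drifts toward steady state (scale < 1 plus an offset),
# with no actual tissue motion.
template = np.sin(np.linspace(0, 4 * np.pi, 128))
faded = 0.4 * template + 0.3

def ssd(a, b):
    """Sum of squared differences: sensitive to brightness changes."""
    return float(np.sum((a - b) ** 2))

def ncc(a, b):
    """Normalized cross-correlation: invariant to affine intensity
    changes (a -> s*a + o), so fading alone does not reduce the score."""
    a0 = (a - a.mean()) / a.std()
    b0 = (b - b.mean()) / b.std()
    return float(np.mean(a0 * b0))

# Fading alone drives SSD far from zero, while NCC stays at its maximum.
print(ssd(template, faded), ncc(template, faded))
```

An SSD-matching tracker would interpret the fading as apparent displacement, while a brightness-invariant score reports a perfect match.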

Self-supervised feature learning for cardiac Cine MR image reconstruction.

Xu S, Fruh M, Hammernik K, Lingg A, Kubler J, Krumm P, Rueckert D, Gatidis S, Kustner T

PubMed, May 23, 2025
We propose a self-supervised feature learning assisted reconstruction (SSFL-Recon) framework for MRI reconstruction to address the limitation of existing supervised learning methods. Although recent deep learning-based methods have shown promising performance in MRI reconstruction, most require fully-sampled images for supervised learning, which is challenging in practice considering long acquisition times under respiratory or organ motion. Moreover, nearly all fully-sampled datasets are obtained from conventional reconstruction of mildly accelerated datasets, thus potentially biasing the achievable performance. The numerous undersampled datasets with different accelerations in clinical practice, hence, remain underutilized. To address these issues, we first train a self-supervised feature extractor on undersampled images to learn sampling-insensitive features. The pre-learned features are subsequently embedded in the self-supervised reconstruction network to assist in removing artifacts. Experiments were conducted retrospectively on an in-house 2D cardiac Cine dataset, including 91 cardiovascular patients and 38 healthy subjects. The results demonstrate that the proposed SSFL-Recon framework outperforms existing self-supervised MRI reconstruction methods and even exhibits comparable or better performance to supervised learning up to 16× retrospective undersampling. The feature learning strategy can effectively extract global representations, which have proven beneficial in removing artifacts and increasing generalization ability during reconstruction.
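Contrastive pretraining of the kind described above is commonly implemented with an InfoNCE objective, where two differently undersampled views of the same slice form a positive pair. A minimal NumPy sketch follows; the loss form, batch construction, and temperature are generic assumptions, not details from the paper.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor embedding should be most similar to its
    own positive (e.g. another undersampled view of the same slice)
    among all positives in the batch. Matched pairs sit on the diagonal."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature              # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                    # toy feature embeddings
aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))  # matched views
shuffled = info_nce(z, rng.permutation(z))                  # mismatched views
print(aligned, shuffled)
```

Minimizing this loss pulls embeddings of different undersamplings of the same content together, which is the sampling-insensitivity the abstract describes.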

Multi-view contrastive learning and symptom extraction insights for medical report generation.

Bai Q, Zou X, Alhaskawi A, Dong Y, Zhou H, Ezzi SHA, Kota VG, AbdullaAbdulla MHH, Abdalbary SA, Hu X, Lu H

PubMed, May 23, 2025
The task of generating medical reports automatically is of paramount importance in modern healthcare, offering a substantial reduction in the workload of radiologists and accelerating the processes of clinical diagnosis and treatment. Current challenges include handling limited sample sizes and interpreting intricate multi-modal and multi-view medical data. We conducted this investigation to improve accuracy and efficiency for radiologists. This study presents a novel methodology for medical report generation that leverages Multi-View Contrastive Learning (MVCL) applied to MRI data, combined with a Symptom Consultant (SC) for extracting medical insights, to improve the quality and efficiency of automated medical report generation. We introduce an advanced MVCL framework that maximizes the potential of multi-view MRI data to enhance visual feature extraction. Alongside it, the SC component is employed to distill critical medical insights from symptom descriptions. These components are integrated within a transformer decoder architecture, which is then applied to the Deep Wrist dataset for model training and evaluation. Our experimental analysis on the Deep Wrist dataset reveals that the proposed integration of MVCL and SC significantly outperforms the baseline model in terms of accuracy and relevance of the generated medical reports. The results indicate that our approach is particularly effective in capturing and utilizing the complex information inherent in multi-modal and multi-view medical datasets. The combination of MVCL and SC constitutes a powerful approach to medical report generation, addressing the existing challenges in the field. The demonstrated superiority of our model over traditional methods holds promise for substantial improvements in clinical diagnosis and automated report generation, indicating a significant stride forward in medical technology.

Anatomy-Guided Multitask Learning for MRI-Based Classification of Placenta Accreta Spectrum and its Subtypes

Hai Jiang, Qiongting Liu, Yuanpin Zhou, Jiawei Pan, Ting Song, Yao Lu

arXiv preprint, May 23, 2025
Placenta Accreta Spectrum Disorders (PAS) pose significant risks during pregnancy, frequently leading to postpartum hemorrhage during cesarean deliveries and other severe clinical complications, with bleeding severity correlating to the degree of placental invasion. Consequently, accurate prenatal diagnosis of PAS and its subtypes-placenta accreta (PA), placenta increta (PI), and placenta percreta (PP)-is crucial. However, existing guidelines and methodologies predominantly focus on the presence of PAS, with limited research addressing subtype recognition. Additionally, previous multi-class diagnostic efforts have primarily relied on inefficient two-stage cascaded binary classification tasks. In this study, we propose a novel convolutional neural network (CNN) architecture designed for efficient one-stage multiclass diagnosis of PAS and its subtypes, based on 4,140 magnetic resonance imaging (MRI) slices. Our model features two branches: the main classification branch utilizes a residual block architecture comprising multiple residual blocks, while the second branch integrates anatomical features of the uteroplacental area and the adjacent uterine serous layer to enhance the model's attention during classification. Furthermore, we implement a multitask learning strategy to leverage both branches effectively. Experiments conducted on a real clinical dataset demonstrate that our model achieves state-of-the-art performance.
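The two-branch multitask strategy described above reduces, at training time, to a weighted sum of the main PAS-subtype classification loss and an auxiliary anatomy-branch loss. The sketch below shows that generic hard-parameter-sharing formulation; the task heads, label sets, and weighting are illustrative assumptions, since the abstract does not specify the loss design.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy over a batch."""
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(log_probs[np.arange(len(labels)), labels]))

def multitask_loss(main_logits, main_labels, aux_logits, aux_labels, lam=0.5):
    """Weighted sum of the main multiclass loss and an auxiliary
    anatomy-branch loss; lam balances the two tasks (assumed value)."""
    return (cross_entropy(main_logits, main_labels)
            + lam * cross_entropy(aux_logits, aux_labels))

rng = np.random.default_rng(1)
main = rng.normal(size=(6, 4))           # e.g. non-PAS / PA / PI / PP logits
aux = rng.normal(size=(6, 2))            # hypothetical anatomy-branch logits
y_main = np.array([0, 1, 2, 3, 0, 1])
y_aux = np.array([0, 1, 0, 1, 0, 1])
total = multitask_loss(main, y_main, aux, y_aux)
print(total)
```

Both branches update the shared backbone, so the anatomical task acts as a regularizer steering attention toward the uteroplacental region.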

MRI-based habitat analysis for Intratumoral heterogeneity quantification combined with deep learning for HER2 status prediction in breast cancer.

Li QY, Liang Y, Zhang L, Li JH, Wang BJ, Wang CF

PubMed, May 23, 2025
Human epidermal growth factor receptor 2 (HER2) is a crucial determinant of breast cancer prognosis and treatment options. The study aimed to establish an MRI-based habitat model to quantify intratumoral heterogeneity (ITH) and evaluate its potential in predicting HER2 expression status. Data from 340 patients with pathologically confirmed invasive breast cancer were retrospectively analyzed. Two tasks were designed for this study: Task 1 distinguished between HER2-positive and HER2-negative breast cancer. Task 2 distinguished between HER2-low and HER2-zero breast cancer. We developed the ITH, deep learning (DL), and radiomics signatures based on the features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Clinical independent predictors were determined by multivariable logistic regression. Finally, a combined model was constructed by integrating the clinical independent predictors, ITH signature, and DL signature. The area under the receiver operating characteristic curve (AUC) served as the standard for assessing the performance of models. In task 1, the ITH signature performed well in the training set (AUC = 0.855) and the validation set (AUC = 0.842). In task 2, the AUCs of the ITH signature were 0.844 and 0.840, respectively, which still showed good prediction performance. In the validation sets of both tasks, the combined model exhibited the best prediction performance, with AUCs of 0.912 and 0.917 respectively, making it the optimal model. A combined model integrating clinical independent predictors, ITH signature, and DL signature can predict HER2 expression status preoperatively and noninvasively.
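Habitat analysis of the kind described typically clusters voxel-level DCE-MRI features into subregions ("habitats") and summarizes their volume fractions; the Shannon entropy of those fractions is one common ITH index. A self-contained sketch follows; the features, cluster count, and ITH summary are assumptions, as the abstract does not detail them.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Tiny k-means (Lloyd's algorithm) for grouping voxel-level
    enhancement features into intratumoral habitats."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def habitat_entropy(labels, k):
    """Shannon entropy of habitat volume fractions: a simple ITH index
    (higher = more evenly mixed habitats = more heterogeneous)."""
    frac = np.bincount(labels, minlength=k) / len(labels)
    frac = frac[frac > 0]
    return float(-(frac * np.log(frac)).sum())

rng = np.random.default_rng(0)
# Synthetic tumor: two distinct enhancement populations -> heterogeneous
voxels = np.vstack([rng.normal(0.0, 0.1, size=(100, 2)),
                    rng.normal(1.0, 0.1, size=(100, 2))])
labels = kmeans(voxels, k=2)
print(habitat_entropy(labels, k=2))
```

Such habitat-level summaries are what the ITH signature feeds into the downstream HER2 classifier alongside the DL and clinical features.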

A Foundation Model Framework for Multi-View MRI Classification of Extramural Vascular Invasion and Mesorectal Fascia Invasion in Rectal Cancer

Yumeng Zhang, Zohaib Salahuddin, Danial Khan, Shruti Atul Mali, Henry C. Woodruff, Sina Amirrajab, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Luis Marti-Bonmati, Philippe Lambin

arXiv preprint, May 23, 2025
Background: Accurate MRI-based identification of extramural vascular invasion (EVI) and mesorectal fascia invasion (MFI) is pivotal for risk-stratified management of rectal cancer, yet visual assessment is subjective and vulnerable to inter-institutional variability. Purpose: To develop and externally evaluate a multicenter, foundation-model-driven framework that automatically classifies EVI and MFI on axial and sagittal T2-weighted MRI. Methods: This retrospective study used 331 pre-treatment rectal cancer MRI examinations from three European hospitals. After TotalSegmentator-guided rectal patch extraction, a self-supervised frequency-domain harmonization pipeline was trained to minimize scanner-related contrast shifts. Four classifiers were compared: ResNet50, SeResNet, the universal biomedical pretrained transformer (UMedPT) with a lightweight MLP head, and a logistic-regression variant using frozen UMedPT features (UMedPT_LR). Results: UMedPT_LR achieved the best EVI detection when axial and sagittal features were fused (AUC = 0.82; sensitivity = 0.75; F1 score = 0.73), surpassing the CHAIMELEON Grand Challenge winner (AUC = 0.74). The highest MFI performance was attained by UMedPT on axial harmonized images (AUC = 0.77), surpassing the CHAIMELEON Grand Challenge winner (AUC = 0.75). Frequency-domain harmonization improved MFI classification but variably affected EVI performance. Conventional CNNs (ResNet50, SeResNet) underperformed, especially in F1 score and balanced accuracy. Conclusion: These findings demonstrate that combining foundation model features, harmonization, and multi-view fusion significantly enhances diagnostic performance in rectal MRI.
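The UMedPT_LR variant, a linear probe over frozen foundation-model features, can be sketched as a logistic regression trained on concatenated axial and sagittal embeddings. The features below are synthetic and the gradient-descent fit is illustrative; the paper presumably used a standard solver on real UMedPT embeddings.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Binary logistic regression by batch gradient descent:
    a linear probe on frozen, fused multi-view features."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                                # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(0)
axial = rng.normal(size=(64, 8))                    # frozen axial-view features
sagittal = rng.normal(size=(64, 8))                 # frozen sagittal-view features
fused = np.concatenate([axial, sagittal], axis=1)   # simple late fusion
y = (fused[:, 0] + fused[:, 8] > 0).astype(float)   # toy label needing both views
w, b = train_logreg(fused, y)
acc = np.mean(((fused @ w + b) > 0) == (y > 0.5))
print(acc)
```

Because the backbone stays frozen, only the 16 weights and one bias are learned, which is why such probes remain data-efficient on small multicenter cohorts.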

Explainable Anatomy-Guided AI for Prostate MRI: Foundation Models and In Silico Clinical Trials for Virtual Biopsy-based Risk Assessment

Danial Khan, Zohaib Salahuddin, Yumeng Zhang, Sheng Kuang, Shruti Atul Mali, Henry C. Woodruff, Sina Amirrajab, Rachel Cavill, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Adrian Galiana-Bordera, Paula Jimenez Gomez, Luis Marti-Bonmati, Philippe Lambin

arXiv preprint, May 23, 2025
We present a fully automated, anatomically guided deep learning pipeline for prostate cancer (PCa) risk stratification using routine MRI. The pipeline integrates three key components: an nnU-Net module for segmenting the prostate gland and its zones on axial T2-weighted MRI; a classification module based on the UMedPT Swin Transformer foundation model, fine-tuned on 3D patches with optional anatomical priors and clinical data; and a VAE-GAN framework for generating counterfactual heatmaps that localize decision-driving image regions. The system was developed using 1,500 PI-CAI cases for segmentation and 617 biparametric MRIs with metadata from the CHAIMELEON challenge for classification (split into 70% training, 10% validation, and 20% testing). Segmentation achieved mean Dice scores of 0.95 (gland), 0.94 (peripheral zone), and 0.92 (transition zone). Incorporating gland priors improved AUC from 0.69 to 0.72, with a three-scale ensemble achieving top performance (AUC = 0.79, composite score = 0.76), outperforming the 2024 CHAIMELEON challenge winners. Counterfactual heatmaps reliably highlighted lesions within segmented regions, enhancing model interpretability. In a prospective multi-center in-silico trial with 20 clinicians, AI assistance increased diagnostic accuracy from 0.72 to 0.77 and Cohen's kappa from 0.43 to 0.53, while reducing review time per case by 40%. These results demonstrate that anatomy-aware foundation models with counterfactual explainability can enable accurate, interpretable, and efficient PCa risk assessment, supporting their potential use as virtual biopsies in clinical practice.
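The segmentation figures quoted above are Dice similarity coefficients, which are straightforward to compute from binary masks; a minimal sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2*|A∩B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum() + eps))

# Two overlapping square "gland" masks on a small grid (toy example)
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True   # 36 voxels
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True   # 36 voxels
print(dice(a, b))  # overlap is 25 voxels -> 2*25/72 ≈ 0.694
```

The reported 0.95 gland and 0.92 transition-zone scores thus indicate near-complete voxelwise agreement with the reference contours.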

Highlights of the Society for Cardiovascular Magnetic Resonance (SCMR) 2025 Conference: leading the way to accessible, efficient and sustainable CMR.

Prieto C, Allen BD, Azevedo CF, Lima BB, Lam CZ, Mills R, Huisman M, Gonzales RA, Weingärtner S, Christodoulou AG, Rochitte C, Markl M

PubMed, May 23, 2025
The 28th Annual Scientific Sessions of the Society for Cardiovascular Magnetic Resonance (SCMR) took place from January 29 to February 1, 2025, in Washington, D.C. SCMR 2025 brought together a diverse group of 1714 cardiologists, radiologists, scientists, and technologists from more than 80 countries to discuss emerging trends and the latest developments in cardiovascular magnetic resonance (CMR). The conference centered on the theme "Leading the Way to Accessible, Sustainable, and Efficient CMR," highlighting innovations aimed at making CMR more clinically efficient, widely accessible, and environmentally sustainable. The program featured 728 abstracts and case presentations with an acceptance rate of 86% (728/849), including Early Career Award abstracts, oral abstracts, oral cases and rapid-fire sessions, covering a broad range of CMR topics. It also offered engaging invited lectures across eight main parallel tracks and included four plenary sessions, two gold medalists, and one keynote speaker, with a total of 826 faculty participating. Focused sessions on accessibility, efficiency, and sustainability provided a platform for discussing current challenges and exploring future directions, while the newly introduced CMR Innovations Track showcased innovative session formats and fostered greater collaboration between researchers, clinicians, and industry. For the first time, SCMR 2025 also offered the opportunity for attendees to obtain CMR Level 1 Training Verification, integrated into the program. Additionally, expert case reading sessions and hands-on interactive workshops allowed participants to engage with real-world clinical scenarios and deepen their understanding through practical experience. Key highlights included plenary sessions on a variety of important topics, such as expanding boundaries, health equity, women's cardiovascular disease and a patient-clinician testimonial that emphasized the profound value of patient-centered research and collaboration. 
The scientific sessions covered a wide range of topics, from clinical applications in cardiomyopathies, congenital heart disease, and vascular imaging to women's heart health and environmental sustainability. Technical topics included novel reconstruction, motion correction, quantitative CMR, contrast agents, novel field strengths, and artificial intelligence applications, among many others. This paper summarizes the key themes and discussions from SCMR 2025, highlighting the collaborative efforts that are driving the future of CMR and underscoring the Society's unwavering commitment to research, education, and clinical excellence.

Artificial Intelligence enhanced R1 maps can improve lesion detection in focal epilepsy in children

Doumou, G., D'Arco, F., Figini, M., Lin, H., Lorio, S., Piper, R., O'Muircheartaigh, J., Cross, H., Weiskopf, N., Alexander, D., Carmichael, D. W.

medRxiv preprint, May 23, 2025
Background and purpose: MRI is critical for the detection of subtle cortical pathology in epilepsy surgery assessment. This can be aided by improved MRI quality and resolution using ultra-high field (7T) scanners, but poor access and long scan durations limit widespread use, particularly in a paediatric setting. AI-based learning approaches may provide similar information by enhancing data obtained with conventional MRI (3T). We used a convolutional neural network trained on matched 3T and 7T images to enhance quantitative R1 maps (longitudinal relaxation rate) obtained at 3T in paediatric epilepsy patients and to determine their potential clinical value for lesion identification. Materials and methods: A 3D U-Net was trained using paired patches from 3T and 7T R1 maps from n=10 healthy volunteers. The trained network was applied to enhance paediatric focal epilepsy 3T R1 images from a different scanner/site (n=17 MRI lesion-positive / n=14 MR-negative). Radiological review assessed image quality, as well as lesion identification and visualization on the enhanced maps in comparison to the 3T R1 maps, without clinical information. Lesion appearance was then compared to 3D-FLAIR. Results: AI-enhanced R1 maps were superior in image quality to the original 3T R1 maps while preserving and enhancing the visibility of lesions. After exclusion of 5/31 patients (due to movement artefact or incomplete data), lesions were detected in AI-enhanced R1 maps for 14/15 (93%) MR-positive and 4/11 (36%) MR-negative patients. Conclusion: AI-enhanced R1 maps improved the visibility of lesions in MR-positive patients and provided higher sensitivity in the MR-negative group compared to either the original 3T R1 maps or 3D-FLAIR. This provides promising initial evidence that 3T quantitative maps can outperform conventional 3T imaging via enhancement by an AI model trained on 7T MRI data, without the need for pathology-specific information.
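The paired-patch training data described above can be sketched as matched 3-D patch extraction from co-registered low- and high-field volumes; the patch size, stride, and synthetic volumes below are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def paired_patches(low, high, patch=8, stride=8):
    """Extract spatially matched 3-D patches from two co-registered
    volumes (e.g. a 3T R1 map and its 7T counterpart), yielding
    input/target pairs for training an enhancement network."""
    assert low.shape == high.shape
    lo, hi = [], []
    nz, ny, nx = low.shape
    for z in range(0, nz - patch + 1, stride):
        for y in range(0, ny - patch + 1, stride):
            for x in range(0, nx - patch + 1, stride):
                lo.append(low[z:z+patch, y:y+patch, x:x+patch])
                hi.append(high[z:z+patch, y:y+patch, x:x+patch])
    return np.stack(lo), np.stack(hi)

rng = np.random.default_rng(0)
vol_3t = rng.normal(size=(16, 16, 16))                  # stand-in 3T R1 map
vol_7t = vol_3t + 0.1 * rng.normal(size=vol_3t.shape)   # stand-in 7T target
lo, hi = paired_patches(vol_3t, vol_7t)
print(lo.shape)  # (8, 8, 8, 8): a 2x2x2 grid of 8^3 patches
```

Training on patches rather than whole volumes multiplies the effective sample count, which matters when only n=10 paired volunteer scans are available.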

Renal Transplant Survival Prediction From Unsupervised Deep Learning-Based Radiomics on Early Dynamic Contrast-Enhanced MRI.

Milecki L, Bodard S, Kalogeiton V, Poinard F, Tissier AM, Boudhabhay I, Correas JM, Anglicheau D, Vakalopoulou M, Timsit MO

PubMed, May 23, 2025
End-stage renal disease is characterized by an irreversible decline in kidney function. Despite the risk of chronic dysfunction of the transplanted kidney, renal transplantation is considered the most effective solution among available treatment options. Clinical attributes for graft survival prediction, such as allocation variables or results of pathological examinations, have been widely studied. Nevertheless, medical imaging is used clinically only to assess current transplant status. This study investigated the use of unsupervised deep learning-based algorithms to identify rich radiomic features that may be linked to graft survival from early dynamic contrast-enhanced magnetic resonance imaging data of renal transplants. A retrospective cohort of 108 transplanted patients (mean age 50 ± 15, 67 men) undergoing systematic magnetic resonance imaging follow-up examinations (2013 to 2015) was used to train deep convolutional neural network models based on an unsupervised contrastive learning approach. Five-year graft survival analysis was performed from the obtained artificial intelligence radiomics features using penalized Cox models and Kaplan-Meier estimates. Using a validation set of 48 patients (mean age 54 ± 13, 30 men) with 1-month post-transplantation magnetic resonance imaging examinations, the proposed approach demonstrated promising 5-year graft survival prediction capability, with a 72.7% concordance index from the artificial intelligence radiomics features. Unsupervised clustering of these radiomics features enabled statistically significant stratification of patients (p=0.029). This proof-of-concept study showed the promising capability of artificial intelligence algorithms to extract relevant radiomics features that enable renal transplant survival prediction. Further studies are needed to demonstrate the robustness of this technique and to identify appropriate procedures for integrating such an approach into multimodal and clinical settings.
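The 72.7% figure is Harrell's concordance index, the standard ranking metric for survival models such as the penalized Cox fit described above. A minimal implementation on toy data:

```python
import numpy as np

def concordance_index(times, events, risk):
    """Harrell's C-index: the fraction of usable pairs in which the
    patient with the earlier observed event also has the higher
    predicted risk (ties in risk count 0.5). 0.5 = random, 1.0 = perfect."""
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                  # only observed events can anchor a pair
        for j in range(n):
            if times[i] < times[j]:   # j outlived patient i's event time
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

times = np.array([2.0, 5.0, 3.0, 8.0])   # follow-up times (toy data)
events = np.array([True, True, False, True])  # False = censored
risk = np.array([0.9, 0.4, 0.5, 0.1])    # higher risk -> earlier event
print(concordance_index(times, events, risk))
```

Censored patients (here patient 3) contribute only as the longer-surviving member of a pair, which is how the metric handles incomplete follow-up.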