Page 17 of 72720 results

High-performance Open-source AI for Breast Cancer Detection and Localization in MRI.

Hirsch L, Sutton EJ, Huang Y, Kayis B, Hughes M, Martinez D, Makse HA, Parra LC

PubMed · Jun 25, 2025
Accepted for publication in Radiology: Artificial Intelligence as a "Just Accepted" article; it will undergo copyediting, layout, and proof review before final publication. Purpose To develop and evaluate an open-source deep learning model for detection and localization of breast cancer on MRI. Materials and Methods In this retrospective study, a deep learning model for breast cancer detection and localization was trained on the largest breast MRI dataset to date. Data included all breast MRIs conducted at a tertiary cancer center in the United States between 2002 and 2019. The model was validated on sagittal MRIs from the primary site (n = 6,615 breasts). Generalizability was assessed by evaluating model performance on axial data from the primary site (n = 7,058 breasts) and a second clinical site (n = 1,840 breasts). Results The primary site dataset included 30,672 sagittal MRI examinations (52,598 breasts) from 9,986 female patients (mean [SD] age, 53 [11] years). The model achieved an area under the receiver operating characteristic curve (AUC) of 0.95 for detecting cancer at the primary site. At 90% specificity (5,717/6,353), model sensitivity was 83% (217/262), comparable to historical performance data for radiologists. The model generalized well to axial examinations, achieving an AUC of 0.92 on data from the same clinical site and 0.92 on data from a secondary site. The model accurately located the tumor in 88.5% (232/262) of sagittal images, 92.8% (272/293) of axial images from the primary site, and 87.7% (807/920) of secondary-site axial images. Conclusion The model demonstrated state-of-the-art performance on breast cancer detection. Code and weights are openly available to stimulate further development and validation.
©RSNA, 2025.
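The operating point quoted above (83% sensitivity at 90% specificity) can be derived from continuous model scores by thresholding at a quantile of the negative-class score distribution. A minimal numpy sketch under that assumption (not the authors' released code; their repository may compute the ROC differently):

```python
import numpy as np

def sensitivity_at_specificity(y_true, scores, target_spec=0.90):
    """Pick the threshold whose specificity is ~target_spec, then
    report the sensitivity at that operating point."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    neg_scores = scores[~y_true]
    # The target_spec quantile of negative scores: the fraction of
    # negatives at or below it is ~target_spec, i.e. the specificity.
    thresh = np.quantile(neg_scores, target_spec)
    sensitivity = float(np.mean(scores[y_true] > thresh))
    specificity = float(np.mean(neg_scores <= thresh))
    return sensitivity, specificity, thresh
```

With well-separated score distributions, the returned specificity tracks the requested target closely; the sensitivity is then read off directly.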

Weighted Mean Frequencies: a handcraft Fourier feature for 4D Flow MRI segmentation

Simon Perrin, Sébastien Levilly, Huajun Sun, Harold Mouchère, Jean-Michel Serfaty

arXiv preprint · Jun 25, 2025
In recent decades, 4D Flow MRI has enabled quantification of velocity fields within a volume of interest and along the cardiac cycle. However, the limited resolution of these images and the presence of noise make the derived biomarkers unreliable; recent studies indicate that biomarkers such as wall shear stress are particularly affected by poor vessel-segmentation resolution. Phase Contrast Magnetic Resonance Angiography (PC-MRA) is the state-of-the-art method for facilitating segmentation. The objective of this work is to introduce a new handcrafted feature that provides a novel visualisation of 4D Flow MRI images and is useful for the segmentation task. This feature, termed Weighted Mean Frequencies (WMF), reveals, in three dimensions, the regions whose voxels are traversed by pulsatile flow; it is representative of the hull of all pulsatile-velocity voxels. Its value is illustrated by two experiments in which 4D Flow MRI images were segmented using optimal thresholding and deep learning methods. The results demonstrate a substantial enhancement in IoU and Dice, with respective increases of 0.12 and 0.13 over the PC-MRA feature in the deep learning task. This feature could inform future segmentation processes in other vascular regions, such as the heart or the brain.
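One plausible reading of the WMF feature is a per-voxel mean temporal frequency weighted by spectral power: voxels traversed by pulsatile flow concentrate power at nonzero cardiac frequencies, while static tissue scores near zero. A hedged numpy sketch of that interpretation (the paper's exact weighting may differ):

```python
import numpy as np

def weighted_mean_frequency(velocity, axis=-1):
    """Per-voxel mean temporal frequency weighted by spectral power.

    velocity: array (..., T) of velocity magnitude over the cardiac
    cycle. Returns an array with the time axis removed."""
    T = velocity.shape[axis]
    spectrum = np.abs(np.fft.rfft(velocity, axis=axis))
    freqs = np.fft.rfftfreq(T)                  # cycles per frame
    spectrum = np.moveaxis(spectrum, axis, -1)
    power = spectrum[..., 1:] ** 2              # drop the DC term
    weights = power / (power.sum(axis=-1, keepdims=True) + 1e-12)
    return (weights * freqs[1:]).sum(axis=-1)
```

A pure sinusoid at k cycles per cycle-length maps to WMF = k/T, and a constant (non-pulsatile) voxel maps to ~0, which is the contrast the feature exploits.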

Radiomic fingerprints for knee MR images assessment

Yaxi Chen, Simin Ni, Shaheer U. Saeed, Aleksandra Ivanova, Rikin Hargunani, Jie Huang, Chaozong Liu, Yipeng Hu

arXiv preprint · Jun 25, 2025
Accurate interpretation of knee MRI scans relies on expert clinical judgment, which suffers from high variability and limited scalability. Existing radiomic approaches use a fixed set of radiomic features (the signature), selected at the population level and applied uniformly to all patients. While interpretable, these signatures are often too constrained to represent individual pathological variations. As a result, conventional radiomic approaches underperform recent end-to-end deep learning (DL) alternatives that forgo interpretable radiomic features. We argue that the individual-agnostic nature of current radiomic selection is not central to its interpretability but is responsible for the poor generalization in our application. Here, we propose a novel radiomic fingerprint framework in which a radiomic feature set (the fingerprint) is dynamically constructed for each patient by a DL model. Unlike existing radiomic signatures, our fingerprints are derived on a per-patient basis by predicting feature relevance over a large radiomic feature pool and selecting only those features that are predictive of clinical conditions for the individual patient. The radiomic-selecting model is trained jointly with a low-dimensional (and thus relatively explainable) logistic regression for downstream classification. We validate our method across multiple diagnostic tasks, including general knee abnormalities, anterior cruciate ligament (ACL) tears, and meniscus tears, demonstrating comparable or superior diagnostic accuracy relative to state-of-the-art end-to-end DL models. More importantly, we show that the interpretability inherent in our approach facilitates meaningful clinical insights and potential biomarker discovery, supported by detailed discussion and quantitative and qualitative analysis of real-world clinical cases.
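The per-patient selection step can be pictured as a small network that scores the relevance of every feature in the pool, keeps a patient-specific top-k subset, and feeds the gated features to a shared logistic regression. The sketch below is a hypothetical forward pass with made-up shapes (200 candidate features, top 20 kept, untrained random weights); it illustrates the data flow, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical shapes: 200 candidate radiomic features, 16 hidden units.
N_FEATURES, HIDDEN = 200, 16
W1 = rng.normal(0, 0.1, (N_FEATURES, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, N_FEATURES))
beta = rng.normal(0, 0.1, N_FEATURES)   # shared logistic-regression weights

def fingerprint_forward(x, k=20):
    """Predict per-patient feature relevance, keep the top-k features
    (the 'fingerprint'), classify with a shared logistic regression."""
    relevance = sigmoid(np.tanh(x @ W1) @ W2)   # one score per feature
    keep = np.argsort(relevance)[-k:]           # patient-specific subset
    mask = np.zeros_like(x)
    mask[keep] = 1.0
    prob = sigmoid((x * mask * relevance) @ beta)
    return prob, keep
```

Because the mask depends on the patient's own features, two patients can end up with different fingerprints while sharing the same downstream classifier, which is the core distinction from a fixed population-level signature.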

Alterations in the functional MRI-based temporal brain organisation in individuals with obesity.

Lee S, Namgung JY, Han JH, Park BY

PubMed · Jun 25, 2025
Obesity is associated with functional alterations in the brain. Although changes in the spatial organisation of the brains of individuals with obesity have been widely studied, the temporal dynamics of their brains remain poorly understood. In this study, we therefore investigated variations in the intrinsic neural timescale (INT) across different degrees of obesity using resting-state functional and diffusion magnetic resonance imaging data from the enhanced Nathan Kline Institute Rockland Sample database. We examined the relationship between the INT and obesity phenotypes using supervised machine learning, controlling for age and sex. To further explore the structure-function characteristics of these regions, we assessed modular network properties by analysing the participation coefficients and within-module degree derived from the structure-function coupling matrices. Finally, the INT values of the identified regions were used to predict eating behaviour traits. A significant negative correlation between the INT and obesity phenotypes was observed, particularly in the default mode, limbic and reward networks. We also found a negative association with the participation coefficients, suggesting that shorter INT values in higher-order association areas are related to reduced network integration. Moreover, the INT values of these identified regions moderately predicted eating behaviours, underscoring the potential of the INT as a candidate marker for obesity and eating behaviours. These findings provide insight into the temporal organisation of neural activity in obesity, highlighting the role of specific brain networks in shaping behavioural outcomes.
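A common estimator of the intrinsic neural timescale sums the positive initial portion of a region's autocorrelation function, scaled by the repetition time; the study's exact definition may differ. A numpy sketch of that estimator:

```python
import numpy as np

def intrinsic_neural_timescale(ts, tr=1.0):
    """Estimate INT as the sum of autocorrelation values over positive
    lags, up to (not including) the first non-positive lag, scaled by
    the repetition time. A common estimator, not necessarily the
    paper's exact definition."""
    ts = np.asarray(ts, float)
    ts = ts - ts.mean()
    acf = np.correlate(ts, ts, mode="full")[len(ts) - 1:]
    acf = acf / acf[0]                       # normalize so lag 0 == 1
    nonpos = np.where(acf[1:] <= 0)[0]
    cutoff = nonpos[0] + 1 if nonpos.size else len(acf)
    return tr * acf[1:cutoff].sum()
```

Temporally smooth (slowly fluctuating) signals keep their autocorrelation positive over many lags and thus get long timescales; white noise decorrelates almost immediately and gets a timescale near zero.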

Assessment of Robustness of MRI Radiomic Features in the Abdomen: Impact of Deep Learning Reconstruction and Accelerated Acquisition.

Zhong J, Xing Y, Hu Y, Liu X, Dai S, Ding D, Lu J, Yang J, Song Y, Lu M, Nickel D, Lu W, Zhang H, Yao W

PubMed · Jun 25, 2025
The objective of this study is to investigate the impact of deep learning reconstruction and accelerated acquisition on the reproducibility and variability of radiomic features in abdominal MRI. Seventeen volunteers were prospectively included and underwent abdominal MRI on a 3-T scanner for axial T2-weighted, axial T2-weighted fat-suppressed, and coronal T2-weighted sequences. Each sequence was scanned four times: clinical reference acquisition with standard reconstruction, clinical reference acquisition with deep learning reconstruction, accelerated acquisition with standard reconstruction, and accelerated acquisition with deep learning reconstruction. Regions of interest were drawn for ten anatomical sites with rigid registration. Ninety-three radiomic features were extracted via PyRadiomics after z-score normalization. Reproducibility was evaluated against the clinical reference acquisition with standard reconstruction using the intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC). Variability among the four scans was assessed by the coefficient of variation (CV) and the quartile coefficient of dispersion (QCD). The median (first, third quartile) overall ICC and CCC values were 0.451 (0.305, 0.583) and 0.450 (0.304, 0.582), respectively, and the overall percentage of radiomic features with ICC > 0.90 and with CCC > 0.90 was 8.1% in each case. The median (first, third quartile) overall CV and QCD values were 9.4% (4.9%, 17.2%) and 4.9% (2.5%, 9.7%), respectively; the overall percentages of radiomic features with CV < 10% and with QCD < 10% were 51.9% and 75.0%, which was considered acceptable. Irrespective of clinical significance, deep learning reconstruction and accelerated acquisition led to poor reproducibility of radiomic features, although more than half of the radiomic features varied within an acceptable range.
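The agreement and dispersion statistics used here (CCC, CV, QCD) follow standard textbook definitions and are easy to reproduce; a numpy sketch (ICC is omitted because it requires a variance-components model):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two scans."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def cv(values):
    """Coefficient of variation across repeated scans (%)."""
    values = np.asarray(values, float)
    return 100 * values.std(ddof=1) / abs(values.mean())

def qcd(values):
    """Quartile coefficient of dispersion (%): (Q3 - Q1) / (Q3 + Q1)."""
    q1, q3 = np.percentile(values, [25, 75])
    return 100 * (q3 - q1) / (q3 + q1)
```

CCC penalizes both scatter and systematic bias (the `(mx - my)**2` term), which is why it is preferred over plain correlation for scan-rescan agreement.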

Few-Shot Learning for Prostate Cancer Detection on MRI: Comparative Analysis with Radiologists' Performance.

Yamagishi Y, Baba Y, Suzuki J, Okada Y, Kanao K, Oyama M

PubMed · Jun 25, 2025
Deep-learning models for prostate cancer detection typically require large datasets, limiting clinical applicability across institutions due to domain shift issues. This study aimed to develop a few-shot learning deep-learning model for prostate cancer detection on multiparametric MRI that requires minimal training data and to compare its diagnostic performance with experienced radiologists. In this retrospective study, we used 99 cases (80 positive, 19 negative) of biopsy-confirmed prostate cancer (2017-2022), with 20 cases for training, 5 for validation, and 74 for testing. A 2D transformer model was trained on T2-weighted, diffusion-weighted, and apparent diffusion coefficient map images. Model predictions were compared with two radiologists using Matthews correlation coefficient (MCC) and F1 score, with 95% confidence intervals (CIs) calculated via bootstrap method. The model achieved an MCC of 0.297 (95% CI: 0.095-0.474) and F1 score of 0.707 (95% CI: 0.598-0.847). Radiologist 1 had an MCC of 0.276 (95% CI: 0.054-0.484) and F1 score of 0.741; Radiologist 2 had an MCC of 0.504 (95% CI: 0.289-0.703) and F1 score of 0.871, showing that the model performance was comparable to Radiologist 1. External validation on the Prostate158 dataset revealed that ImageNet pretraining substantially improved model performance, increasing study-level ROC-AUC from 0.464 to 0.636 and study-level PR-AUC from 0.637 to 0.773 across all architectures. Our findings demonstrate that few-shot deep-learning models can achieve clinically relevant performance when using pretrained transformer architectures, offering a promising approach to address domain shift challenges across institutions.
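The MCC and its percentile-bootstrap confidence interval can be reproduced from per-case labels and predictions; a numpy sketch (the study does not state its bootstrap settings here, so the resample count below is a placeholder):

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def bootstrap_ci(y_true, y_pred, metric=mcc, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample cases with replacement and
    take the alpha/2 and 1 - alpha/2 quantiles of the statistic."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = [metric(y_true[idx], y_pred[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

Resampling whole cases (rather than predictions independently) preserves the pairing between label and prediction, which is what makes the interval valid for a paired metric like MCC.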

Contrast-enhanced image synthesis using latent diffusion model for precise online tumor delineation in MRI-guided adaptive radiotherapy for brain metastases.

Ma X, Ma Y, Wang Y, Li C, Liu Y, Chen X, Dai J, Bi N, Men K

PubMed · Jun 25, 2025
Magnetic resonance imaging-guided adaptive radiotherapy (MRIgART) is a promising technique for long-course radiotherapy of large-volume brain metastases (BM), owing to its capacity to track tumor changes throughout the treatment course. Contrast-enhanced T1-weighted (T1CE) MRI is essential for BM delineation, yet it is often unavailable during online treatment because it requires contrast-agent injection. This study aims to develop a synthetic T1CE (sT1CE) generation method to facilitate accurate online adaptive BM delineation. Approach: We developed a novel ControlNet-coupled latent diffusion model (CTN-LDM), combined with a personalized transfer learning strategy and a denoising diffusion implicit model (DDIM) inversion method, to generate high-quality sT1CE images from online T2-weighted (T2) or fluid-attenuated inversion recovery (FLAIR) images. The visual quality of sT1CE images generated by the CTN-LDM was compared with that of classical deep learning models. BM delineation using the combination of our sT1CE images and online T2/FLAIR images was compared with delineation using online T2/FLAIR images alone, the current clinical method. Main results: The visual quality of sT1CE images from our CTN-LDM was superior to that of classical models both quantitatively and qualitatively. Leveraging sT1CE images, radiation oncologists achieved significantly higher precision of adaptive BM delineation, with an average Dice similarity coefficient of 0.93 ± 0.02 vs. 0.86 ± 0.04 (p < 0.01), compared with using online T2/FLAIR images alone. Significance: The proposed method can generate high-quality sT1CE images and significantly improve the accuracy of online adaptive tumor delineation for long-course MRIgART of large-volume BM, potentially enhancing treatment outcomes and minimizing toxicity.
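Delineation agreement above is reported as the Dice similarity coefficient, which for two binary masks is twice the overlap divided by the sum of the two volumes; a minimal numpy implementation:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary delineations.
    Returns 1.0 for two empty masks by convention."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * inter / total
```

Identical masks score 1.0, disjoint masks score 0.0, and partial overlap falls in between, which is how the 0.93 vs. 0.86 comparison above should be read.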

Novel Application of Connectomics to the Surgical Management of Pediatric Arteriovenous Malformations.

Syed SA, Al-Mufti F, Hanft SJ, Gandhi CD, Pisapia JM

PubMed · Jun 25, 2025
Introduction The emergence of connectomics in neurosurgery has allowed construction of detailed maps of white matter connections, incorporating both structural and functional connectivity patterns. Mapping cerebral vascular lesions to guide the surgical approach shows great potential. We aim to identify the clinical utility of connectomics in the surgical treatment of pediatric arteriovenous malformations (AVM). Case Presentation We present two illustrative cases of the application of connectomics to the management of cerebral AVM, in a 9-year-old and an 8-year-old female patient. Using magnetic resonance anatomic and diffusion tensor imaging, a machine learning algorithm generated patient-specific representations of the corticospinal tract for the first patient and the optic radiations for the second. The default mode network and language network were also examined for each patient. The imaging output served as an adjunct to guide operative decision making: it assisted with selection of the superior parietal lobule as the operative corridor in the first case; it alerted the surgeon to white matter tracts in close proximity to the AVM nidus during resection; and it aided the risk-benefit analysis of treatment approach, namely craniotomy for resection for the first patient versus radiosurgery for the second. Both patients had favorable neurologic outcomes at the available follow-up. Conclusion Use of the software integrated well with the clinical workflow. The output was used for planning and overlaid on the intraoperative neuronavigation system. It improved visualization of eloquent regions, especially networks not visible on standard anatomic imaging. Future studies will focus on expanding the cohort, conducting pre- and post-operative connectomic analyses correlated with clinical outcome measures, and incorporating functional magnetic resonance imaging.

U-R-VEDA: Integrating UNET, Residual Links, Edge and Dual Attention, and Vision Transformer for Accurate Semantic Segmentation of CMRs

Racheal Mukisa, Arvind K. Bansal

arXiv preprint · Jun 25, 2025
Artificial intelligence, including deep learning models, will play a transformative role in automated medical image analysis for the diagnosis and management of cardiac disorders, and accurate automated delineation of cardiac images is the necessary first step toward their quantification and automated diagnosis. In this paper, we propose a deep learning-based enhanced UNet model, U-R-Veda, which integrates convolution transformations, a vision transformer, residual links, channel attention, and spatial attention, together with edge-detection-based skip connections, for accurate fully automated semantic segmentation of cardiac magnetic resonance (CMR) images. The model extracts local features and their interrelationships using a stack of combination convolution blocks, with channel and spatial attention embedded in each block, together with vision transformers. Deep embedding of channel and spatial attention in the convolution block identifies important features and their spatial localization, while combining edge information with channel and spatial attention in the skip connections reduces information loss during convolution transformations. An algorithm for the dual attention module (channel and spatial attention) is presented. Performance results show that U-R-Veda achieves an average accuracy of 95.2% based on the Dice similarity coefficient (DSC), outperforming other models on DSC and Hausdorff distance (HD) metrics, especially for delineation of the right ventricle and left-ventricle myocardium.
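A common design for such a dual attention module (the paper presents its own algorithm, which may differ) scales channels via a pooled bottleneck MLP and scales pixels via a channel-pooled mask, CBAM-style. A numpy forward-pass sketch with random illustrative weights, where a fixed average stands in for the learned 7x7 convolution usually found in the spatial branch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """x: (C, H, W). Global-average-pool each channel, pass through a
    two-layer bottleneck MLP, and sigmoid-scale each channel."""
    pooled = x.mean(axis=(1, 2))                        # (C,)
    weights = sigmoid(w2 @ np.maximum(w1 @ pooled, 0))  # (C,) in (0, 1)
    return x * weights[:, None, None]

def spatial_attention(x):
    """Pool across channels (mean and max) and combine into a per-pixel
    sigmoid mask. Illustration only: a learned conv would normally mix
    the two pooled maps instead of this fixed average."""
    avg = x.mean(axis=0)
    mx = x.max(axis=0)
    mask = sigmoid((avg + mx) / 2.0)                    # (H, W) in (0, 1)
    return x * mask[None, :, :]

def dual_attention(x, w1, w2):
    """Channel attention followed by spatial attention, as in CBAM."""
    return spatial_attention(channel_attention(x, w1, w2))
```

Both branches produce multiplicative gates in (0, 1), so the module reweights the feature map without changing its shape.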

Deep learning-based diffusion MRI tractography: Integrating spatial and anatomical information.

Yang Y, Yuan Y, Ren B, Wu Y, Feng Y, Zhang X

PubMed · Jun 25, 2025
Diffusion MRI tractography enables non-invasive visualization of white matter pathways in the brain and plays a crucial role in neuroscience and clinical practice by facilitating the study of brain connectivity and neurological disorders. However, the accuracy of reconstructed tractograms has been a longstanding challenge. Recently, deep learning methods have been applied to improve tractograms' white matter coverage, but this often comes at the expense of generating excessive false-positive connections, largely because these methods rely on local information to predict long-range streamlines. To improve the accuracy of streamline-propagation predictions, we introduce a novel deep learning framework that integrates image-domain spatial information and anatomical information along tracts, with the former extracted through convolutional layers and the latter modeled via a Transformer decoder. Additionally, we employ a weighted loss function to address the fiber-class imbalance encountered during training. On the simulated ISMRM 2015 Tractography Challenge dataset, the proposed method achieves a valid streamline rate of 66.2%, white matter coverage of 63.8%, and successful reconstruction of 24 of 25 bundles. Furthermore, on the multi-site Tractoinferno dataset, it handles various diffusion MRI acquisition schemes, achieving a 5.7% increase in white matter coverage and a 4.1% decrease in overreach compared with RNN-based methods.
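A weighted loss for fiber-class imbalance is commonly realized as class-weighted cross-entropy with inverse-frequency weights, so that rare bundles are not drowned out by dominant ones; a numpy sketch of that standard recipe (the paper's exact weighting scheme is not specified here):

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_weights):
    """Class-weighted cross-entropy over a batch of logits (N, C)."""
    logits = np.asarray(logits, float)
    labels = np.asarray(labels)
    # Numerically stable log-softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels]
    w = np.asarray(class_weights, float)[labels]
    return (w * nll).sum() / w.sum()

def inverse_frequency_weights(labels, n_classes):
    """One weight per class, proportional to 1/count, normalized so
    the weights average to 1."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    weights = 1.0 / np.maximum(counts, 1.0)
    return weights / weights.sum() * n_classes
```

With uniform weights this reduces to the ordinary mean cross-entropy, so the weighting is a strict generalization of the unweighted loss.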