Page 13 of 68675 results

Assessment of Robustness of MRI Radiomic Features in the Abdomen: Impact of Deep Learning Reconstruction and Accelerated Acquisition.

Zhong J, Xing Y, Hu Y, Liu X, Dai S, Ding D, Lu J, Yang J, Song Y, Lu M, Nickel D, Lu W, Zhang H, Yao W

pubmed · Jun 25 2025
The objective of this study was to investigate the impact of deep learning reconstruction and accelerated acquisition on the reproducibility and variability of radiomic features in abdominal MRI. Seventeen volunteers were prospectively included to undergo abdominal MRI on a 3-T scanner for axial T2-weighted, axial T2-weighted fat-suppressed, and coronal T2-weighted sequences. Each sequence was scanned four times: clinical reference acquisition with standard reconstruction, clinical reference acquisition with deep learning reconstruction, accelerated acquisition with standard reconstruction, and accelerated acquisition with deep learning reconstruction. Regions of interest were drawn for ten anatomical sites with rigid registration. Ninety-three radiomic features were extracted via PyRadiomics after z-score normalization. Reproducibility was evaluated against the clinical reference acquisition with standard reconstruction using the intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC). Variability among the four scans was assessed by the coefficient of variation (CV) and the quartile coefficient of dispersion (QCD). The median (first and third quartile) overall ICC and CCC values were 0.451 (0.305, 0.583) and 0.450 (0.304, 0.582). The overall percentage of radiomic features with ICC > 0.90 and CCC > 0.90 was 8.1% and 8.1%, which was not considered acceptable. The median (first and third quartile) overall CV and QCD values were 9.4% (4.9%, 17.2%) and 4.9% (2.5%, 9.7%). The overall percentage of radiomic features with CV < 10% and QCD < 10% was 51.9% and 75.0%, which was considered acceptable. Without regard to clinical significance, deep learning reconstruction and accelerated acquisition led to poor reproducibility of radiomic features, but more than half of the radiomic features varied within an acceptable range.
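The agreement and variability metrics named above (CCC, CV, QCD) are simple to compute per feature across repeated scans. A minimal pure-Python sketch of their standard definitions follows; this is an illustration only (the study used PyRadiomics features, and the ICC is omitted here because its value depends on the chosen ICC model):

```python
import statistics

def ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement sets."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.pvariance(x), statistics.pvariance(y)
    cov = statistics.fmean((a - mx) * (b - my) for a, b in zip(x, y))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def cv(values):
    """Coefficient of variation (%) across repeated scans of one feature."""
    return 100 * statistics.pstdev(values) / statistics.fmean(values)

def qcd(values):
    """Quartile coefficient of dispersion (%): (Q3 - Q1) / (Q3 + Q1)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    return 100 * (q3 - q1) / (q3 + q1)
```

Applied per feature, CV and QCD would be computed over the four scans, and CCC between the reference and each alternative acquisition.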

Few-Shot Learning for Prostate Cancer Detection on MRI: Comparative Analysis with Radiologists' Performance.

Yamagishi Y, Baba Y, Suzuki J, Okada Y, Kanao K, Oyama M

pubmed · Jun 25 2025
Deep-learning models for prostate cancer detection typically require large datasets, limiting clinical applicability across institutions due to domain shift issues. This study aimed to develop a few-shot learning deep-learning model for prostate cancer detection on multiparametric MRI that requires minimal training data and to compare its diagnostic performance with experienced radiologists. In this retrospective study, we used 99 cases (80 positive, 19 negative) of biopsy-confirmed prostate cancer (2017-2022), with 20 cases for training, 5 for validation, and 74 for testing. A 2D transformer model was trained on T2-weighted, diffusion-weighted, and apparent diffusion coefficient map images. Model predictions were compared with two radiologists using Matthews correlation coefficient (MCC) and F1 score, with 95% confidence intervals (CIs) calculated via the bootstrap method. The model achieved an MCC of 0.297 (95% CI: 0.095-0.474) and F1 score of 0.707 (95% CI: 0.598-0.847). Radiologist 1 had an MCC of 0.276 (95% CI: 0.054-0.484) and F1 score of 0.741; Radiologist 2 had an MCC of 0.504 (95% CI: 0.289-0.703) and F1 score of 0.871, showing that the model's performance was comparable to that of Radiologist 1. External validation on the Prostate158 dataset revealed that ImageNet pretraining substantially improved model performance, increasing study-level ROC-AUC from 0.464 to 0.636 and study-level PR-AUC from 0.637 to 0.773 across all architectures. Our findings demonstrate that few-shot deep-learning models can achieve clinically relevant performance when using pretrained transformer architectures, offering a promising approach to address domain shift challenges across institutions.
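MCC, F1, and percentile-bootstrap CIs of the kind reported above can be sketched in a few lines of pure Python. This is a simplified illustration with binary case-level labels; the study's exact bootstrap settings are not stated, so the resample count and seed below are arbitrary:

```python
import random

def confusion(y_true, y_pred):
    """Confusion-matrix counts for binary 0/1 labels and predictions."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def mcc(y_true, y_pred):
    """Matthews correlation coefficient (0.0 when undefined, by convention)."""
    tp, fp, fn, tn = confusion(y_true, y_pred)
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / den if den else 0.0

def f1(y_true, y_pred):
    """F1 score = 2TP / (2TP + FP + FN)."""
    tp, fp, fn, _ = confusion(y_true, y_pred)
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI obtained by resampling cases with replacement."""
    rng = random.Random(seed)
    n = len(y_true)
    stats = sorted(
        metric([y_true[i] for i in idx], [y_pred[i] for i in idx])
        for idx in ([rng.randrange(n) for _ in range(n)] for _ in range(n_boot))
    )
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2)) - 1]
```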

Contrast-enhanced image synthesis using latent diffusion model for precise online tumor delineation in MRI-guided adaptive radiotherapy for brain metastases.

Ma X, Ma Y, Wang Y, Li C, Liu Y, Chen X, Dai J, Bi N, Men K

pubmed · Jun 25 2025
Magnetic resonance imaging-guided adaptive radiotherapy (MRIgART) is a promising technique for long-course RT of large-volume brain metastases (BM), owing to its capacity to track tumor changes throughout the treatment course. Contrast-enhanced T1-weighted (T1CE) MRI is essential for BM delineation, yet it is often unavailable during online treatment because it requires contrast agent injection. This study aims to develop a synthetic T1CE (sT1CE) generation method to facilitate accurate online adaptive BM delineation. Approach: We developed a novel ControlNet-coupled latent diffusion model (CTN-LDM), combined with a personalized transfer learning strategy and a denoising diffusion implicit model (DDIM) inversion method, to generate high-quality sT1CE images from online T2-weighted (T2) or fluid-attenuated inversion recovery (FLAIR) images. The visual quality of sT1CE images generated by the CTN-LDM was compared with that of classical deep learning models. BM delineation results using the combination of our sT1CE images and online T2/FLAIR images were compared with results using online T2/FLAIR images alone, which is the current clinical method. Main results: The visual quality of sT1CE images from our CTN-LDM was superior to that of classical models both quantitatively and qualitatively. Leveraging sT1CE images, radiation oncologists achieved significantly higher precision in adaptive BM delineation, with an average Dice similarity coefficient of 0.93 ± 0.02 vs. 0.86 ± 0.04 (p < 0.01) compared with using online T2/FLAIR images only. Significance: The proposed method can generate high-quality sT1CE images and significantly improve the accuracy of online adaptive tumor delineation for long-course MRIgART of large-volume BM, potentially enhancing treatment outcomes and minimizing toxicity.
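The delineation gain above is reported as a Dice similarity coefficient. For reference, a minimal sketch of DSC between two binary masks (flattened 0/1 sequences), independent of the CTN-LDM itself:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two equal-size binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # convention: two empty masks count as a perfect match
    return 2 * inter / total if total else 1.0
```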

Novel Application of Connectomics to the Surgical Management of Pediatric Arteriovenous Malformations.

Syed SA, Al-Mufti F, Hanft SJ, Gandhi CD, Pisapia JM

pubmed · Jun 25 2025
Introduction: The emergence of connectomics in neurosurgery has allowed for the construction of detailed maps of white matter connections, incorporating both structural and functional connectivity patterns. Mapping cerebral vascular lesions to guide the surgical approach shows great potential. We aim to identify the clinical utility of connectomics for the surgical treatment of pediatric arteriovenous malformations (AVM). Case Presentation: We present two illustrative cases of the application of connectomics to the management of cerebral AVM in a 9-year-old and an 8-year-old female. Using magnetic resonance anatomic and diffusion tensor imaging, a machine learning algorithm generated patient-specific representations of the corticospinal tract for the first patient and the optic radiations for the second patient. The default mode network and language network were also examined for each patient. The imaging output served as an adjunct to guide operative decision making. It assisted with selection of the superior parietal lobule as the operative corridor in the first case. Furthermore, it alerted the surgeon to white matter tracts in close proximity to the AVM nidus during resection. Finally, it aided in risk-versus-benefit analysis of the treatment approach, such as craniotomy for resection for the first patient versus radiosurgery for the second patient. Both patients had favorable neurologic outcomes at the available follow-up. Conclusion: Use of the software integrated well with the clinical workflow. The output was used for planning and overlaid on the intraoperative neuronavigation system. It improved visualization of eloquent regions, especially networks not visible on standard anatomic imaging. Future studies will focus on expanding the cohort, conducting pre- and post-operative connectomic analyses with correlation to clinical outcome measures, and incorporating functional magnetic resonance imaging.

U-R-VEDA: Integrating UNET, Residual Links, Edge and Dual Attention, and Vision Transformer for Accurate Semantic Segmentation of CMRs

Racheal Mukisa, Arvind K. Bansal

arxiv preprint · Jun 25 2025
Artificial intelligence, including deep learning models, will play a transformative role in automated medical image analysis for the diagnosis and management of cardiac disorders. Accurate automated delineation of cardiac images is the necessary first step toward the quantification and automated diagnosis of cardiac disorders. In this paper, we propose a deep learning-based enhanced UNet model, U-R-Veda, which integrates convolution transformations, a vision transformer, residual links, channel attention, and spatial attention, together with edge-detection-based skip connections, for accurate, fully automated semantic segmentation of cardiac magnetic resonance (CMR) images. The model extracts local features and their interrelationships using a stack of combination convolution blocks, with channel and spatial attention embedded in each convolution block, and vision transformers. Deep embedding of channel and spatial attention in the convolution blocks identifies important features and their spatial localization. Combining edge information with channel and spatial attention in the skip connections reduces information loss during convolution transformations. The overall model significantly improves the semantic segmentation of CMR images needed for improved medical image analysis. An algorithm for the dual attention module (channel and spatial attention) is presented. Performance results show that U-R-Veda achieves an average accuracy of 95.2% based on DSC metrics. The model outperforms other models on DSC and HD metrics, especially for delineation of the right ventricle and left-ventricle myocardium.
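As a toy illustration of the dual attention idea (channel attention followed by spatial attention), here is a parameter-free pure-Python sketch on a feature map stored as a nested [channel][row][col] list. The paper's actual module uses learned weights; those are replaced here by simple global and channel-mean pooling followed by a sigmoid gate:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def channel_attention(fmap):
    """Scale each channel by a sigmoid gate on its global average (squeeze-excite style)."""
    weights = [sigmoid(sum(sum(row) for row in ch) / (len(ch) * len(ch[0])))
               for ch in fmap]
    return [[[v * w for v in row] for row in ch]
            for ch, w in zip(fmap, weights)]

def spatial_attention(fmap):
    """Scale each spatial position by a sigmoid gate on its mean across channels."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    amap = [[sigmoid(sum(fmap[c][i][j] for c in range(C)) / C)
             for j in range(W)] for i in range(H)]
    return [[[fmap[c][i][j] * amap[i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]

def dual_attention(fmap):
    """Channel attention followed by spatial attention, CBAM-style."""
    return spatial_attention(channel_attention(fmap))
```

The output has the same shape as the input, so the module can be dropped into a convolution block or a skip connection.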

Deep learning-based diffusion MRI tractography: Integrating spatial and anatomical information.

Yang Y, Yuan Y, Ren B, Wu Y, Feng Y, Zhang X

pubmed · Jun 25 2025
Diffusion MRI tractography enables non-invasive visualization of white matter pathways in the brain. It plays a crucial role in neuroscience and clinical fields by facilitating the study of brain connectivity and neurological disorders. However, the accuracy of reconstructed tractograms has been a longstanding challenge. Recently, deep learning methods have been applied to improve tractograms for better white matter coverage, but this often comes at the expense of generating excessive false-positive connections, largely because such methods rely on local information to predict long-range streamlines. To improve the accuracy of streamline propagation predictions, we introduce a novel deep learning framework that integrates image-domain spatial information and anatomical information along tracts, with the former extracted through convolutional layers and the latter modeled via a Transformer decoder. Additionally, we employ a weighted loss function to address the fiber class imbalance encountered during training. We evaluate the proposed method on the simulated ISMRM 2015 Tractography Challenge dataset, achieving a valid streamline rate of 66.2%, white matter coverage of 63.8%, and successful reconstruction of 24 of 25 bundles. Furthermore, on the multi-site Tractoinferno dataset, the proposed method demonstrates its ability to handle various diffusion MRI acquisition schemes, achieving a 5.7% increase in white matter coverage and a 4.1% decrease in overreach compared with RNN-based methods.
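The streamline-propagation loop that such frameworks improve can be sketched generically: repeatedly query a direction predictor and take a fixed-size step until the predictor signals termination. In the paper the predictor is a CNN-plus-Transformer-decoder network; here it is an arbitrary callable, so this is a minimal sketch of the loop only:

```python
def track_streamline(seed, predict_direction, step=0.5, max_steps=200):
    """Propagate one streamline from a seed point.

    predict_direction(pos) returns a direction vector, or None to signal
    termination (e.g. the tracker has left the white matter mask).
    """
    points = [tuple(seed)]
    pos = list(seed)
    for _ in range(max_steps):
        d = predict_direction(pos)
        if d is None:
            break
        norm = sum(v * v for v in d) ** 0.5
        if norm == 0:
            break
        pos = [p + step * v / norm for p, v in zip(pos, d)]
        points.append(tuple(pos))
    return points
```

For example, a constant field pointing along x that terminates at x ≥ 2 yields a straight streamline of fixed-length steps.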

Patch2Loc: Learning to Localize Patches for Unsupervised Brain Lesion Detection

Hassan Baker, Austin J. Brockmeier

arxiv preprint · Jun 25 2025
Detecting brain lesions as abnormalities observed in magnetic resonance imaging (MRI) is essential for diagnosis and treatment. In the search for abnormalities such as tumors and malformations, radiologists may benefit from computer-aided diagnostics that use computer vision systems, trained with machine learning, to segment normal tissue from abnormal brain tissue. Whereas supervised learning methods require annotated lesions, we propose a new unsupervised approach (Patch2Loc) that learns from normal patches taken from structural MRI. We train a neural network model to map a patch back to its spatial location within a slice of the brain volume. During inference, abnormal patches are detected by the relatively higher error and/or variance of the location prediction. This generates a heatmap that can be integrated into pixel-wise methods to achieve finer-grained segmentation. We demonstrate the ability of our model to segment abnormal brain tissue by applying our approach to the detection of tumor tissue in T2-weighted images from the BraTS2021 and MSLUB datasets and T1-weighted images from the ATLAS and WMH datasets. We show that it outperforms the state of the art in unsupervised segmentation. The codebase for this work is available on our GitHub page (https://github.com/bakerhassan/Patch2Loc).
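The inference rule described above — patches that the location regressor fails to localize are flagged as abnormal — reduces to scoring each patch by its localization error. A minimal sketch with a hypothetical stand-in predictor (the real model is a neural network trained only on normal patches, and the paper also uses prediction variance):

```python
def location_error(true_center, predicted_center):
    """Euclidean distance between a patch's true and predicted location."""
    return sum((a - b) ** 2 for a, b in zip(true_center, predicted_center)) ** 0.5

def anomaly_scores(patches, predict_location):
    """Score each (patch, center) pair by localization error.

    A predictor trained only on normal tissue localizes normal patches
    well, so a high error flags a likely abnormal patch.
    """
    return [location_error(center, predict_location(patch))
            for patch, center in patches]
```

Mapping each patch's score back to its spatial position then yields the abnormality heatmap described in the abstract.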

Preoperative Assessment of Lymph Node Metastasis in Rectal Cancer Using Deep Learning: Investigating the Utility of Various MRI Sequences.

Zhao J, Zheng P, Xu T, Feng Q, Liu S, Hao Y, Wang M, Zhang C, Xu J

pubmed · Jun 24 2025
This study aimed to develop a deep learning (DL) model based on three-dimensional multi-parametric magnetic resonance imaging (mpMRI) for preoperative assessment of lymph node metastasis (LNM) in rectal cancer (RC) and to investigate the contribution of different MRI sequences. A total of 613 eligible patients with RC from four medical centres who underwent preoperative mpMRI were retrospectively enrolled and randomly assigned to training (n = 372), validation (n = 106), internal test (n = 88) and external test (n = 47) cohorts. A multi-parametric multi-scale EfficientNet (MMENet) was designed to effectively extract LNM-related features from mpMRI for preoperative LNM assessment. Its performance was compared with other DL models and radiologists using metrics of area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity, specificity and average precision with 95% confidence interval (CI). To investigate the utility of various MRI sequences, the performances of the mono-parametric model and the MMENet with different sequence combinations as input were compared. The MMENet using a combination of T2WI, DWI and DCE sequences achieved an AUC of 0.808 (95% CI 0.720-0.897) with an ACC of 71.6% (95% CI 62.3-81.0) in the internal test cohort and an AUC of 0.782 (95% CI 0.636-0.925) with an ACC of 76.6% (95% CI 64.6-88.6) in the external test cohort, outperforming the mono-parametric model, the MMENet with other sequence combinations and the radiologists. The MMENet, leveraging a combination of T2WI, DWI and DCE sequences, can accurately assess LNM in RC preoperatively and holds great promise for automated evaluation of LNM in clinical practice.
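The AUC used to compare models above can be computed directly from case-level scores via the rank-sum (Mann-Whitney) identity. A minimal pure-Python sketch, independent of the MMENet itself (assumes both classes are present):

```python
def roc_auc(labels, scores):
    """ROC-AUC from binary labels and real-valued scores, with tie handling."""
    pairs = sorted(zip(scores, labels))
    # assign average 1-based ranks to tied score blocks
    ranks = [0.0] * len(pairs)
    i = 0
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2
        i = j
    n_pos = sum(1 for _, y in pairs if y)
    n_neg = len(pairs) - n_pos
    rank_sum = sum(r for r, (_, y) in zip(ranks, pairs) if y)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```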

Advances and Integrations of Computer-Assisted Planning, Artificial Intelligence, and Predictive Modeling Tools for Laser Interstitial Thermal Therapy in Neurosurgical Oncology.

Warman A, Moorthy D, Gensler R, Horowtiz MA, Ellis J, Tomasovic L, Srinivasan E, Ahmed K, Azad TD, Anderson WS, Rincon-Torroella J, Bettegowda C

pubmed · Jun 24 2025
Laser interstitial thermal therapy (LiTT) has emerged as a minimally invasive, MRI-guided treatment for brain tumors that are otherwise considered inoperable because of their location or the patient's poor surgical candidacy. By directing thermal energy at neoplastic lesions while minimizing damage to surrounding healthy tissue, LiTT offers promising therapeutic outcomes for both newly diagnosed and recurrent tumors. However, challenges such as postprocedural edema and unpredictable real-time heat diffusion near blood vessels and ventricles underscore the need for improved planning and monitoring. Incorporating artificial intelligence (AI) presents a viable solution to many of these obstacles. AI has already demonstrated effectiveness in optimizing surgical trajectories, predicting seizure-free outcomes in epilepsy cases, and generating heat distribution maps to guide real-time ablation. This technology could be similarly deployed in neurosurgical oncology to identify patients most likely to benefit from LiTT, refine trajectory planning, and predict tissue-specific heat responses. Despite promising initial studies, further research is needed to establish the robust datasets and clinical trials necessary to develop and validate AI-driven LiTT protocols. Such advancements have the potential to bolster LiTT's efficacy, minimize complications, and ultimately transform the neurosurgical management of primary and metastatic brain tumors.

Systematic Review of Pituitary Gland and Pituitary Adenoma Automatic Segmentation Techniques in Magnetic Resonance Imaging

Mubaraq Yakubu, Navodini Wijethilake, Jonathan Shapey, Andrew King, Alexander Hammers

arxiv preprint · Jun 24 2025
Purpose: Accurate segmentation of both the pituitary gland and adenomas from magnetic resonance imaging (MRI) is essential for diagnosis and treatment of pituitary adenomas. This systematic review evaluates automatic segmentation methods for improving the accuracy and efficiency of MRI-based segmentation of pituitary adenomas and the gland itself. Methods: We reviewed 34 studies that employed automatic and semi-automatic segmentation methods. We extracted and synthesized data on segmentation techniques and performance metrics (such as Dice overlap scores). Results: The majority of reviewed studies utilized deep learning approaches, with U-Net-based models being the most prevalent. Automatic methods yielded Dice scores of 0.19-89.00% for pituitary gland and 4.60-96.41% for adenoma segmentation. Semi-automatic methods reported 80.00-92.10% for pituitary gland and 75.90-88.36% for adenoma segmentation. Conclusion: Most studies did not report important metrics such as MR field strength, age, and adenoma size. Automated segmentation techniques such as U-Net-based models show promise, especially for adenoma segmentation, but further improvements are needed to achieve consistently good performance in small structures like the normal pituitary gland. Continued innovation and larger, diverse datasets are likely critical to enhancing clinical applicability.