Page 22 of 72720 results

Artificial Intelligence-Assisted Segmentation of Prostate Tumors and Neurovascular Bundles: Applications in Precision Surgery for Prostate Cancer.

Mei H, Yang R, Huang J, Jiao P, Liu X, Chen Z, Chen H, Zheng Q

PubMed | Jun 18 2025
The aim of this study was to guide prostatectomy by employing artificial intelligence for the segmentation of the gross tumor volume (GTV) and neurovascular bundles (NVB). The preservation and dissection of the NVB differ between intrafascial and extrafascial robot-assisted radical prostatectomy (RARP), impacting postoperative urinary control. We trained the nnU-Net v2 neural network using data from 220 patients in the PI-CAI cohort for the segmentation of prostate GTV and NVB in biparametric magnetic resonance imaging (bpMRI). The model was then validated in an external cohort of 209 patients from Renmin Hospital of Wuhan University (RHWU). Utilizing three-dimensional reconstruction and point cloud analysis, we explored the spatial distribution of the GTV and NVB in relation to intrafascial and extrafascial approaches. We also prospectively included 40 patients undergoing intrafascial and extrafascial RARP, applying the aforementioned procedure to classify the surgical approach. Additionally, 3D printing was employed to guide surgery, and follow-ups on short- and long-term urinary function were conducted. The nnU-Net v2 neural network demonstrated precise segmentation of the GTV, NVB, and prostate, achieving Dice scores of 0.5573 ± 0.0428, 0.7679 ± 0.0178, and 0.7483 ± 0.0290, respectively. By measuring the distance from the GTV to the NVB, we successfully predicted the surgical approach. Urinary control analysis revealed that the extrafascial approach yielded better postoperative urinary function, facilitating more refined management of patients with prostate cancer and personalized medical care. Artificial intelligence can accurately identify the GTV and NVB in preoperative bpMRI of patients with prostate cancer and guide the choice between intrafascial and extrafascial RARP. Patients undergoing intrafascial RARP with preserved NVB demonstrate improved postoperative urinary control.
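The reported Dice scores quantify overlap between predicted and reference masks. A minimal sketch of the metric, with masks represented as collections of positive voxel indices (a simplification; segmentation toolkits operate on dense label volumes):

```python
def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks,
    given as collections of positive voxel indices."""
    pred, target = set(pred), set(target)
    if not pred and not target:
        return 1.0  # both masks empty: perfect agreement by convention
    inter = len(pred & target)
    return 2 * inter / (len(pred) + len(target))
```

A Dice of 0.7679 for the NVB, as reported above, means the predicted and reference masks share roughly three quarters of their combined extent.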

Dual-scan self-learning denoising for application in ultralow-field MRI.

Zhang Y, He W, Wu J, Xu Z

PubMed | Jun 18 2025
This study develops a self-learning method to denoise MR images for use in ultralow field (ULF) applications. We propose the use of a self-learning neural network for denoising 3D MRI obtained from two acquisitions (dual scan), which are utilized as training pairs. Building on the self-learning method Noise2Noise, we propose an effective data augmentation method and an integrated learning strategy for enhancing model performance. Experimental results demonstrate that (1) the proposed model produces exceptional denoising results and outperforms the traditional Noise2Noise method both subjectively and objectively; (2) magnitude images are effectively denoised, compared with several state-of-the-art methods, on synthetic and real ULF data; and (3) the proposed method yields better results on phase images and quantitative imaging applications than other denoisers, owing to the self-learning framework. Theoretical and experimental implementations show that the proposed self-learning model achieves improved performance on magnitude image denoising with synthetic and real-world data at ULF. Additionally, we test our method on calculated phase and quantification images, demonstrating its superior performance over several competing methods.
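The dual-scan pairing at the heart of the Noise2Noise setup can be sketched as follows: each noisy acquisition serves as the training target for the other, so no clean reference is needed. The flip-based augmentation is an illustrative stand-in, since the abstract does not specify the exact augmentations used:

```python
def dual_scan_pairs(scan_a, scan_b, augment=True):
    """Build Noise2Noise training pairs from two noisy acquisitions
    of the same anatomy (2-D images as lists of rows)."""
    # Symmetric pairing: A denoises toward B and B toward A.
    pairs = [(scan_a, scan_b), (scan_b, scan_a)]
    if augment:
        # Horizontal flip applied identically to input and target.
        flip = lambda img: [row[::-1] for row in img]
        pairs += [(flip(inp), flip(tgt)) for inp, tgt in pairs]
    return pairs
```

Because both acquisitions carry independent noise realizations of the same signal, a network trained on such pairs converges toward the noise-free image in expectation.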

Identification, characterisation and outcomes of pre-atrial fibrillation in heart failure with reduced ejection fraction.

Helbitz A, Nadarajah R, Mu L, Larvin H, Ismail H, Wahab A, Thompson P, Harrison P, Harris M, Joseph T, Plein S, Petrie M, Metra M, Wu J, Swoboda P, Gale CP

PubMed | Jun 18 2025
Atrial fibrillation (AF) in heart failure with reduced ejection fraction (HFrEF) has prognostic implications. Using a machine learning algorithm (FIND-AF), we aimed to explore clinical events and the cardiac magnetic resonance (CMR) characteristics of the pre-AF phenotype in HFrEF. We studied a cohort of individuals aged ≥18 years with HFrEF and without AF from the MATCH 1 and MATCH 2 studies (2018-2024), stratified by FIND-AF score. All underwent CMR, with cvi42 software used for volumetric and T1/T2 analysis. The primary outcome was time to a composite of major adverse cardiovascular events (MACE), comprising heart failure hospitalisation, myocardial infarction, stroke and all-cause mortality. Secondary outcomes included the association between CMR findings and FIND-AF score. Of 385 patients [mean age 61.7 (12.6) years, 39.0% women] with a median follow-up of 2.5 years, the primary outcome occurred in 58 (30.2%) patients in the high FIND-AF risk group and 23 (11.9%) in the low FIND-AF risk group (hazard ratio 3.25, 95% CI 2.00-5.28, P < 0.001). A higher FIND-AF score was associated with higher indexed left ventricular mass (β = 4.7, 95% CI 0.5-8.9), indexed left atrial volume (β = 5.9, 95% CI 2.2-9.6), indexed left ventricular end-diastolic volume (β = 9.55, 95% CI 1.37-17.74, P = 0.022), native T1 signal (β = 18.0, 95% CI 7.0-29.1) and extracellular volume (β = 1.6, 95% CI 0.6-2.5). A pre-AF HFrEF subgroup with distinct CMR characteristics and poor prognosis may be identified, potentially guiding interventions to reduce clinical events.

Federated Learning for MRI-based BrainAGE: a multicenter study on post-stroke functional outcome prediction

Vincent Roca, Marc Tommasi, Paul Andrey, Aurélien Bellet, Markus D. Schirmer, Hilde Henon, Laurent Puy, Julien Ramon, Grégory Kuchcinski, Martin Bretzner, Renaud Lopes

arXiv preprint | Jun 18 2025
Objective: Brain-predicted age difference (BrainAGE) is a neuroimaging biomarker reflecting brain health. However, training robust BrainAGE models requires large datasets, often restricted by privacy concerns. This study evaluates the performance of federated learning (FL) for BrainAGE estimation in ischemic stroke patients treated with mechanical thrombectomy, and investigates its association with clinical phenotypes and functional outcomes. Methods: We used FLAIR brain images from 1674 stroke patients across 16 hospital centers. We implemented standard machine learning and deep learning models for BrainAGE estimation under three data management strategies: centralized learning (pooled data), FL (local training at each site), and single-site learning. We reported prediction errors and examined associations between BrainAGE and vascular risk factors (e.g., diabetes mellitus, hypertension, smoking), as well as functional outcomes at three months post-stroke. Logistic regression evaluated BrainAGE's predictive value for these outcomes, adjusting for age, sex, vascular risk factors, stroke severity, time between MRI and arterial puncture, prior intravenous thrombolysis, and recanalisation outcome. Results: While centralized learning yielded the most accurate predictions, FL consistently outperformed single-site models. BrainAGE was significantly higher in patients with diabetes mellitus across all models. Comparisons between patients with good and poor functional outcomes, and multivariate predictions of these outcomes, showed the significance of the association between BrainAGE and post-stroke recovery. Conclusion: FL enables accurate age predictions without data centralization. The strong association between BrainAGE, vascular risk factors, and post-stroke recovery highlights its potential for prognostic modeling in stroke care.
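The abstract does not name the aggregation rule used across the 16 sites, so as an illustration only, here is FedAvg-style weighted averaging of per-site model parameters, the most common FL aggregation scheme:

```python
def fed_avg(site_weights, site_sizes):
    """FedAvg aggregation: average per-site parameter vectors,
    weighted by each site's sample count, to form the global model."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(dim)
    ]
```

In each communication round, every hospital trains locally on its own FLAIR images and only these parameter vectors, never the images themselves, are shared and averaged.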

Automated MRI Tumor Segmentation using hybrid U-Net with Transformer and Efficient Attention

Syed Haider Ali, Asrar Ahmad, Muhammad Ali, Asifullah Khan, Muhammad Shahban, Nadeem Shaukat

arXiv preprint | Jun 18 2025
Cancer is an abnormal growth with potential to invade locally and metastasize to distant organs. Accurate auto-segmentation of the tumor and surrounding normal tissues is required for radiotherapy treatment plan optimization. Recent AI-based segmentation models are generally trained on large public datasets, which lack the heterogeneity of local patient populations. While these studies advance AI-based medical image segmentation, research on local datasets is necessary to develop and integrate AI tumor segmentation models directly into hospital software for efficient and accurate oncology treatment planning and execution. This study enhances tumor segmentation using computationally efficient hybrid UNet-Transformer models on magnetic resonance imaging (MRI) datasets acquired from a local hospital under strict privacy protection. We developed a robust data pipeline for seamless DICOM extraction and preprocessing, followed by extensive image augmentation to ensure model generalization across diverse clinical settings, resulting in a total dataset of 6080 images for training. Our novel architecture integrates UNet-based convolutional neural networks with a transformer bottleneck and complementary attention modules, including efficient attention, Squeeze-and-Excitation (SE) blocks, Convolutional Block Attention Module (CBAM), and ResNeXt blocks. To accelerate convergence and reduce computational demands, we used a maximum batch size of 8 and initialized the encoder with pretrained ImageNet weights, training the model on dual NVIDIA T4 GPUs via checkpointing to overcome Kaggle's runtime limits. Quantitative evaluation on the local MRI dataset yielded a Dice similarity coefficient of 0.764 and an Intersection over Union (IoU) of 0.736, demonstrating competitive performance despite limited data and underscoring the importance of site-specific model development for clinical deployment.
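The reported Dice (0.764) and IoU (0.736) measure the same overlap on different scales. A minimal sketch of IoU over index-set masks, with the single-pair identity relating it to Dice (note that the identity need not hold for dataset-averaged values such as those reported here, since averaging breaks the relationship):

```python
def iou(pred, target):
    """Intersection over Union between two binary masks,
    given as collections of positive pixel indices."""
    pred, target = set(pred), set(target)
    union = pred | target
    if not union:
        return 1.0  # both masks empty: perfect agreement by convention
    return len(pred & target) / len(union)

def dice_from_iou(j):
    # For a single mask pair, Dice = 2*IoU / (1 + IoU).
    return 2 * j / (1 + j)
```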

CLAIM: Clinically-Guided LGE Augmentation for Realistic and Diverse Myocardial Scar Synthesis and Segmentation

Farheen Ramzan, Yusuf Kiberu, Nikesh Jathanna, Shahnaz Jamil-Copley, Richard H. Clayton, Chen Chen

arXiv preprint | Jun 18 2025
Deep learning-based myocardial scar segmentation from late gadolinium enhancement (LGE) cardiac MRI has shown great potential for accurate and timely diagnosis and treatment planning for structural cardiac diseases. However, the limited availability and variability of LGE images with high-quality scar labels restrict the development of robust segmentation models. To address this, we introduce CLAIM (Clinically-Guided LGE Augmentation for Realistic and Diverse Myocardial Scar Synthesis and Segmentation), a framework for anatomically grounded scar generation and segmentation. At its core is the SMILE module (Scar Mask generation guided by cLinical knowledgE), which conditions a diffusion-based generator on the clinically adopted AHA 17-segment model to synthesize images with anatomically consistent and spatially diverse scar patterns. In addition, CLAIM employs a joint training strategy in which the scar segmentation network is optimized alongside the generator, aiming to enhance both the realism of synthesized scars and the accuracy of scar segmentation. Experimental results show that CLAIM produces anatomically coherent scar patterns and achieves higher Dice similarity with real scar distributions compared to baseline models. Our approach enables controllable and realistic myocardial scar synthesis and has demonstrated utility for downstream medical imaging tasks.
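The SMILE module conditions the generator on the AHA 17-segment model. As an illustrative sketch only (the abstract does not specify the conditioning encoding), a desired scar pattern can be encoded as a 17-dimensional indicator vector over myocardial segments and fed to the generator as its condition:

```python
def aha_condition_vector(scar_segments, n_segments=17):
    """Encode a scar pattern as an indicator vector over the AHA
    17-segment model; segments are conventionally numbered 1..17."""
    vec = [0] * n_segments
    for s in scar_segments:
        if not 1 <= s <= n_segments:
            raise ValueError(f"invalid AHA segment id: {s}")
        vec[s - 1] = 1
    return vec
```

Sampling different indicator vectors then yields spatially diverse yet anatomically plausible synthetic scars.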

Brain Stroke Classification Using Wavelet Transform and MLP Neural Networks on DWI MRI Images

Mana Mohammadi, Amirhesam Jafari Rad, Ashkan Behrouzi

arXiv preprint | Jun 18 2025
This paper presents a lightweight framework for classifying brain stroke types from Diffusion-Weighted Imaging (DWI) MRI scans, employing a Multi-Layer Perceptron (MLP) neural network with Wavelet Transform for feature extraction. Accurate and timely stroke detection is critical for effective treatment and improved patient outcomes in neuroimaging. While Convolutional Neural Networks (CNNs) are widely used for medical image analysis, their computational complexity often hinders deployment in resource-constrained clinical settings. In contrast, our approach combines Wavelet Transform with a compact MLP to achieve efficient and accurate stroke classification. Using the "Brain Stroke MRI Images" dataset, our method yields classification accuracies of 82.0% with the "db4" wavelet (level 3 decomposition) and 86.0% with the "Haar" wavelet (level 2 decomposition). This analysis highlights a balance between diagnostic accuracy and computational efficiency, offering a practical solution for automated stroke diagnosis. Future research will focus on enhancing model robustness and integrating additional MRI modalities for comprehensive stroke assessment.
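A single level of the Haar transform used for feature extraction can be sketched as pairwise averages and differences. This 1-D version assumes even-length input at each level; an imaging pipeline would apply the 2-D transform over image rows and columns:

```python
def haar_step(signal):
    """One level of the 1-D Haar wavelet transform: pairwise
    averages (approximation) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_features(signal, level=2):
    """Repeated decomposition: the coarse approximation plus the
    per-level details form a compact feature vector for a small MLP."""
    details = []
    for _ in range(level):
        signal, d = haar_step(signal)
        details.append(d)
    return signal, details
```

Feeding these compact coefficients to an MLP, rather than raw pixels to a CNN, is what keeps the framework lightweight.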

Classification of Multi-Parametric Body MRI Series Using Deep Learning

Boah Kim, Tejas Sudharshan Mathai, Kimberly Helm, Peter A. Pinto, Ronald M. Summers

arXiv preprint | Jun 18 2025
Multi-parametric magnetic resonance imaging (mpMRI) exams have various series types acquired with different imaging protocols. The DICOM headers of these series often contain incorrect information due to the sheer diversity of protocols and occasional technologist errors. To address this, we present a deep learning-based classification model for 8 different body mpMRI series types, so that radiologists can read exams efficiently. Using mpMRI data from various institutions, multiple deep learning-based classifiers (ResNet, EfficientNet, and DenseNet) are trained to classify the 8 series types, and their performance is compared. The best-performing classifier is then identified, and its classification capability is studied under different training data quantities. The model is also evaluated on out-of-training-distribution datasets. Moreover, the model is trained using mpMRI exams obtained from different scanners under two training strategies, and its performance is tested. Experimental results show that the DenseNet-121 model achieves the highest F1-score (0.966) and accuracy (0.972) among the classification models (p-value < 0.05). The model achieves greater than 0.95 accuracy when trained with over 729 studies, and its performance improves as the quantity of training data grows. On the external DLDS and CPTAC-UCEC datasets, the model yields accuracies of 0.872 and 0.810, respectively. These results indicate that on both the internal and external datasets, the DenseNet-121 model attains high accuracy for the task of classifying 8 body MRI series types.
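The reported F1-score and accuracy follow the standard definitions; a minimal sketch of per-class F1 from confusion counts and its macro average over the 8 series types:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall,
    from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(per_class_counts):
    """Unweighted mean of per-class F1 scores, one (tp, fp, fn)
    triple per series type."""
    return sum(f1_score(*c) for c in per_class_counts) / len(per_class_counts)
```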

Automated Multi-grade Brain Tumor Classification Using Adaptive Hierarchical Optimized Horse Herd BiLSTM Fusion Network in MRI Images.

Thanya T, Jeslin T

PubMed | Jun 18 2025
Brain tumor classification using Magnetic Resonance Imaging (MRI) images is an important and emerging field of medical imaging and artificial intelligence. With advancements in technology, particularly in deep learning and machine learning, researchers and clinicians are leveraging these tools to create complex models that can reliably detect and classify brain tumors from MRI data. However, the task poses a number of challenges, including the intricacy of tumor types and grades, intensity variations in MRI data, and tumors of varying severity. This paper proposes a Multi-Grade Hierarchical Classification Network Model (MGHCN) for the hierarchical classification of tumor grades in MRI images. The model's distinctive feature lies in its ability to categorize tumors into multiple grades, thereby capturing the hierarchical nature of tumor severity. To address variations in intensity levels across different MRI samples, an Improved Adaptive Intensity Normalization (IAIN) pre-processing step is employed. This step standardizes intensity values, effectively mitigating the impact of intensity variations and ensuring more consistent analyses. The model utilizes the Dual Tree Complex Wavelet Transform with Enhanced Trigonometric Features (DTCWT-ETF) for efficient feature extraction. DTCWT-ETF captures both spatial and frequency characteristics, allowing the model to distinguish between different tumor types more effectively. In the classification stage, the framework introduces the Adaptive Hierarchical Optimized Horse Herd BiLSTM Fusion Network (AHOHH-BiLSTM). This multi-grade classification model is designed with a comprehensive architecture, including distinct layers that enhance the learning process and adaptively refine parameters. The purpose of this study is to improve the precision of distinguishing different grades of tumors in MRI images.
To evaluate the proposed MGHCN framework, a set of evaluation metrics including precision, recall, and the F1-score is incorporated. The framework is trained and evaluated on the BraTS Challenge 2021, Br35H, and BraTS Challenge 2023 datasets, a combination that ensures comprehensive training and evaluation. By utilizing these datasets along with a comprehensive set of evaluation metrics, the MGHCN framework aims to enhance brain tumor classification in MRI images and provide a more thorough understanding of its capabilities and performance.
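The abstract does not detail the IAIN step, so as a generic stand-in only, a percentile-based intensity normalization illustrates the idea of standardizing intensity values across MRI samples before feature extraction:

```python
def normalize_intensity(image, low_pct=1, high_pct=99):
    """Clip intensities to the [low_pct, high_pct] percentile range
    and rescale to [0, 1]; a generic stand-in for the paper's IAIN
    step, whose exact formulation is not given in the abstract."""
    vals = sorted(v for row in image for v in row)
    lo = vals[int(len(vals) * low_pct / 100)]
    hi = vals[min(int(len(vals) * high_pct / 100), len(vals) - 1)]
    span = hi - lo or 1  # guard against a constant image
    return [[min(max((v - lo) / span, 0.0), 1.0) for v in row]
            for row in image]
```

Mapping every scan onto a common intensity range removes scanner- and acquisition-dependent offsets before wavelet features are computed.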

Generalist medical foundation model improves prostate cancer segmentation from multimodal MRI images.

Zhang Y, Ma X, Li M, Huang K, Zhu J, Wang M, Wang X, Wu M, Heng PA

PubMed | Jun 18 2025
Prostate cancer (PCa) is one of the most common types of cancer, seriously affecting adult male health. Accurate and automated PCa segmentation is essential for radiologists to confirm the location of the cancer, evaluate its severity, and design appropriate treatments. This paper presents PCaSAM, a fully automated PCa segmentation model that inputs multi-modal MRI images into a foundation model to improve performance significantly. We collected multi-center datasets to conduct a comprehensive evaluation. The results showed that PCaSAM outperforms the generalist medical foundation model and other representative segmentation models, with average DSCs of 0.721 and 0.706 on the internal and external datasets, respectively. Furthermore, with the assistance of segmentation, the PI-RADS scoring of PCa lesions improved significantly, increasing average AUC by 8.3-8.9% on two external datasets. In addition, PCaSAM achieved superior efficiency, making it highly suitable for real-world deployment scenarios.
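The reported AUC gains can be grounded in the rank-based definition of ROC AUC; a minimal sketch using the Wilcoxon-Mann-Whitney estimator, which treats AUC as the probability that a positive case scores above a negative one:

```python
def auc(pos_scores, neg_scores):
    """Wilcoxon-Mann-Whitney estimate of ROC AUC: the fraction of
    (positive, negative) pairs ranked correctly, counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

On this scale, an 8.3-8.9% increase in average AUC, as reported above, corresponds to a substantially higher proportion of correctly ranked lesion pairs.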