Page 9 of 1241236 results

Neural Network-Driven Direct CBCT-Based Dose Calculation for Head-and-Neck Proton Treatment Planning

Muheng Li, Evangelia Choulilitsa, Lisa Fankhauser, Francesca Albertini, Antony Lomax, Ye Zhang

arXiv preprint · Sep 22, 2025
Accurate dose calculation on cone beam computed tomography (CBCT) images is essential for modern proton treatment planning workflows, particularly when accounting for inter-fractional anatomical changes in adaptive treatment scenarios. Traditional CBCT-based dose calculation suffers from image quality limitations, requiring complex correction workflows. This study develops and validates a deep learning approach for direct proton dose calculation from CBCT images using extended Long Short-Term Memory (xLSTM) neural networks. A retrospective dataset of 40 head-and-neck cancer patients with paired planning CT and treatment CBCT images was used to train an xLSTM-based neural network (CBCT-NN). The architecture incorporates energy token encoding and beam's-eye-view sequence modelling to capture spatial dependencies in proton dose deposition patterns. Training utilized 82,500 paired beam configurations with Monte Carlo-generated ground truth doses. Validation was performed on 5 independent patients using gamma analysis, mean percentage dose error assessment, and dose-volume histogram comparison. The CBCT-NN achieved gamma pass rates of 95.1 ± 2.7% using 2 mm/2% criteria. Mean percentage dose errors were 2.6 ± 1.4% in high-dose regions (>90% of max dose) and 5.9 ± 1.9% globally. Dose-volume histogram analysis showed excellent preservation of target coverage metrics (Clinical Target Volume V95% difference: -0.6 ± 1.1%) and organ-at-risk constraints (parotid mean dose difference: -0.5 ± 1.5%). Computation time is under 3 minutes without sacrificing Monte Carlo-level accuracy. This study demonstrates the proof-of-principle of direct CBCT-based proton dose calculation using xLSTM neural networks. The approach eliminates traditional correction workflows while achieving comparable accuracy and computational efficiency suitable for adaptive protocols.
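The 2 mm/2% gamma analysis used for validation scores each reference dose point by its best combined dose-difference and distance-to-agreement match in the evaluated distribution. A minimal 1D sketch of a global gamma pass rate, not the paper's implementation (the function name, the 10% low-dose cutoff, and the grid spacing are illustrative assumptions):

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm=1.0, dd_pct=2.0, dta_mm=2.0, cutoff_pct=10.0):
    """Global 1D gamma index (2%/2 mm by default); simplified sketch."""
    ref, evl = np.asarray(ref, float), np.asarray(evl, float)
    norm = ref.max()                       # global normalization dose
    idx = np.arange(ref.size)
    gammas = []
    # evaluate only voxels above the low-dose cutoff, as is common practice
    for i in np.where(ref >= cutoff_pct / 100.0 * norm)[0]:
        dose_term = (evl - ref[i]) / (dd_pct / 100.0 * norm)
        dist_term = (idx - i) * spacing_mm / dta_mm
        gammas.append(np.sqrt(dose_term**2 + dist_term**2).min())
    return 100.0 * (np.array(gammas) <= 1.0).mean()
```

Identical distributions score a 100% pass rate; clinical tools compute the same index over 3D dose grids with sub-voxel interpolation.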

Feature-Based Machine Learning for Brain Metastasis Detection Using Clinical MRI

Rahi, A., Shafiabadi, M. H.

medRxiv preprint · Sep 22, 2025
Brain metastases represent one of the most common intracranial malignancies, yet early and accurate detection remains challenging, particularly in clinical datasets with limited availability of healthy controls. In this study, we developed a feature-based machine learning framework to classify patients with and without brain metastases using multi-modal clinical MRI scans. A dataset of 50 subjects from the UCSF Brain Metastases collection was analyzed, including pre- and post-contrast T1-weighted images and corresponding segmentation masks. We designed advanced feature extraction strategies capturing intensity, enhancement patterns, texture gradients, and histogram-based metrics, resulting in 44 quantitative descriptors per subject. To address the severe class imbalance (46 metastasis vs. 4 non-metastasis cases), we applied minority oversampling and noise-based augmentation, combined with stratified cross-validation. Among multiple classifiers, Random Forest consistently achieved the highest performance with an average accuracy of 96.7% and an area under the ROC curve (AUC) of 0.99 across five folds. The proposed approach highlights the potential of handcrafted radiomic-like features coupled with machine learning to improve metastasis detection in heterogeneous clinical MRI cohorts. These findings underscore the importance of methodological strategies for handling imbalanced data and support the integration of feature-based models as complementary tools for brain metastasis screening and research.
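To see how the imbalance-handling strategy fits together, here is a hedged sketch of minority oversampling with noise-based augmentation inside stratified cross-validation, on synthetic stand-in data (the feature values, jitter scale, and 4-fold split are assumptions; with only four controls, four folds is the most stratification a toy this size allows):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 44))            # 50 subjects x 44 hypothetical descriptors
y = np.array([1] * 46 + [0] * 4)         # 46 metastasis vs 4 non-metastasis cases
X[y == 1] += 0.8                         # give the classes some separation

def oversample(X_tr, y_tr, rng, noise=0.05):
    """Duplicate minority samples with small Gaussian jitter (noise-based augmentation)."""
    minority = np.flatnonzero(y_tr == 0)
    need = int((y_tr == 1).sum() - minority.size)
    pick = rng.choice(minority, size=need, replace=True)
    X_aug = X_tr[pick] + rng.normal(scale=noise, size=(need, X_tr.shape[1]))
    return np.vstack([X_tr, X_aug]), np.concatenate([y_tr, y_tr[pick]])

accs = []
for tr, te in StratifiedKFold(n_splits=4, shuffle=True, random_state=0).split(X, y):
    X_tr, y_tr = oversample(X[tr], y[tr], rng)   # augment the training fold only
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    accs.append(accuracy_score(y[te], clf.predict(X[te])))
```

Augmenting only inside each training fold, never the held-out fold, is what keeps the cross-validated estimate honest under this degree of imbalance.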

Exploring Machine Learning Models for Physical Dose Calculation in Carbon Ion Therapy Using Heterogeneous Imaging Data - A Proof of Concept Study

Miriam Schwarze, Hui Khee Looe, Björn Poppe, Pichaya Tappayuthpijarn, Leo Thomas, Hans Rabus

arXiv preprint · Sep 22, 2025
Background: Accurate and fast dose calculation is essential for optimizing carbon ion therapy. Existing machine learning (ML) models have been developed for other radiotherapy modalities. They use patient data with uniform CT imaging properties. Purpose: This study investigates the application of several ML models for physical dose calculation in carbon ion therapy and compares their ability to generalize to CT data with varying resolutions. Among the models examined is a Diffusion Model, which is tested for the first time for the calculation of physical dose distributions. Methods: A dataset was generated using publicly available CT images of the head and neck region. Monoenergetic carbon ion beams were simulated at various initial energies using Geant4 simulation software. A U-Net architecture was developed for dose prediction based on distributions of material density in patients and of absorbed dose in water. It was trained as a Generative Adversarial Network (GAN) generator, a Diffusion Model noise estimator, and as a standalone network. Their performances were compared with two models from the literature. Results: All models produced dose distributions deviating by less than 2% from those obtained by a full Monte Carlo simulation, even for a patient not seen during training. Dose calculation time on a GPU was in the range of 3 ms to 15 s. The resource-efficient U-Net appears to perform comparably to the more computationally intensive GAN and Diffusion Model. Conclusion: This study demonstrates that ML models can effectively balance accuracy and speed for physical dose calculation in carbon ion therapy. Using the computationally efficient U-Net can help conserve resources. The generalizability of the models to different CT image resolutions enables their use for different patients without extensive retraining.
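The U-Net's two inputs, material density and absorbed dose in water, are classically related through water-equivalent depth: each voxel's dose is roughly the water depth-dose curve evaluated at the water-equivalent path length accumulated up to that voxel. A crude 1D sketch of that relationship, which the paper's networks learn rather than compute (the function name and interpolation scheme are illustrative assumptions):

```python
import numpy as np

def warp_dose_by_density(dose_in_water, density, dz_mm=1.0):
    """Crude heterogeneity correction: look up the water depth-dose curve at the
    water-equivalent depth (WED) accumulated through the density map."""
    # WED at each voxel entrance: cumulative density-weighted path length
    wed = np.concatenate(([0.0], np.cumsum(density)[:-1])) * dz_mm
    depths = np.arange(dose_in_water.size) * dz_mm
    return np.interp(wed, depths, dose_in_water)
```

With unit density everywhere the warp is the identity, as expected for a water phantom; real transport adds scattering and nuclear effects that this lookup ignores, which is precisely the gap the ML models close.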

Path-Weighted Integrated Gradients for Interpretable Dementia Classification

Firuz Kamalov, Mohmad Al Falasi, Fadi Thabtah

arXiv preprint · Sep 22, 2025
Integrated Gradients (IG) is a widely used attribution method in explainable artificial intelligence (XAI). In this paper, we introduce Path-Weighted Integrated Gradients (PWIG), a generalization of IG that incorporates a customizable weighting function into the attribution integral. This modification allows for targeted emphasis along different segments of the path between a baseline and the input, enabling improved interpretability, noise mitigation, and the detection of path-dependent feature relevance. We establish its theoretical properties and illustrate its utility through experiments on a dementia classification task using the OASIS-1 MRI dataset. Attribution maps generated by PWIG highlight clinically meaningful brain regions associated with various stages of dementia, providing users with sharp and stable explanations. The results suggest that PWIG offers a flexible and theoretically grounded approach for enhancing attribution quality in complex predictive models.
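A hedged numerical sketch of the PWIG idea, approximating the weighted attribution integral with a midpoint Riemann sum (the API shape and weighting convention are assumptions; the paper defines the method, not this code):

```python
import numpy as np

def pwig(f_grad, x, baseline, weight, steps=256):
    """Path-Weighted Integrated Gradients via a midpoint Riemann sum.
    f_grad: callable returning the model gradient at a point on the path.
    weight: callable w(alpha) emphasizing path segments; w == 1 recovers plain IG."""
    alphas = (np.arange(steps) + 0.5) / steps          # midpoints of [0, 1]
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        point = baseline + a * (x - baseline)
        total += weight(a) * f_grad(point)
    return (x - baseline) * total / steps

# Sanity check: for a linear model f(x) = w . x with uniform weighting,
# the attribution reduces to w_i * (x_i - baseline_i), the exact IG result.
w = np.array([1.0, -2.0, 3.0])
attr = pwig(lambda p: w, np.array([1.0, 1.0, 1.0]), np.zeros(3), lambda a: 1.0)
```

Swapping in a non-uniform `weight` (for example one peaked near the input end of the path) is what lets the method emphasize path-dependent feature relevance.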

Exploring transfer learning techniques for classifying Alzheimer's disease with rs-fMRI.

Abbasabadi S, Fattahi P, Shiri M

PubMed paper · Sep 22, 2025
Alzheimer's disease, the most prevalent form of dementia, progressively destroys memory at each stage and follows a fatal course. This irreversible disease appears more frequently in older populations. Even though research on Alzheimer's disease has risen over the past few years, the intricacy of brain structure and function creates challenges for accurate disease diagnosis. As a neuroimaging technology, resting-state functional magnetic resonance imaging enables researchers to study debilitating neural diseases while scanning the brain. This research investigates resting-state functional magnetic resonance imaging approaches and deep learning methods to distinguish between Alzheimer's patients and normal individuals. Resting-state functional magnetic resonance imaging of 97 participants is obtained from the Alzheimer's Disease Neuroimaging Initiative database, with 56 participants classified in the Alzheimer's disease group and 41 in the normal control group. Extensive preprocessing is applied to the resting-state functional magnetic resonance imaging data before classification. Using transfer learning, classification between the normal control and Alzheimer's disease groups is conducted with the proposed VGG19, AlexNet, and ResNet50 algorithms; their classification accuracies are 96.91%, 98.71%, and 98.20%, respectively. For evaluation, precision, recall, and F1-score are utilized as additional assessment metrics. The AlexNet model exhibits higher accuracy than the other models and outperforms them in the other evaluation metrics, including precision, recall, and F1-score. While AlexNet achieves the highest overall classification performance, ResNet50 demonstrates superior interpretability through Grad-CAM visualizations, producing more anatomically focused and clinically meaningful attention maps.
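The evaluation metrics reported above follow their standard definitions; a brief scikit-learn sketch on toy labels (the label vectors are invented purely for illustration):

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy binary labels: 1 = Alzheimer's disease, 0 = normal control (illustrative only)
y_true = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1, 0, 1]

acc  = accuracy_score(y_true, y_pred)    # (TP + TN) / all = 8/10
prec = precision_score(y_true, y_pred)   # TP / (TP + FP) = 5/6
rec  = recall_score(y_true, y_pred)      # TP / (TP + FN) = 5/6
f1   = f1_score(y_true, y_pred)          # harmonic mean of precision and recall
```

Reporting all four together matters here because accuracy alone can look strong on a 56-vs-41 cohort even when one class is systematically misclassified.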

Automated Labeling of Intracranial Arteries with Uncertainty Quantification Using Deep Learning

Javier Bisbal, Patrick Winter, Sebastian Jofre, Aaron Ponce, Sameer A. Ansari, Ramez Abdalla, Michael Markl, Oliver Welin Odeback, Sergio Uribe, Cristian Tejos, Julio Sotelo, Susanne Schnell, David Marlevi

arXiv preprint · Sep 22, 2025
Accurate anatomical labeling of intracranial arteries is essential for cerebrovascular diagnosis and hemodynamic analysis but remains time-consuming and subject to interoperator variability. We present a deep learning-based framework for automated artery labeling from 3D Time-of-Flight Magnetic Resonance Angiography (3D ToF-MRA) segmentations (n=35), incorporating uncertainty quantification to enhance interpretability and reliability. We evaluated three convolutional neural network architectures: (1) a UNet with residual encoder blocks, reflecting commonly used baselines in vascular labeling; (2) CS-Net, an attention-augmented UNet incorporating channel and spatial attention mechanisms for enhanced curvilinear structure recognition; and (3) nnUNet, a self-configuring framework that automates preprocessing, training, and architectural adaptation based on dataset characteristics. Among these, nnUNet achieved the highest labeling performance (average Dice score: 0.922; average surface distance: 0.387 mm), with improved robustness in anatomically complex vessels. To assess predictive confidence, we implemented test-time augmentation (TTA) and introduced a novel coordinate-guided strategy to reduce interpolation errors during augmented inference. The resulting uncertainty maps reliably indicated regions of anatomical ambiguity, pathological variation, or manual labeling inconsistency. We further validated clinical utility by comparing flow velocities derived from automated and manual labels in co-registered 4D Flow MRI datasets, observing close agreement with no statistically significant differences. Our framework offers a scalable, accurate, and uncertainty-aware solution for automated cerebrovascular labeling, supporting downstream hemodynamic analysis and facilitating clinical integration.
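Two of the quantities above are easy to make concrete: the Dice overlap used to score labeling, and flip-based test-time augmentation as a source of uncertainty maps. A hedged sketch (the flip set and the dummy predictor are assumptions, not the paper's coordinate-guided strategy):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary label masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def tta_uncertainty(predict, volume, flips=(None, 0, 1)):
    """Flip-based TTA: predict on flipped copies, undo each flip, and use the
    per-voxel disagreement (variance) across predictions as an uncertainty map."""
    preds = []
    for ax in flips:
        v = volume if ax is None else np.flip(volume, axis=ax)
        p = predict(v)
        preds.append(p if ax is None else np.flip(p, axis=ax))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)
```

A perfectly flip-equivariant predictor yields zero variance everywhere; in practice the variance concentrates exactly where the paper reports ambiguity, i.e., anatomically complex or inconsistently labeled vessels.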

SeruNet-MS: A Two-Stage Interpretable Framework for Multiple Sclerosis Risk Prediction with SHAP-Based Explainability.

Aksoy S, Demircioglu P, Bogrekci I

PubMed paper · Sep 22, 2025
Background/Objectives: Multiple sclerosis (MS) is a chronic demyelinating disease where early identification of patients at risk of conversion from clinically isolated syndrome (CIS) to clinically definite MS remains a critical unmet clinical need. Existing machine learning approaches often lack interpretability, limiting clinical trust and adoption. The objective of this research was to develop a novel two-stage machine learning framework with comprehensive explainability to predict CIS-to-MS conversion while addressing demographic bias and interpretability limitations. Methods: A cohort of 177 CIS patients from the National Institute of Neurology and Neurosurgery in Mexico City was analyzed using SeruNet-MS, a two-stage framework that separates demographic baseline risk from clinical risk modification. Stage 1 applied logistic regression to demographic features, while Stage 2 incorporated 25 clinical and symptom features, including MRI lesions, cerebrospinal fluid biomarkers, electrophysiological tests, and symptom characteristics. Patient-level interpretability was achieved through SHAP (SHapley Additive exPlanations) analysis, providing transparent attribution of each factor's contribution to risk assessment. Results: The two-stage model achieved a ROC-AUC of 0.909, accuracy of 0.806, precision of 0.842, and recall of 0.800, outperforming baseline machine learning methods. Cross-validation confirmed stable performance (0.838 ± 0.095 AUC) with appropriate generalization. SHAP analysis identified periventricular lesions, oligoclonal bands, and symptom complexity as the strongest predictors, with clinical examples illustrating transparent patient-specific risk communication. Conclusions: The two-stage approach effectively mitigates demographic bias by separating non-modifiable factors from actionable clinical findings. SHAP explanations provide clinicians with clear, individualized insights into prediction drivers, enhancing trust and supporting decision making. This framework demonstrates that high predictive performance can be achieved without sacrificing interpretability, representing a significant step forward for explainable AI in MS risk stratification and real-world clinical adoption.
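A minimal sketch of the two-stage structure on synthetic stand-in data (how Stage 2 consumes the Stage 1 baseline risk is an assumption here; the paper's exact coupling and the SHAP step are omitted):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 177                                   # cohort size from the paper
demo = rng.normal(size=(n, 3))            # hypothetical demographic features
clin = rng.normal(size=(n, 25))           # 25 clinical/symptom features
y = (demo[:, 0] + 2.0 * clin[:, 0] + rng.normal(size=n) > 0).astype(int)

# Stage 1: demographic baseline risk (logistic regression, as in the paper)
stage1 = LogisticRegression().fit(demo, y)
base_logodds = stage1.decision_function(demo)

# Stage 2: clinical features modify the baseline risk; here the Stage 1
# log-odds simply enter as an extra feature (one plausible coupling)
X2 = np.column_stack([base_logodds, clin])
stage2 = LogisticRegression().fit(X2, y)
risk = stage2.predict_proba(X2)[:, 1]
```

Separating the stages like this is what lets non-modifiable demographic risk be reported apart from the actionable clinical contribution; in the paper, SHAP is then applied to attribute each Stage 2 feature's effect per patient.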

Measurement Score-Based MRI Reconstruction with Automatic Coil Sensitivity Estimation

Tingjun Liu, Chicago Y. Park, Yuyang Hu, Hongyu An, Ulugbek S. Kamilov

arXiv preprint · Sep 22, 2025
Diffusion-based inverse problem solvers (DIS) have recently shown outstanding performance in compressed-sensing parallel MRI reconstruction by combining diffusion priors with physical measurement models. However, they typically rely on pre-calibrated coil sensitivity maps (CSMs) and ground-truth images, often making them impractical: CSMs are difficult to estimate accurately under heavy undersampling, and ground-truth images are often unavailable. We propose the Calibration-free Measurement Score-based diffusion Model (C-MSM), a new method that eliminates these dependencies by jointly performing automatic CSM estimation and self-supervised learning of measurement scores directly from k-space data. C-MSM reconstructs images by approximating the full posterior distribution through stochastic sampling over partial measurement posterior scores, while simultaneously estimating CSMs. Experiments on the multi-coil brain fastMRI dataset show that C-MSM achieves reconstruction performance close to DIS with clean diffusion priors, even without access to clean training data and pre-calibrated CSMs.
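For context on the measurement model, here is a brief sketch of retrospective k-space undersampling with a zero-filled reconstruction, the kind of forward operator such solvers invert (the mask pattern, acceleration factor, and center fraction are illustrative choices, not C-MSM itself):

```python
import numpy as np

def undersample_kspace(image, accel=4, center_frac=0.08, seed=0):
    """Retrospective 1D random column undersampling of k-space with a fully
    sampled low-frequency center, then a zero-filled inverse FFT reconstruction."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))            # centered k-space
    ny, nx = k.shape
    mask_cols = rng.random(nx) < 1.0 / accel           # random phase-encode lines
    c = int(center_frac * nx)
    mask_cols[nx // 2 - c // 2 : nx // 2 + c // 2] = True   # keep the center
    k_us = k * mask_cols[None, :]                      # zero out unsampled lines
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k_us)))
    return recon, mask_cols
```

With full sampling the zero-filled reconstruction recovers the image exactly; at clinically useful accelerations it aliases, which is the gap a diffusion prior (or, in C-MSM, a measurement score learned from k-space alone) is meant to close.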

Training the next generation of physicians for artificial intelligence-assisted clinical neuroradiology: ASNR MICCAI Brain Tumor Segmentation (BraTS) 2025 Lighthouse Challenge education platform

Raisa Amiruddin, Nikolay Y. Yordanov, Nazanin Maleki, Pascal Fehringer, Athanasios Gkampenis, Anastasia Janas, Kiril Krantchev, Ahmed Moawad, Fabian Umeh, Salma Abosabie, Sara Abosabie, Albara Alotaibi, Mohamed Ghonim, Mohanad Ghonim, Sedra Abou Ali Mhana, Nathan Page, Marko Jakovljevic, Yasaman Sharifi, Prisha Bhatia, Amirreza Manteghinejad, Melisa Guelen, Michael Veronesi, Virginia Hill, Tiffany So, Mark Krycia, Bojan Petrovic, Fatima Memon, Justin Cramer, Elizabeth Schrickel, Vilma Kosovic, Lorenna Vidal, Gerard Thompson, Ichiro Ikuta, Basimah Albalooshy, Ali Nabavizadeh, Nourel Hoda Tahon, Karuna Shekdar, Aashim Bhatia, Claudia Kirsch, Gennaro D'Anna, Philipp Lohmann, Amal Saleh Nour, Andriy Myronenko, Adam Goldman-Yassen, Janet R. Reid, Sanjay Aneja, Spyridon Bakas, Mariam Aboian

arXiv preprint · Sep 21, 2025
High-quality reference standard image data creation by neuroradiology experts for automated clinical tools can be a powerful tool for neuroradiology and artificial intelligence education. We developed a multimodal educational approach for students and trainees during the MICCAI Brain Tumor Segmentation Lighthouse Challenge 2025, a landmark initiative to develop accurate brain tumor segmentation algorithms. Fifty-six medical students and radiology trainees volunteered to annotate brain tumor MR images for the BraTS challenges of 2023 and 2024, guided by faculty-led didactics on neuropathology MRI. Among the 56 annotators, 14 select volunteers were then paired with neuroradiology faculty for guided one-on-one annotation sessions for BraTS 2025. Lectures on neuroanatomy, pathology, and AI, journal clubs, and data scientist-led workshops were organized online. Annotators and audience members completed surveys on their perceived knowledge before and after annotations and lectures, respectively. Fourteen coordinators, each paired with a neuroradiologist, completed the data annotation process, averaging 1322.9 ± 760.7 hours per dataset per pair and 1200 segmentations in total. On a scale of 1-10, annotation coordinators reported a significant increase in familiarity with image segmentation software pre- and post-annotation, moving from an initial average of 6 ± 2.9 to a final average of 8.9 ± 1.1, and a significant increase in familiarity with brain tumor features pre- and post-annotation, moving from an initial average of 6.2 ± 2.4 to a final average of 8.1 ± 1.2. We demonstrate an innovative offering for providing neuroradiology and AI education through an image segmentation challenge to enhance understanding of algorithm development, reinforce the concept of a data reference standard, and diversify opportunities for AI-driven image analysis among future physicians.

Diffusion-based arbitrary-scale magnetic resonance image super-resolution via progressive k-space reconstruction and denoising.

Wang J, Shi Z, Gu X, Yang Y, Sun J

PubMed paper · Sep 20, 2025
Acquiring high-resolution magnetic resonance (MR) images is challenging due to constraints such as hardware limitations and acquisition times. Super-resolution (SR) techniques offer a potential solution to enhance MR image quality without changing the magnetic resonance imaging (MRI) hardware. However, typical SR methods are designed for fixed upsampling scales and often produce over-smoothed images that lack fine textures and edge details. To address these issues, we propose a unified diffusion-based framework for arbitrary-scale in-plane MR image SR, dubbed Progressive Reconstruction and Denoising Diffusion Model (PRDDiff). Specifically, the forward diffusion process of PRDDiff gradually masks out high-frequency components and adds Gaussian noise to simulate the downsampling process in MRI. To reverse this process, we propose an Adaptive Resolution Restoration Network (ARRNet), which introduces a current step corresponding to the resolution of the input MR image and an ending step corresponding to the target resolution. This design guides the ARRNet to recover the clean MR image at the target resolution from the input MR image. The SR process starts from an MR image at the initial resolution and gradually enhances it to higher resolutions by progressively reconstructing high-frequency components and removing the noise, based on the MR image recovered by the ARRNet. Furthermore, we design a multi-stage SR strategy that incrementally enhances resolution through multiple sequential stages to further improve recovery accuracy. Each stage utilizes a set number of sampling steps from PRDDiff, guided by a specific ending step, to recover details pertinent to the predefined intermediate resolution. We conduct extensive experiments on the fastMRI knee dataset, the fastMRI brain dataset, our real-collected LR-HR brain dataset, and a clinical pediatric cerebral palsy (CP) dataset, including T1-weighted and T2-weighted images for the brain and proton density-weighted images for the knee. The results demonstrate that PRDDiff outperforms previous MR image super-resolution methods in terms of reconstruction accuracy, generalization, downstream lesion segmentation accuracy, and CP classification performance. The code is publicly available at https://github.com/Jiazhen-Wang/PRDDiff-main.
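The forward process described above, masking high-frequency k-space components and adding Gaussian noise, can be sketched as follows (the radial low-pass mask and its parameterization are assumptions; the paper's exact degradation schedule differs):

```python
import numpy as np

def forward_degrade(image, keep_frac, noise_sigma, seed=0):
    """PRDDiff-style forward step (sketch): zero k-space beyond a radius to
    simulate downsampling, then add Gaussian noise in image space."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = k.shape
    yy, xx = np.mgrid[:ny, :nx]
    r = np.hypot(yy - ny / 2, xx - nx / 2)             # distance from k-space center
    k *= r <= keep_frac * min(ny, nx) / 2              # radial low-pass mask
    low = np.fft.ifft2(np.fft.ifftshift(k)).real
    return low + rng.normal(scale=noise_sigma, size=image.shape)
```

Shrinking `keep_frac` and growing `noise_sigma` over diffusion steps yields progressively lower-resolution, noisier images; the reverse (SR) process then restores frequency bands and denoises step by step, which is what lets one model serve arbitrary target scales.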
