
A Comprehensive Framework for Uncertainty Quantification of Voxel-wise Supervised Models in IVIM MRI

Nicola Casali, Alessandro Brusaferri, Giuseppe Baselli, Stefano Fumagalli, Edoardo Micotti, Gianluigi Forloni, Riaz Hussein, Giovanna Rizzo, Alfonso Mastropietro

arXiv preprint · Aug 6, 2025
Accurate estimation of intravoxel incoherent motion (IVIM) parameters from diffusion-weighted MRI remains challenging due to the ill-posed nature of the inverse problem and high sensitivity to noise, particularly in the perfusion compartment. In this work, we propose a probabilistic deep learning framework based on Deep Ensembles (DE) of Mixture Density Networks (MDNs), enabling estimation of total predictive uncertainty and its decomposition into aleatoric (AU) and epistemic (EU) components. The method was benchmarked against non-probabilistic neural networks, a Bayesian fitting approach, and a probabilistic network with a single-Gaussian parametrization. Supervised training was performed on synthetic data, and evaluation was conducted on both simulated data and two in vivo datasets. The reliability of the quantified uncertainties was assessed using calibration curves, output distribution sharpness, and the Continuous Ranked Probability Score (CRPS). MDNs produced better-calibrated and sharper predictive distributions for the diffusion coefficient D and perfusion fraction f, although slight overconfidence was observed for the pseudo-diffusion coefficient D*. The Robust Coefficient of Variation (RCV) indicated smoother in vivo estimates for D* with MDNs than with the single-Gaussian model. Despite the training data covering the expected physiological range, elevated EU in vivo suggests a mismatch with real acquisition conditions, highlighting the importance of incorporating EU, which the DE approach makes possible. Overall, we present a comprehensive framework for IVIM fitting with uncertainty quantification that enables the identification and interpretation of unreliable estimates. The proposed approach can also be adapted to fit other physical models through appropriate architectural and simulation adjustments.
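The abstract does not include code, but the aleatoric/epistemic split from a deep ensemble of MDNs is commonly obtained via the law of total variance; below is a minimal NumPy sketch under that assumption, for a single voxel and a single parameter, with hypothetical shapes and values.

```python
import numpy as np

# Minimal sketch (not the authors' code): decomposing predictive uncertainty
# for one voxel and one IVIM parameter from a deep ensemble of M mixture
# density networks, each outputting K Gaussian components.
rng = np.random.default_rng(0)
M, K = 5, 3
pi = rng.dirichlet(np.ones(K), size=M)                    # mixture weights per member
mu = rng.normal(1.0e-3, 1.0e-4, size=(M, K))              # component means (e.g. D in mm^2/s)
sigma = np.abs(rng.normal(5.0e-5, 1.0e-5, size=(M, K)))   # component standard deviations

# Per-member mixture mean and variance (law of total variance within a member).
member_mean = np.sum(pi * mu, axis=1)
member_var = np.sum(pi * (sigma**2 + mu**2), axis=1) - member_mean**2

# Ensemble decomposition: aleatoric = mean within-member variance,
# epistemic = variance of member means, total = their sum.
aleatoric = member_var.mean()
epistemic = member_mean.var()
total = aleatoric + epistemic
print(f"AU={aleatoric:.3e}  EU={epistemic:.3e}  total={total:.3e}")
```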

TotalRegistrator: Towards a Lightweight Foundation Model for CT Image Registration

Xuan Loc Pham, Gwendolyn Vuurberg, Marjan Doppen, Joey Roosen, Tip Stille, Thi Quynh Ha, Thuy Duong Quach, Quoc Vu Dang, Manh Ha Luu, Ewoud J. Smit, Hong Son Mai, Mattias Heinrich, Bram van Ginneken, Mathias Prokop, Alessa Hering

arXiv preprint · Aug 6, 2025
Image registration is a fundamental technique in the analysis of longitudinal and multi-phase CT images within clinical practice. However, most existing methods are tailored for single-organ applications, limiting their generalizability to other anatomical regions. This work presents TotalRegistrator, an image registration framework capable of aligning multiple anatomical regions simultaneously using a standard UNet architecture and a novel field decomposition strategy. The model is lightweight, requiring only 11GB of GPU memory for training. To train and evaluate our method, we constructed a large-scale longitudinal dataset comprising 695 whole-body (thorax-abdomen-pelvic) paired CT scans from individual patients acquired at different time points. We benchmarked TotalRegistrator against a generic classical iterative algorithm and a recent foundation model for image registration. To further assess robustness and generalizability, we evaluated our model on three external datasets: the public thoracic and abdominal datasets from the Learn2Reg challenge, and a private multiphase abdominal dataset from a collaborating hospital. Experimental results on the in-house dataset show that the proposed approach generally surpasses baseline methods in multi-organ abdominal registration, with a slight drop in lung alignment performance. On out-of-distribution datasets, it achieved competitive results compared to leading single-organ models, despite not being fine-tuned for those tasks, demonstrating strong generalizability. The source code will be publicly available at: https://github.com/DIAGNijmegen/oncology_image_registration.git.
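The abstract does not detail the field-decomposition strategy, so no attempt is made to reproduce it here; as a generic illustration of the warping step that any displacement-field registration model (UNet-based or otherwise) relies on, here is a PyTorch sketch assuming a dense field expressed in voxel units.

```python
import torch
import torch.nn.functional as F

def warp_volume(moving: torch.Tensor, disp: torch.Tensor) -> torch.Tensor:
    """Warp a moving volume with a dense displacement field.

    moving: (1, 1, D, H, W) intensities.
    disp:   (1, 3, D, H, W) displacements in voxels, ordered (dz, dy, dx).
    Generic sketch only; not TotalRegistrator's decomposition strategy.
    """
    _, _, D, H, W = moving.shape
    # Identity sampling grid in voxel coordinates.
    zs, ys, xs = torch.meshgrid(
        torch.arange(D), torch.arange(H), torch.arange(W), indexing="ij"
    )
    grid = torch.stack((zs, ys, xs), dim=0).float().unsqueeze(0)  # (1, 3, D, H, W)
    new_coords = grid + disp
    # Normalize to [-1, 1] and reorder to (x, y, z) as grid_sample expects.
    norm = torch.stack(
        (
            2 * new_coords[:, 2] / (W - 1) - 1,
            2 * new_coords[:, 1] / (H - 1) - 1,
            2 * new_coords[:, 0] / (D - 1) - 1,
        ),
        dim=-1,
    )  # (1, D, H, W, 3)
    return F.grid_sample(moving, norm, mode="bilinear", align_corners=True)

# Example: a zero displacement field returns the original volume (up to interpolation).
vol = torch.rand(1, 1, 8, 16, 16)
warped = warp_volume(vol, torch.zeros(1, 3, 8, 16, 16))
```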

A Survey of Medical Point Cloud Shape Learning: Registration, Reconstruction and Variation

Tongxu Zhang, Zhiming Liang, Bei Wang

arXiv preprint · Aug 5, 2025
Point clouds have become an increasingly important representation for 3D medical imaging, offering a compact, surface-preserving alternative to traditional voxel or mesh-based approaches. Recent advances in deep learning have enabled rapid progress in extracting, modeling, and analyzing anatomical shapes directly from point cloud data. This paper provides a comprehensive and systematic survey of learning-based shape analysis for medical point clouds, focusing on three fundamental tasks: registration, reconstruction, and variation modeling. We review recent literature from 2021 to 2025, summarize representative methods, datasets, and evaluation metrics, and highlight clinical applications and unique challenges in the medical domain. Key trends include the integration of hybrid representations, large-scale self-supervised models, and generative techniques. We also discuss current limitations, such as data scarcity, inter-patient variability, and the need for interpretable and robust solutions for clinical deployment. Finally, future directions are outlined for advancing point cloud-based shape learning in medical imaging.

Recurrent inference machine for medical image registration.

Zhang Y, Zhao Y, Xue H, Kellman P, Klein S, Tao Q

PubMed paper · Aug 5, 2025
Image registration is essential for medical imaging applications where alignment of voxels across multiple images is needed for qualitative or quantitative analysis. With recent advances in deep neural networks and parallel computing, deep learning-based medical image registration methods have become competitive thanks to their flexible modeling and fast inference. However, compared with traditional optimization-based registration methods, the speed advantage may come at the cost of registration performance at inference time. Moreover, deep neural networks typically demand large training datasets, whereas optimization-based methods are training-free. To improve registration accuracy and data efficiency, we propose a novel image registration method, the Recurrent Inference Image Registration (RIIR) network. RIIR is formulated as a meta-learning solver that addresses the registration problem iteratively. It tackles the accuracy and data-efficiency issues by learning the update rule of an optimization procedure, combining implicit regularization with explicit gradient input. We extensively evaluated RIIR on brain MRI, lung CT, and quantitative cardiac MRI datasets in terms of both registration accuracy and training data efficiency. Our experiments showed that RIIR outperformed a range of deep learning-based methods, even with only 5% of the training data, demonstrating high data efficiency. Ablation studies highlighted the added value of the hidden states introduced in the recurrent inference framework for meta-learning. RIIR offers a highly data-efficient framework for deep learning-based medical image registration.
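The published RIIR architecture is not reproduced in the abstract; the PyTorch sketch below only illustrates the general idea of a learned iterative update that consumes the current deformation field, an explicit similarity gradient, and a hidden state. All module names and sizes are hypothetical.

```python
import torch
import torch.nn as nn

class LearnedUpdate(nn.Module):
    """Hypothetical recurrent update step: maps (current field, similarity
    gradient, hidden state) to a field increment and a new hidden state.
    Illustrative sketch only, not the published RIIR architecture."""
    def __init__(self, hidden_ch: int = 16):
        super().__init__()
        # 3 field channels + 3 gradient channels + hidden channels in,
        # 3 increment channels + hidden channels out.
        self.net = nn.Sequential(
            nn.Conv3d(3 + 3 + hidden_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 3 + hidden_ch, 3, padding=1),
        )

    def forward(self, field, grad, hidden):
        out = self.net(torch.cat([field, grad, hidden], dim=1))
        delta, new_hidden = out[:, :3], torch.tanh(out[:, 3:])
        return field + delta, new_hidden

# One toy iteration on a small volume.
step = LearnedUpdate()
field = torch.zeros(1, 3, 8, 8, 8)
hidden = torch.zeros(1, 16, 8, 8, 8)
grad = torch.randn(1, 3, 8, 8, 8)  # gradient of a similarity metric w.r.t. the field
field, hidden = step(field, grad, hidden)
```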

Modeling differences in neurodevelopmental maturity of the reading network using support vector regression on functional connectivity data

Lasnick, O. H. M., Luo, J., Kinnie, B., Kamal, S., Low, S., Marrouch, N., Hoeft, F.

bioRxiv preprint · Aug 5, 2025
The construction of growth charts trained to predict age or developmental deviation (the brain-age index) based on structural/functional properties of the brain may be informative of children's neurodevelopmental trajectories. When applied to both typically and atypically developing populations, results may indicate that a particular condition is associated with atypical maturation of certain brain networks. Here, we focus on the relationship between reading disorder (RD) and maturation of functional connectivity (FC) patterns in the prototypical reading/language network using a cross-sectional sample of N = 742 participants aged 6-21 years. A support vector regression model is trained to predict chronological age from FC data derived from a whole-brain model as well as multiple reduced models, which are trained on FC data generated from a successively smaller number of regions in the brain's reading network. We hypothesized that the trained models would show systematic underestimation of brain network maturity for poor readers, particularly for the models trained with reading/language regions. Comparisons of the different models' predictions revealed that while the whole-brain model outperforms the others in terms of overall prediction accuracy, all models successfully predicted brain maturity, including the one trained with the smallest amount of FC data. In addition, all models showed that reading ability affected the brain-age gap, with poor readers' ages being underestimated and advanced readers' ages being overestimated. Exploratory results demonstrated that the most important regions and connections for prediction were derived from the default mode and frontoparietal control networks.

Glossary
- Developmental dyslexia / reading disorder (RD): A specific learning disorder affecting reading ability in the absence of any other explanatory condition such as intellectual disability or visual impairment.
- Support vector regression (SVR): A supervised machine learning technique which predicts continuous outcomes (such as chronological age) rather than classifying each observation; finds the best-fit function within a defined error margin.
- Principal component analysis (PCA): A dimensionality reduction technique that transforms a high-dimensional dataset with many features per observation into a reduced set of principal components for each observation; each component is a linear combination of several original (correlated) features, and the final set of components are all orthogonal (uncorrelated) to one another.
- Brain-age index: A numerical index quantifying deviation from the brain's typical developmental trajectory for a single individual; may be based on a variety of morphometric or functional properties of the brain, resulting in different estimates for the same participant depending on the imaging modality used.
- Brain-age gap (BAG): The difference, given in units of time, between a participant's true chronological age and a predictive model's estimated age for that participant based on brain data (Actual - Predicted); may be used as a brain-age index.

Highlights
- A machine learning model trained on functional data predicted participants' ages.
- The model showed variability in age prediction accuracy based on reading skills.
- The model highly weighted data from frontoparietal and default mode regions.
- Neural markers of reading and language are diffusely represented in the brain.
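As an illustration of the pipeline described above (not the authors' code), here is a minimal scikit-learn sketch with synthetic functional-connectivity features; the brain-age gap follows the Actual - Predicted convention given in the glossary.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 742 participants, 500 functional-connectivity features.
rng = np.random.default_rng(0)
fc = rng.normal(size=(742, 500))
age = rng.uniform(6, 21, size=742)

X_tr, X_te, y_tr, y_te = train_test_split(fc, age, random_state=0)

# PCA for dimensionality reduction followed by support vector regression.
model = make_pipeline(PCA(n_components=50), SVR(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)

predicted_age = model.predict(X_te)
brain_age_gap = y_te - predicted_age  # Actual - Predicted, as defined in the glossary
print(f"mean absolute error: {np.abs(brain_age_gap).mean():.2f} years")
```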

Conditional Diffusion Model with Anatomical-Dose Dual Constraints for End-to-End Multi-Tumor Dose Prediction

Hui Xie, Haiqin Hu, Lijuan Ding, Qing Li, Yue Sun, Tao Tan

arXiv preprint · Aug 4, 2025
Radiotherapy treatment planning often relies on time-consuming, trial-and-error adjustments that heavily depend on the expertise of specialists, while existing deep learning methods face limitations in generalization, prediction accuracy, and clinical applicability. To tackle these challenges, we propose ADDiff-Dose, an Anatomical-Dose Dual Constraints Conditional Diffusion Model for end-to-end multi-tumor dose prediction. The model employs LightweightVAE3D to compress high-dimensional CT data and integrates multimodal inputs, including target and organ-at-risk (OAR) masks and beam parameters, within a progressive noise addition and denoising framework. It incorporates conditional features via a multi-head attention mechanism and utilizes a composite loss function combining MSE, conditional terms, and KL divergence to ensure both dosimetric accuracy and compliance with clinical constraints. Evaluation on a large-scale public dataset (2,877 cases) and three external institutional cohorts (450 cases in total) demonstrates that ADDiff-Dose significantly outperforms traditional baselines, achieving an MAE of 0.101-0.154 (compared to 0.316 for UNet and 0.169 for GAN models), a DICE coefficient of 0.927 (a 6.8% improvement), and limiting spinal cord maximum dose error to within 0.1 Gy. The average plan generation time per case is reduced to 22 seconds. Ablation studies confirm that the structural encoder enhances compliance with clinical dose constraints by 28.5%. To our knowledge, this is the first study to introduce a conditional diffusion model framework for radiotherapy dose prediction, offering a generalizable and efficient solution for automated treatment planning across diverse tumor sites, with the potential to substantially reduce planning time and improve clinical workflow efficiency.
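The abstract mentions a composite loss combining MSE, conditional terms, and KL divergence without giving its exact form; the sketch below is one plausible reading, and the loss weights as well as the choice of an L1 dose-fidelity term as the conditional component are assumptions, not the published ADDiff-Dose formulation.

```python
import torch
import torch.nn.functional as F

def composite_loss(pred_noise, true_noise, mu, logvar, pred_dose, target_dose,
                   lam_kl=1e-4, lam_cond=1.0):
    """Illustrative composite loss in the spirit of the abstract: a diffusion
    noise-prediction MSE, a VAE KL term for the latent CT encoding, and a
    conditional dose-fidelity term. Weights and terms are assumptions."""
    mse = F.mse_loss(pred_noise, true_noise)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    cond = F.l1_loss(pred_dose, target_dose)
    return mse + lam_kl * kl + lam_cond * cond
```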

Early prediction of proton therapy dose distributions and DVHs for hepatocellular carcinoma using contour-based CNN models from diagnostic CT and MRI.

Rachi T, Tochinai T

PubMed paper · Aug 4, 2025
Proton therapy is commonly used for treating hepatocellular carcinoma (HCC); however, its feasibility can be challenging to assess in large tumors or those adjacent to critical organs at risk (OARs), which are typically assessed only after planning computed tomography (CT) acquisition. This study aimed to predict proton dose distributions using diagnostic CT (dCT) and diagnostic MRI (dMRI) with a convolutional neural network (CNN), enabling early treatment feasibility assessments. Dose distributions and dose-volume histograms (DVHs) were calculated for 118 patients with HCC using intensity-modulated proton therapy (IMPT) and passive proton therapy. A CPU-based CNN model was used to predict DVHs and 3D dose distributions from diagnostic images. Prediction accuracy was evaluated using mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and gamma passing rate with a 3 mm/3% criterion. The predicted DVHs and dose distributions showed high agreement with actual values. MAE remained below 3.0%, with passive techniques achieving 1.2-1.8%. MSE was below 0.004 in all cases. PSNR ranged from 24 to 28 dB, and SSIM exceeded 0.94 in most conditions. Gamma passing rates averaged 82-83% for IMPT and 92-93% for passive techniques. The model achieved comparable accuracy when using dMRI and dCT. This study demonstrates that early dose distribution prediction from diagnostic imaging is feasible and accurate using a lightweight CNN model. Despite anatomical variability between diagnostic and planning images, this approach provides timely insights into treatment feasibility, potentially supporting insurance pre-authorization, reducing unnecessary imaging, and optimizing clinical workflows for HCC proton therapy.
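For reference, the reported similarity metrics (MAE, MSE, PSNR, SSIM) can be computed with NumPy and scikit-image; the sketch below uses a hypothetical predicted and reference 3D dose array and omits the gamma passing rate, which typically requires a dedicated dosimetry tool.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((32, 64, 64)).astype(np.float32)   # hypothetical planned dose
predicted = reference + rng.normal(0, 0.02, reference.shape).astype(np.float32)

data_range = float(reference.max() - reference.min())
mae = np.mean(np.abs(predicted - reference))
mse = np.mean((predicted - reference) ** 2)
psnr = peak_signal_noise_ratio(reference, predicted, data_range=data_range)
ssim = structural_similarity(reference, predicted, data_range=data_range)

print(f"MAE={mae:.4f}  MSE={mse:.5f}  PSNR={psnr:.1f} dB  SSIM={ssim:.3f}")
```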

Anatomical Considerations for Achieving Optimized Outcomes in Individualized Cochlear Implantation.

Timm ME, Avallone E, Timm M, Salcher RB, Rudnik N, Lenarz T, Schurzig D

PubMed paper · Aug 1, 2025
Machine learning models can assist with the selection of electrode arrays required for optimal insertion angles. Cochlear implantation is a successful therapy in patients with severe to profound hearing loss. The effectiveness of a cochlear implant depends on precise insertion and positioning of the electrode array within the cochlea, which is known for its variability in shape and size. Preoperative imaging such as CT or MRI plays a significant role in evaluating cochlear anatomy and planning the surgical approach to optimize outcomes. In this study, preoperative and postoperative CT and CBCT data from 558 cochlear-implant patients were analyzed with respect to the influence of anatomical factors and insertion depth on the resulting insertion angle. Machine learning models can predict the insertion depths needed for optimal insertion angles, with performance improving when cochlear dimensions are included in the models. A simple linear regression using just the insertion depth explained 88% of the variability, whereas adding cochlear length, or diameter and width, improved predictions to up to 94%.
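A minimal scikit-learn sketch of the kind of regression comparison described above, with entirely synthetic stand-in measurements; the 88% and 94% figures are the study's reported values, not something this toy data reproduces.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-patient measurements (the real study analyzed 558 cases).
rng = np.random.default_rng(0)
n = 558
insertion_depth = rng.uniform(18, 28, n)        # mm
cochlear_diameter = rng.uniform(8.0, 10.5, n)   # mm
cochlear_width = rng.uniform(6.0, 8.0, n)       # mm
# Synthetic insertion angle loosely tied to the predictors, plus noise.
insertion_angle = 25 * insertion_depth - 20 * cochlear_diameter + rng.normal(0, 15, n)

depth_only = LinearRegression().fit(insertion_depth[:, None], insertion_angle)
features = np.column_stack([insertion_depth, cochlear_diameter, cochlear_width])
full = LinearRegression().fit(features, insertion_angle)

print("R^2, depth only:", depth_only.score(insertion_depth[:, None], insertion_angle))
print("R^2, depth + cochlear dimensions:", full.score(features, insertion_angle))
```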

Brain Age Prediction: Deep Models Need a Hand to Generalize.

Rajabli R, Soltaninejad M, Fonov VS, Bzdok D, Collins DL

PubMed paper · Aug 1, 2025
Predicting brain age from T1-weighted MRI is a promising marker for understanding brain aging and its associated conditions. While deep learning models have shown success in reducing the mean absolute error (MAE) of predicted brain age, concerns about robust and accurate generalization in new data limit their clinical applicability. The large number of trainable parameters, combined with limited medical imaging training data, contributes to this challenge, often resulting in a generalization gap where there is a significant discrepancy between model performance on training data versus unseen data. In this study, we assess a deep model, SFCN-reg, based on the VGG-16 architecture, and address the generalization gap through comprehensive preprocessing, extensive data augmentation, and model regularization. Using training data from the UK Biobank, we demonstrate substantial improvements in model performance. Specifically, our approach reduces the generalization MAE by 47% (from 5.25 to 2.79 years) in the Alzheimer's Disease Neuroimaging Initiative dataset and by 12% (from 4.35 to 3.75 years) in the Australian Imaging, Biomarker and Lifestyle dataset. Furthermore, we achieve up to 13% reduction in scan-rescan error (from 0.80 to 0.70 years) while enhancing the model's robustness to registration errors. Feature importance maps highlight anatomical regions used to predict age. These results highlight the critical role of high-quality preprocessing and robust training techniques in improving accuracy and narrowing the generalization gap, both necessary steps toward the clinical use of brain age prediction models. Our study makes valuable contributions to neuroimaging research by offering a potential pathway to improve the clinical applicability of deep learning models.