
Uncertainty Estimation for Dual View X-ray Mammographic Image Registration Using Deep Ensembles.

Walton WC, Kim SJ

PubMed · Jun 1 2025
Techniques are developed for generating uncertainty estimates for convolutional neural network (CNN)-based methods for registering the locations of lesions between the craniocaudal (CC) and mediolateral oblique (MLO) mammographic X-ray image views. Multi-view lesion correspondence is an important task that clinicians perform for characterizing lesions during routine mammographic exams. Automated registration tools can aid in this task, yet if the tools also provide confidence estimates, they can be of greater value to clinicians, especially in cases involving dense tissue where lesions may be difficult to see. A set of deep ensemble-based techniques, which leverage a negative log-likelihood (NLL)-based cost function, are implemented for estimating uncertainties. The ensemble architectures involve significant modifications to an existing CNN dual-view lesion registration algorithm. Three architectural designs are evaluated, and different ensemble sizes are compared using various performance metrics. The techniques are tested on synthetic X-ray data, real 2D X-ray data, and slices from real 3D X-ray data. The ensembles generate covariance-based uncertainty ellipses that are correlated with registration accuracy, such that the ellipse sizes can give a clinician an indication of confidence in the mapping between the CC and MLO views. The results also show that the ellipse sizes can aid in improving computer-aided detection (CAD) results by matching CC/MLO lesion detections and reducing false alarms from both views, adding to clinical utility. The uncertainty estimation techniques show promise as a means for aiding clinicians in confidently establishing multi-view lesion correspondence, thereby improving diagnostic capability.
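
A minimal sketch of the deep-ensemble idea above (not the authors' implementation; it assumes PyTorch and a bivariate-Gaussian NLL): each ensemble member predicts a 2D lesion position plus a covariance, is trained with the NLL loss, and the member outputs are moment-matched into one pooled mean and covariance whose ellipse size can be read as registration confidence. All shapes and values are illustrative.

```python
# Sketch only: Gaussian NLL training loss plus ensemble pooling for a
# 2D lesion position; not the paper's architecture.
import torch

def gaussian_nll(mean, tril, target):
    """NLL of `target` under N(mean, L L^T), L = lower-triangular Cholesky factor."""
    dist = torch.distributions.MultivariateNormal(loc=mean, scale_tril=tril)
    return -dist.log_prob(target).mean()

def pool_ensemble(means, covs):
    """Moment-match M member Gaussians into one mean/covariance
    (law of total variance: within- plus between-member spread)."""
    mu = means.mean(dim=0)
    within = covs.mean(dim=0)
    diff = means - mu
    between = torch.einsum('mi,mj->ij', diff, diff) / means.shape[0]
    return mu, within + between

# Toy usage: M = 5 hypothetical member predictions for one lesion (pixels).
M = 5
means = torch.tensor([120.0, 85.0]) + 0.5 * torch.randn(M, 2)
trils = 2.0 * torch.eye(2).expand(M, 2, 2)
covs = trils @ trils.transpose(-1, -2)
target = torch.tensor([[118.0, 86.0]]).expand(M, 2)
print("per-member NLL:", gaussian_nll(means, trils, target).item())
mu, cov = pool_ensemble(means, covs)
print("pooled position:", mu.tolist(), "pooled covariance:", cov.tolist())
```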

Cross-site Validation of AI Segmentation and Harmonization in Breast MRI.

Huang Y, Leotta NJ, Hirsch L, Gullo RL, Hughes M, Reiner J, Saphier NB, Myers KS, Panigrahi B, Ambinder E, Di Carlo P, Grimm LJ, Lowell D, Yoon S, Ghate SV, Parra LC, Sutton EJ

PubMed · Jun 1 2025
This work aims to perform a cross-site validation of automated segmentation for breast cancers in MRI and to compare the performance to radiologists. A three-dimensional (3D) U-Net was trained to segment cancers in dynamic contrast-enhanced axial MRIs using a large dataset from Site 1 (n = 15,266; 449 malignant and 14,817 benign). Performance was validated on site-specific test data from this and two additional sites, and common publicly available testing data. Four radiologists from each of the three clinical sites provided two-dimensional (2D) segmentations as ground truth. Segmentation performance did not differ between the network and radiologists on the test data from Sites 1 and 2 or the common public data (median Dice score Site 1, network 0.86 vs. radiologist 0.85, n = 114; Site 2, 0.91 vs. 0.91, n = 50; common: 0.93 vs. 0.90). For Site 3, an affine input layer was fine-tuned using segmentation labels, resulting in comparable performance between the network and radiologist (0.88 vs. 0.89, n = 42). Radiologist performance differed on the common test data, and the network numerically outperformed 11 of the 12 radiologists (median Dice: 0.85-0.94, n = 20). In conclusion, a deep network with a novel supervised harmonization technique matches radiologists' performance in MRI tumor segmentation across clinical sites. We make code and weights publicly available to promote reproducible AI in radiology.
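
As a concrete reference for the evaluation metric above, here is a minimal sketch (not the study's code) of computing a Dice score between a 3D network prediction, sampled at the annotated slice, and a radiologist's 2D ground-truth mask; the arrays and slice index are hypothetical.

```python
# Sketch only: Dice overlap between a 3D prediction, taken at the slice a
# radiologist annotated, and the 2D ground-truth segmentation.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float(2.0 * inter / (pred.sum() + truth.sum() + eps))

# Hypothetical 3D prediction and 2D annotation on one axial slice.
pred_volume = np.zeros((64, 256, 256), dtype=np.uint8)
pred_volume[30:34, 100:140, 100:140] = 1
radiologist_2d = np.zeros((256, 256), dtype=np.uint8)
radiologist_2d[98:138, 102:142] = 1
annotated_slice = 32
print(f"Dice: {dice_score(pred_volume[annotated_slice], radiologist_2d):.3f}")
```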

Impact of deep learning reconstruction on radiation dose reduction and cancer risk in CT examinations: a real-world clinical analysis.

Kobayashi N, Nakaura T, Yoshida N, Nagayama Y, Kidoh M, Uetani H, Sakabe D, Kawamata Y, Funama Y, Tsutsumi T, Hirai T

PubMed · Jun 1 2025
The purpose of this study is to estimate the extent to which the implementation of deep learning reconstruction (DLR) may reduce the risk of radiation-induced cancer from CT examinations, utilizing real-world clinical data. We retrospectively analyzed scan data of adult patients who underwent body CT during two periods relative to DLR implementation at our facility: a 12-month pre-DLR phase (n = 5553) using hybrid iterative reconstruction and a 12-month post-DLR phase (n = 5494) with routine CT reconstruction transitioning to DLR. To ensure comparability between the two groups, we employed 1:1 propensity score matching based on age, sex, and body mass index. Dose data were collected to estimate organ-specific equivalent doses and total effective doses. We assessed the average dose reduction post-DLR implementation and estimated the Lifetime Attributable Risk (LAR) for cancer per CT exam pre- and post-DLR implementation. The number of radiation-induced cancers before and after the implementation of DLR was also estimated. After propensity score matching, 5247 cases from each group were included in the final analysis. Post-DLR, the total effective body CT dose significantly decreased to 15.5 ± 10.3 mSv from 28.1 ± 14.0 mSv pre-DLR (p < 0.001), a 45% reduction. This dose reduction significantly lowered the radiation-induced cancer risk, especially among younger women, with the estimated annual cancer incidence decreasing from 0.247% pre-DLR to 0.130% post-DLR. The implementation of DLR thus has the potential to reduce radiation dose by 45% and the risk of radiation-induced cancer from 0.247% to 0.130% compared with hybrid iterative reconstruction.
Question: Can implementing deep learning reconstruction (DLR) in routine CT scans significantly reduce radiation dose and the risk of radiation-induced cancer compared to hybrid iterative reconstruction?
Findings: DLR reduced the total effective body CT dose by 45% (from 28.1 ± 14.0 mSv to 15.5 ± 10.3 mSv) and decreased the estimated cancer incidence from 0.247% to 0.130%.
Clinical relevance: Adopting DLR in clinical practice substantially lowers radiation exposure and cancer risk from CT exams, enhancing patient safety, especially for younger women, and underscoring the importance of advanced imaging techniques.
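
For readers unfamiliar with the matching step, the following is a minimal, illustrative sketch of 1:1 propensity score matching on age, sex, and BMI (greedy nearest-neighbour matching without replacement); it is not the study's code, and all data are simulated.

```python
# Sketch only: 1:1 propensity score matching of pre-DLR vs post-DLR cases
# on simulated age / sex / BMI covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(60, 15, 2 * n),   # age (years)
    rng.integers(0, 2, 2 * n),   # sex (0/1)
    rng.normal(23, 4, 2 * n),    # BMI (kg/m^2)
])
group = np.r_[np.zeros(n, int), np.ones(n, int)]  # 0 = pre-DLR, 1 = post-DLR

# Propensity = P(post-DLR | covariates), from logistic regression.
ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbour matching on propensity, without replacement.
pre_idx, post_idx = np.where(group == 0)[0], np.where(group == 1)[0]
used, pairs = set(), []
for i in post_idx:
    j = min((j for j in pre_idx if j not in used), key=lambda j: abs(ps[i] - ps[j]))
    used.add(j)
    pairs.append((i, j))
print(f"matched pairs: {len(pairs)}")
```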

Deep Learning-Based Three-Dimensional Analysis Reveals Distinct Patterns of Condylar Remodelling After Orthognathic Surgery in Skeletal Class III Patients.

Barone S, Cevidanes L, Bianchi J, Goncalves JR, Giudice A

PubMed · Jun 1 2025
This retrospective study aimed to evaluate morphometric changes in mandibular condyles of patients with skeletal Class III malocclusion following two-jaw orthognathic surgery planned using virtual surgical planning (VSP) and analysed with automated three-dimensional (3D) image analysis based on deep-learning techniques. Pre-operative (T1) and 12-18 months post-operative (T2) Cone-Beam Computed Tomography (CBCT) scans of 17 patients (mean age: 24.8 ± 3.5 years) were analysed using 3DSlicer software. Deep-learning algorithms automated CBCT orientation, registration, bone segmentation, and landmark identification. By utilising voxel-based superimposition of pre- and post-operative CBCT scans and shape correspondence, the overall changes in condylar morphology were assessed, with a focus on bone resorption and apposition at specific regions (superior, lateral and medial poles). The correlation between these modifications and the extent of actual condylar movements post-surgery was investigated. Statistical analysis was conducted with a significance level of α = 0.05. Overall condylar remodelling was minimal, with mean changes of < 1 mm. Small but statistically significant bone resorption occurred at the condylar superior articular surface, while bone apposition was primarily observed at the lateral pole. The bone apposition at the lateral pole and resorption at the superior articular surface were significantly correlated with medial condylar displacement (p < 0.05). The automated 3D analysis revealed distinct patterns of condylar remodelling following orthognathic surgery in skeletal Class III patients, with minimal overall changes but significant regional variations. The correlation between condylar displacements and remodelling patterns highlights the need for precise pre-operative planning to optimise condylar positioning, potentially minimising harmful remodelling and enhancing stability.
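
One way to picture the regional measurement above: with shape correspondence giving point-to-point matched condylar surfaces at T1 and T2, the displacement of each vertex along the outward T1 normal separates apposition (positive) from resorption (negative). The sketch below is illustrative only, using a simulated toy surface rather than the study's 3DSlicer pipeline.

```python
# Sketch only: per-vertex signed remodelling from corresponding surfaces.
import numpy as np

def signed_remodelling(points_t1, points_t2, normals_t1):
    """Per-vertex signed distance (mm) along the T1 outward normal:
    positive = apposition, negative = resorption."""
    disp = points_t2 - points_t1
    return np.einsum('ij,ij->i', disp, normals_t1)

# Hypothetical corresponding vertices (N x 3, mm) and unit outward normals.
rng = np.random.default_rng(1)
p1 = rng.normal(size=(500, 3))
n1 = p1 / np.linalg.norm(p1, axis=1, keepdims=True)   # sphere-like toy surface
p2 = p1 + n1 * rng.normal(0.0, 0.3, size=(500, 1))    # simulated remodelling
d = signed_remodelling(p1, p2, n1)
print(f"mean change {d.mean():.2f} mm; apposition at {(d > 0).mean():.0%} of vertices")
```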

CineMA: A Foundation Model for Cine Cardiac MRI

Yunguan Fu, Weixi Yi, Charlotte Manisty, Anish N Bhuva, Thomas A Treibel, James C Moon, Matthew J Clarkson, Rhodri Huw Davies, Yipeng Hu

arXiv preprint · May 31 2025
Cardiac magnetic resonance (CMR) is a key investigation in clinical cardiovascular medicine and has been used extensively in population research. However, extracting clinically important measurements such as ejection fraction for diagnosing cardiovascular diseases remains time-consuming and subjective. We developed CineMA, a foundation AI model automating these tasks with limited labels. CineMA is a self-supervised autoencoder model trained on 74,916 cine CMR studies to reconstruct images from masked inputs. After fine-tuning, it was evaluated across eight datasets on 23 tasks from four categories: ventricle and myocardium segmentation, left and right ventricle ejection fraction calculation, disease detection and classification, and landmark localisation. CineMA is the first foundation model for cine CMR to match or outperform convolutional neural networks (CNNs). CineMA demonstrated greater label efficiency than CNNs, achieving comparable or better performance with fewer annotations. This reduces the burden of clinician labelling and supports replacing task-specific training with fine-tuning foundation models in future cardiac imaging applications. Models and code for pre-training and fine-tuning are available at https://github.com/mathpluscode/CineMA, democratising access to high-performance models that otherwise require substantial computational resources, promoting reproducibility and accelerating clinical translation.
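
A minimal sketch of the masked-reconstruction pre-training objective described above, with a tiny stand-in network rather than the CineMA architecture; the patch size, mask ratio, and model are illustrative assumptions.

```python
# Sketch only: mask a fraction of image patches and train an autoencoder to
# reconstruct them, scoring the loss on masked patches only.
import torch
import torch.nn as nn

def masked_mse(recon, target, mask):
    """MSE restricted to masked patches; mask is 1 where a patch was hidden."""
    err = (recon - target) ** 2
    return (err * mask).sum() / mask.sum().clamp(min=1)

patch, grid = 16, 8                    # 8x8 grid of 16x16 patches = 128x128 frame
imgs = torch.randn(4, 1, 128, 128)     # toy batch of cine frames
mask = (torch.rand(4, 1, grid, grid) < 0.75).float()   # hide 75% of patches
mask = mask.repeat_interleave(patch, -1).repeat_interleave(patch, -2)

model = nn.Sequential(                 # stand-in for the real autoencoder
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
recon = model(imgs * (1 - mask))       # reconstruct from the visible patches
loss = masked_mse(recon, imgs, mask)
loss.backward()
print(f"masked-reconstruction loss: {loss.item():.3f}")
```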

Dual-energy CT-based virtual monoenergetic imaging via unsupervised learning.

Liu CK, Chang HY, Huang HM

PubMed · May 31 2025
Since its development, virtual monoenergetic imaging (VMI) derived from dual-energy computed tomography (DECT) has been shown to be valuable in many clinical applications. However, DECT-based VMI shows increased noise at low keV levels. In this study, we propose an unsupervised learning method to generate VMI from DECT, meaning that no labeled training data (i.e. high-quality VMIs) are required. Specifically, DECT images are fed into a deep learning (DL)-based model that is expected to output VMI. Based on the theory that VMI obtained from image-space data is a linear combination of DECT images, we use the model output (i.e. the predicted VMI) to recalculate the DECT images. By minimizing the difference between the measured and recalculated DECT images, the DL-based model constrains itself to generate VMI from DECT images. We investigated whether the proposed DL-based method can improve the quality of VMIs. Experimental results obtained from patient data showed that the DL-based VMIs had better image quality than the conventional DECT-based VMIs. Moreover, the CT number differences between the DECT-based and DL-based VMIs were distributed within ±10 HU for bone and ±5 HU for brain, fat, and muscle. Except for bone, no statistically significant difference in CT number measurements was found between the DECT-based and DL-based VMIs (p > 0.01). Our preliminary results show that DL has the potential to generate high-quality VMIs directly from DECT images without supervision.
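
The following sketch illustrates one reading of the self-supervision loop above: a network maps the measured DECT pair to VMIs, a known image-domain linear model recombines them into recalculated DECT images, and the loss penalizes the mismatch with the measurement. The weight matrix W stands in for scanner-specific linear coefficients and is purely hypothetical, as is the placeholder network.

```python
# Sketch only: self-consistency loss for unsupervised VMI generation.
import torch
import torch.nn as nn

W = torch.tensor([[0.7, 0.3],           # low-kVp image as a mix of the two VMIs
                  [0.4, 0.6]])          # high-kVp image (illustrative values)

model = nn.Conv2d(2, 2, 3, padding=1)   # stand-in: DECT pair in, two VMIs out

dect = torch.randn(1, 2, 128, 128)      # measured low/high-kVp images
vmi = model(dect)                       # predicted VMIs
dect_recalc = torch.einsum('ke,bexy->bkxy', W, vmi)  # linear recombination
loss = nn.functional.mse_loss(dect_recalc, dect)     # self-consistency loss
loss.backward()
print(f"self-consistency loss: {loss.item():.3f}")
```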

Edge Computing for Physics-Driven AI in Computational MRI: A Feasibility Study

Yaşar Utku Alçalar, Yu Cao, Mehmet Akçakaya

arXiv preprint · May 30 2025
Physics-driven artificial intelligence (PD-AI) reconstruction methods have emerged as the state-of-the-art for accelerating MRI scans, enabling higher spatial and temporal resolutions. However, the high resolution of these scans generates massive data volumes, leading to challenges in transmission, storage, and real-time processing. This is particularly pronounced in functional MRI, where hundreds of volumetric acquisitions further exacerbate these demands. Edge computing with FPGAs presents a promising solution for enabling PD-AI reconstruction near the MRI sensors, reducing data transfer and storage bottlenecks. However, this requires optimization of PD-AI models for hardware efficiency through quantization and bypassing traditional FFT-based approaches, which can be a limitation due to their computational demands. In this work, we propose a novel PD-AI computational MRI approach optimized for FPGA-based edge computing devices, leveraging 8-bit complex data quantization and eliminating redundant FFT/IFFT operations. Our results show that this strategy improves computational efficiency while maintaining reconstruction quality comparable to conventional PD-AI methods, and outperforms standard clinical methods. Our approach presents an opportunity for high-resolution MRI reconstruction on resource-constrained devices, highlighting its potential for real-world deployment.
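
As a small illustration of the 8-bit complex quantization mentioned above (symmetric per-tensor scaling of real and imaginary parts to int8), not the authors' FPGA implementation:

```python
# Sketch only: int8 round-trip of complex k-space data with one shared scale.
import numpy as np

def quantize_complex8(x: np.ndarray):
    """Quantize complex data to a pair of int8 arrays plus a float scale."""
    scale = np.abs(np.concatenate([x.real.ravel(), x.imag.ravel()])).max() / 127.0
    q_re = np.clip(np.round(x.real / scale), -127, 127).astype(np.int8)
    q_im = np.clip(np.round(x.imag / scale), -127, 127).astype(np.int8)
    return q_re, q_im, scale

def dequantize_complex8(q_re, q_im, scale):
    return (q_re.astype(np.float32) + 1j * q_im.astype(np.float32)) * scale

# Hypothetical k-space patch: measure the round-trip error of 8-bit storage.
kspace = (np.random.randn(256, 256) + 1j * np.random.randn(256, 256)).astype(np.complex64)
rt = dequantize_complex8(*quantize_complex8(kspace))
print(f"relative round-trip error: {np.linalg.norm(rt - kspace) / np.linalg.norm(kspace):.4f}")
```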

Self-supervised feature learning for cardiac Cine MR image reconstruction

Siying Xu, Marcel Früh, Kerstin Hammernik, Andreas Lingg, Jens Kübler, Patrick Krumm, Daniel Rueckert, Sergios Gatidis, Thomas Küstner

arXiv preprint · May 29 2025
We propose a self-supervised feature learning assisted reconstruction (SSFL-Recon) framework for MRI reconstruction to address the limitation of existing supervised learning methods. Although recent deep learning-based methods have shown promising performance in MRI reconstruction, most require fully-sampled images for supervised learning, which is challenging in practice considering long acquisition times under respiratory or organ motion. Moreover, nearly all fully-sampled datasets are obtained from conventional reconstruction of mildly accelerated datasets, thus potentially biasing the achievable performance. The numerous undersampled datasets with different accelerations in clinical practice, hence, remain underutilized. To address these issues, we first train a self-supervised feature extractor on undersampled images to learn sampling-insensitive features. The pre-learned features are subsequently embedded in the self-supervised reconstruction network to assist in removing artifacts. Experiments were conducted retrospectively on an in-house 2D cardiac Cine dataset, including 91 cardiovascular patients and 38 healthy subjects. The results demonstrate that the proposed SSFL-Recon framework outperforms existing self-supervised MRI reconstruction methods and even exhibits comparable or better performance to supervised learning up to $16\times$ retrospective undersampling. The feature learning strategy can effectively extract global representations, which have proven beneficial in removing artifacts and increasing generalization ability during reconstruction.
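
A minimal sketch of the "sampling-insensitive feature" idea: encode two differently undersampled versions of the same image and pull their features together with a cosine-consistency loss. The encoder, undersampling pattern, and loss below are generic placeholders, not the SSFL-Recon architecture.

```python
# Sketch only: feature consistency across two retrospective undersamplings.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                 # stand-in feature extractor
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def undersample(img, accel):
    """Toy Cartesian undersampling: keep every `accel`-th k-space line."""
    k = torch.fft.fftshift(torch.fft.fft2(img))
    mask = torch.zeros_like(k.real)
    mask[..., ::accel, :] = 1.0
    return torch.fft.ifft2(torch.fft.ifftshift(k * mask)).abs()

img = torch.randn(8, 1, 64, 64)          # toy batch of cine frames
f1 = F.normalize(encoder(undersample(img, 4)), dim=1)
f2 = F.normalize(encoder(undersample(img, 2)), dim=1)
loss = (1 - (f1 * f2).sum(dim=1)).mean() # cosine-consistency loss
loss.backward()
print(f"feature-consistency loss: {loss.item():.3f}")
```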

Operationalizing postmortem pathology-MRI association studies in Alzheimer's disease and related disorders with MRI-guided histology sampling.

Athalye C, Bahena A, Khandelwal P, Emrani S, Trotman W, Levorse LM, Khodakarami Z, Ohm DT, Teunissen-Bermeo E, Capp N, Sadaghiani S, Arezoumandan S, Lim SA, Prabhakaran K, Ittyerah R, Robinson JL, Schuck T, Lee EB, Tisdall MD, Das SR, Wolk DA, Irwin DJ, Yushkevich PA

PubMed · May 28 2025
Postmortem neuropathological examination, while the gold standard for diagnosing neurodegenerative diseases, often relies on limited regional sampling that may miss critical areas affected by Alzheimer's disease and related disorders. Ultra-high resolution postmortem MRI can help identify regions that fall outside the diagnostic sampling criteria for additional histopathologic evaluation. However, there are no standardized guidelines for integrating histology and MRI in a traditional brain bank. We developed a comprehensive protocol for whole hemisphere postmortem 7T MRI-guided histopathological sampling with whole-slide digital imaging and histopathological analysis, providing a reliable pipeline for high-volume brain banking in heterogeneous brain tissue. Our method uses patient-specific 3D printed molds built from postmortem MRI, allowing standardized tissue processing with a permanent spatial reference frame. To facilitate pathology-MRI association studies, we created a semi-automated MRI to histology registration pipeline and developed a quantitative pathology scoring system using weakly supervised deep learning. We validated this protocol on a cohort of 29 brains with diagnoses on the AD spectrum, revealing correlations between cortical thickness and phosphorylated tau accumulation. This pipeline has broad applicability across neuropathological research and brain banking, facilitating large-scale studies that integrate histology with neuroimaging. The innovations presented here provide a scalable and reproducible approach to studying postmortem brain pathology, with implications for advancing diagnostic and therapeutic strategies for Alzheimer's disease and related disorders.
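
The kind of pathology-MRI association reported above can be sketched as a simple regional correlation between cortical thickness and a quantitative p-tau score; the data below are simulated and the snippet is illustrative, not the study pipeline.

```python
# Sketch only: regional thickness vs p-tau correlation on simulated data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_regions = 40
tau_score = rng.uniform(0, 1, n_regions)   # hypothetical DL-derived p-tau burden
thickness = 2.8 - 0.9 * tau_score + rng.normal(0, 0.15, n_regions)  # mm

r, p = pearsonr(tau_score, thickness)
print(f"thickness vs. p-tau: r = {r:.2f}, p = {p:.1e}")
```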

SUFFICIENT: A scan-specific unsupervised deep learning framework for high-resolution 3D isotropic fetal brain MRI reconstruction

Jiangjie Wu, Lixuan Chen, Zhenghao Li, Xin Li, Saban Ozturk, Lihui Wang, Rongpin Wang, Hongjiang Wei, Yuyao Zhang

arXiv preprint · May 23 2025
High-quality 3D fetal brain MRI reconstruction from motion-corrupted 2D slices is crucial for clinical diagnosis. Reliable slice-to-volume registration (SVR)-based motion correction and super-resolution reconstruction (SRR) methods are essential. Deep learning (DL) has demonstrated potential in enhancing SVR and SRR when compared to conventional methods. However, it requires large-scale external training datasets, which are difficult to obtain for clinical fetal MRI. To address this issue, we propose an unsupervised iterative SVR-SRR framework for isotropic HR volume reconstruction. Specifically, SVR is formulated as a function mapping a 2D slice and a 3D target volume to a rigid transformation matrix, which aligns the slice to the underlying location in the target volume. The function is parameterized by a convolutional neural network, which is trained by minimizing the difference between the volume slicing at the predicted position and the input slice. In SRR, a decoding network embedded within a deep image prior framework is incorporated with a comprehensive image degradation model to produce the high-resolution (HR) volume. The deep image prior framework offers a local consistency prior to guide the reconstruction of HR volumes. The HR volume is then optimized by applying the forward degradation model and minimizing the loss between the predicted and observed slices. Comprehensive experiments conducted on large-magnitude motion-corrupted simulation data and clinical data demonstrate the superior performance of the proposed framework over state-of-the-art fetal brain reconstruction frameworks.
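
A minimal sketch of the self-supervised SVR objective described above: a small network predicts a rigid transform for an input slice, the current volume estimate is resampled at that pose, and the loss compares the resampled slice with the observed one. The tiny regressor, first-order rotation, and fixed slice index are simplifying assumptions, not the paper's architecture.

```python
# Sketch only: slice-consistency loss for unsupervised SVR.
import torch
import torch.nn as nn
import torch.nn.functional as F

regressor = nn.Sequential(                # 2D slice -> 6 rigid parameters
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(6),
)

def rigid_matrix(params):
    """Rigid transform: 3 rotations (rad, first-order approximation to keep
    the sketch short) + 3 translations, as a (B, 3, 4) affine for affine_grid."""
    rx, ry, rz, tx, ty, tz = params.unbind(-1)
    one = torch.ones_like(rx)
    return torch.stack([
        torch.stack([one, -rz, ry, tx], -1),
        torch.stack([rz, one, -rx, ty], -1),
        torch.stack([-ry, rx, one, tz], -1),
    ], -2)

volume = torch.randn(1, 1, 32, 64, 64)    # current 3D volume estimate
slice_2d = torch.randn(1, 1, 64, 64)      # observed motion-corrupted slice

params = regressor(slice_2d) * 0.01       # predicted (small) rigid pose
grid = F.affine_grid(rigid_matrix(params), volume.shape, align_corners=False)
resampled = F.grid_sample(volume, grid, align_corners=False)
predicted_slice = resampled[:, :, volume.shape[2] // 2]  # slice at predicted pose
loss = F.mse_loss(predicted_slice, slice_2d)
loss.backward()
print(f"slice-consistency loss: {loss.item():.3f}")
```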