
Leveraging Representation Learning for Bi-parametric Prostate MRI to Disambiguate PI-RADS 3 and Improve Biopsy Decision Strategies.

Umapathy L, Johnson PM, Dutt T, Tong A, Chopra S, Sodickson DK, Chandarana H

PubMed · Jun 30 2025
Despite its high negative predictive value (NPV) for clinically significant prostate cancer (csPCa), MRI suffers from a substantial number of false positives, especially for intermediate-risk cases. In this work, we determine whether a deep learning model trained with PI-RADS-guided representation learning can disambiguate the PI-RADS 3 classification, detect csPCa from bi-parametric prostate MR images, and avoid unnecessary benign biopsies. This study included 28,263 MR examinations and radiology reports from 21,938 men imaged for known or suspected prostate cancer between 2015 and 2023 at our institution (21 imaging locations with 34 readers), with 6,352 subsequent biopsies. We trained a deep learning model, a representation learner (RL), to learn how radiologists interpret conventionally acquired T2-weighted and diffusion-weighted MR images, using exams in which the radiologists were confident in their risk assessments (PI-RADS 1 and 2 for the absence of csPCa vs. PI-RADS 4 and 5 for the presence of csPCa; n=21,465). We then trained biopsy-decision models to detect csPCa (Gleason score ≥7) from these learned image representations and compared them with the performance of radiologists and of models trained on other clinical variables (age, prostate volume, PSA, and PSA density), using treatment-naïve test cohorts consisting of only PI-RADS 3 cases (n=253, csPCa=103) and of all PI-RADS cases (n=531, csPCa=300). On the two test cohorts (PI-RADS-3-only, all-PI-RADS), RL-based biopsy-decision models consistently yielded higher AUCs for detecting csPCa (AUC=0.73 [0.66, 0.79] and 0.88 [0.85, 0.91]) than radiologists (equivocal by definition on the PI-RADS-3-only cohort; AUC=0.79 [0.75, 0.83] on all-PI-RADS) and the clinical model (AUCs=0.69 [0.62, 0.75] and 0.78 [0.74, 0.82]). In the PI-RADS-3-only cohort, all of whom would be biopsied under our institution's standard of care, the RL decision model avoided 41% (62/150) of benign biopsies, compared with 26% for the clinical model (P<0.001), and improved biopsy yield by 10 percentage points over the PI-RADS ≥3 decision strategy (0.50 vs. 0.40). On the all-PI-RADS cohort, the RL decision model avoided an additional 27% of benign biopsies (138/231) relative to radiologists (33%, P<0.001), with comparable sensitivity (93% vs. 92%), higher NPV (0.87 vs. 0.77), and higher biopsy yield (0.75 vs. 0.64). Combining the clinical and RL decision models avoided still more benign biopsies (46% in PI-RADS-3-only and 62% in all-PI-RADS) while improving NPV (0.82, 0.88) and biopsy yield (0.52, 0.76) across the two test cohorts. Our PI-RADS-guided deep learning RL model learns summary representations from bi-parametric prostate MR images that provide additional information to disambiguate intermediate-risk PI-RADS 3 assessments. The resulting RL-based biopsy-decision models also outperformed radiologists in avoiding benign biopsies while maintaining comparable sensitivity to csPCa on the all-PI-RADS cohort. Such AI models can be readily integrated into clinical practice to supplement radiologists' reads and improve biopsy yield for equivocal decisions.
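
As a rough illustration of the two-stage strategy this abstract describes (a representation learner fit only on confident PI-RADS reads, then a biopsy-decision model trained on its learned representations), here is a minimal sketch on synthetic tabular features standing in for images; the model classes, feature dimensions, and confidence proxy are all assumptions, not the authors' implementation.

```python
# Sketch: representation learning on confident reads, decision model on equivocal cases.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 5000, 64                      # exams x (stand-in) image features
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
risk = X @ w + rng.normal(scale=2.0, size=n)
cspca = (risk > 0).astype(int)       # synthetic "ground truth" csPCa

# Stage 1: learn representations only from confident reads
# (PI-RADS 1-2 = negative, PI-RADS 4-5 = positive); equivocal cases held out.
confident = np.abs(risk) > 1.5       # proxy for radiologist confidence
equivocal = ~confident               # proxy for PI-RADS 3
rl = MLPClassifier(hidden_layer_sizes=(32,), activation="relu",
                   max_iter=500, random_state=0)
rl.fit(X[confident], cspca[confident])

def represent(x):
    # ReLU hidden-layer activations of the trained MLP = learned representation
    return np.maximum(0.0, x @ rl.coefs_[0] + rl.intercepts_[0])

# Stage 2: biopsy-decision model on the learned representations,
# evaluated on the equivocal ("PI-RADS 3") cases the RL never saw.
H, y = represent(X[equivocal]), cspca[equivocal]
half = len(y) // 2
decision = LogisticRegression(max_iter=1000).fit(H[:half], y[:half])
auc = roc_auc_score(y[half:], decision.predict_proba(H[half:])[:, 1])
print(f"AUC on equivocal cases: {auc:.2f}")
```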

Cost-effectiveness analysis of artificial intelligence (AI) in earlier detection of liver lesions in cirrhotic patients at risk of hepatocellular carcinoma in Italy.

Maas L, Contreras-Meca C, Ghezzo S, Belmans F, Corsi A, Cant J, Vos W, Bobowicz M, Rygusik M, Laski DK, Annemans L, Hiligsmann M

PubMed · Jun 30 2025
Hepatocellular carcinoma (HCC) is the fifth most common cancer worldwide and the third most common cause of cancer-related death. Cirrhosis is a major contributing factor, accounting for over 90% of HCC cases. Given the high mortality rate of HCC, earlier detection is critical. When added to magnetic resonance imaging (MRI), artificial intelligence (AI) has been shown to improve HCC detection. Nonetheless, no cost-effectiveness analysis has yet been conducted on an AI tool for earlier HCC detection. This study reports on the cost-effectiveness of AI-enhanced MRI detection of liver lesions in HCC surveillance of cirrhotic patients, compared with usual care (UC). The model consisted of a decision tree followed by a state-transition Markov model, taking an Italian healthcare perspective. Lifetime costs and quality-adjusted life years (QALYs) were simulated in cirrhotic patients at risk of HCC. One-way and two-way sensitivity analyses were performed, and results were presented as incremental cost-effectiveness ratios (ICERs). For patients receiving UC, average lifetime costs per 1,000 patients were €16,604,800, compared with €16,610,250 for patients receiving the AI approach. With 0.55 QALYs gained and incremental costs of €5,000 per 1,000 patients, the ICER was €9,888 per QALY gained, indicating cost-effectiveness at a willingness-to-pay threshold of €33,000/QALY gained. The main drivers of cost-effectiveness were the cost and performance (sensitivity and specificity) of the AI tool. This study suggests that an AI-based approach to detecting HCC earlier in cirrhotic patients can be cost-effective. Incorporating cost-effective AI-based approaches into clinical practice can improve patient outcomes and healthcare efficiency.
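
The ICER calculation underlying these results is simple arithmetic: incremental cost divided by incremental QALYs. A minimal sketch using the figures reported above (which are rounded in the abstract, so the result only approximates the stated €9,888/QALY):

```python
# Worked example of the ICER arithmetic from the abstract (rounded inputs).
cost_uc = 16_604_800        # lifetime cost per 1,000 patients, usual care (EUR)
cost_ai = 16_610_250        # lifetime cost per 1,000 patients, AI approach (EUR)
qaly_gain = 0.55            # incremental QALYs per 1,000 patients

icer = (cost_ai - cost_uc) / qaly_gain
wtp = 33_000                # willingness-to-pay threshold (EUR/QALY)
print(f"ICER = EUR {icer:,.0f}/QALY; cost-effective: {icer < wtp}")
```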

A Deep Learning-Based De-Artifact Diffusion Model for Removing Motion Artifacts in Knee MRI.

Li Y, Gong T, Zhou Q, Wang H, Yan X, Xi Y, Shi Z, Deng W, Shi F, Wang Y

PubMed · Jun 30 2025
Motion artifacts are common in knee MRI and usually lead to rescanning, so their effective removal would be clinically useful. The aim was to construct an effective deep learning-based model to remove motion artifacts from knee MRI using real-world data. Retrospective. Model construction: 90 consecutive patients (1,997 2D slices) whose knee MRI images with motion artifacts were paired with immediately rescanned images without artifacts, which served as ground truth. Internal test dataset: 25 patients (795 slices) from another period; external test dataset: 39 patients (813 slices) from another hospital. 3-T/1.5-T knee MRI with T1-weighted, T2-weighted, and proton density-weighted imaging. A deep learning-based supervised conditional diffusion model was constructed. Objective metrics (root mean square error [RMSE], peak signal-to-noise ratio [PSNR], structural similarity [SSIM]) and subjective ratings were used for image quality assessment and compared against three other algorithms (enhanced super-resolution [ESR], enhanced deep super-resolution, and ESR using a generative adversarial network). Diagnostic performance of the output images was compared with that of the rescanned images. Statistical tests included the kappa test, Pearson's chi-square test, Friedman's rank-sum test, and the marginal homogeneity test; p < 0.05 was considered statistically significant. Subjective ratings showed significant improvements in the output images compared to the input, with no significant difference from the ground truth. The constructed method demonstrated the smallest RMSE (11.44 ± 5.47 in the validation cohort; 13.95 ± 4.32 in the external test cohort), the largest PSNR (27.61 ± 3.20 in the validation cohort; 25.64 ± 2.67 in the external test cohort), and the largest SSIM (0.97 ± 0.04 in the validation cohort; 0.94 ± 0.04 in the external test cohort) of the four algorithms. The output images achieved diagnostic capability comparable to the ground truth for multiple anatomical structures. The constructed model is feasible and effective, and outperformed multiple other algorithms for removing motion artifacts in knee MRI. Evidence level: 3. Technical efficacy: Stage 2.
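
For reference, the three objective metrics used here are standard and easy to reproduce; a minimal sketch with scikit-image on synthetic images (not the authors' pipeline):

```python
# RMSE, PSNR, and SSIM between a synthetic "ground truth" and a noisy output.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((256, 256)).astype(np.float32)   # rescanned image
output = ground_truth + rng.normal(scale=0.05, size=ground_truth.shape).astype(np.float32)

rmse = np.sqrt(np.mean((output - ground_truth) ** 2))
psnr = peak_signal_noise_ratio(ground_truth, output, data_range=1.0)
ssim = structural_similarity(ground_truth, output, data_range=1.0)
print(f"RMSE={rmse:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```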

Physics informed guided diffusion for accelerated multi-parametric MRI reconstruction

Perla Mayo, Carolin M. Pirkl, Alin Achim, Bjoern Menze, Mohammad Golbabaee

arXiv preprint · Jun 29 2025
We introduce MRF-DiPh, a novel physics-informed denoising diffusion approach for multiparametric tissue mapping from highly accelerated, transient-state quantitative MRI acquisitions such as Magnetic Resonance Fingerprinting (MRF). Our method is derived from a proximal splitting formulation, incorporating a pretrained denoising diffusion model as an effective image prior to regularize the MRF inverse problem. During reconstruction it simultaneously enforces two key physical constraints: (1) k-space measurement consistency and (2) adherence to the Bloch response model. Numerical experiments on in vivo brain scan data show that MRF-DiPh outperforms deep learning and compressed sensing MRF baselines, providing more accurate parameter maps while better preserving measurement fidelity and physical model consistency, which is critical for reliably solving inverse problems in medical imaging.
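
A generic proximal-splitting loop of the kind the abstract describes alternates a data-consistency step with a prior (denoising) step. The sketch below uses a toy undersampled-Fourier forward model and a Gaussian-filter stand-in for the pretrained diffusion denoiser, and omits the Bloch-model projection, so it is only a schematic of the formulation, not MRF-DiPh.

```python
# Toy proximal-splitting reconstruction with a plug-in denoiser as the prior.
import numpy as np
from scipy.ndimage import gaussian_filter

def forward(x, mask):
    """Undersampled Fourier measurement operator (toy k-space model)."""
    return np.fft.fft2(x) * mask

def adjoint(y, mask):
    return np.fft.ifft2(y * mask).real

def denoise(x, sigma=0.25):
    # Placeholder prior step; a pretrained diffusion denoiser would go here.
    return gaussian_filter(x, sigma=sigma)

rng = np.random.default_rng(0)
truth = rng.random((64, 64))
mask = rng.random((64, 64)) < 0.3          # 30% k-space sampling
y = forward(truth, mask)                   # measurements

x, step = np.zeros_like(truth), 0.5
for _ in range(50):
    # 1) data-consistency: gradient step on ||A x - y||^2
    x = x - step * adjoint(forward(x, mask) - y, mask)
    # 2) prior step (proximal operator approximated by a denoiser);
    #    MRF-DiPh additionally enforces the Bloch response model here.
    x = denoise(x)
print("relative error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))
```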

Perivascular Space Burden in Children With Autism Spectrum Disorder Correlates With Neurodevelopmental Severity.

Frigerio G, Rizzato G, Peruzzo D, Ciceri T, Mani E, Lanteri F, Mariani V, Molteni M, Agarwal N

PubMed · Jun 29 2025
Cerebral perivascular spaces (PVS) are involved in cerebrospinal fluid (CSF) circulation and in the clearance of metabolic waste in adult humans. A high number of PVS has been reported in autism spectrum disorder (ASD), but their relationship with CSF and disease severity is unclear. The aim was to quantify PVS in children with ASD through MRI. Retrospective. Sixty-six children with ASD (mean age: 4.7 ± 1.5 years; males/females: 59/7). 3T, 3D T1-weighted GRE and 3D T2-weighted turbo spin echo sequences. PVS were segmented using a weakly supervised PVS algorithm. Whole-white-matter PVS count (WM-PVS_tot) and normalized PVS volume (WM-PVS_voln) were analyzed, along with six regional measures (WM-PVS_sr): frontal, parietal, limbic, occipital, temporal, and deep WM. WM, GM, CSF, and extra-axial CSF (eaCSF) volumes were also calculated. The Autism Diagnostic Observation Schedule, Wechsler Intelligence Scale, and Griffiths Mental Development Scales were used to assess clinical severity and developmental quotient (DQ). Kendall correlation analysis (continuous variables) and Friedman tests (categorical variables) were used to compare medians of PVS variables across WM regions, with post hoc pairwise Wilcoxon tests to evaluate the distributions of PVS across regions. Generalized linear models were employed to assess DQ, clinical severity, age, and eaCSF volume in relation to PVS variables. A p-value < 0.05 indicated statistical significance. Severe DQ (β = 0.0089), a milder form of autism (β = -0.0174), and larger eaCSF volume (β = 0.0082) were significantly associated with greater WM-PVS_tot count. WM-PVS_voln was predominantly affected by normalized eaCSF volume (eaCSF_voln; β = 0.0242, adjusted for WM volume). The percentage of WM-PVS_sr was highest in the frontal areas (32%) and lowest in the temporal regions (11%). PVS count and volume in ASD are associated with eaCSF_voln; PVS count is related to clinical severity and DQ, and was higher in frontal regions and lower in temporal regions. Evidence level: 4. Technical efficacy: Stage 3.
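
As an illustration of the kind of generalized linear model described (PVS count regressed on DQ, severity, age, and eaCSF volume), here is a sketch on synthetic data; the Poisson family is an assumption for a count outcome, since the abstract does not specify the model family.

```python
# Hypothetical Poisson GLM of PVS count on clinical covariates (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 66
dq = rng.normal(70, 15, n)            # developmental quotient
severity = rng.integers(1, 4, n)      # clinical severity grade
age = rng.uniform(2.5, 8.0, n)        # years
eacsf = rng.normal(100, 20, n)        # extra-axial CSF volume (mL)
lam = np.exp(2.0 - 0.01 * dq + 0.1 * severity + 0.01 * eacsf)
pvs_count = rng.poisson(lam)          # simulated PVS counts

X = sm.add_constant(np.column_stack([dq, severity, age, eacsf]))
model = sm.GLM(pvs_count, X, family=sm.families.Poisson()).fit()
print(model.summary())
```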

CA-Diff: Collaborative Anatomy Diffusion for Brain Tissue Segmentation

Qilong Xing, Zikai Song, Yuteng Ye, Yuke Chen, Youjia Zhang, Na Feng, Junqing Yu, Wei Yang

arXiv preprint · Jun 28 2025
Segmentation of brain structures from MRI is crucial for evaluating brain morphology, yet existing CNN- and transformer-based methods struggle to delineate complex structures accurately. While current diffusion models have shown promise in image segmentation, they are inadequate when applied directly to brain MRI because they neglect anatomical information. To address this, we propose Collaborative Anatomy Diffusion (CA-Diff), a framework integrating spatial anatomical features to enhance the segmentation accuracy of the diffusion model. Specifically, we introduce a distance field as an auxiliary anatomical condition to provide global spatial context, alongside a collaborative diffusion process to model its joint distribution with anatomical structures, enabling effective utilization of anatomical features for segmentation. Furthermore, we introduce a consistency loss to refine the relationship between the distance field and anatomical structures, and design a time-adapted channel attention module to enhance the U-Net feature fusion procedure. Extensive experiments show that CA-Diff outperforms state-of-the-art (SOTA) methods.
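
A distance field of the sort used as the auxiliary condition can be computed with a Euclidean distance transform; a minimal sketch (the toy mask, signed-distance convention, and channel stacking are assumptions, not CA-Diff's exact scheme):

```python
# Signed distance field from a toy anatomical mask, stacked as a condition channel.
import numpy as np
from scipy.ndimage import distance_transform_edt

mask = np.zeros((128, 128), dtype=bool)      # toy anatomical structure
mask[40:90, 50:100] = True

# Signed distance: positive outside the structure, negative inside.
sdf = distance_transform_edt(~mask) - distance_transform_edt(mask)

image = np.random.default_rng(0).random((128, 128)).astype(np.float32)
# Stack the MRI slice with its distance-field condition as model input channels.
model_input = np.stack([image, sdf.astype(np.float32)], axis=0)
print(model_input.shape)  # (2, 128, 128)
```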

Hierarchical Characterization of Brain Dynamics via State Space-based Vector Quantization

Yanwu Yang, Thomas Wolfers

arXiv preprint · Jun 28 2025
Understanding brain dynamics through functional Magnetic Resonance Imaging (fMRI) remains a fundamental challenge in neuroscience, particularly in capturing how the brain transitions between various functional states. Recently, metastability, which refers to temporarily stable brain states, has offered a promising paradigm for quantizing complex brain signals into interpretable, discretized representations. In particular, compared with cluster-based machine learning approaches, tokenization approaches leveraging vector quantization have shown promise in representation learning, with powerful reconstruction and predictive capabilities. However, most existing methods ignore brain transition dependencies and lack a quantization of brain dynamics into representative and stable embeddings. In this study, we propose a Hierarchical State-space-based Tokenization network, termed HST, which quantizes brain states and transitions in a hierarchical structure based on a state-space model. We introduce a refined clustered Vector-Quantized Variational AutoEncoder (VQ-VAE) that incorporates quantization-error feedback and clustering to improve quantization performance while facilitating metastability through representative and stable token representations. We validate HST on two public fMRI datasets, demonstrating its effectiveness in quantifying the hierarchical dynamics of the brain and its potential for disease diagnosis and signal reconstruction. Our method offers a promising framework for characterizing brain dynamics and facilitates the analysis of metastability.
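
The core vector-quantization step of a VQ-VAE snaps each encoder output to its nearest codebook vector; a minimal sketch (codebook size and dimensions are arbitrary, and HST's hierarchical structure, clustering refinement, and error feedback are not reproduced):

```python
# Nearest-neighbor codebook lookup, the quantization step of a VQ-VAE.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(128, 32))        # 128 tokens, 32-dim embeddings
z_e = rng.normal(size=(500, 32))             # encoder outputs (e.g., fMRI states)

# Argmin over squared Euclidean distance to every codebook vector.
d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
tokens = d2.argmin(axis=1)                   # discrete token ids
z_q = codebook[tokens]                       # quantized embeddings

# VQ-VAE codebook/commitment loss term (stop-gradients omitted in this sketch).
codebook_loss = np.mean((z_q - z_e) ** 2)
print(tokens[:10], f"codebook loss: {codebook_loss:.3f}")
```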

Hybrid segmentation model and CAViaR -based Xception Maxout network for brain tumor detection using MRI images.

Swapna S, Garapati Y

PubMed · Jun 27 2025
A brain tumor (BT) is a rapid, abnormal growth of cells in the brain; if not identified and treated at an early stage, it can be fatal. Despite the many methods developed for segmenting and identifying BT, detection remains complicated by the variable position and size of tumors. To address these issues, this paper proposes the Conditional Autoregressive Value-at-Risk Xception Maxout Network (Caviar_XM-Net) for BT detection using magnetic resonance imaging (MRI) images. The input MRI image, gathered from the dataset, is denoised using an adaptive bilateral filter (ABF), and tumor region segmentation is performed with BFC-MRFNet-RVSeg: the image is segmented separately by Bayesian fuzzy clustering (BFC) and a multi-branch residual fusion network (MRF-Net), and the outputs of the two segmentation techniques are combined using the RV coefficient. Image augmentation is performed to increase the number of training images. Afterwards, features are extracted, including the local optimal oriented pattern (LOOP), convolutional neural network (CNN) features, the median binary pattern (MBP) with statistical features, and the local Gabor XOR pattern (LGXP). Lastly, BT detection is carried out by Caviar_XM-Net, which combines the Xception model and a deep Maxout network (DMN) with the CAViaR approach. The effectiveness of Caviar_XM-Net is evaluated in terms of sensitivity, accuracy, specificity, precision, and F1-score, attaining 91.59%, 91.36%, 90.83%, 90.99%, and 91.29%, respectively. Caviar_XM-Net thus performs better than traditional methods, with high efficiency.
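
The RV coefficient used to combine the two segmentation outputs is a matrix-correlation measure; the sketch below shows the coefficient itself on synthetic maps, since the abstract does not specify how the fusion weights are derived from it.

```python
# RV coefficient between two (column-centered) matrices, e.g. segmentation maps.
import numpy as np

def rv_coefficient(X, Y):
    """RV(X, Y) = tr(XX'YY') / sqrt(tr((XX')^2) * tr((YY')^2))."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sxy = X.T @ Y
    Sxx = X.T @ X
    Syy = Y.T @ Y
    return np.trace(Sxy @ Sxy.T) / np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))

rng = np.random.default_rng(0)
seg_a = rng.random((256, 256))                                           # e.g., BFC output
seg_b = np.clip(seg_a + rng.normal(scale=0.1, size=seg_a.shape), 0, 1)   # e.g., MRF-Net output
print(f"RV = {rv_coefficient(seg_a, seg_b):.3f}")
```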

Deep Learning-Based Prediction of PET Amyloid Status Using MRI.

Kim D, Ottesen JA, Kumar A, Ho BC, Bismuth E, Young CB, Mormino E, Zaharchuk G

PubMed · Jun 27 2025
Identifying amyloid-beta (Aβ)-positive patients is essential for Alzheimer's disease (AD) clinical trials and disease-modifying treatments but currently requires PET or cerebrospinal fluid sampling. Previous MRI-based deep learning models, using only T1-weighted (T1w) images, have shown moderate performance. Multi-contrast MRI and PET-based quantitative Aβ deposition were retrospectively obtained from three public datasets: ADNI, OASIS3, and A4. Aβ positivity was defined using each dataset's recommended centiloid threshold. Two EfficientNet models were trained to predict amyloid positivity: one using only T1w images and another incorporating both T1w and T2-FLAIR. Model performance was assessed on an internal held-out test set, evaluating AUC, accuracy, sensitivity, and specificity. External validation was conducted using an independent cohort from the Stanford Alzheimer's Disease Research Center. DeLong's and McNemar's tests were used to compare AUC and accuracy, respectively. A total of 4,056 exams (mean [SD] age: 71.6 [6.3] years; 55% female; 55% amyloid-positive) were used for network development, and 149 exams were used for external testing (mean [SD] age: 72.1 [9.6] years; 58% female; 56% amyloid-positive). The multi-contrast model outperformed the T1w-only model on the internal held-out test set (AUC: 0.67 vs. 0.61, 95% CI: 0.65-0.70, P < 0.001; accuracy: 0.63 vs. 0.59, 95% CI: 0.62-0.65, P < 0.001). Among cognitive subgroups, the highest performance (AUC: 0.71) was observed in mild cognitive impairment. The multi-contrast model also demonstrated consistent performance on the external test set (AUC: 0.65, 95% CI: 0.60-0.71, P = 0.014; accuracy: 0.62, 95% CI: 0.58-0.65, P < 0.001). The use of multi-contrast MRI, specifically incorporating T2-FLAIR in addition to T1w images, significantly improved the predictive accuracy of PET-determined amyloid status from MRI scans using a deep learning approach. Aβ = amyloid-beta; AD = Alzheimer's disease; AUC = area under the receiver operating characteristic curve; CN = cognitively normal; MCI = mild cognitive impairment; T1w = T1-weighted; T2-FLAIR = T2-weighted fluid-attenuated inversion recovery; FBP = ¹⁸F-florbetapir; FBB = ¹⁸F-florbetaben; SUVR = standardized uptake value ratio.
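
Adapting an ImageNet-style EfficientNet to two-channel (T1w + T2-FLAIR) input is a small surgery on the stem convolution and the classifier head; a hypothetical sketch following torchvision's layer layout (the input size, binary head, and all training details are assumptions, not the authors' pipeline):

```python
# EfficientNet-B0 with a 2-channel input stem and a binary amyloid-status head.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

model = efficientnet_b0(weights=None)
# Replace the 3-channel RGB stem with a 2-channel (T1w, FLAIR) convolution.
old = model.features[0][0]
model.features[0][0] = nn.Conv2d(2, old.out_channels, kernel_size=old.kernel_size,
                                 stride=old.stride, padding=old.padding, bias=False)
# Binary head: amyloid-positive vs. amyloid-negative.
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

x = torch.randn(4, 2, 224, 224)       # batch of stacked T1w/FLAIR slices
print(model(x).shape)                 # torch.Size([4, 2])
```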

Towards Scalable and Robust White Matter Lesion Localization via Multimodal Deep Learning

Julia Machnio, Sebastian Nørgaard Llambias, Mads Nielsen, Mostafa Mehdipour Ghazi

arXiv preprint · Jun 27 2025
White matter hyperintensities (WMH) are radiological markers of small vessel disease and neurodegeneration, whose accurate segmentation and spatial localization are crucial for diagnosis and monitoring. While multimodal MRI offers complementary contrasts for detecting and contextualizing WM lesions, existing approaches often lack flexibility in handling missing modalities and fail to integrate anatomical localization efficiently. We propose a deep learning framework for WM lesion segmentation and localization that operates directly in native space using single- and multi-modal MRI inputs. Our study evaluates four input configurations: FLAIR-only, T1-only, concatenated FLAIR and T1, and a modality-interchangeable setup. It further introduces a multi-task model for jointly predicting lesion and anatomical region masks to estimate region-wise lesion burden. Experiments conducted on the MICCAI WMH Segmentation Challenge dataset demonstrate that multimodal input significantly improves segmentation performance, outperforming unimodal models. While the modality-interchangeable setting trades accuracy for robustness, it enables inference when modalities are missing. Joint lesion-region segmentation using multi-task learning was less effective than separate models, suggesting representational conflict between tasks. Our findings highlight the utility of multimodal fusion for accurate and robust WMH analysis, and the potential of joint modeling for integrated predictions.
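
A common way to realize a modality-interchangeable setup is to randomly zero out input modalities during training so a single network tolerates missing contrasts at inference; the sketch below is a generic formulation assumed from the abstract, not the authors' exact scheme.

```python
# Modality dropout: build a fixed 2-channel input where each modality may be
# zeroed out (but never both), emulating missing-modality conditions.
import numpy as np

rng = np.random.default_rng(0)

def sample_training_input(flair, t1, p_drop=0.3):
    keep_flair = rng.random() > p_drop
    keep_t1 = rng.random() > p_drop
    if not (keep_flair or keep_t1):          # guarantee at least one modality
        keep_flair = True
    return np.stack([flair if keep_flair else np.zeros_like(flair),
                     t1 if keep_t1 else np.zeros_like(t1)])

flair = rng.random((128, 128)).astype(np.float32)
t1 = rng.random((128, 128)).astype(np.float32)
batch = sample_training_input(flair, t1)
print(batch.shape)   # (2, 128, 128)
```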
