
Leveraging Representation Learning for Bi-parametric Prostate MRI to Disambiguate PI-RADS 3 and Improve Biopsy Decision Strategies.

Umapathy L, Johnson PM, Dutt T, Tong A, Chopra S, Sodickson DK, Chandarana H

PubMed · Jun 30, 2025
Despite its high negative predictive value (NPV) for clinically significant prostate cancer (csPCa), MRI suffers from a substantial number of false positives, especially for intermediate-risk cases. In this work, we determine whether a deep learning model trained with PI-RADS-guided representation learning can disambiguate the PI-RADS 3 classification, detect csPCa from bi-parametric prostate MR images, and avoid unnecessary benign biopsies. This study included 28,263 MR examinations and radiology reports from 21,938 men imaged for known or suspected prostate cancer between 2015 and 2023 at our institution (21 imaging locations with 34 readers), with 6352 subsequent biopsies. We trained a deep learning model, a representation learner (RL), to learn how radiologists interpret conventionally acquired T2-weighted and diffusion-weighted MR images, using exams in which the radiologists are confident in their risk assessments (PI-RADS 1 and 2 for the absence of csPCa vs. PI-RADS 4 and 5 for the presence of csPCa, n=21,465). We then trained biopsy-decision models to detect csPCa (Gleason score ≥7) using these learned image representations, and compared them to the performance of radiologists, and of models trained on other clinical variables (age, prostate volume, PSA, and PSA density) for treatment-naïve test cohorts consisting of only PI-RADS 3 (n=253, csPCa=103) and all PI-RADS (n=531, csPCa=300) cases. On the 2 test cohorts (PI-RADS-3-only, all-PI-RADS), RL-based biopsy-decision models consistently yielded higher AUCs in detecting csPCa (AUC=0.73 [0.66, 0.79], 0.88 [0.85, 0.91]) compared with radiologists (equivocal, AUC=0.79 [0.75, 0.83]) and the clinical model (AUCs=0.69 [0.62, 0.75], 0.78 [0.74, 0.82]). 
In the PI-RADS-3-only cohort, all of whom would be biopsied under our institution's standard of care, the RL decision model avoided 41% (62/150) of benign biopsies compared with the clinical model (26%, P<0.001), and improved biopsy yield by 10% compared with the PI-RADS ≥3 decision strategy (0.50 vs. 0.40). Furthermore, on the all-PI-RADS cohort, the RL decision model avoided 27% of additional benign biopsies (138/231) compared to radiologists (33%, P<0.001), with comparable sensitivity (93% vs. 92%), higher NPV (0.87 vs. 0.77), and higher biopsy yield (0.75 vs. 0.64). The combination of clinical and RL decision models avoided further benign biopsies (46% in PI-RADS-3-only and 62% in all-PI-RADS) while improving NPV (0.82, 0.88) and biopsy yield (0.52, 0.76) across the two test cohorts. Our PI-RADS-guided deep learning RL model learns summary representations from bi-parametric prostate MR images that provide additional information to disambiguate intermediate-risk PI-RADS 3 assessments. The resulting RL-based biopsy-decision models also outperformed radiologists in avoiding benign biopsies while maintaining comparable sensitivity to csPCa in the all-PI-RADS cohort. Such AI models can be readily integrated into clinical practice to supplement radiologists' reads and improve biopsy yield for equivocal decisions.
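The decision metrics traded off throughout this abstract (sensitivity, NPV, biopsy yield, benign biopsies avoided) all derive from the same four confusion counts. A minimal sketch with hypothetical counts, not the study's data:

```python
def biopsy_decision_metrics(tp, fp, tn, fn):
    """Summary metrics for a binary biopsy-decision rule.

    tp: csPCa cases sent to biopsy     fp: benign cases sent to biopsy
    tn: benign cases spared a biopsy   fn: csPCa cases missed
    """
    sensitivity = tp / (tp + fn)        # fraction of csPCa detected
    npv = tn / (tn + fn)                # confidence in a "no biopsy" call
    biopsy_yield = tp / (tp + fp)       # fraction of biopsies that find csPCa
    benign_avoided = tn / (tn + fp)     # fraction of benign biopsies spared
    return sensitivity, npv, biopsy_yield, benign_avoided

# Hypothetical counts for illustration only (not the study's data):
sens, npv, byield, avoided = biopsy_decision_metrics(tp=92, fp=60, tn=80, fn=8)
# sens = 0.92, npv ≈ 0.909, byield ≈ 0.605, avoided ≈ 0.571
```

Raising the biopsy threshold moves cases from (tp, fp) into (fn, tn), which is exactly the sensitivity-versus-avoided-biopsies trade-off the models above are tuned on.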

Automatic Multiclass Tissue Segmentation Using Deep Learning in Brain MR Images of Tumor Patients.

Kandpal A, Kumar P, Gupta RK, Singh A

PubMed · Jun 30, 2025
Precise delineation of brain tissues, including lesions, in MR images is crucial for data analysis and for objectively assessing conditions such as neurological disorders and brain tumors. Existing methods for tissue segmentation often fall short for patients with lesions, particularly those with brain tumors. This study aimed to develop and evaluate a robust pipeline utilizing convolutional neural networks for rapid and automatic segmentation of whole-brain tissues, including tumor lesions. The proposed pipeline was developed using BraTS'21 data (1251 patients) and tested on local hospital data (100 patients). Ground-truth masks for lesions as well as brain tissues were generated. Two convolutional neural networks based on the deep residual U-Net framework were trained for segmenting brain tissues and tumor lesions. The performance of the pipeline was evaluated on independent test data using the Dice similarity coefficient (DSC) and volume similarity (VS). The proposed pipeline achieved a mean DSC of 0.84 and a mean VS of 0.93 on the BraTS'21 test set. On the local hospital test set, it attained a mean DSC of 0.78 and a mean VS of 0.91. The pipeline also generated satisfactory masks in cases where the SPM12 software performed inadequately. The proposed pipeline offers a reliable and automatic solution for segmenting brain tissues and tumor lesions in MR images. Its adaptability makes it a valuable tool for both research and clinical applications, potentially streamlining workflows and enhancing the precision of analyses in neurological and oncological studies.
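Both evaluation metrics are simple functions of the predicted and ground-truth masks. A minimal NumPy sketch (volume similarity here uses the common 1 − |ΔV|/(V₁+V₂) form, which may differ in detail from the paper's exact definition):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def volume_similarity(pred, truth):
    """Volume similarity: 1 - |V_pred - V_truth| / (V_pred + V_truth)."""
    vp, vt = int(pred.sum()), int(truth.sum())
    return 1.0 - abs(vp - vt) / (vp + vt)

# Toy 2D example: two 4x4 squares overlapping in a 3x3 patch
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True   # 16 pixels
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True   # 16 pixels
# dice_score(a, b) -> 2*9/32 = 0.5625; volume_similarity(a, b) -> 1.0
```

Note that VS only compares sizes, so two equal-sized masks score VS = 1.0 even when they overlap poorly; reporting DSC alongside VS, as the paper does, catches that case.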

Development and validation of a prognostic prediction model for lumbar-disc herniation based on machine learning and fusion of clinical text data and radiomic features.

Wang Z, Zhang H, Li Y, Zhang X, Liu J, Ren Z, Qin D, Zhao X

PubMed · Jun 30, 2025
Based on preoperative clinical text data and lumbar magnetic resonance imaging (MRI), we applied machine learning (ML) algorithms to construct a model predicting early recurrence in lumbar-disc herniation (LDH) patients who underwent percutaneous endoscopic lumbar discectomy (PELD). We then explored the clinical performance of this prognostic prediction model via multimodal-data fusion. Clinical text data and radiological images of LDH patients who underwent PELD at the Intervertebral Disc Center of the Affiliated Hospital of Gansu University of Traditional Chinese Medicine (AHGUTCM; Lanzhou, China) were retrospectively collected. Two radiologists with clinical image-reading experience independently outlined regions of interest (ROIs) on the MRI images and extracted radiomic features using 3D Slicer software. We then randomly split the samples into a training set and a test set at a 7:3 ratio, used eight ML algorithms to construct predictive radiomic-feature models, evaluated model performance by the area under the curve (AUC), and selected the optimal model for screening radiomic features and calculating radiomic scores (Rad-scores). Finally, after using logistic regression to construct a nomogram for predicting the early-recurrence rate, we evaluated the nomogram's clinical applicability using a clinical decision curve. We initially extracted 851 radiomic features. After constructing our models, we determined from the AUC values that the optimal ML algorithm was least absolute shrinkage and selection operator (LASSO) regression, with an AUC of 0.76 and an accuracy of 91%. After screening features with the LASSO model, we computed a Rad-score for each recurrent-LDH sample from nine radiomic features. Next, we fused three clinical features (age, diabetes, and heavy manual labor) to construct a nomogram with an AUC of 0.86 (95% confidence interval [CI], 0.79-0.94).
Analysis of the clinical-decision and impact curves showed that the prognostic prediction model with multimodal-data fusion had good clinical validity and applicability. We developed and analyzed a prognostic prediction model for LDH with multimodal-data fusion. Our model demonstrated good performance in predicting early postoperative recurrence in LDH patients; therefore, it has good prospects for clinical application and can provide clinicians with objective, accurate information to help them decide on presurgical treatment plans. However, external-validation studies are still needed to further validate the model's comprehensive performance and improve its generalization and extrapolation.
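The LASSO screening and Rad-score computation described above follow a standard radiomics recipe: standardize the feature matrix, let the L1 penalty zero out most coefficients, and score each patient as a linear combination of the surviving features. A hedged scikit-learn sketch on synthetic data (the matrix, outcome, and number of selected features here are illustrative, not the study's):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the radiomics matrix: 100 patients x 851 features,
# with "recurrence" driven by two features plus noise (illustrative only).
X = rng.normal(size=(100, 851))
y = (X[:, 0] - 0.8 * X[:, 1] + 0.5 * rng.normal(size=100) > 0).astype(float)

Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)

# Features surviving the L1 penalty (the paper retained nine)
selected = np.flatnonzero(lasso.coef_)

# Rad-score: linear combination of the selected standardized features
rad_scores = Xs[:, selected] @ lasso.coef_[selected] + lasso.intercept_
```

In the paper's workflow, these Rad-scores then enter the logistic-regression nomogram alongside the clinical features (age, diabetes, heavy manual labor).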

Thin-slice T<sub>2</sub>-weighted images and deep-learning-based super-resolution reconstruction: improved preoperative assessment of vascular invasion for pancreatic ductal adenocarcinoma.

Zhou X, Wu Y, Qin Y, Song C, Wang M, Cai H, Zhao Q, Liu J, Wang J, Dong Z, Luo Y, Peng Z, Feng ST

PubMed · Jun 30, 2025
To evaluate the efficacy of thin-slice T<sub>2</sub>-weighted imaging (T<sub>2</sub>WI) and super-resolution reconstruction (SRR) for preoperative assessment of vascular invasion in pancreatic ductal adenocarcinoma (PDAC). Ninety-five patients with PDAC and preoperative MRI were retrospectively enrolled as a training set, with non-reconstructed T<sub>2</sub>WI (NRT<sub>2</sub>) at different slice thicknesses (NRT<sub>2</sub>-3, 3 mm; NRT<sub>2</sub>-5, ≥ 5 mm). A prospective test set was collected with NRT<sub>2</sub>-5 (n = 125) only. A deep-learning network was employed to generate reconstructed super-resolution T<sub>2</sub>WI (SRT<sub>2</sub>) at different slice thicknesses (SRT<sub>2</sub>-3, 3 mm; SRT<sub>2</sub>-5, ≥ 5 mm). Image quality was assessed, including the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and signal-intensity ratio (SIR<sub>t/p</sub>, tumor/pancreas; SIR<sub>t/b</sub>, tumor/background). Diagnostic efficacy for vascular invasion was evaluated using the area under the curve (AUC) and compared across slice thicknesses before and after reconstruction. SRT<sub>2</sub>-5 demonstrated higher SNR and SIR<sub>t/p</sub> compared to NRT<sub>2</sub>-5 (74.18 vs 72.46; 1.42 vs 1.30; p < 0.05). SRT<sub>2</sub>-3 showed increased SIR<sub>t/p</sub> and SIR<sub>t/b</sub> over NRT<sub>2</sub>-3 (1.35 vs 1.31; 2.73 vs 2.58; p < 0.05). SRT<sub>2</sub>-5 showed higher CNR, SIR<sub>t/p</sub>, and SIR<sub>t/b</sub> than NRT<sub>2</sub>-3 (p < 0.05). NRT<sub>2</sub>-3 outperformed NRT<sub>2</sub>-5 in evaluating venous invasion (AUC: 0.732 vs 0.597, p = 0.021). SRR improved venous assessment (AUC: NRT<sub>2</sub>-3, 0.927 vs 0.732; NRT<sub>2</sub>-5, 0.823 vs 0.597; p < 0.05), and SRT<sub>2</sub>-5 exhibited comparable efficacy to NRT<sub>2</sub>-3 in venous assessment (AUC: 0.823 vs 0.732, p = 0.162). Thin-slice T<sub>2</sub>WI and SRR effectively improve the image quality and diagnostic efficacy for assessing venous invasion in PDAC.
Thick-slice T<sub>2</sub>WI with SRR is a potential alternative to thin-slice T<sub>2</sub>WI. Both thin-slice T<sub>2</sub>WI and SRR effectively improve image quality and diagnostic performance, providing valuable options for optimizing preoperative vascular assessment in PDAC. Non-invasive, accurate assessment of vascular invasion supports treatment planning and avoids futile surgery. Vascular invasion evaluation is critical for determining surgical eligibility in PDAC. SRR improved image quality and vascular assessment on T<sub>2</sub>WI. Utilizing thin-slice T<sub>2</sub>WI and SRR aids clinical decision making for PDAC.
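The image-quality metrics compared above (SNR, CNR, SIR) are ROI-based intensity ratios. A minimal sketch using common definitions, which may differ in detail from the paper's measurement protocol; all intensities below are hypothetical:

```python
import numpy as np

def image_quality_metrics(tumor, pancreas, background, noise):
    """ROI-based image-quality ratios; each argument is an array of ROI
    intensities. Definitions vary between studies -- these are common forms,
    not necessarily the paper's exact protocol."""
    snr = pancreas.mean() / noise.std()                      # signal-to-noise
    cnr = abs(tumor.mean() - pancreas.mean()) / noise.std()  # contrast-to-noise
    sir_tp = tumor.mean() / pancreas.mean()                  # SIR tumor/pancreas
    sir_tb = tumor.mean() / background.mean()                # SIR tumor/background
    return snr, cnr, sir_tp, sir_tb

# Toy ROI samples (hypothetical intensities)
tumor = np.array([140.0, 160.0])
pancreas = np.array([90.0, 110.0])
background = np.array([40.0, 60.0])
noise = np.array([-2.0, 2.0])
snr, cnr, sir_tp, sir_tb = image_quality_metrics(tumor, pancreas, background, noise)
# -> snr = 50.0, cnr = 25.0, sir_tp = 1.5, sir_tb = 3.0
```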

Efficient Cerebral Infarction Segmentation Using U-Net and U-Net3+ Models.

Yuce E, Sahin ME, Ulutas H, Erkoç MF

PubMed · Jun 30, 2025
Cerebral infarction remains a leading cause of mortality and long-term disability globally, underscoring the critical importance of early diagnosis and timely intervention to enhance patient outcomes. This study introduces a novel approach to cerebral infarction segmentation using a new dataset comprising MRI scans of 110 patients, retrospectively collected from Yozgat Bozok University Research Hospital. Two convolutional neural network architectures, the basic U-Net and the advanced U-Net3+, are employed to segment infarction regions with high precision. Ground-truth annotations were generated under the supervision of an experienced radiologist, and data augmentation techniques were applied to address dataset limitations, yielding 6732 balanced images for training, validation, and testing. Performance was evaluated using metrics such as the Dice score, Intersection over Union (IoU), pixel accuracy, and specificity. The basic U-Net achieved superior performance with a Dice score of 0.8947, a mean IoU of 0.8798, a pixel accuracy of 0.9963, and a specificity of 0.9984, outperforming U-Net3+ despite its simpler architecture. U-Net3+, with its complex structure and advanced features, delivered competitive results, highlighting the potential trade-off between model complexity and performance in medical imaging tasks. This study underscores the significance of leveraging deep learning for precise and efficient segmentation of cerebral infarction. The results demonstrate the capability of CNN-based architectures to support medical decision-making, offering a promising pathway for advancing stroke diagnosis and treatment planning.

GUSL: A Novel and Efficient Machine Learning Model for Prostate Segmentation on MRI

Jiaxin Yang, Vasileios Magoulianitis, Catherine Aurelia Christie Alexander, Jintang Xue, Masatomo Kaneko, Giovanni Cacciamani, Andre Abreu, Vinay Duddalwar, C. -C. Jay Kuo, Inderbir S. Gill, Chrysostomos Nikias

arXiv preprint · Jun 30, 2025
Prostate and zonal segmentation is a crucial step in the clinical diagnosis of prostate cancer (PCa). Computer-aided diagnosis tools for prostate segmentation are typically based on the deep learning (DL) paradigm. However, deep neural networks are perceived as "black-box" solutions by physicians, making them less practical for deployment in clinical settings. In this paper, we introduce a feed-forward machine learning model, named Green U-shaped Learning (GUSL), suitable for medical image segmentation without backpropagation. GUSL introduces a multi-layer regression scheme for coarse-to-fine segmentation. Its feature extraction is based on a linear model, which enables seamless interpretability. GUSL also introduces an attention mechanism for the prostate boundaries, an error-prone region, employing regression to refine predictions through residue correction. In addition, a two-step pipeline mitigates the class imbalance inherent in medical imaging problems. In experiments on two publicly available datasets and one private dataset, covering both prostate-gland and zonal segmentation tasks, GUSL achieves state-of-the-art performance compared with DL-based models. Notably, GUSL features a very energy-efficient pipeline, with a model size several times smaller and lower complexity than competing solutions. On all datasets, GUSL achieved a Dice Similarity Coefficient (DSC) greater than $0.9$ for gland segmentation. Considering also its lightweight model size and transparency in feature extraction, it offers a competitive and practical package for medical imaging applications.

Artificial Intelligence-assisted Pixel-level Lung (APL) Scoring for Fast and Accurate Quantification in Ultra-short Echo-time MRI

Bowen Xin, Rohan Hickey, Tamara Blake, Jin Jin, Claire E Wainwright, Thomas Benkert, Alto Stemmer, Peter Sly, David Coman, Jason Dowling

arXiv preprint · Jun 30, 2025
Lung magnetic resonance imaging (MRI) with ultrashort echo-time (UTE) represents a recent breakthrough in lung structure imaging, providing image resolution and quality comparable to computed tomography (CT). Due to the absence of ionising radiation, MRI is often preferred over CT in paediatric diseases such as cystic fibrosis (CF), one of the most common genetic disorders in Caucasians. To assess structural lung damage in CF imaging, CT scoring systems provide valuable quantitative insights for disease diagnosis and progression. However, few quantitative scoring systems are available for structural lung MRI (e.g., UTE-MRI). To provide fast and accurate quantification in lung MRI, we investigated the feasibility of a novel Artificial-intelligence-assisted Pixel-level Lung (APL) scoring for CF. APL scoring consists of five stages: 1) image loading, 2) AI lung segmentation, 3) lung-bounded slice sampling, 4) pixel-level annotation, and 5) quantification and reporting. The results show that APL scoring took 8.2 minutes per subject, more than twice as fast as the previous grid-level scoring. Additionally, pixel-level scoring was statistically more accurate (p=0.021), while correlating strongly with grid-level scoring (R=0.973, p=5.85e-9). This tool has great potential to streamline the workflow of UTE lung MRI in clinical settings, and could be extended to other structural lung MRI sequences (e.g., BLADE MRI) and to other lung diseases (e.g., bronchopulmonary dysplasia).

U-Net-based architecture with attention mechanisms and Bayesian Optimization for brain tumor segmentation using MR images.

Ramalakshmi K, Krishna Kumari L

PubMed · Jun 30, 2025
As technological innovation in computing has advanced, radiologists may now diagnose brain tumors (BT) with the aid of artificial intelligence (AI). In the medical field, early disease identification enables further therapies, and AI systems are essential for saving time and money. The difficulties presented by various forms of Magnetic Resonance (MR) imaging for BT detection are frequently not addressed by conventional techniques. To overcome common problems with traditional tumor-detection approaches, deep learning techniques have been developed. Thus, for BT segmentation using MR images, a U-Net-based architecture combined with attention mechanisms has been developed in this work. Moreover, hyperparameter optimization (HPO) is performed with the Bayesian Optimization Algorithm, fine-tuning essential variables to strengthen the segmentation model's performance. Tumor regions are pinpointed for segmentation using a region-adaptive thresholding technique, and the segmentation results are validated against ground-truth annotated images to assess the performance of the proposed model. Experiments are conducted using the LGG, Healthcare, and BraTS 2021 MRI brain-tumor datasets. Lastly, the merit of the proposed model is demonstrated by comparing several metrics, such as IoU, accuracy, and Dice score, with current state-of-the-art methods. The U-Net-based method achieved a Dice score of 0.89687 for MRI-based BT segmentation.
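The abstract names a region-adaptive thresholding step but does not specify it. A generic block-wise sketch of the idea, in which each tile is binarized against its own local statistics (the block size and mean-based rule here are assumptions, not the paper's method):

```python
import numpy as np

def region_adaptive_threshold(img, block=8, offset=0.0):
    """Block-wise adaptive thresholding: each block x block tile is binarized
    against its own local mean plus an offset, so bright tumor-like regions
    stand out even under slowly varying background intensity."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = img[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = tile > tile.mean() + offset
    return out

# Toy image: dim background with one bright 4x4 "lesion"
img = np.full((16, 16), 10.0)
img[4:8, 4:8] = 100.0
mask = region_adaptive_threshold(img, block=16)  # a single global tile here
# mask is True exactly on the 16 bright pixels
```

Because each tile computes its own threshold, the same rule adapts to intensity drift across the image, which a single global threshold cannot.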

Perivascular Space Burden in Children With Autism Spectrum Disorder Correlates With Neurodevelopmental Severity.

Frigerio G, Rizzato G, Peruzzo D, Ciceri T, Mani E, Lanteri F, Mariani V, Molteni M, Agarwal N

PubMed · Jun 29, 2025
Cerebral perivascular spaces (PVS) are involved in cerebrospinal fluid (CSF) circulation and clearance of metabolic waste in adult humans. A high number of PVS has been reported in autism spectrum disorder (ASD), but their relationship with CSF and disease severity is unclear. To quantify PVS in children with ASD through MRI. Retrospective. Sixty-six children with ASD (mean age: 4.7 ± 1.5 years; males/females: 59/7). 3T, 3D T1-weighted GRE and 3D T2-weighted turbo spin echo sequences. PVS were segmented using a weakly supervised PVS algorithm. PVS count, white-matter perivascular spaces (WM-PVS<sub>tot</sub>), and normalized volume (WM-PVS<sub>voln</sub>) were analyzed in the entire white matter and in six regions: frontal, parietal, limbic, occipital, temporal, and deep WM (WM-PVS<sub>sr</sub>). WM, GM, CSF, and extra-axial CSF (eaCSF) volumes were also calculated. The Autism Diagnostic Observation Schedule, Wechsler Intelligence Scale, and Griffiths Mental Developmental scales were used to assess clinical severity and developmental quotient (DQ). Kendall correlation analysis (continuous variables) and Friedman tests (categorical variables) were used to compare medians of PVS variables across WM regions. Post hoc pairwise comparisons with Wilcoxon tests were used to evaluate distributions of PVS across WM regions. Generalized linear models were employed to assess DQ, clinical severity, age, and eaCSF volume in relation to PVS variables. A p-value < 0.05 indicated statistical significance. Severe DQ (β = 0.0089), a milder form of autism (β = -0.0174), and larger eaCSF volume (β = 0.0082) were significantly associated with greater WM-PVS<sub>tot</sub> count. WM-PVS<sub>voln</sub> was predominantly affected by normalized eaCSF volume (eaCSF<sub>voln</sub>) (β = 0.0242; adjusted for WM volumes). The percentage of WM-PVS<sub>sr</sub> was highest in the frontal areas (32%) and lowest in the temporal regions (11%).
PVS count and volume in ASD are associated with eaCSF<sub>voln</sub>. PVS count is related to clinical severity and DQ. PVS count was higher in frontal regions and lower in temporal regions. Evidence Level: 4. Technical Efficacy: Stage 3.
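The Kendall correlation analysis used here for continuous variables can be reproduced in outline with `scipy.stats.kendalltau`; the data below are synthetic stand-ins with a built-in association, not the study's measurements:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)

# Synthetic stand-ins (not the study's data): eaCSF volume and PVS count
# for 66 children, generated with a built-in positive association.
ea_csf = rng.normal(50.0, 10.0, size=66)
pvs_count = np.round(2.0 * ea_csf + rng.normal(0.0, 5.0, size=66))

tau, p = kendalltau(pvs_count, ea_csf)
# A positive tau with p < 0.05 would mirror the reported PVS-eaCSF association
```

Kendall's tau is rank-based, so it suits count data like PVS totals, where Pearson's r would be sensitive to skew and outliers.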

Physics informed guided diffusion for accelerated multi-parametric MRI reconstruction

Perla Mayo, Carolin M. Pirkl, Alin Achim, Bjoern Menze, Mohammad Golbabaee

arXiv preprint · Jun 29, 2025
We introduce MRF-DiPh, a novel physics-informed denoising diffusion approach for multiparametric tissue mapping from highly accelerated, transient-state quantitative MRI acquisitions such as Magnetic Resonance Fingerprinting (MRF). Our method is derived from a proximal splitting formulation, incorporating a pretrained denoising diffusion model as an effective image prior to regularize the MRF inverse problem. Further, during reconstruction it simultaneously enforces two key physical constraints: (1) k-space measurement consistency and (2) adherence to the Bloch response model. Numerical experiments on in vivo brain scan data show that MRF-DiPh outperforms deep-learning and compressed-sensing MRF baselines, providing more accurate parameter maps while better preserving measurement fidelity and physical-model consistency, which is critical for reliably solving inverse problems in medical imaging.
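The k-space measurement-consistency constraint enforced during reconstruction is commonly implemented as a projection that overwrites the sampled k-space coefficients with the measured data. A generic single-coil Cartesian sketch of that step, not the paper's full MRF/Bloch-constrained operator:

```python
import numpy as np

def kspace_data_consistency(x, y, mask):
    """One measurement-consistency projection used in proximal-splitting MRI
    reconstruction: transform the current image estimate x to k-space,
    overwrite the sampled locations (mask == True) with the measured data y,
    and transform back. A generic single-coil Cartesian sketch."""
    k = np.fft.fft2(x)
    k[mask] = y[mask]
    return np.fft.ifft2(k)

# Sanity check: with a fully sampled mask, one projection recovers the image
truth = np.outer(np.hanning(8), np.hanning(8))
y = np.fft.fft2(truth)
mask = np.ones_like(y, dtype=bool)
x1 = kspace_data_consistency(np.zeros_like(truth), y, mask)
# np.allclose(x1.real, truth) -> True
```

In a proximal-splitting loop like the one the abstract describes, this projection alternates with the diffusion-prior denoising step and the Bloch-model constraint.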