Page 31 of 225 (2246 results)

Assessment of Robustness of MRI Radiomic Features in the Abdomen: Impact of Deep Learning Reconstruction and Accelerated Acquisition.

Zhong J, Xing Y, Hu Y, Liu X, Dai S, Ding D, Lu J, Yang J, Song Y, Lu M, Nickel D, Lu W, Zhang H, Yao W

PubMed · Jun 25, 2025
The objective of this study was to investigate the impact of deep learning reconstruction and accelerated acquisition on the reproducibility and variability of radiomic features in abdominal MRI. Seventeen volunteers were prospectively included and underwent abdominal MRI on a 3-T scanner with axial T2-weighted, axial T2-weighted fat-suppressed, and coronal T2-weighted sequences. Each sequence was scanned four times: clinical reference acquisition with standard reconstruction, clinical reference acquisition with deep learning reconstruction, accelerated acquisition with standard reconstruction, and accelerated acquisition with deep learning reconstruction. Regions of interest were drawn at ten anatomical sites with rigid registration. Ninety-three radiomic features were extracted via PyRadiomics after z-score normalization. Reproducibility was evaluated against the clinical reference acquisition with standard reconstruction using the intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC). Variability among the four scans was assessed by the coefficient of variation (CV) and the quartile coefficient of dispersion (QCD). The median (first, third quartile) of the overall ICC and CCC values was 0.451 (0.305, 0.583) and 0.450 (0.304, 0.582), respectively. The overall percentage of radiomic features with ICC > 0.90 and CCC > 0.90 was 8.1% and 8.1%, respectively. The median (first, third quartile) of the overall CV and QCD values was 9.4% (4.9%, 17.2%) and 4.9% (2.5%, 9.7%), respectively. The overall percentage of radiomic features with CV < 10% and QCD < 10% was 51.9% and 75.0%, which was considered acceptable. Irrespective of clinical significance, deep learning reconstruction and accelerated acquisition led to poor reproducibility of radiomic features, but more than half of the radiomic features varied within an acceptable range.
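As an illustration of the two variability metrics used in this abstract, here is a minimal sketch of CV and QCD computed over repeated measurements of a single feature; the feature values are hypothetical, not the study's data:

```python
import numpy as np

def cv_percent(x):
    """Coefficient of variation: sample std / mean, as a percentage."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

def qcd_percent(x):
    """Quartile coefficient of dispersion: (Q3 - Q1) / (Q3 + Q1), as a percentage."""
    q1, q3 = np.percentile(np.asarray(x, dtype=float), [25, 75])
    return 100.0 * (q3 - q1) / (q3 + q1)

# One radiomic feature measured across the four scan conditions (hypothetical values)
feature = [1.02, 0.98, 1.05, 0.95]
print(cv_percent(feature), qcd_percent(feature))
```

Under the study's criteria, this hypothetical feature (CV ≈ 4.4%, QCD ≈ 2.8%) would fall within the acceptable < 10% range.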

Few-Shot Learning for Prostate Cancer Detection on MRI: Comparative Analysis with Radiologists' Performance.

Yamagishi Y, Baba Y, Suzuki J, Okada Y, Kanao K, Oyama M

PubMed · Jun 25, 2025
Deep-learning models for prostate cancer detection typically require large datasets, limiting clinical applicability across institutions due to domain shift. This study aimed to develop a few-shot deep-learning model for prostate cancer detection on multiparametric MRI that requires minimal training data, and to compare its diagnostic performance with that of experienced radiologists. In this retrospective study, we used 99 cases (80 positive, 19 negative) of biopsy-confirmed prostate cancer (2017-2022), with 20 cases for training, 5 for validation, and 74 for testing. A 2D transformer model was trained on T2-weighted, diffusion-weighted, and apparent diffusion coefficient map images. Model predictions were compared with those of two radiologists using the Matthews correlation coefficient (MCC) and F1 score, with 95% confidence intervals (CIs) calculated via the bootstrap method. The model achieved an MCC of 0.297 (95% CI: 0.095-0.474) and an F1 score of 0.707 (95% CI: 0.598-0.847). Radiologist 1 had an MCC of 0.276 (95% CI: 0.054-0.484) and an F1 score of 0.741; Radiologist 2 had an MCC of 0.504 (95% CI: 0.289-0.703) and an F1 score of 0.871, indicating that the model's performance was comparable to that of Radiologist 1. External validation on the Prostate158 dataset revealed that ImageNet pretraining substantially improved model performance, increasing study-level ROC-AUC from 0.464 to 0.636 and study-level PR-AUC from 0.637 to 0.773 across all architectures. Our findings demonstrate that few-shot deep-learning models can achieve clinically relevant performance when using pretrained transformer architectures, offering a promising approach to addressing domain shift challenges across institutions.
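The MCC-with-bootstrap-CI evaluation described here can be sketched as a percentile bootstrap: resample cases with replacement, recompute the metric, and take the empirical 2.5th/97.5th percentiles. A minimal sketch with illustrative labels (not the study's data):

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return float(tp * tn - fp * fn) / denom if denom else 0.0

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample cases with replacement, recompute the metric."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        stats.append(metric(y_true[idx], y_pred[idx]))
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

y_true = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1]   # illustrative labels only
y_pred = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
lo, hi = bootstrap_ci(y_true, y_pred, mcc)
print(mcc(y_true, y_pred), lo, hi)
```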

Machine Learning-Based Risk Assessment of Myasthenia Gravis Onset in Thymoma Patients and Analysis of Their Correlations and Causal Relationships.

Liu W, Wang W, Zhang H, Guo M

PubMed · Jun 25, 2025
The study aims to use interpretable machine learning models to predict the risk of myasthenia gravis (MG) onset in thymoma patients and to investigate the intrinsic correlations and causal relationships between the two conditions. A comprehensive retrospective analysis was conducted on 172 thymoma patients diagnosed at two medical centers between 2018 and 2024. The cohort was split into a training set (n = 134) and a test set (n = 38) to develop and validate risk prediction models. Radiomic and deep features were extracted from tumor regions across three CT phases: non-enhanced, arterial, and venous. Through rigorous feature selection employing Spearman's rank correlation coefficient and LASSO (Least Absolute Shrinkage and Selection Operator) regularization, 12 optimal imaging features were identified. These were integrated with 11 clinical parameters and one pathological subtype variable to form a multi-dimensional feature matrix. Six machine learning algorithms were then implemented for model construction and comparative analysis. We used SHAP (SHapley Additive exPlanations) to interpret the model and employed a doubly robust learner to perform a causal analysis between thymoma and MG. All six models demonstrated satisfactory predictive capability, with the support vector machine (SVM) model exhibiting superior performance on the test cohort: it achieved an area under the curve (AUC) of 0.904 (95% confidence interval [CI] 0.798-1.000), outperforming logistic regression, multilayer perceptron (MLP), and the other models. The model's predictive results substantiate the strong correlation between thymoma and MG. Additionally, our analysis revealed a significant causal relationship between them: high-risk tumors elevated the risk of MG by an average treatment effect (ATE) of 9.2%. This implies that thymoma patients with types B2 and B3 face a considerably higher risk of developing MG than those with types A, AB, and B1. The model provides a novel and effective tool for evaluating the risk of MG development in patients with thymoma. Furthermore, correlation and causal analyses have unveiled pathways connecting the tumor to MG risk, with a notably higher incidence of MG observed in high-risk pathological subtypes. These insights contribute to a deeper understanding of MG and support a shift in practice from passive treatment to proactive intervention.
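The two-stage feature-selection pipeline (Spearman correlation pruning followed by LASSO-style L1 regularisation) can be sketched roughly as below. The data are synthetic, and the threshold (|rho| > 0.9) and penalty strength (C = 0.5) are illustrative assumptions, not the study's settings:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(134, 30))                 # 134 training cases, 30 candidate features
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=134) > 0).astype(int)

# Step 1: prune redundant features -- keep a feature only if its |Spearman rho|
# with every already-kept feature is at or below the threshold
rho = np.abs(spearmanr(X)[0])
keep = []
for j in range(X.shape[1]):
    if all(rho[j, k] <= 0.9 for k in keep):
        keep.append(j)

# Step 2: L1-penalised (LASSO-style) logistic regression; features with
# nonzero coefficients form the selected set
Xs = StandardScaler().fit_transform(X[:, keep])
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Xs, y)
selected = [keep[j] for j in np.flatnonzero(clf.coef_[0])]
print(selected)
```

With this synthetic data the two informative features (columns 0 and 1) survive both stages, while most noise features are zeroed out by the L1 penalty.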

A New Aortic Valve Calcium Scoring Framework for Automatic Calcification Detection in Echocardiography.

Cakir M, Kablan EB, Ekinci M, Sahin M

PubMed · Jun 25, 2025
Aortic valve calcium scoring is an essential tool for diagnosing, treating, monitoring, and assessing the risk of aortic stenosis. The current gold standard for determining the aortic valve calcium score is computed tomography (CT). However, CT is costly and exposes patients to ionizing radiation, making it less ideal for frequent monitoring. Echocardiography, a safer and more affordable alternative that avoids radiation, is more widely accessible, but its variability between and within experts leads to subjective interpretations. Given these limitations, there is a clear need for an automated, objective method to measure the aortic valve calcium score from echocardiography, which could reduce costs and improve patient safety. In this paper, we first employ the YOLOv5 method to detect the region of interest in the aorta within echocardiography images. Building on this, we propose a novel approach that combines UNet and diffusion model architectures to segment calcified areas within the identified region, forming the foundation for automated aortic valve calcium scoring. This architecture leverages UNet's localization capabilities and the diffusion model's strengths in capturing fine-grained structures, enhancing both segmentation accuracy and consistency. The proposed method achieves 85.08% precision, 80.01% recall, and 71.13% Dice score on a novel dataset comprising 160 echocardiography images from 86 distinct patients. This system enables cardiologists to focus more on critical aspects of diagnosis by providing a faster, more objective, and cost-effective method for aortic valve calcium scoring and eliminating the risk of radiation exposure.
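The reported precision, recall, and Dice score are standard overlap metrics on binary segmentation masks; a minimal sketch on toy masks (illustrative, not the paper's data):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Precision, recall, and Dice score for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / gt.sum() if gt.sum() else 0.0
    dice = 2.0 * tp / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) else 1.0
    return precision, recall, dice

# Toy ground-truth and predicted calcification masks
gt = np.zeros((8, 8), dtype=np.uint8); gt[2:6, 2:6] = 1       # 16 calcified pixels
pred = np.zeros((8, 8), dtype=np.uint8); pred[3:6, 2:6] = 1   # 12 predicted, all inside gt
print(seg_metrics(pred, gt))  # precision 1.0, recall 0.75, Dice ~0.857
```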

Efficacy of an Automated Pulmonary Embolism (PE) Detection Algorithm on Routine Contrast-Enhanced Chest CT Imaging for Non-PE Studies.

Troutt HR, Huynh KN, Joshi A, Ling J, Refugio S, Cramer S, Lopez J, Wei K, Imanzadeh A, Chow DS

PubMed · Jun 25, 2025
The urgency to accelerate PE management and minimize patient risk has driven the development of artificial intelligence (AI) algorithms designed to provide a swift and accurate diagnosis in dedicated chest imaging (computed tomography pulmonary angiogram; CTPA) for suspected PE; however, the accuracy of AI algorithms in the detection of incidental PE in non-dedicated CT imaging studies remains unclear and untested. This study explores the potential for a commercial AI algorithm to identify incidental PE in non-dedicated contrast-enhanced CT chest imaging studies. The Viz PE algorithm was deployed to identify the presence of PE on 130 dedicated and 63 non-dedicated contrast-enhanced CT chest exams. The predictions for non-dedicated contrast-enhanced chest CT imaging studies were 90.48% accurate, with a sensitivity of 0.14 and specificity of 1.00. Our findings reflect that the Viz PE algorithm demonstrated an overall accuracy of 90.16%, with a specificity of 96% and a sensitivity of 41%. Although the high specificity is promising for ruling in PE, the low sensitivity highlights a limitation, as it indicates the algorithm may miss a substantial number of true-positive incidental PEs. This study demonstrates that commercial AI detection tools hold promise as integral support for detecting PE, particularly when there is a strong clinical indication for their use; however, current limitations in sensitivity, especially for incidental cases, underscore the need for ongoing radiologist oversight.
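The sensitivity, specificity, and accuracy figures above follow directly from confusion-matrix counts. As a worked check, one confusion matrix consistent with the reported non-dedicated results (63 exams; assuming 7 of them were PE-positive, a count the abstract does not state) is:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical split: 1 true positive, 6 false negatives, 56 true negatives
sens, spec, acc = diagnostic_metrics(tp=1, fn=6, tn=56, fp=0)
print(round(sens, 2), spec, round(100 * acc, 2))  # 0.14 1.0 90.48
```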

Computed tomography-derived quantitative imaging biomarkers enable the prediction of disease manifestations and survival in patients with systemic sclerosis.

Sieren MM, Grasshoff H, Riemekasten G, Berkel L, Nensa F, Hosch R, Barkhausen J, Kloeckner R, Wegner F

PubMed · Jun 25, 2025
Systemic sclerosis (SSc) is a complex inflammatory vasculopathy with diverse symptoms and variable disease progression. Despite its known impact on body composition (BC), clinical decision-making has yet to incorporate these biomarkers. This study aims to extract quantitative BC imaging biomarkers from CT scans to assess disease severity, define BC phenotypes, track changes over time, and predict survival. CT exams were extracted from a prospectively maintained cohort of 452 SSc patients; 128 patients with at least one CT exam were included. An artificial intelligence-based 3D body composition analysis (BCA) algorithm assessed muscle volume, different adipose tissue compartments, and bone mineral density. These parameters were analysed with regard to various clinical, laboratory, and functional parameters, as well as survival. Phenotypes were identified by K-means cluster analysis. Longitudinal evaluation of BCA changes employed regression analyses. A regression model using BCA parameters outperformed models based on body mass index and clinical parameters in predicting survival (area under the curve (AUC) = 0.75). Longitudinal development of the cardiac marker enabled prediction of survival with an AUC of 0.82. Patients with altered BCA parameters had increased odds ratios for various complications, including interstitial lung disease (p<0.05). Two distinct BCA phenotypes were identified, showing significant differences in gastrointestinal disease manifestations (p<0.01). This study highlights several parameters with the potential to reshape clinical pathways for SSc patients. Quantitative BCA biomarkers offer a means to predict survival and individual disease manifestations, in part outperforming established parameters. These insights open new avenues for research into the mechanisms driving body composition changes in SSc and for developing enhanced disease management tools, ultimately leading to more personalised and effective patient care.
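K-means phenotyping on standardised BCA parameters, as described above, can be sketched as follows; the feature names, group means, and cohort split here are synthetic illustrations, not the study's data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical BCA features per patient: muscle volume, two adipose tissue
# compartments, bone mineral density (synthetic values, two separable phenotypes)
bca = np.vstack([
    rng.normal([30, 10, 5, 150], [3, 2, 1, 15], size=(60, 4)),
    rng.normal([22, 18, 9, 120], [3, 2, 1, 15], size=(68, 4)),
])
# Standardise so no single feature dominates the Euclidean distance
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(bca))
print(np.bincount(labels))  # patients per phenotype cluster
```

Cluster labels would then be compared against clinical outcomes (e.g. gastrointestinal manifestations) to characterise the phenotypes.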

Generalizable medical image enhancement using structure-preserved diffusion models.

Chen L, Yu X, Li H, Lin H, Niu K, Li H

PubMed · Jun 25, 2025
Clinical medical images often suffer from compromised quality, which negatively impacts the diagnostic process by both clinicians and AI algorithms. While GAN-based enhancement methods have been commonly developed in recent years, delicate model training is necessary due to issues with artifacts, mode collapse, and instability. Diffusion models have shown promise in generating high-quality images superior to GANs, but challenges in training data collection and domain gaps hinder applying them for medical image enhancement. Additionally, preserving fine structures in enhancing medical images with diffusion models is still an area that requires further exploration. To overcome these challenges, we propose structure-preserved diffusion models for generalizable medical image enhancement (GEDM). GEDM leverages joint supervision from enhancement and segmentation to boost structure preservation and generalizability. Specifically, synthetic data is used to collect high-low quality paired training data with structure masks, and the Laplace transform is employed to reduce domain gaps and introduce multi-scale conditions. GEDM conducts medical image enhancement and segmentation jointly, supervised by high-quality references and structure masks from the training data. Four datasets of two medical imaging modalities were collected to implement the experiments, where GEDM outperformed state-of-the-art methods in image enhancement, as well as follow-up medical analysis tasks.

Optimization-based image reconstruction regularized with inter-spectral structural similarity for limited-angle dual-energy cone-beam CT.

Peng J, Wang T, Xie H, Qiu RLJ, Chang CW, Roper J, Yu DS, Tang X, Yang X

PubMed · Jun 25, 2025
Background: Limited-angle dual-energy (DE) cone-beam CT (CBCT) is considered a potential solution for fast, low-dose DE imaging on current CBCT scanners without hardware modification. However, its clinical implementation is hindered by the challenge of image reconstruction from limited-angle projections. While optimization-based and deep learning-based methods have been proposed for image reconstruction, their utilization is limited by the requirement for X-ray spectra measurement or paired datasets for model training. This work aims to facilitate the clinical application of fast, low-dose DE-CBCT by developing a practical solution for image reconstruction in limited-angle DE-CBCT.
Methods: An inter-spectral structural similarity-based regularization was integrated into the iterative image reconstruction for limited-angle DE-CBCT. By enforcing similarity between the DE images, limited-angle artifacts were efficiently reduced in the reconstructed DE-CBCT images. The proposed method was evaluated using two physical phantoms and three digital phantoms, demonstrating its efficacy in quantitative DE-CBCT imaging.
Results: In all studies, the proposed method achieved accurate image reconstruction without visible residual artifacts from limited-angle DE-CBCT projection data. In the digital phantom studies, the proposed method reduced the mean absolute error (MAE) from 309/290 HU to 14/20 HU, increased the peak signal-to-noise ratio (PSNR) from 40/39 dB to 70/67 dB, and improved the structural similarity index measure (SSIM) from 0.74/0.72 to 1.00/1.00.
Conclusions: The proposed method achieves accurate optimization-based image reconstruction in limited-angle DE-CBCT, showing great practical value for clinical implementation of limited-angle DE-CBCT.
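The MAE and PSNR figures quoted in this abstract follow the standard definitions; a minimal sketch on a toy CT image, where the 2000 HU dynamic range is an illustrative assumption rather than the paper's setting:

```python
import numpy as np

def mae_hu(img, ref):
    """Mean absolute error between two CT images, in Hounsfield units."""
    return float(np.mean(np.abs(img.astype(float) - ref.astype(float))))

def psnr_db(img, ref, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# A constant 20 HU offset over a toy image, assumed 2000 HU dynamic range
ref = np.zeros((16, 16))
img = ref + 20.0
print(mae_hu(img, ref), psnr_db(img, ref, data_range=2000))
```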

Contrast-enhanced image synthesis using latent diffusion model for precise online tumor delineation in MRI-guided adaptive radiotherapy for brain metastases.

Ma X, Ma Y, Wang Y, Li C, Liu Y, Chen X, Dai J, Bi N, Men K

PubMed · Jun 25, 2025
Background: Magnetic resonance imaging-guided adaptive radiotherapy (MRIgART) is a promising technique for long-course RT of large-volume brain metastasis (BM), owing to its capacity to track tumor changes throughout the treatment course. Contrast-enhanced T1-weighted (T1CE) MRI is essential for BM delineation, yet is often unavailable during online treatment because it requires contrast agent injection. This study aims to develop a synthetic T1CE (sT1CE) generation method to facilitate accurate online adaptive BM delineation.
Approach: We developed a novel ControlNet-coupled latent diffusion model (CTN-LDM) combined with a personalized transfer learning strategy and a denoising diffusion implicit model (DDIM) inversion method to generate high-quality sT1CE images from online T2-weighted (T2) or fluid-attenuated inversion recovery (FLAIR) images. The visual quality of sT1CE images generated by the CTN-LDM was compared with that of classical deep learning models. BM delineation results using the combination of our sT1CE images and online T2/FLAIR images were compared with results using online T2/FLAIR images alone, the current clinical method.
Main results: The visual quality of sT1CE images from our CTN-LDM was superior to that of classical models both quantitatively and qualitatively. Leveraging sT1CE images, radiation oncologists achieved significantly higher precision of adaptive BM delineation, with an average Dice similarity coefficient of 0.93 ± 0.02 vs. 0.86 ± 0.04 (p < 0.01), compared with using online T2/FLAIR images alone.
Significance: The proposed method can generate high-quality sT1CE images and significantly improve the accuracy of online adaptive tumor delineation for long-course MRIgART of large-volume BM, potentially enhancing treatment outcomes and minimizing toxicity.

Self-supervised learning for low-dose CT image denoising method based on guided image filtering.

He Y, Luo X, Wang C, Yu W

PubMed · Jun 25, 2025
Low-dose computed tomography (LDCT) images suffer from severe noise due to reduced radiation exposure. Most existing deep learning-based denoising methods require supervised learning with paired training data that are difficult to obtain. To address this limitation, we aim to develop a denoising method that does not rely on paired normal-dose CT (NDCT) data.
Approach: We propose a self-supervised denoising method based on guided image filtering (GIF) that requires only LDCT images for training. The method first applies GIF to generate pseudo-labels from LDCT images, enabling the network to learn noise distributions between inputs and pseudo-labels for denoising without paired data. An attention gate mechanism is then embedded in the decoder stage of a residual network to further enhance denoising performance.
Main results: Experimental results demonstrate that the proposed method achieves superior performance compared to state-of-the-art unsupervised denoising networks, a transformer-based denoising model, and post-processing methods, in terms of both visual quality and quantitative metrics. Furthermore, ablation studies analyzing the impact of different attention mechanisms and the number of attention gate mechanisms show that the proposed network architecture achieves optimal performance.
Significance: This work leverages self-supervised learning with GIF to generate pseudo-labels, enabling LDCT denoising without paired data. The embedded attention gate mechanism, supported by detailed ablation analysis, further enhances denoising performance by improving feature focus and structural preservation.
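Guided image filtering, as used here for pseudo-label generation, can be sketched with the classic box-filter formulation of He et al.; the radius, eps, and noise level below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=4, eps=0.04):
    """Edge-preserving guided image filter: guide image I, filtering input p."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)          # per-pixel linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

# Self-guided filtering of a noisy slice yields a smoothed pseudo-label
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))       # synthetic structure: a ramp
noisy = clean + rng.normal(scale=0.1, size=clean.shape)   # simulated LDCT noise
pseudo = guided_filter(noisy, noisy)
```

A denoising network would then be trained with `noisy` as input and `pseudo` as target, so no NDCT reference is needed.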
