Zhao R, Wang Y, Wang J, Wang Z, Xiao R, Ming Y, Piao S, Wang J, Song L, Xu Y, Ma Z, Fan P, Sui X, Song W

PubMed · Aug 19 2025
Timely intervention in interstitial lung disease (ILD) is promising for attenuating lung function decline and improving clinical outcomes. Prone-position HRCT is essential for the early diagnosis of ILD but is limited by its high radiation exposure. This study aimed to explore whether deep learning reconstruction (DLR) could preserve image quality while reducing the radiation dose compared with hybrid iterative reconstruction (HIR) in prone-position scanning of patients with early-stage ILD. This study prospectively enrolled 21 patients with early-stage ILD. All patients underwent high-resolution CT (HRCT) and low-dose CT (LDCT) scans. HRCT images were reconstructed with HIR using standard settings, and LDCT images were reconstructed with DLR (lung/bone kernel) in a mild, standard, or strong setting. Overall image quality, image noise, streak artifacts, and visualization of normal and abnormal ILD features were analysed. The effective dose of LDCT was 1.22 ± 0.09 mSv, 63.7% less than the HRCT dose. The objective noise of the LDCT DLR images was 35.9-112.6% of that of the HRCT HIR images. The LDCT DLR was comparable to the HRCT HIR in terms of overall image quality. LDCT DLR (bone, strong) visualization of bronchiectasis and/or bronchiolectasis was significantly weaker than that of HRCT HIR (p = 0.046). The LDCT DLR (all settings) did not differ significantly from the HRCT HIR in the evaluation of other abnormal features, including ground-glass opacities (GGOs), architectural distortion, reticulation and honeycombing. With a 63.7% reduction in radiation dose, the overall image quality of LDCT DLR was comparable to that of HRCT HIR in prone scanning of patients with early ILD. This study supports DLR as a promising approach for maintaining image quality at a lower radiation dose in prone scanning, and it offers valuable insights for selecting image reconstruction algorithms for the diagnosis and follow-up of early ILD.
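For orientation, the snippet below back-calculates the HRCT effective dose implied by the reported LDCT dose (1.22 mSv) and the stated 63.7% reduction; the HRCT figure is derived from these two numbers rather than reported directly in the abstract.

```python
# Back-calculate the HRCT effective dose implied by the figures quoted above.
ldct_dose_msv = 1.22           # reported LDCT effective dose
reduction = 0.637              # reported dose reduction relative to HRCT
hrct_dose_msv = ldct_dose_msv / (1.0 - reduction)
print(f"Implied HRCT effective dose ~ {hrct_dose_msv:.2f} mSv")               # ~3.36 mSv
print(f"Dose saved per prone scan ~ {hrct_dose_msv - ldct_dose_msv:.2f} mSv")
```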

A Glemser P, Netzer N, H Ziener C, Wilhelm M, Hielscher T, Sun Zhang K, Görtz M, Schütz V, Stenzinger A, Hohenfellner M, Schlemmer HP, Bonekamp D

PubMed · Aug 19 2025
According to PI-RADS v2.1, peripheral-zone PI-RADS 3 lesions are upgraded to PI-RADS 4 if dynamic contrast-enhanced (DCE) MRI is positive (3+1 lesions); however, these lesions are radiologically challenging. We aimed to define criteria by expert consensus, test their applicability by other radiologists for predicting significant prostate cancer (sPC) in PI-RADS 3+1 lesions, and determine their value in integrated regression models. From consecutive 3 Tesla MR examinations performed between 08/2016 and 12/2018, we identified 85 MRI examinations from 83 patients with a total of 94 PI-RADS 3+1 lesions in the official clinical report. Lesions were retrospectively assessed by expert consensus with construction of a newly devised feature catalogue, which was subsequently used by two additional radiologists specialized in prostate MRI for independent lesion assessment. With reference to extended fused targeted and systematic TRUS/MRI-biopsy histopathological correlation, relevant catalogue features were identified by univariate analysis and put into context with typically available clinical features and automated AI image assessment using lasso-penalized logistic regression models, also focusing on the contribution of DCE imaging (feature-based, bi- and multiparametric AI-enhanced, and solely bi- and multiparametric AI-driven). The feature catalogue enabled image-based lesional risk stratification for all readers. Expert consensus provided 3 significant features in univariate analysis (adj. p-value <0.05; most relevant feature T2w configuration: "irregular/microlobulated/spiculated", OR 9.0 (95% CI 2.3-44.3); adj. p-value: 0.016). These remained after lasso-penalized regression-based feature reduction, while the only selected clinical feature was prostate volume (OR < 1), enabling nomogram construction. While DCE-derived consensus features did not enhance model performance (bootstrapped AUC), there was a trend toward increased performance when multiparametric, but not biparametric, AI was included, both for combined and AI-only models. PI-RADS 3+1 lesions can be risk-stratified using lexicon terms and a key-feature nomogram. AI potentially benefits more from DCE imaging than experienced prostate radiologists do.
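As a rough illustration of the lasso-penalized logistic regression step described above, the sketch below fits an L1-penalized model and reports odds ratios for the features it retains; the feature names, data, and penalty strength are hypothetical stand-ins, not the study's variables.

```python
# Minimal sketch of lasso-penalized logistic regression for feature selection.
# Feature names, labels, and the penalty strength C are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

feature_names = ["t2w_irregular_margin", "dwi_marked_restriction", "dce_focal_enhancement",
                 "psa_density", "prostate_volume_ml"]          # hypothetical catalogue + clinical features
rng = np.random.default_rng(0)
X = rng.normal(size=(94, len(feature_names)))                  # 94 lesions, as in the cohort above
y = (rng.random(94) < 0.3).astype(int)                         # placeholder sPC labels

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),  # C controls the lasso penalty
)
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_.ravel()
for name, beta in zip(feature_names, coefs):
    if abs(beta) > 1e-6:                                       # lasso drives uninformative features to zero
        print(f"{name}: OR per SD = {np.exp(beta):.2f}")
```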

Yao F, Lin H, Xue YN, Zhuang YD, Bian SY, Zhang YY, Yang YJ, Pan KH

PubMed · Aug 19 2025
This study aimed to construct a multimodal imaging deep learning (DL) model integrating mpMRI and 18F-PSMA PET/CT for the prediction of extraprostatic extension (EPE) in prostate cancer, and to assess its effectiveness in enhancing the diagnostic accuracy of radiologists. Clinical and imaging data were retrospectively collected from patients with pathologically confirmed prostate cancer (PCa) who underwent radical prostatectomy (RP). Data were collected from a primary institution (Center 1, n = 197) between January 2019 and June 2022 and an external institution (Center 2, n = 36) between July 2021 and November 2022. A multimodal DL model incorporating mpMRI and 18F-PSMA PET/CT was developed to support radiologists in assessing EPE using the EPE-grade scoring system. The predictive performance of the DL model was compared with that of single-modality models, as well as with radiologist assessments with and without model assistance. Clinical net benefit of the model was also assessed. For patients in Center 1, the area under the curve (AUC) for predicting EPE was 0.76 (0.72-0.80), 0.77 (0.70-0.82), and 0.82 (0.78-0.87) for the mpMRI-based DL model, the PET/CT-based DL model, and the combined mpMRI + PET/CT multimodal DL model, respectively. In the external test set (Center 2), the AUCs for these models were 0.75 (0.60-0.88), 0.77 (0.72-0.88), and 0.81 (0.63-0.97), respectively. The multimodal DL model demonstrated superior predictive accuracy compared to single-modality models in both internal and external validation. The deep learning-assisted EPE-grade scoring model significantly improved AUC and sensitivity compared to radiologist EPE-grade scoring alone (P < 0.05), with a modest reduction in specificity. Additionally, the deep learning-assisted scoring model provided greater clinical net benefit than the radiologist EPE-grade score alone. The multimodal imaging deep learning model, integrating mpMRI and 18F-PSMA PET/CT, demonstrates promising predictive performance for EPE in prostate cancer and enhances the accuracy of radiologists in EPE assessment. The model holds potential as a supportive tool for more individualized and precise therapeutic decision-making.
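The abstract does not detail the fusion architecture; the sketch below shows one plausible late-fusion design in which separate mpMRI and PET/CT encoders feed a shared EPE classification head. Backbones, feature sizes, and input shapes are assumptions for illustration only.

```python
# Illustrative late fusion of mpMRI and PET/CT features for EPE prediction (PyTorch).
import torch
import torch.nn as nn

class MultimodalEPEClassifier(nn.Module):
    def __init__(self, mri_dim=512, pet_dim=512, hidden=256):
        super().__init__()
        # Placeholder encoders standing in for the modality-specific backbones
        self.mri_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(mri_dim), nn.ReLU())
        self.pet_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(pet_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(mri_dim + pet_dim, hidden), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(hidden, 1),                        # single logit for extraprostatic extension
        )

    def forward(self, mri_volume, pet_volume):
        fused = torch.cat([self.mri_encoder(mri_volume), self.pet_encoder(pet_volume)], dim=1)
        return self.head(fused)

model = MultimodalEPEClassifier()
logit = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 1, 32, 64, 64))  # toy volumes
prob = torch.sigmoid(logit)                              # per-patient EPE probability
```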

Chen S, Zhong L, Zhang Z, Zhang X

PubMed · Aug 19 2025
To address the difficulty of bone and cartilage subregion segmentation in knee MRI, caused by the large number of subregions and their unclear boundaries, a fully automatic knee subregion segmentation network based on tissue segmentation and anatomical geometry is proposed. Specifically, we first use a transformer-based multilevel region and edge aggregation network to achieve precise segmentation of bone and cartilage tissue edges in knee MRI. We then design a fibula detection module, which determines the medial and lateral sides of the knee from the position of the fibula. Afterwards, a subregion segmentation module based on boundary information divides the bone and cartilage tissues into subregions by detecting their boundaries. In addition, to provide data support for the proposed model, a fibula classification dataset and a knee MRI bone and cartilage subregion dataset were established. On the fibula classification dataset we established, the proposed method achieved an accuracy of 1.000 in distinguishing the medial and lateral sides of the knee. On the knee MRI bone and cartilage subregion dataset we established, the proposed method attained an average Dice score of 0.953 for bone subregions and 0.831 for cartilage subregions, which verifies the effectiveness of the proposed method.
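For reference, the Dice scores quoted above measure volumetric overlap between predicted and ground-truth subregion labels; a minimal computation is sketched below on toy label maps (the label layout is illustrative, not the paper's subregion scheme).

```python
# Minimal Dice-score sketch for multi-label subregion masks.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, label: int) -> float:
    """Dice overlap for one subregion label between predicted and ground-truth masks."""
    p, t = (pred == label), (target == label)
    denom = p.sum() + t.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, t).sum() / denom

rng = np.random.default_rng(0)
gt = rng.integers(0, 5, size=(8, 64, 64))     # toy 3D label map: 4 subregions + background
pred = gt.copy()
pred[:, :8] = 0                               # perturb to simulate imperfect segmentation
print(np.mean([dice_score(pred, gt, k) for k in range(1, 5)]))   # mean Dice over subregions
```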

Wang C, Liu L, Fan C, Zhang Y, Mai Z, Li L, Liu Z, Tian Y, Hu J, Elazab A

PubMed · Aug 19 2025
Accurately identifying the stages of lung adenocarcinoma is essential for selecting the most appropriate treatment plan. Nonetheless, this task is complicated by challenges such as integrating diverse data, similarities among subtypes, and the need to capture contextual features, making precise differentiation difficult. We address these challenges and propose a multimodal deep neural network that integrates computed tomography (CT) images, annotated lesion bounding boxes, and electronic health records. Our model first combines bounding boxes carrying precise lesion location data with the CT scans, generating a richer semantic representation through feature extraction from regions of interest to enhance localization accuracy using a vision transformer module. Beyond imaging data, the model also incorporates clinical information encoded with a fully connected encoder. Features extracted from the CT and clinical data are optimized for cosine similarity using a contrastive language-image pre-training (CLIP) module, ensuring they are cohesively integrated. In addition, we introduce an attention-based feature fusion module that harmonizes these features into a unified representation, further fusing information across modalities. This integrated feature set is then fed into a classifier that distinguishes among the three types of adenocarcinoma. Finally, we employ focal loss to mitigate the effects of class imbalance and a contrastive learning loss to enhance feature representation and improve the model's performance. Our experiments on public and proprietary datasets demonstrate the efficiency of our model, achieving a superior validation accuracy of 81.42% and an area under the curve of 0.9120. These results significantly outperform recent multimodal classification approaches. The code is available at https://github.com/fancccc/LungCancerDC.
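Of the components above, the focal loss has a standard closed form; a hedged sketch is given below, with the focusing parameter chosen for illustration rather than taken from the paper.

```python
# Sketch of a multiclass focal loss for the three adenocarcinoma categories.
# The focusing parameter gamma is an illustrative choice, not the authors' setting.
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Cross-entropy down-weighted for easy examples, so minority classes matter more."""
    log_probs = F.log_softmax(logits, dim=1)
    pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()   # probability of the true class
    ce = F.nll_loss(log_probs, targets, reduction="none")
    return ((1.0 - pt) ** gamma * ce).mean()

logits = torch.randn(16, 3)                    # 3 adenocarcinoma classes
targets = torch.randint(0, 3, (16,))
print(focal_loss(logits, targets))
```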

Gurumurthy G, Kisiel F, Reynolds L, Thomas W, Othman M, Arachchillage DJ, Thachil J

PubMed · Aug 19 2025
Venous thromboembolism (VTE) remains a leading cause of cardiovascular morbidity and mortality, despite advances in imaging and anticoagulation. VTE arises from diverse and overlapping risk factors, such as inherited thrombophilia, immobility, malignancy, surgery or trauma, pregnancy, hormonal therapy, obesity, chronic medical conditions (e.g., heart failure, inflammatory disease), and advancing age. Clinicians therefore face challenges in balancing the benefits of thromboprophylaxis against the bleeding risk. Existing clinical risk scores often exhibit only modest discrimination and calibration across heterogeneous patient populations. Machine learning (ML) has emerged as a promising tool to address these limitations. In imaging, convolutional neural networks and hybrid algorithms can detect VTE on CT pulmonary angiography with areas under the curve (AUCs) of 0.85 to 0.96. In surgical cohorts, gradient-boosting models outperform traditional risk scores, achieving AUCs between 0.70 and 0.80 in predicting postoperative VTE. In cancer-associated venous thrombosis, advanced ML models demonstrate AUCs between 0.68 and 0.82, although concerns about bias and external validation persist. Bleeding risk prediction remains challenging in extended anticoagulation settings, with ML models often performing no better than conventional models. Neural networks for predicting recurrent VTE showed AUCs of 0.93 to 0.99 in initial studies, but these lack transparency and prospective validation. Most ML models suffer from limited external validation, "black box" algorithms, and hurdles to integration within clinical workflows. Future efforts should focus on standardized reporting (e.g., Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis [TRIPOD]-ML), transparent model interpretation, prospective impact assessment, and seamless incorporation into electronic health records to realize the full potential of ML in VTE.

Chawla V, Cizmeci MN, Sullivan KM, Gritz EC, Q Cardona V, Menkiti O, Natarajan G, Rao R, McAdams RM, Dizon ML

PubMed · Aug 19 2025
Neonatal encephalopathy (NE) from presumed hypoxic-ischemic encephalopathy (pHIE) is a leading cause of morbidity and mortality in infants worldwide. Recent advancements in HIE research have introduced promising tools for improved screening of high-risk infants, reduced time to diagnosis, and more accurate assessment of neurologic injury to guide management and predict outcomes, some of which integrate artificial intelligence (AI) and machine learning (ML). This review begins with an overview of AI/ML before examining emerging prognostic approaches for predicting outcomes in pHIE. It explores various modalities, including placental and fetal biomarkers, gene expression, electroencephalography, brain magnetic resonance imaging and other advanced neuroimaging techniques, clinical video assessment tools, and transcranial magnetic stimulation paired with electromyography. Each of these approaches may come to play a crucial role in predicting outcomes in pHIE. We also discuss the application of AI/ML to enhance these emerging prognostic tools. While further validation is needed for widespread clinical adoption, these tools and their multimodal integration hold the potential to better leverage the neuroplasticity windows of affected infants. IMPACT: This article provides an overview of placental pathology, biomarkers, gene expression, electroencephalography, motor assessments, brain imaging, and transcranial magnetic stimulation tools for long-term neurodevelopmental outcome prediction following neonatal encephalopathy, all of which lend themselves to augmentation by artificial intelligence/machine learning (AI/ML). Emerging AI/ML tools may create opportunities for enhanced prognostication through multimodal analyses.

Rizwan Ahamed, Annahita Amireskandari, Joel Palko, Carol Laxson, Binod Bhattarai, Prashnna Gyawali

arXiv preprint · Aug 19 2025
The clinical deployment of deep learning models for high-stakes tasks such as diabetic retinopathy (DR) grading requires demonstrable reliability. While models achieve high accuracy, their clinical utility is limited by a lack of robust uncertainty quantification. Conformal prediction (CP) offers a distribution-free framework to generate prediction sets with statistical guarantees of coverage. However, the interaction between standard training practices like data augmentation and the validity of these guarantees is not well understood. In this study, we systematically investigate how different data augmentation strategies affect the performance of conformal predictors for DR grading. Using the DDR dataset, we evaluate two backbone architectures -- ResNet-50 and a Co-Scale Conv-Attentional Transformer (CoaT) -- trained under five augmentation regimes: no augmentation, standard geometric transforms, CLAHE, Mixup, and CutMix. We analyze the downstream effects on conformal metrics, including empirical coverage, average prediction set size, and correct efficiency. Our results demonstrate that sample-mixing strategies like Mixup and CutMix not only improve predictive accuracy but also yield more reliable and efficient uncertainty estimates. Conversely, methods like CLAHE can negatively impact model certainty. These findings highlight the need to co-design augmentation strategies with downstream uncertainty quantification in mind to build genuinely trustworthy AI systems for medical imaging.
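For readers unfamiliar with conformal prediction, the split-conformal recipe the paper builds on can be sketched in a few lines: calibrate a threshold on held-out softmax scores, then return every DR grade that clears it. The toy data, the 1 - p_true nonconformity score, and alpha below are illustrative assumptions, not the paper's exact setup.

```python
# Split conformal prediction sketch for 5-grade DR classification.
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Quantile of calibration nonconformity scores giving ~(1 - alpha) marginal coverage."""
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]   # 1 - p(true class)
    n = len(scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q, method="higher")

def prediction_sets(test_probs, qhat):
    """All grades whose nonconformity score falls within the calibrated threshold."""
    return [np.where(1.0 - p <= qhat)[0].tolist() for p in test_probs]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)     # toy calibration softmax outputs
cal_labels = rng.integers(0, 5, size=200)
qhat = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
print(prediction_sets(rng.dirichlet(np.ones(5), size=3), qhat))
```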

Sebastian Ibarra, Javier del Riego, Alessandro Catanese, Julian Cuba, Julian Cardona, Nataly Leon, Jonathan Infante, Karim Lekadir, Oliver Diaz, Richard Osuala

arXiv preprint · Aug 19 2025
Dynamic contrast-enhanced (DCE) MRI is essential for breast cancer diagnosis and treatment. However, its reliance on contrast agents introduces safety concerns, contraindications, increased cost, and workflow complexity. To this end, we present pre-contrast conditioned denoising diffusion probabilistic models to synthesize DCE-MRI, introducing, evaluating, and comparing a total of 22 generative model variants in both single-breast and full breast settings. Towards enhancing lesion fidelity, we introduce both tumor-aware loss functions and explicit tumor segmentation mask conditioning. Using a public multicenter dataset and comparing to respective pre-contrast baselines, we observe that subtraction image-based models consistently outperform post-contrast-based models across five complementary evaluation metrics. Apart from assessing the entire image, we also separately evaluate the region of interest, where both tumor-aware losses and segmentation mask inputs improve evaluation metrics. The latter notably enhance qualitative results capturing contrast uptake, albeit assuming access to tumor localization inputs that are not guaranteed to be available in screening settings. A reader study involving 2 radiologists and 4 MRI technologists confirms the high realism of the synthetic images, indicating an emerging clinical potential of generative contrast-enhancement. We share our codebase at https://github.com/sebastibar/conditional-diffusion-breast-MRI.
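The exact form of the tumor-aware losses is not given in the abstract; one common construction, shown below as a hedged sketch, up-weights the diffusion noise-prediction error inside the tumor segmentation mask. The weighting scheme is an assumption for illustration only.

```python
# Hedged sketch of a mask-weighted ("tumor-aware") diffusion training loss (PyTorch).
import torch

def tumor_aware_loss(pred_noise, true_noise, tumor_mask, tumor_weight: float = 5.0):
    """Pixelwise MSE with extra weight on voxels inside the tumor mask."""
    weights = 1.0 + (tumor_weight - 1.0) * tumor_mask        # 1 outside the mask, tumor_weight inside
    return (weights * (pred_noise - true_noise) ** 2).mean()

pred = torch.randn(2, 1, 128, 128)                           # predicted noise for toy DCE slices
target = torch.randn(2, 1, 128, 128)                         # ground-truth noise
mask = (torch.rand(2, 1, 128, 128) > 0.95).float()           # sparse toy tumor region
print(tumor_aware_loss(pred, target, mask))
```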

Justin Yiu, Kushank Arora, Daniel Steinberg, Rohit Ghiya

arXiv preprint · Aug 19 2025
Magnetic Resonance Imaging (MRI) is an essential diagnostic tool for assessing knee injuries. However, manual interpretation of MRI slices remains time-consuming and prone to inter-observer variability. This study presents a systematic evaluation of various deep learning architectures combined with explainable AI (xAI) techniques for automated region of interest (ROI) detection in knee MRI scans. We investigate both supervised and self-supervised approaches, including ResNet50, InceptionV3, Vision Transformers (ViT), and multiple U-Net variants augmented with multi-layer perceptron (MLP) classifiers. To enhance interpretability and clinical relevance, we integrate xAI methods such as Grad-CAM and Saliency Maps. Model performance is assessed using AUC for classification and PSNR/SSIM for reconstruction quality, along with qualitative ROI visualizations. Our results demonstrate that ResNet50 consistently excels in classification and ROI identification, outperforming transformer-based models under the constraints of the MRNet dataset. While hybrid U-Net + MLP approaches show potential for leveraging spatial features in reconstruction and interpretability, their classification performance remains lower. Grad-CAM consistently provided the most clinically meaningful explanations across architectures. Overall, CNN-based transfer learning emerges as the most effective approach for this dataset, while future work with larger-scale pretraining may better unlock the potential of transformer models.
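As a pointer for readers, Grad-CAM (the method the study found most clinically meaningful) can be reproduced with a few hooks on a CNN backbone; the sketch below uses an untrained ResNet-50 and a random input, so the layer choice and preprocessing are illustrative assumptions rather than the study's pipeline.

```python
# Minimal Grad-CAM sketch on a ResNet-50 backbone (PyTorch / torchvision).
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()
activations, gradients = {}, {}
layer = model.layer4                                      # last convolutional stage

layer.register_forward_hook(lambda m, i, o: activations.update(feat=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(grad=go[0]))

x = torch.randn(1, 3, 224, 224)                           # stand-in for a preprocessed MRI slice
scores = model(x)
scores[0, scores.argmax()].backward()                     # gradient of the top-class score

w = gradients["grad"].mean(dim=(2, 3), keepdim=True)      # channel weights from pooled gradients
cam = F.relu((w * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized ROI heatmap
```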