Page 47 of 2352345 results

Noncontrast CT-based deep learning for predicting intracerebral hemorrhage expansion incorporating growth of intraventricular hemorrhage.

Ning Y, Yu Q, Fan X, Jiang W, Chen X, Jiang H, Xie K, Liu R, Zhou Y, Zhang X, Lv F, Xu X, Peng J

PubMed · Aug 31 2025
Intracerebral hemorrhage (ICH) is a severe form of stroke with high mortality and disability, in which early hematoma expansion (HE) critically influences prognosis. Previous studies suggest that revised hematoma expansion (rHE), defined to include intraventricular hemorrhage (IVH) growth, provides improved prognostic accuracy. This study therefore aimed to develop a deep learning model based on noncontrast CT (NCCT) to predict high-risk rHE in ICH patients, enabling timely intervention. A retrospective dataset of 775 spontaneous ICH patients with baseline and follow-up CT scans was collected from two centers and split into training (n = 389), internal-testing (n = 167), and external-testing (n = 219) cohorts. 2D/3D convolutional neural network (CNN) models based on ResNet-101, ResNet-152, DenseNet-121, and DenseNet-201 were separately developed using baseline NCCT images, and the activation areas of the best-performing deep learning model were visualized using gradient-weighted class activation mapping (Grad-CAM). Two baseline logistic regression clinical models, based on the BRAIN score and on independent clinical-radiologic predictors, were also developed, along with combined-logistic and combined-SVM models incorporating handcrafted radiomics features and clinical-radiologic factors. Model performance was assessed using the area under the receiver operating characteristic curve (AUC). The 2D-ResNet-101 model outperformed the others, with an AUC of 0.777 (95% CI, 0.716-0.830) in the external-testing set, surpassing the baseline clinical-radiologic model and the BRAIN score (AUC increases of 0.087, p = 0.022, and 0.119, p = 0.003). Compared with the combined-logistic and combined-SVM models, AUC increased by 0.083 (p = 0.029) and 0.074 (p = 0.058), respectively. The deep learning model identified ICH patients at high risk of rHE with better predictive performance than traditional baseline models based on clinical-radiologic variables and radiomics features.
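The Grad-CAM visualization step mentioned above can be sketched independently of any particular network: given the last convolutional block's activation maps and the gradients of the target-class score with respect to them, the map is a ReLU-ed, gradient-weighted sum over channels. A minimal NumPy sketch (function name and shapes are illustrative, not taken from the paper):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Gradient-weighted Class Activation Mapping over conv feature maps.

    activations: (K, H, W) feature maps from the last conv block
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: spatial average per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted channel sum -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for overlay
    return cam
```

In practice the resulting map is upsampled to the input CT slice's resolution and overlaid as a heatmap; frameworks extract the activations and gradients with hooks during a backward pass.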

Utilisation of artificial intelligence to enhance the detection rates of renal cancer on cross-sectional imaging: protocol for a systematic review and meta-analysis.

Ofagbor O, Bhardwaj G, Zhao Y, Baana M, Arkwazi M, Lami M, Bolton E, Heer R

PubMed · Aug 31 2025
The incidence of renal cell carcinoma has risen steadily due to the increased use of imaging that identifies incidental masses. Although survival has also improved because of early detection, overdiagnosis and overtreatment of benign renal masses carry significant morbidity, as patients with a suspected renal malignancy on imaging undergo invasive and risky procedures for a definitive diagnosis. Accurately characterising a renal mass as benign or malignant on imaging is therefore paramount to improving patient outcomes. Artificial intelligence (AI) offers a promising solution, augmenting traditional radiological diagnosis to increase detection accuracy. This review aims to investigate and summarise the current evidence on the diagnostic accuracy of AI in characterising renal masses on imaging. It will involve systematically searching the PubMed, MEDLINE, Embase, Web of Science, Scopus and Cochrane databases. Publications evaluating the use of fully or partially automated AI in cross-sectional imaging for diagnosing or characterising malignant renal tumours will be included if published in English between July 2016 and June 2025. The protocol adheres to the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols 2015 checklist. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool will be used to evaluate the quality and risk of bias across included studies. Furthermore, in line with Checklist for Artificial Intelligence in Medical Imaging recommendations, studies will be assessed for reporting the minimum necessary information on AI research. Ethical clearance is not required for this systematic review, and results will be disseminated through peer-reviewed publications and presentations at national and international conferences. PROSPERO registration: CRD42024529929.

Ultrasound-based detection and malignancy prediction of breast lesions eligible for biopsy: A multi-center clinical-scenario study using nomograms, large language models, and radiologist evaluation

Ali Abbasian Ardakani, Afshin Mohammadi, Taha Yusuf Kuzan, Beyza Nur Kuzan, Hamid Khorshidi, Ashkan Ghorbani, Alisa Mohebbi, Fariborz Faeghi, Sepideh Hatamikia, U Rajendra Acharya

arXiv preprint · Aug 31 2025
To develop and externally validate integrated ultrasound nomograms combining BIRADS features and quantitative morphometric characteristics, and to compare their performance with expert radiologists and state-of-the-art large language models in biopsy recommendation and malignancy prediction for breast lesions. In this retrospective multicenter, multinational study, 1747 women with pathologically confirmed breast lesions underwent ultrasound across three centers in Iran and Turkey. A total of 10 BIRADS and 26 morphological features were extracted from each lesion. BIRADS, morphometric, and fused nomograms (the last integrating both feature sets) were constructed via logistic regression. Three radiologists (one senior, two general) and two ChatGPT variants independently interpreted deidentified breast lesion images. Diagnostic performance for biopsy recommendation (BIRADS 4-5) and malignancy prediction was assessed in internal and two external validation cohorts. In pooled analysis, the fused nomogram achieved the highest accuracy for biopsy recommendation (83.0%) and malignancy prediction (83.8%), outperforming the morphometric nomogram, the three radiologists, and both ChatGPT models. Its AUCs were 0.901 and 0.853 for the two tasks, respectively. In addition, the performance of the BIRADS nomogram was significantly higher than that of the morphometric nomogram, the three radiologists, and both ChatGPT models for both biopsy recommendation and malignancy prediction. External validation confirmed robust generalizability across different ultrasound platforms and populations. An integrated BIRADS-morphometric nomogram consistently outperforms standalone models, LLMs, and radiologists in guiding biopsy decisions and predicting malignancy. These interpretable, externally validated tools have the potential to reduce unnecessary biopsies and enhance personalized decision making in breast imaging.
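At inference time, a logistic-regression nomogram of the kind described reduces to a weighted sum of the selected features plus an intercept, mapped through the inverse logit to a malignancy probability. A minimal sketch; the coefficient values below are made up for illustration, whereas the real ones would be fitted on the training cohort:

```python
import numpy as np

def nomogram_probability(feature_values, coefficients, intercept):
    """Evaluate a fitted logistic nomogram on one lesion's feature vector."""
    z = intercept + float(np.dot(coefficients, feature_values))  # linear predictor
    return 1.0 / (1.0 + np.exp(-z))                              # inverse logit

# Hypothetical fitted weights for three features (e.g., margin irregularity,
# aspect ratio, solidity) -- purely illustrative numbers.
coefs = np.array([1.2, -0.8, 0.5])
intercept = -0.3
```

A graphical nomogram is just this equation rendered as point scales, so the same function underlies both the chart and the reported AUCs.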

Adaptive Contrast Adjustment Module: A Clinically-Inspired Plug-and-Play Approach for Enhanced Fetal Plane Classification

Yang Chen, Sanglin Zhao, Baoyu Chen, Mans Gustaf

arXiv preprint · Aug 31 2025
Fetal ultrasound standard plane classification is essential for reliable prenatal diagnosis but faces inherent challenges, including low tissue contrast, boundary ambiguity, and operator-dependent image quality variations. To overcome these limitations, we propose a plug-and-play adaptive contrast adjustment module (ACAM), whose core design is inspired by the clinical practice of doctors adjusting image contrast to obtain clearer and more discriminative structural information. The module employs a shallow texture-sensitive network to predict clinically plausible contrast parameters, transforms input images into multiple contrast-enhanced views through differentiable mapping, and fuses them within downstream classifiers. Validated on a multi-center dataset of 12,400 images across six anatomical categories, the module consistently improves performance across diverse models, with accuracy of lightweight models increasing by 2.02 percent, accuracy of traditional models increasing by 1.29 percent, and accuracy of state-of-the-art models increasing by 1.15 percent. The innovation of the module lies in its content-aware adaptation capability, replacing random preprocessing with physics-informed transformations that align with sonographer workflows while improving robustness to imaging heterogeneity through multi-view fusion. This approach effectively bridges low-level image features with high-level semantics, establishing a new paradigm for medical image analysis under real-world image quality variations.
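The module's core idea, mapping one input frame to several contrast-adjusted views through a differentiable transform, can be illustrated with a fixed gamma family. In ACAM itself the parameters are predicted per image by the texture-sensitive network, so the gamma values below are placeholders, not the paper's learned outputs:

```python
import numpy as np

def contrast_views(image, gammas=(0.6, 1.0, 1.6)):
    """Map one ultrasound frame (intensities in [0, 1]) to contrast views.

    `gammas` stands in for parameters ACAM's shallow network would predict:
    gamma < 1 brightens and flattens contrast, gamma > 1 deepens it.
    The power map is differentiable, so gradients can flow back to the
    parameter-predicting network during end-to-end training.
    """
    img = np.clip(image.astype(float), 1e-6, 1.0)   # avoid 0**g edge cases
    return np.stack([img ** g for g in gammas])     # (num_views, H, W)
```

The stacked views would then be fed to the downstream classifier and fused, replacing random contrast augmentation with content-aware transformations.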

Resting-state fMRI Analysis using Quantum Time-series Transformer

Junghoon Justin Park, Jungwoo Seo, Sangyoon Bae, Samuel Yen-Chi Chen, Huan-Hsin Tseng, Jiook Cha, Shinjae Yoo

arXiv preprint · Aug 31 2025
Resting-state functional magnetic resonance imaging (fMRI) has emerged as a pivotal tool for revealing intrinsic brain network connectivity and identifying neural biomarkers of neuropsychiatric conditions. However, classical self-attention transformer models, despite their formidable representational power, struggle with quadratic complexity, large parameter counts, and substantial data requirements. To address these barriers, we introduce a Quantum Time-series Transformer, a novel quantum-enhanced transformer architecture leveraging Linear Combination of Unitaries and Quantum Singular Value Transformation. Unlike classical transformers, Quantum Time-series Transformer operates with polylogarithmic computational complexity, markedly reducing training overhead and enabling robust performance even with fewer parameters and limited sample sizes. Empirical evaluation on the largest-scale fMRI datasets from the Adolescent Brain Cognitive Development Study and the UK Biobank demonstrates that Quantum Time-series Transformer achieves comparable or superior predictive performance compared to state-of-the-art classical transformer models, with especially pronounced gains in small-sample scenarios. Interpretability analyses using SHapley Additive exPlanations further reveal that Quantum Time-series Transformer reliably identifies clinically meaningful neural biomarkers of attention-deficit/hyperactivity disorder (ADHD). These findings underscore the promise of quantum-enhanced transformers in advancing computational neuroscience by more efficiently modeling complex spatio-temporal dynamics and improving clinical interpretability.

Diagnostic Performance of CT-Based Artificial Intelligence for Early Recurrence of Cholangiocarcinoma: A Systematic Review and Meta-Analysis.

Chen J, Xi J, Chen T, Yang L, Liu K, Ding X

PubMed · Aug 30 2025
Despite AI models demonstrating high predictive accuracy for early cholangiocarcinoma (CCA) recurrence, their clinical application faces challenges such as reproducibility, generalizability, hidden biases, and uncertain performance across diverse datasets and populations, raising concerns about their practical applicability. This meta-analysis systematically assesses the diagnostic performance of artificial intelligence (AI) models utilizing computed tomography (CT) imaging to predict early recurrence of CCA. A systematic search was conducted in PubMed, Embase, and Web of Science for studies published up to May 2025. Studies were selected based on the PIRTOS framework. Participants (P): patients diagnosed with CCA (including intrahepatic and extrahepatic locations). Index test (I): AI techniques applied to CT imaging for early recurrence prediction (defined as within 1 year). Reference standard (R): pathological diagnosis or imaging follow-up confirming recurrence. Target condition (T): early recurrence of CCA (positive group: recurrence; negative group: no recurrence). Outcomes (O): sensitivity, specificity, diagnostic odds ratio (DOR), and area under the receiver operating characteristic curve (AUC), assessed in both internal and external validation cohorts. Setting (S): retrospective or prospective studies using hospital datasets. Methodological quality was assessed using an optimized version of the revised QUADAS-2 tool. Heterogeneity was assessed using the I² statistic. Pooled sensitivity, specificity, DOR, and AUC were calculated using a bivariate random-effects model. Nine studies with 30 datasets involving 1,537 patients were included. In internal validation cohorts, CT-based AI models showed a pooled sensitivity of 0.87 (95% CI: 0.81-0.92), specificity of 0.85 (95% CI: 0.79-0.89), DOR of 37.71 (95% CI: 18.35-77.51), and AUC of 0.93 (95% CI: 0.90-0.94).
In external validation cohorts, pooled sensitivity was 0.87 (95% CI: 0.81-0.91), specificity was 0.82 (95% CI: 0.77-0.86), DOR was 30.81 (95% CI: 18.79-50.52), and AUC was 0.85 (95% CI: 0.82-0.88). The AUC was significantly lower in external validation cohorts compared to internal validation cohorts (P < .001). Our results show that CT-based AI models predict early CCA recurrence with high performance in internal validation sets and moderate performance in external validation sets. However, the high heterogeneity observed may impact the robustness of these results. Future research should focus on prospective studies and establishing standardized gold standards to further validate the clinical applicability and generalizability of AI models.
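For intuition about the DOR figures above: the diagnostic odds ratio is the odds of a positive test in the diseased group divided by the odds of a positive test in the non-diseased group, expressible directly from sensitivity and specificity. Naively plugging in the pooled internal-validation estimates (0.87, 0.85) lands close to the reported pooled DOR, though the paper's 37.71 comes from the bivariate random-effects model rather than this point-estimate formula:

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (TP/FN) / (FP/TN), rewritten in terms of sensitivity and
    specificity: odds of testing positive when diseased vs. when not."""
    odds_positive_if_diseased = sensitivity / (1.0 - sensitivity)
    odds_positive_if_healthy = (1.0 - specificity) / specificity
    return odds_positive_if_diseased / odds_positive_if_healthy
```

A DOR of 1 means the test is uninformative; the internal-validation values here correspond to a DOR near 38, consistent with strong discrimination.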

Brain Atrophy Does Not Predict Clinical Progression in Progressive Supranuclear Palsy.

Quattrone A, Franzmeier N, Huppertz HJ, Seneca N, Petzold GC, Spottke A, Levin J, Prudlo J, Düzel E, Höglinger GU

PubMed · Aug 30 2025
Clinical progression rate is the typical primary endpoint measure in progressive supranuclear palsy (PSP) clinical trials. This longitudinal multicohort study investigated whether baseline clinical severity and regional brain atrophy could predict clinical progression in PSP-Richardson's syndrome (PSP-RS). PSP-RS patients (n = 309) from the placebo arms of clinical trials (NCT03068468, NCT01110720, NCT02985879, NCT01049399) and the DescribePSP cohort were included. We investigated associations of baseline clinical and volumetric magnetic resonance imaging (MRI) data with 1-year longitudinal PSP rating scale (PSPRS) change. Machine learning (ML) models were tested to predict individual clinical trajectories. PSP-RS patients showed a mean PSPRS score increase of 10.3 points/yr. The frontal lobe volume showed the strongest association with subsequent clinical progression (β: -0.34, P < 0.001). However, ML models did not accurately predict individual progression rates (R² < 0.15). Baseline clinical severity and brain atrophy could not predict individual clinical progression, suggesting no need for MRI-based stratification of patients in future PSP trials. © 2025 The Author(s). Movement Disorders published by Wiley Periodicals LLC on behalf of the International Parkinson and Movement Disorder Society.

Clinical Radiomics Nomogram Based on Ultrasound: A Tool for Preoperative Prediction of Uterine Sarcoma.

Zheng W, Lu A, Tang X, Chen L

PubMed · Aug 30 2025
This study aims to develop a noninvasive preoperative predictive model utilizing ultrasound radiomics combined with clinical characteristics to differentiate uterine sarcoma from leiomyoma. This study included 212 patients with uterine mesenchymal lesions (102 sarcomas and 110 leiomyomas). Clinical characteristics were systematically selected through both univariate and multivariate logistic regression analyses. A clinical model was constructed using the selected clinical characteristics. Radiomics features were extracted from transvaginal ultrasound images, and 6 machine learning algorithms were used to construct radiomics models. Then, a clinical radiomics nomogram was developed integrating clinical characteristics with radiomics signature. The effectiveness of these models in predicting uterine sarcoma was thoroughly evaluated. The area under the curve (AUC) was used to compare the predictive efficacy of the different models. The AUC of the clinical model was 0.835 (95% confidence interval [CI]: 0.761-0.883) and 0.791 (95% CI: 0.652-0.869) in the training and testing sets, respectively. The logistic regression model performed best in the radiomics model construction, with AUC values of 0.878 (95% CI: 0.811-0.918) and 0.818 (95% CI: 0.681-0.895) in the training and testing sets, respectively. The clinical radiomics nomogram performed well in differentiation, with AUC values of 0.955 (95% CI: 0.911-0.973) and 0.882 (95% CI: 0.767-0.936) in the training and testing sets, respectively. The clinical radiomics nomogram can provide more comprehensive and personalized diagnostic information, which is highly important for selecting treatment strategies and ultimately improving patient outcomes in the management of uterine mesenchymal tumors.
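The AUCs compared throughout this abstract can be computed without any model internals: AUC equals the Mann-Whitney probability that a randomly chosen positive case (here, a sarcoma) receives a higher predicted score than a randomly chosen negative case (a leiomyoma), with ties counting one half. A small sketch:

```python
import numpy as np

def auc_from_scores(scores, labels):
    """AUC as the Mann-Whitney statistic over all positive/negative pairs."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]                       # e.g., sarcoma cases
    neg = scores[labels == 0]                       # e.g., leiomyoma cases
    wins = (pos[:, None] > neg[None, :]).sum()      # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()     # equal scores count 0.5
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

Library implementations (e.g., scikit-learn's `roc_auc_score`) compute the same quantity via the ROC curve; confidence intervals like the ones reported here are typically obtained by bootstrap or DeLong's method on top of it.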

Multi-DECT Image-based Interpretable Model Incorporating Habitat Radiomics and Vision Transformer Deep Learning for Preoperative Prediction of Muscle Invasion in Bladder Cancer.

Du C, Wei W, Hu M, He J, Shen J, Liu Y, Li J, Liu L

PubMed · Aug 30 2025
This research evaluates the effectiveness of a multi-dual-energy CT (DECT) image-based interpretable model that integrates habitat radiomics with a 3D Vision Transformer (ViT) deep learning (DL) model for preoperatively predicting muscle invasion in bladder cancer (BCa). This retrospective study analyzed 200 BCa patients, who were divided into a training cohort (n=140) and a test cohort (n=60) in a 7:3 ratio. Univariate and multivariate analyses were performed on the DECT quantitative parameters to identify independent predictors, which were subsequently used to develop a DECT model. The K-means algorithm was employed to generate habitat sub-regions of BCa. A traditional radiomics (Rad) model, a habitat model, a ResNet-18 model, a ViT model, and fusion models were constructed from the 40, 70, and 100 keV virtual monochromatic images (VMIs) in DECT. All models were evaluated using the area under the receiver operating characteristic curve (AUC), calibration curves, decision curve analysis (DCA), the net reclassification index (NRI), and the integrated discrimination improvement (IDI). The SHAP method was employed to interpret the optimal model and visualize its decision-making process. The Habitat-ViT model demonstrated superior performance compared to the other single models, achieving an AUC of 0.997 (95% CI 0.992, 1.000) in the training cohort and 0.892 (95% CI 0.814, 0.971) in the test cohort. Incorporating the DECT quantitative parameters did not improve performance. DCA and calibration curve assessments indicated that the Habitat-ViT model provided a favorable net benefit and demonstrated strong calibration. Furthermore, SHAP clarified the decision-making processes underlying the model's predicted outcomes. A multi-DECT image-based interpretable model that integrates habitat radiomics with a ViT DL model holds promise for predicting muscle invasion status in BCa, providing valuable insights for personalized treatment planning and prognostic assessment.
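Habitat generation of the kind described partitions tumor voxels into sub-regions by clustering per-voxel features with K-means. A deterministic toy sketch (the feature choice, cluster count, and farthest-point seeding are illustrative assumptions; in practice one would use scikit-learn's `KMeans` on the paper's DECT-derived features):

```python
import numpy as np

def habitat_labels(voxel_features, k=3, iters=25):
    """Assign each tumor voxel to a habitat sub-region via K-means.

    voxel_features: (N, F) array, e.g. one row of multi-energy intensities
    per voxel inside the segmented tumor.
    """
    x = np.asarray(voxel_features, dtype=float)
    centers = [x[0]]                                 # farthest-point seeding
    for _ in range(k - 1):
        d = ((x[:, None, :] - np.asarray(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(x[d.argmax()])
    centers = np.asarray(centers)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):                           # Lloyd iterations
        labels = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            members = x[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels
```

Radiomics features are then extracted per habitat label rather than from the whole lesion, which is what distinguishes the habitat model from the traditional Rad model in this study.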

MSFE-GallNet-X: a multi-scale feature extraction-based CNN model for gallbladder disease analysis with enhanced explainability.

Nabil HR, Ahmed I, Das A, Mridha MF, Kabir MM, Aung Z

PubMed · Aug 30 2025
This study introduces MSFE-GallNet-X, a domain-adaptive deep learning model utilizing multi-scale feature extraction (MSFE) to improve the classification accuracy of gallbladder diseases from grayscale ultrasound images, while integrating explainable artificial intelligence (XAI) methods to enhance clinical interpretability. We developed a convolutional neural network-based architecture that automatically learns multi-scale features from a dataset comprising 10,692 high-resolution ultrasound images from 1,782 patients, covering nine gallbladder disease classes, including gallstones, cholecystitis, and carcinoma. The model incorporated Gradient-Weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME) to provide visual interpretability of diagnostic predictions. Model performance was evaluated using standard metrics, including accuracy and F1 score. MSFE-GallNet-X achieved a classification accuracy of 99.63% and an F1 score of 99.50%, outperforming state-of-the-art models including VGG-19 (98.89%) and DenseNet121 (91.81%), while maintaining greater parameter efficiency with only 1.91M parameters. Visualization through Grad-CAM and LIME highlighted the critical image regions influencing model predictions, supporting explainability for clinical use. MSFE-GallNet-X demonstrates strong performance on a controlled and balanced dataset, suggesting its potential as an AI-assisted tool for clinical decision-making in gallbladder disease management.
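Multi-scale feature extraction of the kind MSFE-GallNet-X learns can be illustrated, very loosely, by filtering the same image at several window sizes and stacking the responses channel-wise. The real model learns its multi-scale filters end-to-end with parallel convolution branches, so this fixed box-filter stand-in is only a sketch of the idea:

```python
import numpy as np

def multi_scale_features(image, scales=(3, 5, 7)):
    """Stack box-filter responses of one 2D image at several kernel sizes.

    A crude, fixed-weight stand-in for parallel learned conv branches:
    small windows keep fine texture, large windows capture coarse context.
    """
    feats = []
    H, W = image.shape
    for k in scales:
        pad = k // 2
        padded = np.pad(image, pad, mode="edge")     # same-size output
        out = np.zeros_like(image, dtype=float)
        for i in range(H):
            for j in range(W):
                out[i, j] = padded[i:i + k, j:j + k].mean()
        feats.append(out)
    return np.stack(feats)                           # (num_scales, H, W)
```

In a learned network, each branch's kernel weights are trained rather than fixed averages, and the stacked maps feed the next convolutional stage; the point of the sketch is only the parallel-scales-then-concatenate structure.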