Ning Y, Yu Q, Fan X, Jiang W, Chen X, Jiang H, Xie K, Liu R, Zhou Y, Zhang X, Lv F, Xu X, Peng J

PubMed paper · Aug 31, 2025
Intracerebral hemorrhage (ICH) is a severe form of stroke with high mortality and disability, in which early hematoma expansion (HE) critically influences prognosis. Previous studies suggest that revised hematoma expansion (rHE), defined to include intraventricular hemorrhage (IVH) growth, provides improved prognostic accuracy. This study therefore aimed to develop a deep learning model based on noncontrast CT (NCCT) to predict high-risk rHE in ICH patients, enabling timely intervention. A retrospective dataset of 775 spontaneous ICH patients with baseline and follow-up CT scans was collected from two centers and split into training (n = 389), internal-testing (n = 167), and external-testing (n = 219) cohorts. 2D/3D convolutional neural network (CNN) models based on ResNet-101, ResNet-152, DenseNet-121, and DenseNet-201 were developed separately using baseline NCCT images, and the activation areas of the best-performing model were visualized using gradient-weighted class activation mapping (Grad-CAM). Two baseline logistic regression clinical models, based on the BRAIN score and on independent clinical-radiologic predictors, were also developed, along with combined-logistic and combined-SVM models incorporating handcrafted radiomics features and clinical-radiologic factors. Model performance was assessed using the area under the receiver operating characteristic curve (AUC). The 2D-ResNet-101 model outperformed the others, with an AUC of 0.777 (95% CI, 0.716-0.830) in the external-testing set, surpassing the baseline clinical-radiologic model and the BRAIN score (AUC increases of 0.087, p = 0.022, and 0.119, p = 0.003). Compared with the combined-logistic and combined-SVM models, the AUC increased by 0.083 (p = 0.029) and 0.074 (p = 0.058), respectively. The deep learning model can identify ICH patients at high risk of rHE with better predictive performance than traditional baseline models built on clinical-radiologic variables and radiomics features.
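The Grad-CAM step mentioned above is a standard, reproducible technique. A minimal sketch with PyTorch hooks is shown below; the torchvision ResNet-101, the hooked layer, and the input preprocessing are illustrative assumptions, not the authors' code.

```python
# Minimal Grad-CAM sketch for a 2D ResNet-101 classifier (illustrative only).
import torch
import torch.nn.functional as F
from torchvision.models import resnet101

model = resnet101(num_classes=2).eval()  # assumed binary rHE classifier
feats, grads = {}, {}

def fwd_hook(module, inputs, output):
    feats["a"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    grads["a"] = grad_output[0].detach()

# Hook the last convolutional block, the usual Grad-CAM target.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed NCCT slice
score = model(x)[0, 1]                   # logit of the "high-risk rHE" class
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)       # pooled gradients per channel
cam = F.relu((weights * feats["a"]).sum(dim=1))           # weighted channel sum
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize heatmap to [0, 1]
```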

Ofagbor O, Bhardwaj G, Zhao Y, Baana M, Arkwazi M, Lami M, Bolton E, Heer R

PubMed paper · Aug 31, 2025
The incidence of renal cell carcinoma has risen steadily, driven by the increased use of imaging that identifies incidental masses. Although survival has also improved because of early detection, overdiagnosis and overtreatment of benign renal masses carry significant morbidity, as patients with a suspected renal malignancy on imaging undergo invasive and risky procedures to obtain a definitive diagnosis. Accurately characterising a renal mass as benign or malignant on imaging is therefore paramount to improving patient outcomes. Artificial intelligence (AI) offers an exciting solution, augmenting traditional radiological diagnosis to increase detection accuracy. This report aims to investigate and summarise the current evidence on the diagnostic accuracy of AI in characterising renal masses on imaging. This will involve systematically searching the PubMed, MEDLINE, Embase, Web of Science, Scopus and Cochrane databases. Studies that evaluated fully or partially automated AI applied to cross-sectional imaging for diagnosing or characterising malignant renal tumours will be included if published in English between July 2016 and June 2025. The protocol adheres to the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols 2015 checklist. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool will be used to evaluate the quality and risk of bias across included studies. Furthermore, in line with the Checklist for Artificial Intelligence in Medical Imaging recommendations, studies will be evaluated for reporting the minimum necessary information on AI research. Ethical clearance will not be necessary for conducting this systematic review, and results will be disseminated through peer-reviewed publications and presentations at both national and international conferences. PROSPERO registration number: CRD42024529929.

CPT Hartline AD, MAJ Hartvickson S, CPT Perdue MJ, CPT Sandoval C, LTC Walker JD, CPT Soules A, COL Mitchell CA

PubMed paper · Aug 31, 2025
Noncompressible truncal hemorrhage is a leading cause of preventable death in military prehospital settings, particularly in combat environments where advanced imaging is unavailable. The Focused Assessment with Sonography in Trauma (FAST) exam is critical for diagnosing intra-abdominal bleeding; however, Army medics typically lack formal ultrasound training. This study examines whether artificial intelligence (AI) assistance can enhance medics' proficiency in performing FAST exams, thereby improving the speed and accuracy of trauma triage in austere conditions. This prospective, randomized controlled trial involved 60 Army medics who performed 3-view abdominal FAST exams, both with and without AI assistance, using the EchoNous Kosmos device. Investigators randomized participants into 2 groups and evaluated them on time to completion, adequacy of imaging, and confidence in using the device. Two trained investigators assessed adequacy, and participants reported confidence in the device on a 5-point Likert scale. Data were analyzed using the t-test for parametric data, the Wilcoxon rank-sum test, and Cohen's kappa for interrater reliability. The AI-assisted group completed the FAST exam in an average of 142.57 seconds versus 143.87 seconds for the non-AI-assisted group (P = .9), demonstrating no statistically significant difference in time. However, the AI-assisted group demonstrated significantly higher adequacy in the left upper quadrant and pelvic views (P = .008 and P = .004, respectively). Participants in the AI-assisted group reported significantly higher confidence, with a median score of 4.00 versus 2.50 (P = .006). Interrater agreement was moderate to substantial, with Cohen's kappa values indicating significant reliability. AI assistance did not significantly reduce the time required to complete a FAST exam but improved image adequacy and user confidence. These findings suggest that AI tools can enhance the quality of FAST exams conducted by minimally trained medics in combat settings. Further research is needed to explore integrating AI-assisted ultrasound training into military medic curricula to optimize trauma care in austere environments.
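The analysis plan maps onto standard SciPy/scikit-learn calls. The sketch below uses fabricated completion times, confidence ratings, and adequacy calls purely to illustrate the three tests named in the abstract:

```python
# Illustrative re-creation of the abstract's statistical tests on fabricated data.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
t_ai = rng.normal(142.6, 30, 30)        # hypothetical AI-assisted completion times (s)
t_std = rng.normal(143.9, 30, 30)       # hypothetical unassisted completion times (s)
print(ttest_ind(t_ai, t_std))           # parametric comparison of mean times

conf_ai = rng.integers(3, 6, 30)        # hypothetical Likert confidence scores (3-5)
conf_std = rng.integers(1, 5, 30)
print(mannwhitneyu(conf_ai, conf_std))  # Wilcoxon rank-sum for ordinal scores

rater1 = rng.integers(0, 2, 60)         # hypothetical adequate/inadequate calls
rater2 = np.where(rng.random(60) < 0.8, rater1, 1 - rater1)  # 80% agreement
print(cohen_kappa_score(rater1, rater2))  # interrater reliability
```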

Olivé-Gadea M, Mayol J, Requena M, Rodrigo-Gisbert M, Rizzo F, Garcia-Tornel A, Simonetti R, Diana F, Muchada M, Pagola J, Rodriguez-Luna D, Rodriguez-Villatoro N, Rubiera M, Molina CA, Tomasello A, Hernandez D, de Dios Lascuevas M, Ribo M

PubMed paper · Aug 31, 2025
Rapid identification of large vessel occlusion (LVO) in acute ischemic stroke (AIS) is essential for reperfusion therapy. Screening tools, including Artificial Intelligence (AI) based algorithms, have been developed to accelerate detection but rely heavily on pre-test LVO prevalence. This study aimed to review LVO prevalence across clinical contexts and analyze its impact on AI-algorithm performance. We systematically reviewed studies reporting consecutive suspected AIS cohorts. Cohorts were grouped into four clinical scenarios based on patient selection criteria: (a) high suspicion of LVO by stroke specialists (direct-to-angiosuite candidates), (b) high suspicion of LVO according to pre-hospital scales, and (c) and (d) any suspected AIS without a severity cut-off, in a hospital or pre-hospital setting, respectively. We analyzed LVO prevalence in each scenario and assessed the false discovery rate (FDR), expressed as the number of positive studies needed to encounter one false positive, when applying eight commercially available LVO-detecting algorithms. We included 87 cohorts from 80 studies. Median LVO prevalence was (a) 84% (77-87%), (b) 35% (26-42%), (c) 19% (14-25%), and (d) 14% (8-22%). In the high-prevalence scenario (a), FDR ranged between 0.007 (1 false positive in 142 positives) and 0.023 (1 in 43), whereas in the low-prevalence scenarios (c and d), FDR ranged between 0.168 (1 in 6) and 0.543 (more than 1 in 2). To ensure meaningful clinical impact, AI algorithms must be evaluated within the specific populations and care pathways where they are applied.
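The prevalence dependence quantified here follows directly from Bayes' rule. The sketch below assumes an illustrative algorithm sensitivity of 0.96 and specificity of 0.95 (not figures from the study) to show how the FDR climbs as pre-test LVO prevalence falls:

```python
# FDR = FP / (TP + FP) as a function of pre-test prevalence (illustrative numbers).
def fdr(prevalence: float, sensitivity: float = 0.96, specificity: float = 0.95) -> float:
    tp = sensitivity * prevalence            # expected true-positive fraction
    fp = (1 - specificity) * (1 - prevalence)  # expected false-positive fraction
    return fp / (tp + fp)

for scenario, prev in [("(a) direct-to-angiosuite", 0.84), ("(b) pre-hospital scale", 0.35),
                       ("(c) hospital, any AIS", 0.19), ("(d) pre-hospital, any AIS", 0.14)]:
    rate = fdr(prev)
    print(f"{scenario}: prevalence {prev:.0%} -> FDR {rate:.3f} (1 in {1 / rate:.0f})")
```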

Yizhe Zhang, Qiang Chen, Tao Zhou

arXiv preprint · Aug 31, 2025
The emergence of powerful, general-purpose omnimodels capable of processing diverse data modalities has raised a critical question: can these "jack-of-all-trades" systems perform on par with highly specialized models in knowledge-intensive domains? This work investigates this question within the high-stakes field of medical image segmentation. We conduct a comparative study analyzing the zero-shot performance of a state-of-the-art omnimodel (Gemini 2.5 Pro, the "Nano Banana" model) against domain-specific deep learning models on three distinct tasks: polyp (endoscopy), retinal vessel (fundus), and breast tumor segmentation (ultrasound). Our study focuses on performance at the extremes by curating subsets of the "easiest" and "hardest" cases based on the specialist models' accuracy. Our findings reveal a nuanced and task-dependent landscape. For polyp and breast tumor segmentation, specialist models excel on easy samples, but the omnimodel demonstrates greater robustness on hard samples where specialists fail catastrophically. Conversely, for the fine-grained task of retinal vessel segmentation, the specialist model maintains superior performance across both easy and hard cases. Intriguingly, qualitative analysis suggests omnimodels may possess higher sensitivity, identifying subtle anatomical features missed by human annotators. Our results indicate that while current omnimodels are not yet a universal replacement for specialists, their unique strengths suggest a potential complementary role with specialist models, particularly in enhancing robustness on challenging edge cases.
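The easy/hard curation described above rests on a per-case segmentation score. A self-contained sketch of the Dice coefficient and a quantile-style split, using random masks in place of real predictions, might look like this:

```python
# Per-case Dice scores, then split cases into "easiest"/"hardest" subsets (illustrative).
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

rng = np.random.default_rng(1)
# random binary masks stand in for (specialist prediction, ground truth) pairs
cases = [(rng.random((64, 64)) > 0.5, rng.random((64, 64)) > 0.5) for _ in range(100)]
scores = np.array([dice(p, g) for p, g in cases])  # per-case specialist accuracy
order = np.argsort(scores)
hard, easy = order[:20], order[-20:]               # bottom/top quintiles by Dice
```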

Ali Abbasian Ardakani, Afshin Mohammadi, Taha Yusuf Kuzan, Beyza Nur Kuzan, Hamid Khorshidi, Ashkan Ghorbani, Alisa Mohebbi, Fariborz Faeghi, Sepideh Hatamikia, U Rajendra Acharya

arXiv preprint · Aug 31, 2025
To develop and externally validate integrated ultrasound nomograms combining BIRADS features and quantitative morphometric characteristics, and to compare their performance with expert radiologists and state-of-the-art large language models in biopsy recommendation and malignancy prediction for breast lesions. In this retrospective multicenter, multinational study, 1747 women with pathologically confirmed breast lesions underwent ultrasound across three centers in Iran and Turkey. A total of 10 BIRADS and 26 morphological features were extracted from each lesion. BIRADS, morphometric, and fused nomograms, the last integrating both feature sets, were constructed via logistic regression. Three radiologists (one senior, two general) and two ChatGPT variants independently interpreted deidentified breast lesion images. Diagnostic performance for biopsy recommendation (BIRADS 4 and 5) and malignancy prediction was assessed in internal and two external validation cohorts. In pooled analysis, the fused nomogram achieved the highest accuracy for biopsy recommendation (83.0%) and malignancy prediction (83.8%), outperforming the morphometric nomogram, the three radiologists, and both ChatGPT models. Its AUCs were 0.901 and 0.853 for the two tasks, respectively. In addition, the performance of the BIRADS nomogram was significantly higher than that of the morphometric nomogram, the three radiologists, and both ChatGPT models for biopsy recommendation and malignancy prediction. External validation confirmed robust generalizability across different ultrasound platforms and populations. An integrated BIRADS-morphometric nomogram consistently outperforms standalone models, LLMs, and radiologists in guiding biopsy decisions and predicting malignancy. These interpretable, externally validated tools have the potential to reduce unnecessary biopsies and enhance personalized decision making in breast imaging.
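Fusing the two feature sets in a logistic model is conceptually straightforward. The sketch below uses synthetic features with the same column counts (10 BIRADS, 26 morphometric); everything else is an assumption for illustration, not the study's pipeline:

```python
# Fused logistic model over concatenated BIRADS + morphometric features (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X_birads = rng.random((1747, 10))     # stand-in for 10 BIRADS features
X_morpho = rng.random((1747, 26))     # stand-in for 26 morphometric features
y = rng.integers(0, 2, 1747)          # stand-in malignancy labels

X = np.hstack([X_birads, X_morpho])   # the "fused" feature set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
fused = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# AUC near 0.5 here, since the synthetic data carry no signal
print("AUC:", roc_auc_score(y_te, fused.predict_proba(X_te)[:, 1]))
```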

Yang Chen, Sanglin Zhao, Baoyu Chen, Mans Gustaf

arXiv preprint · Aug 31, 2025
Fetal ultrasound standard plane classification is essential for reliable prenatal diagnosis but faces inherent challenges, including low tissue contrast, boundary ambiguity, and operator-dependent variations in image quality. To overcome these limitations, we propose a plug-and-play adaptive contrast adjustment module (ACAM), whose core design is inspired by the clinical practice of doctors adjusting image contrast to obtain clearer and more discriminative structural information. The module employs a shallow texture-sensitive network to predict clinically plausible contrast parameters, transforms input images into multiple contrast-enhanced views through differentiable mapping, and fuses them within downstream classifiers. Validated on a multi-center dataset of 12,400 images across six anatomical categories, the module consistently improves performance across diverse models, increasing the accuracy of lightweight models by 2.02 percent, traditional models by 1.29 percent, and state-of-the-art models by 1.15 percent. The innovation of the module lies in its content-aware adaptation capability: it replaces random preprocessing with physics-informed transformations that align with sonographer workflows while improving robustness to imaging heterogeneity through multi-view fusion. This approach effectively bridges low-level image features with high-level semantics, establishing a new paradigm for medical image analysis under real-world variations in image quality.
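One loose reading of such a module is a shallow network that predicts per-view gamma parameters and applies them differentiably before a downstream classifier; the published ACAM design may well differ in its parameterization and fusion. A sketch under those assumptions:

```python
# Loose sketch of an adaptive-contrast front end: a shallow network predicts per-view
# contrast (gamma) parameters, applies them differentiably, and stacks the views.
# Illustrates the general idea only; not the paper's exact module.
import torch
import torch.nn as nn

class AdaptiveContrast(nn.Module):
    def __init__(self, n_views: int = 3):
        super().__init__()
        self.param_net = nn.Sequential(            # shallow texture-sensitive predictor
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, n_views),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # map raw outputs to plausible gamma values in (0.5, 2.0)
        gammas = 0.5 + 1.5 * torch.sigmoid(self.param_net(x))   # (B, n_views)
        views = [x.clamp(1e-6, 1) ** g.view(-1, 1, 1, 1) for g in gammas.unbind(dim=1)]
        return torch.cat(views, dim=1)   # (B, n_views, H, W) for a downstream classifier

x = torch.rand(4, 1, 224, 224)           # normalized grayscale ultrasound batch
print(AdaptiveContrast()(x).shape)       # torch.Size([4, 3, 224, 224])
```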

Junghoon Justin Park, Jungwoo Seo, Sangyoon Bae, Samuel Yen-Chi Chen, Huan-Hsin Tseng, Jiook Cha, Shinjae Yoo

arXiv preprint · Aug 31, 2025
Resting-state functional magnetic resonance imaging (fMRI) has emerged as a pivotal tool for revealing intrinsic brain network connectivity and identifying neural biomarkers of neuropsychiatric conditions. However, classical self-attention transformer models, despite their formidable representational power, struggle with quadratic complexity, large parameter counts, and substantial data requirements. To address these barriers, we introduce the Quantum Time-series Transformer, a novel quantum-enhanced transformer architecture leveraging the Linear Combination of Unitaries and the Quantum Singular Value Transformation. Unlike classical transformers, the Quantum Time-series Transformer operates with polylogarithmic computational complexity, markedly reducing training overhead and enabling robust performance even with fewer parameters and limited sample sizes. Empirical evaluation on the largest-scale fMRI datasets from the Adolescent Brain Cognitive Development Study and the UK Biobank demonstrates that the Quantum Time-series Transformer achieves comparable or superior predictive performance relative to state-of-the-art classical transformer models, with especially pronounced gains in small-sample scenarios. Interpretability analyses using SHapley Additive exPlanations further reveal that the Quantum Time-series Transformer reliably identifies clinically meaningful neural biomarkers of attention-deficit/hyperactivity disorder (ADHD). These findings underscore the promise of quantum-enhanced transformers in advancing computational neuroscience by more efficiently modeling complex spatio-temporal dynamics and improving clinical interpretability.

Zhang Y, Chen M, Zhang Z

PubMed paper · Aug 30, 2025
From the early days of its adoption to the present, window setting has been an indispensable part of the Computed Tomography (CT) analysis process. Although research has investigated the capability of CT multi-window fusion to enhance neural networks, there remains a paucity of domain-invariant, intuitively interpretable methodologies for Auto Window Setting. In this work, we propose a plug-and-play module derived from the Tanh activation function. This module enables medical imaging neural network backbones to be deployed without manual CT window configuration. Its domain-invariant design makes it possible to observe the preference decisions rendered by the adaptive mechanism from a clinically intuitive perspective. We confirm the effectiveness of the proposed method on multiple open-source datasets: it allows direct training without manual window setting and yields improvements of 54%-127% in Dice, 14%-32% in Recall, and 94%-200% in Precision on hard segmentation targets. Experimental results conducted in the NVIDIA NGC environment demonstrate that the module facilitates efficient deployment of AI-powered medical imaging tasks. The proposed method enables automatic determination of CT window settings for specific downstream tasks in the development and deployment of mainstream medical imaging neural networks, demonstrating the potential to reduce associated deployment costs.
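A learnable Tanh-based windowing layer is one plausible reading of such a module; the initialization and structure below are assumptions, not the paper's specification. Raw Hounsfield units are squashed through a tanh with trainable center and width, so the window is learned end to end with the backbone:

```python
# Learnable tanh windowing of raw Hounsfield units (one plausible reading, not the
# paper's exact module): center/width are trained jointly with the backbone.
import torch
import torch.nn as nn

class TanhWindow(nn.Module):
    def __init__(self, center: float = 40.0, width: float = 400.0):
        super().__init__()
        self.center = nn.Parameter(torch.tensor(center))  # init near a soft-tissue window
        self.width = nn.Parameter(torch.tensor(width))

    def forward(self, hu: torch.Tensor) -> torch.Tensor:
        # squashes HU smoothly into (-1, 1); gradients flow to center/width
        return torch.tanh((hu - self.center) / (self.width.abs() + 1e-6))

hu = torch.randint(-1024, 3072, (2, 1, 128, 128)).float()  # raw CT intensities
backbone_input = TanhWindow()(hu)   # drop-in replacement for manual windowing
```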

Chen J, Xi J, Chen T, Yang L, Liu K, Ding X

PubMed paper · Aug 30, 2025
Despite AI models demonstrating high predictive accuracy for early cholangiocarcinoma (CCA) recurrence, their clinical application faces challenges such as reproducibility, generalizability, hidden biases, and uncertain performance across diverse datasets and populations, raising concerns about their practical applicability. This meta-analysis aims to systematically assess the diagnostic performance of artificial intelligence (AI) models utilizing computed tomography (CT) imaging to predict early recurrence of CCA. A systematic search was conducted in PubMed, Embase, and Web of Science for studies published up to May 2025. Studies were selected based on the PIRTOS framework. Participants (P): patients diagnosed with CCA (including intrahepatic and extrahepatic locations). Index test (I): AI techniques applied to CT imaging for early recurrence prediction (defined as within 1 year). Reference standard (R): pathological diagnosis or imaging follow-up confirming recurrence. Target condition (T): early recurrence of CCA (positive group: recurrence; negative group: no recurrence). Outcomes (O): sensitivity, specificity, diagnostic odds ratio (DOR), and area under the receiver operating characteristic curve (AUC), assessed in both internal and external validation cohorts. Setting (S): retrospective or prospective studies using hospital datasets. Methodological quality was assessed using an optimized version of the revised QUADAS-2 tool. Heterogeneity was assessed using the I² statistic. Pooled sensitivity, specificity, DOR, and AUC were calculated using a bivariate random-effects model. Nine studies with 30 datasets involving 1,537 patients were included. In internal validation cohorts, CT-based AI models showed a pooled sensitivity of 0.87 (95% CI: 0.81-0.92), specificity of 0.85 (95% CI: 0.79-0.89), DOR of 37.71 (95% CI: 18.35-77.51), and AUC of 0.93 (95% CI: 0.90-0.94). In external validation cohorts, pooled sensitivity was 0.87 (95% CI: 0.81-0.91), specificity was 0.82 (95% CI: 0.77-0.86), DOR was 30.81 (95% CI: 18.79-50.52), and AUC was 0.85 (95% CI: 0.82-0.88). The AUC was significantly lower in external than in internal validation cohorts (P < .001). Our results show that CT-based AI models predict early CCA recurrence with high performance in internal validation sets and moderate performance in external validation sets. However, the high heterogeneity observed may limit the robustness of these results. Future research should focus on prospective studies and on establishing standardized gold standards to further validate the clinical applicability and generalizability of AI models.
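The diagnostic odds ratio pooled here has a simple closed form per study. The sketch below computes sensitivity, specificity, and DOR from a single invented 2x2 table with the usual 0.5 continuity correction; the bivariate random-effects pooling itself is typically done in specialized packages and is not reproduced here:

```python
# Sensitivity, specificity, and diagnostic odds ratio from one study's 2x2 table.
def diagnostic_stats(tp: int, fp: int, fn: int, tn: int, cc: float = 0.5):
    # continuity correction guards against zero cells when forming the odds ratio
    tp, fp, fn, tn = (v + cc for v in (tp, fp, fn, tn))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)   # equivalently (sens/(1-sens)) / ((1-spec)/spec)
    return sens, spec, dor

sens, spec, dor = diagnostic_stats(tp=87, fp=15, fn=13, tn=85)  # invented counts
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, DOR={dor:.1f}")
```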