Page 138 of 140 · 1393 results

Artificial Intelligence in Vascular Neurology: Applications, Challenges, and a Review of AI Tools for Stroke Imaging, Clinical Decision Making, and Outcome Prediction Models.

Alqadi MM, Vidal SGM

pubmed · May 9, 2025
Artificial intelligence (AI) promises to compress stroke treatment timelines, yet its clinical return on investment remains uncertain. We interrogate state‑of‑the‑art AI platforms across imaging, workflow orchestration, and outcome prediction to clarify value drivers and execution risks. Convolutional, recurrent, and transformer architectures now trigger large‑vessel‑occlusion alerts, delineate ischemic core in seconds, and forecast 90‑day function. Commercial deployments (RapidAI, Viz.ai, Aidoc) report double‑digit reductions in door‑to‑needle metrics and expanded thrombectomy eligibility. However, dataset bias, opaque reasoning, and limited external validation constrain scalability. Hybrid image‑plus‑clinical models elevate predictive accuracy but intensify data‑governance demands. AI can operationalize precision stroke care, but enterprise‑grade adoption requires federated data pipelines, explainable‑AI dashboards, and fit‑for‑purpose regulation. Prospective multicenter trials and continuous lifecycle surveillance are mandatory to convert algorithmic promise into reproducible, equitable patient benefit.

Comparison between multimodal foundation models and radiologists for the diagnosis of challenging neuroradiology cases with text and images.

Le Guellec B, Bruge C, Chalhoub N, Chaton V, De Sousa E, Gaillandre Y, Hanafi R, Masy M, Vannod-Michel Q, Hamroun A, Kuchcinski G

pubmed · May 9, 2025
The purpose of this study was to compare the ability of two multimodal models (GPT-4o and Gemini 1.5 Pro) with that of radiologists to generate differential diagnoses from textual context alone, key images alone, or a combination of both using complex neuroradiology cases. This retrospective study included neuroradiology cases from the "Diagnosis Please" series published in the Radiology journal between January 2008 and September 2024. The two multimodal models were asked to provide three differential diagnoses from textual context alone, key images alone, or the complete case. Six board-certified neuroradiologists solved the cases in the same setting, randomly assigned to two groups: context alone first and images alone first. Three radiologists solved the cases without, and then with, the assistance of Gemini 1.5 Pro. An independent radiologist evaluated the quality of the image descriptions provided by GPT-4o and Gemini for each case. Differences in correct answers between multimodal models and radiologists were analyzed using the McNemar test. GPT-4o and Gemini 1.5 Pro outperformed radiologists using clinical context alone (mean accuracy, 34.0 % [18/53] and 44.7 % [23.7/53] vs. 16.4 % [8.7/53]; both P < 0.01). Radiologists outperformed GPT-4o and Gemini 1.5 Pro using images alone (mean accuracy, 42.1 % [22.3/53] vs. 3.8 % [2/53], and 7.5 % [4/53]; both P < 0.01) and the complete cases (48.0 % [25.6/53] vs. 34.0 % [18/53], and 38.7 % [20.3/53]; both P < 0.001). While radiologists improved their accuracy when combining multimodal information (from 42.1 % [22.3/53] for images alone to 50.3 % [26.7/53] for complete cases; P < 0.01), GPT-4o and Gemini 1.5 Pro did not benefit from the multimodal context (from 34.0 % [18/53] for text alone to 35.2 % [18.7/53] for complete cases for GPT-4o; P = 0.48, and from 44.7 % [23.7/53] to 42.8 % [22.7/53] for Gemini 1.5 Pro; P = 0.54). 
Radiologists benefited significantly from the suggestions of Gemini 1.5 Pro, increasing their accuracy from 47.2 % [25/53] to 56.0 % [27/53] (P < 0.01). Both GPT-4o and Gemini 1.5 Pro correctly identified the imaging modality in 53/53 (100 %) and 51/53 (96.2 %) cases, respectively, but frequently failed to identify key imaging findings (43/53 cases [81.1 %] with incorrect identification of key imaging findings for GPT-4o and 50/53 [94.3 %] for Gemini 1.5 Pro). Radiologists show a specific ability to benefit from the integration of textual and visual information, whereas multimodal models mostly rely on the clinical context to suggest diagnoses.
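
The paired model-versus-reader comparisons above use the McNemar test, which depends only on the discordant pairs. A minimal stdlib sketch of the exact two-sided version (the counts in the example are hypothetical, not taken from the study):

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value.

    b = cases reader A answered correctly and reader B missed;
    c = cases reader B answered correctly and reader A missed.
    Concordant pairs do not enter the test.
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # Under H0 the discordant pairs split 50/50 -> Binomial(n, 0.5) tail.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)  # cap handles the b == c case

# Hypothetical discordant counts: model right / reader wrong 14 times,
# reader right / model wrong 3 times.
p_value = mcnemar_exact(14, 3)
```

With 14 versus 3 discordant cases this yields p ≈ 0.013, i.e. a significant paired difference.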

Robust & Precise Knowledge Distillation-based Novel Context-Aware Predictor for Disease Detection in Brain and Gastrointestinal

Saif Ur Rehman Khan, Muhammad Nabeel Asim, Sebastian Vollmer, Andreas Dengel

arxiv preprint · May 9, 2025
Medical disease prediction, particularly through imaging, remains a challenging task due to the complexity and variability of medical data, including noise, ambiguity, and differing image quality. Recent deep learning models, including Knowledge Distillation (KD) methods, have shown promising results in brain tumor image identification but still face limitations in handling uncertainty and generalizing across diverse medical conditions. Traditional KD methods often rely on a context-unaware temperature parameter to soften teacher model predictions, which does not adapt effectively to varying uncertainty levels present in medical images. To address this issue, we propose a novel framework that integrates Ant Colony Optimization (ACO) for optimal teacher-student model selection and a novel context-aware predictor approach for temperature scaling. The proposed context-aware framework adjusts the temperature based on factors such as image quality, disease complexity, and teacher model confidence, allowing for more robust knowledge transfer. Additionally, ACO efficiently selects the most appropriate teacher-student model pair from a set of pre-trained models, outperforming current optimization methods by exploring a broader solution space and better handling complex, non-linear relationships within the data. The proposed framework is evaluated using three publicly available benchmark datasets, each corresponding to a distinct medical imaging task. The results demonstrate that the proposed framework significantly outperforms current state-of-the-art methods, achieving top accuracy rates: 98.01% on the MRI brain tumor (Kaggle) dataset, 92.81% on the Figshare MRI dataset, and 96.20% on the GastroNet dataset. This enhanced performance is further evidenced by the improved results, surpassing existing benchmarks of 97.24% (Kaggle), 91.43% (Figshare), and 95.00% (GastroNet).
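
The temperature parameter at the heart of this framework controls how much the teacher's predictions are softened before distillation. A minimal sketch of Hinton-style temperature scaling plus a stand-in context-aware rule (the function name `context_aware_T` and its confidence-only heuristic are assumptions; the paper's predictor also weighs image quality and disease complexity):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T flattens the distribution.
    m = max(l / T for l in logits)
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_soft_loss(teacher_logits, student_logits, T):
    # Cross-entropy between softened teacher and student outputs,
    # scaled by T^2 as in standard knowledge distillation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -T * T * sum(pi * math.log(qi) for pi, qi in zip(p, q))

def context_aware_T(teacher_conf, T_min=1.0, T_max=4.0):
    # Hypothetical rule: low teacher confidence -> higher temperature
    # (softer targets), interpolating linearly between T_min and T_max.
    return T_max - (T_max - T_min) * teacher_conf
```

A matching student (same logits as the teacher) incurs a lower soft loss than a disagreeing one, which is what drives the knowledge transfer.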

Automated Emergent Large Vessel Occlusion Detection Using Viz.ai Software and Its Impact on Stroke Workflow Metrics and Patient Outcomes in Stroke Centers: A Systematic Review and Meta-analysis.

Sarhan K, Azzam AY, Moawad MHED, Serag I, Abbas A, Sarhan AE

pubmed · May 8, 2025
The implementation of artificial intelligence (AI), particularly Viz.ai software, in stroke care has emerged as a promising tool to enhance the detection of large vessel occlusion (LVO) and to improve stroke workflow metrics and patient outcomes. The aim of this systematic review and meta-analysis is to evaluate the impact of Viz.ai on stroke workflow efficiency in hospitals and on patient outcomes. Following the PRISMA guidelines, we conducted a comprehensive search of electronic databases, including PubMed, Web of Science, and Scopus, to obtain relevant studies until 25 October 2024. Our primary outcomes were door-to-groin puncture (DTG) time, CT scan-to-start of endovascular treatment (EVT) time, CT scan-to-recanalization time, and door-in-door-out time. Secondary outcomes included symptomatic intracranial hemorrhage (ICH), any ICH, mortality, mRS score < 2 at 90 days, and length of hospital stay. A total of 12 studies involving 15,595 patients were included in our analysis. The pooled analysis demonstrated that implementation of the Viz.ai algorithm was associated with shorter CT scan-to-EVT time (SMD -0.71, 95% CI [-0.98, -0.44], p < 0.001) and DTG time (SMD -0.50, 95% CI [-0.66, -0.35], p < 0.001), as well as shorter CT scan-to-recanalization time (SMD -0.55, 95% CI [-0.76, -0.33], p < 0.001). Additionally, patients in the post-AI group had significantly lower door-in-door-out time than the pre-AI group (SMD -0.49, 95% CI [-0.71, -0.28], p < 0.001). Despite the improvement in workflow metrics, our analysis did not reveal statistically significant differences in patient clinical outcomes (p > 0.05). Our results suggest that integration of the Viz.ai platform in stroke care holds significant potential for reducing EVT delays in patients with LVO and optimizing stroke workflow metrics in comprehensive stroke centers. Further studies are required to validate its efficacy in improving clinical outcomes in patients with LVO.
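
The pooled SMDs reported above come from random-effects meta-analysis. A compact sketch of DerSimonian-Laird pooling over per-study effects (the inputs in any example call are hypothetical, not the review's data):

```python
import math

def pool_dl(effects, variances):
    """DerSimonian-Laird random-effects pooling of standardized mean
    differences. Returns (pooled SMD, 95% CI low, 95% CI high)."""
    w = [1 / v for v in variances]           # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sw
    # Cochran's Q and the method-of-moments between-study variance tau^2.
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # Re-weight with tau^2 added to each study's variance.
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se
```

When the per-study effects agree exactly, tau² collapses to zero and the result reduces to fixed-effect pooling.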

Artificial intelligence applied to ultrasound diagnosis of pelvic gynecological tumors: a systematic review and meta-analysis.

Geysels A, Garofalo G, Timmerman S, Barreñada L, De Moor B, Timmerman D, Froyman W, Van Calster B

pubmed · May 8, 2025
To perform a systematic review on artificial intelligence (AI) studies focused on identifying and differentiating pelvic gynecological tumors on ultrasound scans. Studies developing or validating AI models for diagnosing gynecological pelvic tumors on ultrasound scans were eligible for inclusion. We systematically searched PubMed, Embase, Web of Science, and Cochrane Central from their database inception until April 30th, 2024. To assess the quality of the included studies, we adapted the QUADAS-2 risk of bias tool to address the unique challenges of AI in medical imaging. Using multi-level random effects models, we performed a meta-analysis to generate summary estimates of the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. To provide a reference point of current diagnostic support tools for ultrasound examiners, we descriptively compared the pooled performance to that of the well-recognized ADNEX model on external validation. Subgroup analyses were performed to explore sources of heterogeneity. From 9151 records retrieved, 44 studies were eligible: 40 on ovarian, three on endometrial, and one on myometrial pathology. Overall, 95% were at high risk of bias, primarily due to inappropriate study inclusion criteria, the absence of a patient-level split of training and testing image sets, and no calibration assessment. For ovarian tumors, the summary AUC for AI models distinguishing benign from malignant tumors was 0.89 (95% CI: 0.85-0.92). In lower-risk studies (at least three low-risk domains), the summary AUC dropped to 0.87 (0.83-0.90), with deep learning models outperforming radiomics-based machine learning approaches in this subset. Only five studies included an external validation, and six evaluated calibration performance. In a recent systematic review of external validation studies, the ADNEX model had a pooled AUC of 0.93 (0.91-0.94) in studies at low risk of bias. 
Studies on endometrial and myometrial pathologies were reported individually. Although AI models show promising discriminative performance for diagnosing gynecological tumors on ultrasound, most studies have methodological shortcomings that result in a high risk of bias. In addition, the ADNEX model appears to outperform most AI approaches for ovarian tumors. Future research should emphasize robust study designs, ideally large, multicenter, and prospective cohorts that mirror real-world populations, along with external validation, proper calibration, and standardized reporting. This study was pre-registered with the Open Science Framework (OSF): https://doi.org/10.17605/osf.io/bhkst.
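
Calibration, flagged above as rarely assessed, can be checked with very simple summaries. A sketch of two common ones, the observed-to-expected ratio and the Brier score (toy data, not from the review):

```python
def calibration_summary(probs, labels):
    """Simple calibration checks for a risk model.

    Returns (O/E ratio, Brier score): the ratio of observed to expected
    events (ideal = 1.0) and the mean squared error of the predicted
    probabilities (lower is better).
    """
    expected = sum(probs)            # expected number of events
    observed = sum(labels)           # observed number of events
    oe = observed / expected if expected > 0 else float("nan")
    brier = sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)
    return oe, brier
```

Note that a model can have a perfect O/E ratio while still being poorly calibrated per patient, which is why the Brier score (or a full calibration curve) is reported alongside it.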

Radiomics-based machine learning in prediction of response to neoadjuvant chemotherapy in osteosarcoma: A systematic review and meta-analysis.

Salimi M, Houshi S, Gholamrezanezhad A, Vadipour P, Seifi S

pubmed · May 8, 2025
Osteosarcoma (OS) is the most common primary bone malignancy, and neoadjuvant chemotherapy (NAC) improves survival rates. However, OS heterogeneity results in variable treatment responses, highlighting the need for reliable, non-invasive tools to predict NAC response. Radiomics-based machine learning (ML) offers potential for identifying imaging biomarkers to predict treatment outcomes. This systematic review and meta-analysis evaluated the accuracy and reliability of radiomics models for predicting NAC response in OS. A systematic search was conducted in PubMed, Embase, Scopus, and Web of Science up to November 2024. Studies using radiomics-based ML for NAC response prediction in OS were included. Pooled sensitivity, specificity, and AUC for training and validation cohorts were calculated using bivariate random-effects modeling, with clinical-combined models analyzed separately. Quality assessment was performed using the QUADAS-2 tool, radiomics quality score (RQS), and METRICS scores. Sixteen studies were included, with 63 % using MRI and 37 % using CT. Twelve studies, comprising 1639 participants, were included in the meta-analysis. Pooled metrics for training cohorts showed an AUC of 0.93, sensitivity of 0.89, and specificity of 0.85. Validation cohorts achieved an AUC of 0.87, sensitivity of 0.81, and specificity of 0.82. Clinical-combined models outperformed radiomics-only models. The mean RQS score was 9.44 ± 3.41, and the mean METRICS score was 60.8 % ± 17.4 %. Radiomics-based ML shows promise for predicting NAC response in OS, especially when combined with clinical indicators. However, limitations in external validation and methodological consistency must be addressed.
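
The pooled sensitivity, specificity, and AUC values above all derive from per-study discrimination estimates. A minimal sketch of the rank-based (Mann-Whitney) AUC, the probability that a randomly chosen responder outscores a randomly chosen non-responder (scores below are hypothetical):

```python
def auc_rank(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: fraction of
    (responder, non-responder) pairs the model ranks correctly,
    with ties counting half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 corresponds to chance-level ranking; the validation-cohort value of 0.87 reported above means 87% of such pairs would be ordered correctly.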

FF-PNet: A Pyramid Network Based on Feature and Field for Brain Image Registration

Ying Zhang, Shuai Guo, Chenxi Sun, Yuchen Zhu, Jinhai Xiang

arxiv preprint · May 8, 2025
In recent years, deformable medical image registration techniques have made significant progress. However, existing models still lack efficiency in the parallel extraction of coarse- and fine-grained features. To address this, we construct a new pyramid registration network based on feature and deformation field (FF-PNet). For coarse-grained feature extraction, we design a Residual Feature Fusion Module (RFFM); for fine-grained image deformation, we propose a Residual Deformation Field Fusion Module (RDFFM). Through the parallel operation of these two modules, the model can effectively handle complex image deformations. It is worth emphasizing that the encoding stage of FF-PNet employs only traditional convolutional neural networks, without any attention mechanisms or multilayer perceptrons, yet it still achieves remarkable improvements in registration accuracy, fully demonstrating the superior feature decoding capabilities of RFFM and RDFFM. We conducted extensive experiments on the LPBA and OASIS datasets. The results show that our network consistently outperforms popular methods on metrics such as the Dice Similarity Coefficient.
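
The Dice Similarity Coefficient used to score registration overlap has a one-line definition. A sketch over flat binary masks (toy masks, not LPBA/OASIS data):

```python
def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks given as
    flat 0/1 sequences: 2|A∩B| / (|A| + |B|)."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are treated as perfect agreement.
    return 1.0 if total == 0 else 2 * inter / total
```

Identical masks score 1.0 and disjoint masks 0.0, so higher Dice after warping indicates better anatomical alignment.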

Systematic review and epistemic meta-analysis to advance binomial AI-radiomics integration for predicting high-grade glioma progression and enhancing patient management.

Chilaca-Rosas MF, Contreras-Aguilar MT, Pallach-Loose F, Altamirano-Bustamante NF, Salazar-Calderon DR, Revilla-Monsalve C, Heredia-Gutiérrez JC, Conde-Castro B, Medrano-Guzmán R, Altamirano-Bustamante MM

pubmed · May 8, 2025
High-grade gliomas, particularly glioblastoma (MeSH: Glioblastoma), are among the most aggressive and lethal central nervous system tumors, necessitating advanced diagnostic and prognostic strategies. This systematic review and epistemic meta-analysis explores the integration of Artificial Intelligence (AI) and Radiomics Inter-field (AIRI) to enhance predictive modeling for tumor progression. A comprehensive literature search identified 19 high-quality studies, which were analyzed to evaluate radiomic features and machine learning models in predicting overall survival (OS) and progression-free survival (PFS). Key findings highlight the predictive strength of specific MRI-derived radiomic features, such as log-filter and Gabor textures, and the superior performance of Support Vector Machine (SVM) and Random Forest (RF) models, which achieved high accuracy and AUC scores (e.g., 98% AUC and 98.7% accuracy for OS). This research characterizes the current state of the AIRI field and shows that current articles report their results with different performance indicators and metrics, making outcomes heterogeneous and difficult to synthesize. It also finds that some current articles use biased methodologies. This study proposes a structured AIRI development roadmap and guidelines to avoid bias and make results comparable, emphasizing standardized feature extraction and AI model training to improve reproducibility across clinical settings. By advancing precision medicine, AIRI integration has the potential to refine clinical decision-making and enhance patient outcomes.

Automated Detection of Black Hole Sign for Intracerebral Hemorrhage Patients Using Self-Supervised Learning.

Wang H, Schwirtlich T, Houskamp EJ, Hutch MR, Murphy JX, do Nascimento JS, Zini A, Brancaleoni L, Giacomozzi S, Luo Y, Naidech AM

pubmed · May 7, 2025
Intracerebral Hemorrhage (ICH) is a devastating form of stroke. Hematoma expansion (HE), growth of the hematoma on interval scans, predicts death and disability. Accurate prediction of HE is crucial for targeted interventions to improve patient outcomes. The black hole sign (BHS) on non-contrast computed tomography (CT) scans is a predictive marker for HE. An automated method to recognize the BHS and predict HE could speed precise patient selection for treatment. In this paper, we present a novel framework leveraging self-supervised learning (SSL) techniques for BHS identification on head CT images. A ResNet-50 encoder model was pre-trained on over 1.7 million unlabeled head CT images. Layers for binary classification were added on top of the pre-trained model. The resulting model was fine-tuned using the training data and evaluated on the held-out test set to collect AUC and F1 scores. The evaluations were performed at the scan and slice levels. We ran different panels: one using two multi-center datasets for external validation, and one including parts of them in the pre-training. RESULTS: Our model demonstrated strong performance in identifying BHS when compared with the baseline model. Specifically, the model achieved scan-level AUC scores between 0.75-0.89 and F1 scores between 0.60-0.70. Furthermore, it exhibited robustness and generalizability across an external dataset, achieving a scan-level AUC score of up to 0.85 and an F1 score of up to 0.60, while it performed less well on another dataset with more heterogeneous samples. These negative effects could be mitigated by including parts of the external datasets in the fine-tuning process. This study introduced a novel framework integrating SSL into medical image classification, particularly for BHS identification from head CT scans. The resulting pre-trained head CT encoder model showed potential to minimize manual annotation, which would significantly reduce labor, time, and costs. 
After fine-tuning, the framework demonstrated promising performance for a specific downstream task, identifying the BHS to predict HE, upon comprehensive evaluation on diverse datasets. This approach holds promise for enhancing medical image analysis, particularly in scenarios with limited data availability. ICH = Intracerebral Hemorrhage; HE = Hematoma Expansion; BHS = Black Hole Sign; CT = Computed Tomography; SSL = Self-supervised Learning; AUC = Area Under the receiver operating characteristic Curve; CNN = Convolutional Neural Network; SimCLR = Simple framework for Contrastive Learning of visual Representations; HU = Hounsfield Unit; CLAIM = Checklist for Artificial Intelligence in Medical Imaging; VNA = Vendor Neutral Archive; DICOM = Digital Imaging and Communications in Medicine; NIfTI = Neuroimaging Informatics Technology Initiative; INR = International Normalized Ratio; GPU = Graphics Processing Unit; NIH = National Institutes of Health.
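
Scan-level metrics in the study aggregate slice-level predictions. A sketch of one plausible aggregation (max-pooling slice probabilities is an assumption, not necessarily the authors' rule) together with the F1 score used for evaluation:

```python
def scan_level_probs(slice_probs_per_scan):
    # Collapse each scan's slice-level BHS probabilities to one score;
    # taking the max flags a scan if any slice looks positive.
    return [max(ps) for ps in slice_probs_per_scan]

def f1_score(preds, labels):
    """F1 = harmonic mean of precision and recall over boolean
    predictions and ground-truth labels."""
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    if tp == 0:
        return 0.0
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)
```

Thresholding the aggregated scores then yields the boolean predictions that the scan-level F1 is computed on.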

MRI-based multimodal AI model enables prediction of recurrence risk and adjuvant therapy in breast cancer.

Yu Y, Ren W, Mao L, Ouyang W, Hu Q, Yao Q, Tan Y, He Z, Ban X, Hu H, Lin R, Wang Z, Chen Y, Wu Z, Chen K, Ouyang J, Li T, Zhang Z, Liu G, Chen X, Li Z, Duan X, Wang J, Yao H

pubmed · May 7, 2025
Timely intervention and improved prognosis for breast cancer patients rely on early metastasis risk detection and accurate treatment predictions. This study introduces an advanced multimodal MRI and AI-driven 3D deep learning model, termed the 3D-MMR-model, designed to predict recurrence risk in non-metastatic breast cancer patients. We conducted a multicenter study involving 1199 non-metastatic breast cancer patients from four institutions in China, with comprehensive MRI and clinical data retrospectively collected. Our model employed multimodal-data fusion, utilizing contrast-enhanced T1-weighted imaging (T1 + C) and T2-weighted imaging (T2WI) volumes, processed through a modified 3D-UNet for tumor segmentation and a DenseNet121-based architecture for disease-free survival (DFS) prediction. Additionally, we performed RNA-seq analysis to delve further into the relationship between concentrated hotspots within the tumor region and the tumor microenvironment. The 3D-MMR-model demonstrated superior predictive performance, with time-dependent ROC analysis yielding AUC values of 0.90, 0.89, and 0.88 for 2-, 3-, and 4-year DFS predictions, respectively, in the training cohort. External validation cohorts corroborated these findings, highlighting the model's robustness across diverse clinical settings. Integration of clinicopathological features further enhanced the model's accuracy, with a multimodal approach significantly improving risk stratification and decision-making in clinical practice. Visualization techniques provided insights into the decision-making process, correlating predictions with tumor microenvironment characteristics. In summary, the 3D-MMR-model represents a significant advancement in breast cancer prognosis, combining cutting-edge AI technology with multimodal imaging to deliver precise and clinically relevant predictions of recurrence risk. 
This innovative approach holds promise for enhancing patient outcomes and guiding individualized treatment plans in breast cancer care.
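
Time-dependent ROC analysis, as used for the 2-, 3-, and 4-year DFS AUCs above, first needs event labels at each horizon. A deliberately naive sketch of that labeling step, which simply drops patients censored before the horizon (proper time-dependent ROC estimators reweight them instead):

```python
def labels_at_horizon(times, events, t):
    """Naive binary labels for a time-dependent ROC at horizon t:
    1 if the event occurred by t, 0 if follow-up extends past t,
    None if censored before t (excluded from the AUC at t)."""
    out = []
    for time, event in zip(times, events):
        if time <= t and event:
            out.append(1)       # DFS event by the horizon
        elif time > t:
            out.append(0)       # event-free past the horizon
        else:
            out.append(None)    # censored before t: label unknown
    return out
```

The retained (risk score, label) pairs at each horizon can then be fed to any standard AUC routine, one horizon at a time.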
