
Hope TMH, Bowman H, Leff AP, Price CJ

PubMed | Sep 29 2025
Current medicine cannot confidently predict patients' language skills after stroke. In recent years, researchers have sought to bridge this gap with machine learning. These models appear to benefit from access to features describing where and how much brain damage these patients have suffered. Given the very high dimensionality of structural brain imaging data, those brain lesion features are typically post-processed from the images themselves into tabular features. With the introduction of deep Convolutional Neural Networks (CNN), which appear to be much more robust to high-dimensional data, it is natural to hope that much of this image post-processing might be unnecessary. But prior attempts to demonstrate this (in the area of post-stroke prognostics) have so far yielded only equivocal results, perhaps because the datasets that those studies could deploy were too small to properly constrain CNNs, which are famously 'data-hungry'. This study draws on a much larger dataset than has been employed in previous work of this kind, comprising patients whose language outcomes were assessed once during the chronic phase post-stroke, on or around the same day as they underwent high-resolution MRI brain scans. Following the model of our own and others' past work, we use state-of-the-art 'vanilla' machine learning models (boosted ensembles) to predict a variety of language and cognitive outcome scores. These models employ both demographic variables and features derived from the brain imaging data, which represent where brain damage has occurred. These are our baseline models. Next, we use deep CNNs to predict the same language scores for the same patients, drawing on both the demographic variables and post-processed brain lesion images: i.e., multi-input models with one input for tabular features and another for 3-dimensional images. We compare the models using 5 × 2-fold cross-validation, with consistent folds. The CNN models consistently outperform the vanilla machine learning models in this domain. Deep CNNs offer state-of-the-art performance when predicting language outcomes after stroke, outperforming vanilla machine learning and obviating the need to post-process lesion images into lesion features.
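
A minimal sketch of the 5 × 2-fold protocol described above, using scikit-learn: five repetitions of 2-fold cross-validation with seeds fixed so that every model sees identical folds. The data, the boosted-ensemble stand-in, and the R² metric are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch of 5 x 2-fold cross-validation with consistent folds: seeding the
# splitter per repetition means the same folds can be reused for every model.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))           # tabular features (placeholder)
y = X[:, 0] * 2 + rng.normal(size=200)   # continuous language score (placeholder)

scores = []
for repeat in range(5):                                        # 5 repetitions...
    kf = KFold(n_splits=2, shuffle=True, random_state=repeat)  # ...of 2-fold CV
    for train_idx, test_idx in kf.split(X):  # identical folds for each compared model
        model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
        scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))
print(f"mean R^2 over 5x2 folds: {np.mean(scores):.3f}")
```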

Erekat, A., Downes, M. H., Stein, L. K., Delman, B. N., Karp, A. M., Tripathi, A., Nadkarni, G. N., Kupersmith, M. J., Kummer, B. R.

medRxiv preprint | Sep 29 2025
Background: Acute stroke alerts are often activated for non-cerebrovascular conditions, leading to false positives that strain clinical resources and promote diagnostic uncertainty. We sought to develop machine learning (ML) models integrating large-language models (LLMs), structured electronic health record data, and clinical time-series data to predict the presence of acute cerebrovascular disease (ACD) at stroke alert activation. Methods: We derived a series of ML models using retrospective data from stroke alerts activated at Mount Sinai Health System between 2011 and 2021. We extracted structured data (demographics, medical comorbidities, medications, and engineered time-series features from vital signs and lab results) as well as unstructured clinical notes available prior to the time of stroke alert. We processed clinical notes using three embedding approaches: word embeddings, biomedical embeddings (BioWordVec), and LLMs. Using a radiographic gold standard for acute intracranial vascular events, we used an automated machine learning (AutoML) approach to train one model based on unstructured data and five models based on different combinations of structured data. We evaluated models individually using the area under the receiver operating characteristic curve (AUROC), mean positive predictive value (PPV), sensitivity, and F1-score. We then combined the six model logits into a multimodal ensemble by weighting their logits based on F1-score, determining ensemble performance using the same metrics. Results: We identified 16,512 stroke alerts corresponding to 14,233 unique patients over the study period, of which 9,013 (54.6%) were due to ACD. The multimodal model (AUROC 0.72, PPV 0.68, sensitivity 0.76, F1 0.72) outperformed all individual models by AUROC. One structured model based on demographics, comorbidities, and medications demonstrated the highest sensitivity (0.95). Conclusions: We developed a multimodal ML model to predict ACD at stroke alert activation. This approach has promise to optimize stroke triage and reduce false-positive activations.
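
A sketch of the F1-weighted logit ensemble described in the Methods, assuming the six per-model logit vectors are already available; the logits and F1 values below are placeholders, not the study's values.

```python
# Sketch of combining six model logits into a multimodal ensemble by
# weighting each model's logit by its (normalized) validation F1-score.
import numpy as np

def f1_weighted_ensemble(logits: np.ndarray, f1_scores: np.ndarray) -> np.ndarray:
    """logits: (n_models, n_samples); f1_scores: (n_models,)."""
    weights = f1_scores / f1_scores.sum()   # normalize F1-scores into weights
    combined = weights @ logits             # weighted sum of per-model logits
    return 1.0 / (1.0 + np.exp(-combined))  # sigmoid -> probability of ACD

logits = np.random.default_rng(1).normal(size=(6, 4))     # 6 models, 4 alerts
f1 = np.array([0.62, 0.58, 0.66, 0.71, 0.55, 0.60])       # placeholder F1s
print(f1_weighted_ensemble(logits, f1))
```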

Mousavi, A.

medRxiv preprint | Sep 29 2025
Ischemic stroke, caused by arterial occlusion, leads to hypoxia and cellular necrosis. Rapid and accurate delineation of ischemic lesions is essential for treatment planning but remains challenging due to variations in lesion size, shape, and appearance. We propose the Residual Attention Xception Network, a deep learning architecture that integrates residual attention connections with Xception for three-dimensional magnetic resonance imaging lesion segmentation. The framework includes three stages: (i) decomposition of three-dimensional scans into axial, sagittal, and coronal planes, (ii) independent model training on each plane, and (iii) voxel-wise majority voting to generate the final three-dimensional segmentation. In addition, we introduce a variant of the focal Tversky loss designed to mitigate class imbalance and improve sensitivity to small or irregular lesion boundaries. Experiments on the ATLAS v2.0 dataset with five-fold cross-validation demonstrate that the Residual Attention Xception Network achieves a Dice coefficient of 0.61, precision of 0.68, and recall of 0.63. These results surpass baseline models while requiring fewer trainable parameters and enabling faster inference, highlighting both accuracy and efficiency. Source code: https://github.com/liamirpy/RAX-NET_ISCHEMIC_STROKE_SEGMENTATION.
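
Two of the framework's building blocks, sketched under stated assumptions: voxel-wise majority voting over the three per-plane segmentations (stage iii), and the standard focal Tversky loss, since the abstract does not detail the paper's specific variant.

```python
# Sketch of voxel-wise majority voting and a standard focal Tversky loss.
import numpy as np

def majority_vote(axial, sagittal, coronal):
    """Voxel-wise majority vote over three binary segmentations of equal shape."""
    votes = axial.astype(np.uint8) + sagittal.astype(np.uint8) + coronal.astype(np.uint8)
    return (votes >= 2).astype(np.uint8)   # lesion where at least 2 of 3 planes agree

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Standard focal Tversky loss for soft binary masks (class-imbalance aware)."""
    tp = np.sum(y_true * y_pred)           # true positives (soft)
    fn = np.sum(y_true * (1.0 - y_pred))   # missed lesion voxels, weighted by alpha
    fp = np.sum((1.0 - y_true) * y_pred)   # false alarms, weighted by beta
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma        # focal exponent emphasizes hard cases
```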

Zamanitajeddin, N., Jahanifar, M., Eastwood, M., Gunesli, G., Arends, M. J., Rajpoot, N.

medRxiv preprint | Sep 29 2025
Microsatellite instability (MSI) is a key biomarker for immunotherapy response and prognosis across multiple cancers, yet its identification from routine Hematoxylin and Eosin (H&E) slides remains challenging. Current deep learning predictors often operate as black-box, weakly supervised models trained on individual slides, limiting interpretability, biological insight, and generalization, particularly in low-data regimes. Importantly, systematic quantitative analysis of shared MSI-associated characteristics across different cancer types has not been performed, representing a major gap in understanding conserved tumor microenvironment (TME) patterns linked to MSI. Here, we present a multi-cancer MSI prediction model that leverages pathology foundation models for robust feature extraction and cell-level social network analysis (SNA) to uncover TME patterns associated with MSI. For the MSI prediction task, we introduce a novel transformer-based embedding aggregation method, leveraging attention-guided, multi-case batch training to improve learning efficiency, stability, and interpretability. Our method achieves high predictive performance, with mean AUROCs of 0.86 ± 0.06 (colorectal cancer), 0.89 ± 0.06 (stomach adenocarcinoma), and 0.73 ± 0.06 (uterine corpus endometrial carcinoma) in internal cross-validation on the TCGA dataset and an AUROC of 0.99 on the external PAIP dataset, outperforming state-of-the-art weakly supervised methods (particularly in AUPRC, with an average of 0.65 across the three cancers). Multi-cancer training further improved generalization (by 3%) by exposing the model to diverse MSI manifestations, enabling robust learning of transferable, domain-invariant histological patterns. To investigate the TME, we constructed cell graphs from high-attention regions, classifying cells as epithelial, inflammatory, mitotic, or connective, and applied SNA metrics to quantify spatial interactions. Across cancers, MSI tumors exhibited increased epithelial cell density and stronger epithelial-inflammatory connectivity, with subtle, context-dependent changes in stromal organization. These features were consistent across univariate and multivariate analyses and supported by expert pathologist review, suggesting the presence of a conserved MSI-associated microenvironmental phenotype. Our proposed prediction algorithm and SNA-driven interpretation advance MSI prediction and uncover interpretable, biologically meaningful MSI signatures shared across colorectal, gastric, and endometrial cancers.
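
A minimal PyTorch sketch of attention-guided aggregation of patch embeddings into a slide-level representation; the paper's transformer-based aggregator and multi-case batch training are more elaborate, and the embedding dimension here is a placeholder.

```python
# Sketch of attention pooling: learn one attention weight per patch
# embedding and return their weighted sum as the slide embedding.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # one attention logit per patch

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (n_patches, dim) foundation-model embeddings for one slide
        attn = torch.softmax(self.score(patches), dim=0)  # (n_patches, 1)
        return (attn * patches).sum(dim=0)                # (dim,) slide embedding

pool = AttentionPool()
slide_vec = pool(torch.randn(1000, 512))  # 1000 patch embeddings -> slide vector
```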

Ieki, H., Sahashi, Y., Vukadinovic, M., Rawlani, M., Binder, C., Yuan, N., Ambrosy, A. P., Go, A. S., Chen, W., Lee, M.-S., He, B., Cheng, P., Ouyang, D.

medRxiv preprint | Sep 29 2025
Background and Aims: Accurate assessment of aortic stenosis (AS) requires integration of both structural and functional information, characterized by visual traits as well as quantitation of gradients. Existing artificial intelligence (AI) models utilize solely either structural or functional information. Methods: We developed EchoNet-AS, an open-source, end-to-end integrated approach combining video-based convolutional neural networks to assess valve motion with segmentation models that automate the measurement of aortic valve peak velocity to classify AS severity. Results: EchoNet-AS was trained on 210,193 images from 16,076 studies from Kaiser Permanente Northern California (KPNC) and validated on 1,589 held-out test studies and a temporally distinct cohort of 19,206 studies. The final model was also externally validated on 2,415 studies from Stanford Healthcare (SHC) and 9,038 studies from Cedars-Sinai Medical Center (CSMC). Combining assessments from multiple echocardiographic videos and Doppler measurements, EchoNet-AS achieved excellent discrimination of severe AS, with AUC 0.964 [95% CI: 0.952-0.973] in the KPNC held-out cohort and 0.985 [0.981-0.988] in the temporally distinct cohort, superior to models using single views or only Doppler measurements. Performance remained robust in distinct external cohorts, with AUC 0.985 [0.975-0.992] at SHC and 0.989 [0.986-0.992] at CSMC. Conclusions: EchoNet-AS synthesizes information from both B-mode videos and Doppler images to accurately assess AS severity. Its strong performance generalizes robustly to external validation cohorts and shows potential as an automated clinical decision support tool.
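
A sketch of the evaluation reported above: AUROC with a bootstrap 95% confidence interval, using scikit-learn. The resampling scheme and the toy labels/scores below are illustrative assumptions; the study's exact interval method is not given in the abstract.

```python
# Sketch of AUROC with a bootstrap 95% CI over case-level resamples.
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_with_ci(y_true, y_score, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    aucs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)           # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:   # AUROC needs both classes present
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (lo, hi)

y = np.array([0, 1, 1, 0, 1, 0, 1, 1])                      # toy labels
s = np.array([0.2, 0.9, 0.7, 0.4, 0.8, 0.3, 0.6, 0.55])     # toy model scores
print(auroc_with_ci(y, s, n_boot=200))
```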

Fan R, Shi YR, Chen L, Wang CX, Qian YS, Gao YH, Wang CY, Fan XT, Liu XL, Bai HL, Zheng D, Jiang GQ, Yu YL, Liang XE, Chen JJ, Xie WF, Du LT, Yan HD, Gao YJ, Wen H, Liu JF, Liang MF, Kong F, Sun J, Ju SH, Wang HY, Hou JL

PubMed | Sep 28 2025
Given the high burden of hepatocellular carcinoma (HCC), risk stratification in patients with cirrhosis is critical but remains inadequate. In this study, we aimed to develop and validate an HCC prediction model by integrating radiomics and deep learning features from liver and spleen computed tomography (CT) images into the established age-male-ALBI-platelet (aMAP) clinical model. Patients were enrolled between 2018 and 2023 from a Chinese multicenter, prospective, observational cirrhosis cohort, all of whom underwent 3-phase contrast-enhanced abdominal CT scans at enrollment. The aMAP clinical score was calculated, and radiomic (PyRadiomics) and deep learning (ResNet-18) features were extracted from liver and spleen regions of interest. Feature selection was performed using the least absolute shrinkage and selection operator. Among 2,411 patients (median follow-up: 42.7 months [IQR: 32.9-54.1]), 118 developed HCC (three-year cumulative incidence: 3.59%). Chronic hepatitis B virus infection was the main etiology, accounting for 91.5% of cases. The aMAP-CT model, which incorporates CT signatures, significantly outperformed existing models (area under the receiver-operating characteristic curve: 0.809-0.869 in three cohorts). It stratified patients into high-risk (three-year HCC incidence: 26.3%) and low-risk (1.7%) groups. Stepwise application (aMAP → aMAP-CT) further refined stratification (three-year incidences: 1.8% [93.0% of the cohort] vs. 27.2% [7.0%]). The aMAP-CT model improves HCC risk prediction by integrating CT-based liver and spleen signatures, enabling precise identification of high-risk cirrhosis patients. This approach personalizes surveillance strategies, potentially facilitating earlier detection and improved outcomes.
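
A sketch of the LASSO feature-selection step described above, over a concatenated radiomic-plus-deep feature matrix. Plain LassoCV on a synthetic continuous outcome is an illustrative stand-in; a Cox-LASSO would suit the study's time-to-event HCC endpoint.

```python
# Sketch of LASSO feature selection: keep features with non-zero coefficients.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 120))                    # radiomic + deep features (placeholder)
y = X[:, :5] @ np.ones(5) + rng.normal(size=300)   # outcome with 5 informative features

X_std = StandardScaler().fit_transform(X)          # LASSO expects standardized inputs
lasso = LassoCV(cv=5).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)             # indices of retained features
print(f"{selected.size} features retained, e.g.:", selected[:10])
```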

Bhavsar S, Gowda BB, Bhavsar M, Patole S, Rao S, Rath C

PubMed | Sep 28 2025
Deep learning (DL), a branch of artificial intelligence (AI), has been applied to diagnose developmental dysplasia of the hip (DDH) on pelvic radiographs and ultrasound (US) images. This technology can potentially assist in early screening, enable timely intervention, and improve cost-effectiveness. We conducted a systematic review to evaluate the diagnostic accuracy of DL algorithms in detecting DDH. PubMed, MEDLINE, EMBASE, EMCARE, ClinicalTrials.gov (clinical trial registry), IEEE Xplore, and Cochrane Library databases were searched in October 2024. Prospective and retrospective cohort studies that included children (< 16 years) at risk of or suspected to have DDH and reported hip ultrasonography (US) or X-ray images using AI were included. The review was conducted using the guidelines of the Cochrane Collaboration Diagnostic Test Accuracy Working Group. Risk of bias was assessed using the QUADAS-2 tool. Twenty-three studies met inclusion criteria, with 15 (n = 8,315) evaluating DDH on US images and eight (n = 7,091) on pelvic radiographs. The area under the curve of the included studies ranged from 0.80 to 0.99 for pelvic radiographs and from 0.90 to 0.99 for US images. Sensitivity and specificity for detecting DDH on radiographs ranged from 92.86% to 100% and from 95.65% to 99.82%, respectively. For US images, sensitivity ranged from 86.54% to 100% and specificity from 62.5% to 100%. AI demonstrated effectiveness comparable to physicians in detecting DDH. However, limited evaluation on external datasets restricts its generalisability. Further research incorporating diverse datasets and real-world applications is needed to assess its broader clinical impact on DDH diagnosis.
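
For reference, the sensitivity and specificity figures pooled in this review come from 2×2 confusion matrices; a minimal sketch follows (the counts are illustrative only).

```python
# Sketch of sensitivity and specificity from a 2x2 confusion matrix.
def sens_spec(tp: int, fp: int, fn: int, tn: int):
    sensitivity = tp / (tp + fn)   # proportion of DDH-positive hips detected
    specificity = tn / (tn + fp)   # proportion of DDH-negative hips cleared
    return sensitivity, specificity

print(sens_spec(tp=93, fp=2, fn=7, tn=458))   # illustrative counts only
```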

Wang DY, Yang T, Zhang CT, Zhan PC, Miao ZX, Li BL, Yang H

PubMed | Sep 28 2025
The application of artificial intelligence (AI) in carotid atherosclerotic plaque detection via computed tomography angiography (CTA) has significantly advanced over the past decade. This mini-review consolidates recent innovations in deep learning architectures, domain adaptation techniques, and automated plaque characterization methodologies. Hybrid models, such as the residual U-Net-Pyramid Scene Parsing Network, exhibit a remarkable precision of 80.49% in plaque segmentation, outperforming radiologists in diagnostic efficiency by reducing analysis time from minutes to mere seconds. Domain-adaptive frameworks, such as Lesion Assessment through Tracklet Evaluation, demonstrate robust performance across heterogeneous imaging datasets, achieving an area under the curve (AUC) greater than 0.88. Furthermore, novel approaches integrating U-Net and EfficientNet architectures, enhanced by Bayesian optimization, have achieved impressive correlation coefficients (0.89) for plaque quantification. AI-powered CTA also enables high-precision three-dimensional vascular segmentation, with a Dice coefficient of 0.9119, and offers superior cardiovascular risk stratification compared to traditional Agatston scoring, yielding AUC values of 0.816 vs 0.729 at a 15-year follow-up. These breakthroughs address key challenges in plaque motion analysis, with systolic retractive motion biomarkers successfully identifying 80% of vulnerable plaques. Looking ahead, future directions focus on enhancing the interpretability of AI models through explainable AI and leveraging federated learning to mitigate data heterogeneity. This mini-review underscores the transformative potential of AI in carotid plaque assessment, offering substantial implications for stroke prevention and personalized cerebrovascular management strategies.
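
A sketch of the Dice coefficient cited above (e.g., 0.9119 for three-dimensional vascular segmentation): the overlap score between a predicted and a reference binary mask. Inputs below are illustrative.

```python
# Sketch of the Dice coefficient: 2|A ∩ B| / (|A| + |B|) over binary masks.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-7) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return float((2 * inter + eps) / (pred.sum() + ref.sum() + eps))

a = np.zeros((8, 8, 8), dtype=bool); a[2:6, 2:6, 2:6] = True   # reference mask
b = np.zeros((8, 8, 8), dtype=bool); b[3:6, 2:6, 2:6] = True   # predicted mask
print(f"Dice = {dice(a, b):.4f}")
```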

Hirata A, Hayano K, Tochigi T, Kurata Y, Shiraishi T, Sekino N, Nakano A, Matsumoto Y, Toyozumi T, Uesato M, Ohira G

PubMed | Sep 28 2025
Advanced esophageal squamous cell carcinoma (ESCC) has an extremely poor prognosis. Preoperative chemoradiotherapy (CRT) can significantly prolong survival, especially in those who achieve pathological complete response (pCR). However, the pretherapeutic prediction of pCR remains challenging. We aimed to predict pCR and survival in ESCC patients undergoing CRT using an artificial intelligence (AI)-based diffusion-weighted magnetic resonance imaging (DWI-MRI) radiomics model. We retrospectively analyzed 70 patients with ESCC who underwent curative surgery following CRT. For each patient, pre-treatment tumors were semi-automatically segmented in three dimensions from DWI-MRI images (b = 0 and 1000 s/mm²), and a total of 76 radiomics features were extracted from each segmented tumor. Using these features as explanatory variables and pCR as the objective variable, machine learning models for predicting pCR were developed using AutoGluon, an automated machine learning library, and validated by stratified double cross-validation. pCR was achieved in 15 patients (21.4%). Apparent diffusion coefficient (ADC) skewness demonstrated the highest predictive performance [area under the curve (AUC) = 0.77]. Gray-level co-occurrence matrix (GLCM) entropy (b = 1000 s/mm²) was an independent prognostic factor for relapse-free survival (RFS) (hazard ratio = 0.32, P = 0.009). In Kaplan-Meier analysis, patients with high GLCM entropy showed significantly better RFS (P < 0.001, log-rank). The best-performing machine learning model achieved an AUC of 0.85. The predicted pCR-positive group showed significantly better RFS than the predicted pCR-negative group (P = 0.007, log-rank). AI-based radiomics analysis of DWI-MRI images in ESCC has the potential to accurately predict the effect of CRT before treatment and contribute to constructing optimal treatment strategies.
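
Sketches of the two features highlighted above, ADC-histogram skewness within the tumor mask and GLCM entropy, using SciPy and scikit-image (≥ 0.19 for `graycomatrix`); these are simplified 2-D stand-ins for the study's 3-D PyRadiomics extraction.

```python
# Sketches of ADC skewness (within a mask) and GLCM entropy for a 2-D slice.
import numpy as np
from scipy.stats import skew
from skimage.feature import graycomatrix

def adc_skewness(adc: np.ndarray, mask: np.ndarray) -> float:
    """Skewness of ADC values inside the segmented tumor."""
    return float(skew(adc[mask > 0]))

def glcm_entropy(img_u8: np.ndarray) -> float:
    """Entropy of the normalized gray-level co-occurrence matrix (uint8 input)."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256, normed=True)
    p = glcm[:, :, 0, 0]
    p = p[p > 0]                         # drop zero bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())
```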

Wang YY, Liu B, Wang JH

PubMed | Sep 28 2025
Gastrointestinal (GI) diseases, including gastric and colorectal cancers, significantly impact global health, necessitating accurate and efficient diagnostic methods. Endoscopic examination is the primary diagnostic tool; however, its accuracy is limited by operator dependency and interobserver variability. Advancements in deep learning, particularly convolutional neural networks (CNNs), show great potential for enhancing GI disease detection and classification. This review explores the application of CNNs in endoscopic imaging, focusing on polyp and tumor detection, disease classification, endoscopic ultrasound, and capsule endoscopy analysis. We compare the performance of CNN models with that of traditional diagnostic methods, highlighting their advantages in accuracy and real-time decision support. Despite promising results, challenges remain, including data availability, model interpretability, and clinical integration. Future directions include improving model generalization, enhancing explainability, and conducting large-scale clinical trials. With continued advancements, CNN-powered artificial intelligence systems could revolutionize GI endoscopy by enhancing early disease detection, reducing diagnostic errors, and improving patient outcomes.
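
A toy PyTorch sketch of the kind of CNN classifier this review surveys: a minimal binary polyp/no-polyp frame classifier. The architecture and sizes are illustrative assumptions, far smaller than the clinical-grade models discussed.

```python
# Toy CNN producing one polyp/no-polyp logit per endoscopy frame.
import torch
import torch.nn as nn

class TinyPolypCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                                   # x: (batch, 3, H, W)
        return self.head(self.features(x).flatten(1))       # one logit per frame

logits = TinyPolypCNN()(torch.randn(2, 3, 224, 224))
print(torch.sigmoid(logits))                                # per-frame probability
```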