Page 131 of 140 (1395 results)

Stroke prediction in elderly patients with atrial fibrillation using machine learning combined clinical and left atrial appendage imaging phenotypic features.

Huang H, Xiong Y, Yao Y, Zeng J

PubMed · May 24, 2025
Atrial fibrillation (AF) is one of the primary etiologies for ischemic stroke, and it is of paramount importance to delineate the risk phenotypes among elderly AF patients and to investigate more efficacious models for predicting stroke risk. This single-center prospective cohort study collected clinical data and cardiac computed tomography angiography (CTA) images from elderly AF patients. The clinical phenotypes and left atrial appendage (LAA) radiomic phenotypes of elderly AF patients were identified through K-means clustering. The independent correlations between these phenotypes and stroke risk were subsequently analyzed. Machine learning algorithms (Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Random Forest, and Extreme Gradient Boosting) were selected to develop a predictive model for stroke risk in this patient cohort. The model was assessed using the Area Under the Receiver Operating Characteristic Curve (AUROC), Hosmer-Lemeshow tests, and Decision Curve Analysis. A total of 419 elderly AF patients (≥ 65 years old) were included. K-means clustering identified three clinical phenotypes: Group A (cardiac enlargement/dysfunction), Group B (normal phenotype), and Group C (metabolic/coagulation abnormalities). Stroke incidence was highest in Group A (19.3%) and Group C (14.5%) versus Group B (3.3%). Similarly, LAA radiomic phenotypes revealed elevated stroke risk in patients with enlarged LAA structure (Group B: 20.0%) and complex LAA morphology (Group C: 14.0%) compared to normal LAA (Group A: 2.9%). Among the five machine learning models, the SVM model achieved superior prediction performance (AUROC: 0.858 [95% CI: 0.830-0.887]). The stroke-risk prediction model for elderly AF patients constructed based on the SVM algorithm has strong predictive efficacy.
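
The phenotype-discovery step described above can be sketched with a toy K-means on two standardized features; all data, feature names, and cluster counts here are illustrative, not the study's:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's K-means: returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest centroid
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy "patients": two standardized clinical features, three latent phenotypes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)),
               rng.normal(2, 0.3, (30, 2)),
               rng.normal([0, 2], 0.3, (30, 2))])
labels = kmeans(X, k=3)
print(np.bincount(labels))  # sizes of the three phenotype groups
```

In the study the resulting cluster labels, not the raw features alone, feed the downstream stroke-risk classifiers.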

Symbolic and hybrid AI for brain tissue segmentation using spatial model checking.

Belmonte G, Ciancia V, Massink M

PubMed · May 24, 2025
Segmentation of 3D medical images, and brain segmentation in particular, is an important topic in neuroimaging and in radiotherapy. Overcoming the current, time-consuming practice of manual delineation of brain tumours and providing an accurate, explainable, and replicable method of segmentation of the tumour area and related tissues is therefore an open research challenge. In this paper, we first propose a novel symbolic approach to brain segmentation and delineation of brain lesions based on spatial model checking. This method has its foundations in the theory of closure spaces, a generalisation of topological spaces, and spatial logics. At its core is a high-level declarative logic language for image analysis, ImgQL, and an efficient spatial model checker, VoxLogicA, exploiting state-of-the-art image analysis libraries in its model checking algorithm. We then illustrate how this technique can be combined with Machine Learning techniques, leading to a hybrid AI approach that provides accurate and explainable segmentation results. We show the results of the application of the symbolic approach on several public datasets with 3D magnetic resonance (MR) images. Three datasets are provided by the 2017, 2019 and 2020 international MICCAI BraTS Challenges with 210, 259 and 293 MR images, respectively, and the fourth is the BrainWeb dataset with 20 (synthetic) 3D patient images of the normal brain. We then apply the hybrid AI method to the BraTS 2020 training set. Our segmentation results are shown to be in line with the state-of-the-art with respect to other recent approaches, both in terms of accuracy and of computational efficiency, with the added advantage of being explainable.
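
The core spatial-logic idea — selecting the points of one region that are connected to another region — can be illustrated with a plain flood fill on a tiny 2D grid. This sketches the semantics of a reachability-style operator, not VoxLogicA's actual ImgQL syntax or algorithm:

```python
from collections import deque

def reach(phi, psi):
    """Pixels satisfying `phi` that are 4-connected, through `phi`, to a
    pixel also satisfying `psi` — the flavour of spatial operator a model
    checker evaluates, here by breadth-first search."""
    rows, cols = len(phi), len(phi[0])
    q = deque((r, c) for r in range(rows) for c in range(cols)
              if phi[r][c] and psi[r][c])
    seen = set(q)
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and phi[nr][nc]
                    and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append((nr, nc))
    return seen

# phi: candidate tumour tissue; psi: high-intensity seed voxels (toy data)
phi = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 1, 1, 0]]
psi = [[0, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0]]
print(sorted(reach(phi, psi)))  # the right-hand phi component is excluded
```

Note that the disconnected `phi` component in the last column is dropped: connectivity to a seed, not intensity alone, selects the region.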

Relational Bi-level aggregation graph convolutional network with dynamic graph learning and puzzle optimization for Alzheimer's classification.

Raajasree K, Jaichandran R

PubMed · May 24, 2025
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive cognitive decline, necessitating early diagnosis for effective treatment. This study presents the Relational Bi-level Aggregation Graph Convolutional Network with Dynamic Graph Learning and Puzzle Optimization for Alzheimer's Classification (RBAGCN-DGL-PO-AC), using denoised T1-weighted Magnetic Resonance Images (MRIs) collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) repository. Addressing the impact of noise in medical imaging, the method employs advanced denoising techniques, including the Modified Spline-Kernelled Chirplet Transform (MSKCT), Jump Gain Integral Recurrent Neural Network (JGIRNN), and Newton Time Extracting Wavelet Transform (NTEWT), to enhance image quality. Key brain regions crucial for classification, such as the hippocampus, lateral ventricles, and posterior cingulate cortex, are segmented using Attention Guided Generalized Intuitionistic Fuzzy C-Means Clustering (AG-GIFCMC). Feature extraction and classification on the segmented outputs are performed with RBAGCN-DGL and puzzle optimization, which categorize input images into Healthy Controls (HC), Early Mild Cognitive Impairment (EMCI), Late Mild Cognitive Impairment (LMCI), and Alzheimer's Disease (AD). To assess the effectiveness of the proposed method, we systematically examined structural modifications to the RBAGCN-DGL-PO-AC model through extensive ablation studies. Experimental findings highlight that RBAGCN-DGL-PO-AC achieves state-of-the-art performance, with 99.25% accuracy, outperforming existing methods including MSFFGCN_ADC, CNN_CAD_DBMRI, and FCNN_ADC, while reducing training time by 28.5% and increasing inference speed by 32.7%. Hence, the RBAGCN-DGL-PO-AC method enhances AD classification by integrating denoising, segmentation, and dynamic graph-based feature extraction, achieving superior accuracy and making it a valuable tool for clinical applications, ultimately improving patient outcomes and disease management.

Explainable Anatomy-Guided AI for Prostate MRI: Foundation Models and In Silico Clinical Trials for Virtual Biopsy-based Risk Assessment

Danial Khan, Zohaib Salahuddin, Yumeng Zhang, Sheng Kuang, Shruti Atul Mali, Henry C. Woodruff, Sina Amirrajab, Rachel Cavill, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Adrian Galiana-Bordera, Paula Jimenez Gomez, Luis Marti-Bonmati, Philippe Lambin

arXiv preprint · May 23, 2025
We present a fully automated, anatomically guided deep learning pipeline for prostate cancer (PCa) risk stratification using routine MRI. The pipeline integrates three key components: an nnU-Net module for segmenting the prostate gland and its zones on axial T2-weighted MRI; a classification module based on the UMedPT Swin Transformer foundation model, fine-tuned on 3D patches with optional anatomical priors and clinical data; and a VAE-GAN framework for generating counterfactual heatmaps that localize decision-driving image regions. The system was developed using 1,500 PI-CAI cases for segmentation and 617 biparametric MRIs with metadata from the CHAIMELEON challenge for classification (split into 70% training, 10% validation, and 20% testing). Segmentation achieved mean Dice scores of 0.95 (gland), 0.94 (peripheral zone), and 0.92 (transition zone). Incorporating gland priors improved AUC from 0.69 to 0.72, with a three-scale ensemble achieving top performance (AUC = 0.79, composite score = 0.76), outperforming the 2024 CHAIMELEON challenge winners. Counterfactual heatmaps reliably highlighted lesions within segmented regions, enhancing model interpretability. In a prospective multi-center in-silico trial with 20 clinicians, AI assistance increased diagnostic accuracy from 0.72 to 0.77 and Cohen's kappa from 0.43 to 0.53, while reducing review time per case by 40%. These results demonstrate that anatomy-aware foundation models with counterfactual explainability can enable accurate, interpretable, and efficient PCa risk assessment, supporting their potential use as virtual biopsies in clinical practice.
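
For reference, the Dice score used to report the segmentation results above is computed as follows (masks here are toy examples):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

pred  = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0]])
truth = np.array([[0, 1, 1, 1],
                  [0, 0, 1, 1]])
print(dice(pred, truth))  # 2*3 / (4 + 5) ≈ 0.667
```

A gland Dice of 0.95, as reported, means predicted and reference masks overlap almost perfectly by this measure.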

Detection, Classification, and Segmentation of Rib Fractures From CT Data Using Deep Learning Models: A Review of Literature and Pooled Analysis.

Den Hengst S, Borren N, Van Lieshout EMM, Doornberg JN, Van Walsum T, Wijffels MME, Verhofstad MHJ

pubmed logopapersMay 23 2025
Trauma-induced rib fractures are common injuries. The gold standard for diagnosing rib fractures is computed tomography (CT), but the sensitivity in the acute setting is low, and interpreting CT slices is labor-intensive. This has led to the development of new diagnostic approaches leveraging deep learning (DL) models. This systematic review and pooled analysis aimed to compare the performance of DL models in the detection, segmentation, and classification of rib fractures based on CT scans. A literature search was performed using various databases for studies describing DL models detecting, segmenting, or classifying rib fractures from CT data. Reported performance metrics included sensitivity, false-positive rate, F1-score, precision, accuracy, and mean average precision. A meta-analysis was performed on the sensitivity scores to compare the DL models with clinicians. Of the 323 identified records, 25 were included. Twenty-one studies reported on detection, four on segmentation, and ten on classification. Twenty studies had adequate data for meta-analysis. The gold standard labels were provided by clinicians who were radiologists and orthopedic surgeons. For detecting rib fractures, DL models had a higher sensitivity (86.7%; 95% CI: 82.6%-90.2%) than clinicians (75.4%; 95% CI: 68.1%-82.1%). In classification, the sensitivity of DL models for displaced rib fractures (97.3%; 95% CI: 95.6%-98.5%) was significantly better than that of clinicians (88.2%; 95% CI: 84.8%-91.3%). DL models for rib fracture detection and classification achieved promising results. With better sensitivities than clinicians for detecting and classifying displaced rib fractures, future work should focus on implementing DL models in daily clinical practice. Level III-systematic review and pooled analysis.
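
The pooling of per-study sensitivities can be sketched with a fixed-effect inverse-variance average on the logit scale; the review's actual meta-analytic model may differ, and the study counts below are invented:

```python
import math

def pooled_sensitivity(studies):
    """Fixed-effect inverse-variance pooling of sensitivities on the logit
    scale. Each study is a (true_positives, false_negatives) pair; the
    pooled logit is back-transformed to a sensitivity."""
    num = den = 0.0
    for tp, fn in studies:
        p = tp / (tp + fn)
        logit = math.log(p / (1 - p))
        var = 1 / tp + 1 / fn          # approximate variance of the logit
        num += logit / var             # weight = inverse variance
        den += 1 / var
    return 1 / (1 + math.exp(-num / den))

# Hypothetical (tp, fn) counts from three studies
print(round(pooled_sensitivity([(80, 20), (45, 5), (90, 15)]), 3))
```

Larger, higher-precision studies dominate the pooled estimate, which is the behaviour a random-effects variant would relax.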

How We Won the ISLES'24 Challenge by Preprocessing

Tianyi Ren, Juampablo E. Heras Rivera, Hitender Oswal, Yutong Pan, William Henry, Jacob Ruzevick, Mehmet Kurt

arXiv preprint · May 23, 2025
Stroke is among the top three causes of death worldwide, and accurate identification of stroke lesion boundaries is critical for diagnosis and treatment. Supervised deep learning methods have emerged as the leading solution for stroke lesion segmentation but require large, diverse, and annotated datasets. The ISLES'24 challenge addresses this need by providing longitudinal stroke imaging data, including CT scans taken on arrival to the hospital and follow-up MRI taken 2-9 days from initial arrival, with annotations derived from follow-up MRI. Importantly, models submitted to the ISLES'24 challenge are evaluated using only CT inputs, requiring prediction of lesion progression that may not be visible in CT scans for segmentation. Our winning solution shows that a carefully designed preprocessing pipeline including deep-learning-based skull stripping and custom intensity windowing is beneficial for accurate segmentation. Combined with a standard large residual nnU-Net architecture for segmentation, this approach achieves a mean test Dice of 28.5 with a standard deviation of 21.27.
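
One ingredient named above, intensity windowing, can be sketched as a clip-and-rescale of Hounsfield units; the window centre and width below are illustrative, not the authors' custom values:

```python
import numpy as np

def window_ct(hu, center, width):
    """Clip Hounsfield units to [center - width/2, center + width/2] and
    rescale to [0, 1], discarding contrast outside the window of interest."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

hu = np.array([-1000.0, 0.0, 40.0, 80.0, 500.0])   # air .. bone-ish
print(window_ct(hu, center=40, width=80))          # brain-style window
```

Windowing concentrates the network's input range on soft-tissue contrast, where early ischemic change is subtle.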

A Foundation Model Framework for Multi-View MRI Classification of Extramural Vascular Invasion and Mesorectal Fascia Invasion in Rectal Cancer

Yumeng Zhang, Zohaib Salahuddin, Danial Khan, Shruti Atul Mali, Henry C. Woodruff, Sina Amirrajab, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Luis Marti-Bonmati, Philippe Lambin

arXiv preprint · May 23, 2025
Background: Accurate MRI-based identification of extramural vascular invasion (EVI) and mesorectal fascia invasion (MFI) is pivotal for risk-stratified management of rectal cancer, yet visual assessment is subjective and vulnerable to inter-institutional variability. Purpose: To develop and externally evaluate a multicenter, foundation-model-driven framework that automatically classifies EVI and MFI on axial and sagittal T2-weighted MRI. Methods: This retrospective study used 331 pre-treatment rectal cancer MRI examinations from three European hospitals. After TotalSegmentator-guided rectal patch extraction, a self-supervised frequency-domain harmonization pipeline was trained to minimize scanner-related contrast shifts. Four classifiers were compared: ResNet50, SeResNet, the universal biomedical pretrained transformer (UMedPT) with a lightweight MLP head, and a logistic-regression variant using frozen UMedPT features (UMedPT_LR). Results: UMedPT_LR achieved the best EVI detection when axial and sagittal features were fused (AUC = 0.82; sensitivity = 0.75; F1 score = 0.73), surpassing the Chaimeleon Grand-Challenge winner (AUC = 0.74). The highest MFI performance was attained by UMedPT on axial harmonized images (AUC = 0.77), surpassing the Chaimeleon Grand-Challenge winner (AUC = 0.75). Frequency-domain harmonization improved MFI classification but variably affected EVI performance. Conventional CNNs (ResNet50, SeResNet) underperformed, especially in F1 score and balanced accuracy. Conclusion: These findings demonstrate that combining foundation model features, harmonization, and multi-view fusion significantly enhances diagnostic performance in rectal MRI.
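
One simple form of frequency-domain harmonization swaps the low-frequency amplitude band of a source patch with that of a reference while keeping the source's phase; the paper's self-supervised pipeline is more elaborate, and this sketch shows only the core idea on synthetic data:

```python
import numpy as np

def harmonize_lowfreq(src, ref, beta=0.1):
    """Replace the lowest-frequency FFT amplitudes of `src` with those of
    `ref`, preserving src's phase, so scanner-level contrast/brightness
    shifts move toward the reference while structure is retained."""
    fs, fr = np.fft.fft2(src), np.fft.fft2(ref)
    amp, phase = np.abs(fs), np.angle(fs)
    h, w = src.shape
    b = max(1, int(min(h, w) * beta))
    # low frequencies sit at the four corners of an unshifted FFT
    mask = np.zeros((h, w), bool)
    mask[:b, :b] = mask[:b, -b:] = mask[-b:, :b] = mask[-b:, -b:] = True
    amp[mask] = np.abs(fr)[mask]
    return np.fft.ifft2(amp * np.exp(1j * phase)).real

rng = np.random.default_rng(0)
src = rng.normal(100, 10, (32, 32))   # "scanner A" patch
ref = rng.normal(140, 10, (32, 32))   # "scanner B" patch
out = harmonize_lowfreq(src, ref)
print(round(src.mean()), round(out.mean()))  # mean pulled toward the reference
```

Because the DC term is in the swapped band, the harmonized patch inherits the reference's mean intensity exactly while higher-frequency anatomy stays source-derived.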

Pixels to Prognosis: Harmonized Multi-Region CT-Radiomics and Foundation-Model Signatures Across Multicentre NSCLC Data

Shruti Atul Mali, Zohaib Salahuddin, Danial Khan, Yumeng Zhang, Henry C. Woodruff, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Luis Marti-Bonmati, Philippe Lambin

arXiv preprint · May 23, 2025
Purpose: To evaluate the impact of harmonization and multi-region CT image feature integration on survival prediction in non-small cell lung cancer (NSCLC) patients, using handcrafted radiomics, pretrained foundation model (FM) features, and clinical data from a multicenter dataset. Methods: We analyzed CT scans and clinical data from 876 NSCLC patients (604 training, 272 test) across five centers. Features were extracted from the whole lung, tumor, mediastinal nodes, coronary arteries, and coronary artery calcium (CAC). Handcrafted radiomics and FM deep features were harmonized using ComBat, reconstruction kernel normalization (RKN), and RKN+ComBat. Regularized Cox models predicted overall survival; performance was assessed using the concordance index (C-index), 5-year time-dependent area under the curve (t-AUC), and hazard ratio (HR). SHapley Additive exPlanations (SHAP) values explained feature contributions. A consensus model used agreement across top region of interest (ROI) models to stratify patient risk. Results: TNM staging showed prognostic utility (C-index = 0.67; HR = 2.70; t-AUC = 0.85). The clinical + tumor radiomics model with ComBat achieved a C-index of 0.7552 and t-AUC of 0.8820. FM features (50-voxel cubes) combined with clinical data yielded the highest performance (C-index = 0.7616; t-AUC = 0.8866). An ensemble of all ROIs and FM features reached a C-index of 0.7142 and t-AUC of 0.7885. The consensus model, covering 78% of valid test cases, achieved a t-AUC of 0.92, sensitivity of 97.6%, and specificity of 66.7%. Conclusion: Harmonization and multi-region feature integration improve survival prediction in multicenter NSCLC data. Combining interpretable radiomics, FM features, and consensus modeling enables robust risk stratification across imaging centers.
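
The C-index used to compare the survival models above can be computed directly; this is a minimal sketch of Harrell's concordance index with a simple censoring rule, on made-up follow-up data:

```python
from itertools import combinations

def c_index(times, events, risks):
    """Harrell's concordance index: among comparable patient pairs, the
    fraction where the higher predicted risk belongs to the patient with
    the earlier observed event (risk ties count as 0.5)."""
    concordant = comparable = 0.0
    for i, j in combinations(range(len(times)), 2):
        if times[j] < times[i]:          # order so i has the shorter time
            i, j = j, i
        if not events[i] or times[i] == times[j]:
            continue                     # pair not comparable under censoring
        comparable += 1
        if risks[i] > risks[j]:
            concordant += 1
        elif risks[i] == risks[j]:
            concordant += 0.5
    return concordant / comparable

times  = [5, 8, 12, 3, 9]    # months of follow-up
events = [1, 1, 0, 1, 0]     # 1 = event observed, 0 = censored
risks  = [0.9, 0.4, 0.2, 0.8, 0.3]
print(c_index(times, events, risks))
```

A C-index of 0.5 is chance-level ordering; the paper's best model (0.7616) orders roughly three in four comparable pairs correctly.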

Deep Learning-Based Multimodal Feature Interaction-Guided Fusion: Enhancing the Evaluation of EGFR in Advanced Lung Adenocarcinoma.

Xu J, Feng B, Chen X, Wu F, Liu Y, Yu Z, Lu S, Duan X, Chen X, Li K, Zhang W, Dai X

PubMed · May 22, 2025
The aim of this study is to develop a deep learning-based multimodal feature interaction-guided fusion (DL-MFIF) framework that integrates macroscopic information from computed tomography (CT) images with microscopic information from whole-slide images (WSIs) to predict the epidermal growth factor receptor (EGFR) mutations of primary lung adenocarcinoma in patients with advanced-stage disease. Data from 396 patients with lung adenocarcinoma across two medical institutions were analyzed. The data from 243 cases were divided into a training set (n=145) and an internal validation set (n=98) in a 6:4 ratio, and data from an additional 153 cases from another medical institution were included as an external validation set. All cases included CT scan images and WSIs. To integrate multimodal information, we developed the DL-MFIF framework, which leverages deep learning techniques to capture the interactions between radiomic macrofeatures derived from CT images and microfeatures obtained from WSIs. Compared to other classification models, the DL-MFIF model achieved significantly higher area under the curve (AUC) values. Specifically, the model outperformed others on both the internal validation set (AUC=0.856, accuracy=0.750) and the external validation set (AUC=0.817, accuracy=0.708). Decision curve analysis (DCA) demonstrated that the model provided superior net benefits (range: 0.15-0.87). DeLong's test for external validation confirmed the statistical significance of the results (P<0.05). The DL-MFIF model demonstrated excellent performance in evaluating EGFR mutation status in patients with advanced lung adenocarcinoma. This model effectively aids radiologists in accurately classifying EGFR mutations in patients with primary lung adenocarcinoma, thereby improving treatment outcomes for this population.
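
The net benefit plotted in a decision curve analysis, as reported above, has a simple closed form; the predictions and labels below are made up for illustration:

```python
def net_benefit(probs, labels, pt):
    """Net benefit at threshold probability `pt`:
    TP/n - FP/n * pt/(1 - pt). Treating a patient as positive when the
    predicted probability is at least pt, benefits from true positives are
    traded against harms from false positives."""
    n = len(labels)
    tp = sum(1 for p, y in zip(probs, labels) if p >= pt and y == 1)
    fp = sum(1 for p, y in zip(probs, labels) if p >= pt and y == 0)
    return tp / n - fp / n * pt / (1 - pt)

probs  = [0.9, 0.8, 0.3, 0.7, 0.2, 0.6]
labels = [1,   1,   0,   1,   0,   0]
print(round(net_benefit(probs, labels, pt=0.5), 3))  # 3/6 - 1/6 * 1 = 0.333
```

Sweeping `pt` over a clinically plausible range and comparing against the treat-all and treat-none curves yields the net-benefit range the abstract reports.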