
Interpretable Transformer Models for rs-fMRI Epilepsy Classification and Biomarker Discovery

Jeyabose Sundar, A., Boerwinkle, V. L., Robinson Vimala, B., Leggio, O., Kazemi, M.

medRxiv preprint · Sep 4, 2025
Background: Automated interpretation of resting-state fMRI (rs-fMRI) for epilepsy diagnosis remains a challenge. We developed a regularized transformer that models parcel-wise spatial patterns and long-range temporal dynamics to classify epilepsy and generate interpretable, network-level candidate biomarkers. Methods: Inputs were Schaefer-200 parcel time series extracted after standardized preprocessing (fMRIPrep). The regularized transformer is an attention-based sequence model with learned positional encoding and multi-head self-attention, combined with fMRI-specific regularization (dropout, weight decay, gradient clipping) and augmentation to improve robustness on modest clinical cohorts. Training used stratified group 4-fold cross-validation on n=65 (30 epilepsy, 35 controls) with fMRI-specific augmentation (time-warping, adaptive noise, structured masking). We compared the transformer to seven baselines (MLP, 1D-CNN, LSTM, CNN-LSTM, GCN, GAT, Attention-Only). External validation used an independent set (10 UNC epilepsy patients, 10 controls). Biomarker discovery combined gradient-based attributions with parcel-wise statistics and connectivity contrasts. Results: On an illustrative best-performing fold, the transformer attained accuracy 0.77, sensitivity 0.83, specificity 0.88, F1-score 0.77, and AUC 0.76. Averaged cross-validation performance was lower but consistent with these findings. External testing yielded accuracy 0.60, AUC 0.64, specificity 0.80, and sensitivity 0.40. Attribution-guided analysis identified distributed, network-level candidate biomarkers concentrated in limbic, somatomotor, default-mode, and salience systems. Conclusions: A regularized transformer on parcel-level rs-fMRI can achieve strong within-fold discrimination and produce interpretable candidate biomarkers. Results are encouraging but preliminary; larger multi-site validation, stability testing, and multiple-comparison control are required prior to clinical translation.
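
Since the abstract describes the architecture only at a high level, the following is a minimal sketch of a parcel-level attention classifier of this kind, assuming PyTorch; the sequence length, model width, dropout, and pooling choice are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class ParcelTransformer(nn.Module):
    """Attention-based classifier over (time, parcel) rs-fMRI matrices."""
    def __init__(self, n_parcels=200, max_len=300, d_model=128,
                 n_heads=4, n_layers=2, n_classes=2, dropout=0.3):
        super().__init__()
        self.proj = nn.Linear(n_parcels, d_model)                  # embed each timepoint's parcel vector
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=256,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Sequential(nn.LayerNorm(d_model), nn.Dropout(dropout),
                                  nn.Linear(d_model, n_classes))

    def forward(self, x):                     # x: (batch, timepoints, parcels)
        h = self.proj(x) + self.pos[:, :x.size(1)]
        h = self.encoder(h)
        return self.head(h.mean(dim=1))       # temporal mean pooling -> class logits

model = ParcelTransformer()
logits = model(torch.randn(4, 200, 200))      # 4 subjects, 200 TRs, 200 Schaefer parcels
# Regularization analogous to the recipe described: weight decay plus gradient clipping.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```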

Deep Learning Based Multiomics Model for Risk Stratification of Postoperative Distant Metastasis in Colorectal Cancer.

Yao X, Han X, Huang D, Zheng Y, Deng S, Ning X, Yuan L, Ao W

PubMed · Sep 4, 2025
To develop deep learning-based multiomics models for predicting postoperative distant metastasis (DM) and evaluating survival prognosis in colorectal cancer (CRC) patients. This retrospective study included 521 CRC patients who underwent curative surgery at two centers. Preoperative CT and postoperative hematoxylin-eosin (HE) stained slides were collected. A total of 381 patients from Center 1 were split (7:3) into training and internal validation sets; 140 patients from Center 2 formed the independent external validation set. Patients were grouped based on DM status during follow-up. Radiological and pathological models were constructed using independent imaging and pathological predictors. Deep features were extracted with a ResNet-101 backbone to build deep learning radiomics (DLRS) and deep learning pathomics (DLPS) models. Two integrated models were developed: Nomogram 1 (radiological + DLRS) and Nomogram 2 (pathological + DLPS). CT-reported T (cT) stage (OR=2.00, P=0.006) and CT-reported N (cN) stage (OR=1.63, P=0.023) were identified as independent radiologic predictors for building the radiological model; pN stage (OR=1.91, P=0.003) and perineural invasion (OR=2.07, P=0.030) were identified as pathological predictors for building the pathological model. DLRS and DLPS incorporated 28 and 30 deep features, respectively. In the training set, the areas under the curve (AUC) for the radiological, pathological, DLRS, DLPS, Nomogram 1, and Nomogram 2 models were 0.657, 0.687, 0.931, 0.914, 0.938, and 0.930, respectively. DeLong's test showed DLRS, DLPS, and both nomograms significantly outperformed conventional models (P<.05). Kaplan-Meier analysis confirmed effective 3-year disease-free survival (DFS) stratification by the nomograms. Deep learning-based multiomics models provided high accuracy for postoperative DM prediction. Nomogram models enabled reliable DFS risk stratification in CRC patients.
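
As a rough illustration of the deep-feature step, the sketch below pulls 2048-dimensional features from a torchvision ResNet-101 with its classification head removed; the preprocessing constants are ImageNet defaults standing in for the study's actual pipeline, and the subsequent selection down to the reported 28-30 features is not shown.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V2)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop the final FC layer
feature_extractor.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_deep_features(image_path):
    """Return a (1, 2048) deep-feature vector for one CT slice or HE tile."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)          # (1, 3, 224, 224)
    return feature_extractor(x).flatten(1)    # global-average-pooled ResNet-101 features
```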

Convolutional neural network application for automated lung cancer detection on chest CT using Google AI Studio.

Aljneibi Z, Almenhali S, Lanca L

PubMed · Sep 3, 2025
This study aimed to evaluate the diagnostic performance of an artificial intelligence (AI)-enhanced model for detecting lung cancer on computed tomography (CT) images of the chest. It assessed diagnostic accuracy, sensitivity, specificity, and interpretative consistency across normal, benign, and malignant cases. An exploratory analysis was performed using the publicly available IQ-OTH/NCCD dataset, comprising 110 CT cases (55 normal, 15 benign, 40 malignant). A pre-trained convolutional neural network in Google AI Studio was fine-tuned using 25 training images and tested on a separate image from each case. Quantitative evaluation of diagnostic accuracy and qualitative content analysis of AI-generated reports were conducted to assess diagnostic patterns and interpretative behavior. The AI model achieved an overall accuracy of 75.5%, with a sensitivity of 74.5% and specificity of 76.4%. The area under the ROC curve (AUC) for all cases was 0.824 (95% CI: 0.745-0.897), indicating strong discriminative power. Malignant cases had the highest classification performance (AUC = 0.902), while benign cases were more challenging to classify (AUC = 0.615). Qualitative analysis showed the AI used consistent radiological terminology but demonstrated oversensitivity to ground-glass opacities, contributing to false positives in non-malignant cases. The AI model showed promising diagnostic potential, particularly in identifying malignancies. However, specificity limitations and interpretative errors in benign and normal cases underscore the need for human oversight and continued model refinement. AI-enhanced CT interpretation can improve efficiency in high-volume settings but should serve as a decision-support tool rather than a replacement for expert image review.
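
For readers who want to reproduce this style of evaluation, here is a small, generic scikit-learn sketch of the reported metrics (accuracy, sensitivity, specificity, and AUC with a percentile-bootstrap confidence interval); the labels, scores, and threshold are placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def summarize(y_true, y_score, threshold=0.5, n_boot=1000, seed=0):
    """Accuracy, sensitivity, specificity, AUC, and a percentile-bootstrap AUC CI."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sens = tp / (tp + fn)                       # true positive rate
    spec = tn / (tn + fp)                       # true negative rate
    acc = (tp + tn) / len(y_true)
    auc = roc_auc_score(y_true, y_score)
    rng = np.random.default_rng(seed)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) == 2:    # skip degenerate resamples
            boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "auc": auc, "auc_95ci": (lo, hi)}

# Example with placeholder labels/scores (0 = non-malignant, 1 = malignant):
print(summarize([0, 0, 1, 1, 1, 0], [0.2, 0.6, 0.7, 0.9, 0.4, 0.1]))
```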

Deep learning mammography-based breast cancer risk model, its serial change, and breast cancer mortality.

Shin S, Chang Y, Ryu S

PubMed · Sep 3, 2025
Although numerous breast cancer risk prediction models have been developed to categorize individuals by risk, a substantial gap persists in evaluating how well these models predict actual mortality outcomes. This study aimed to investigate the association between Mirai, a deep learning model for risk prediction based on mammography, and breast cancer-specific mortality in a large cohort of Korean women. This retrospective cohort study examined 124,653 cancer-free women aged ≥ 34 years who underwent mammography screening between 2009 and 2020. Participants were stratified into tertiles by Mirai risk scores and categorized into four groups based on risk changes over time. Cox proportional hazards regression models were used to evaluate the associations of both baseline Mirai scores and temporal risk changes with breast cancer-specific mortality. Over 1,075,177 person-years of follow-up, 31 breast cancer-related deaths occurred. The highest Mirai risk tertile showed significantly higher breast cancer-specific mortality than the lowest tertile (hazard ratio [HR], 5.34; 95% confidence interval [CI] 1.17-24.39; p for trend = 0.020). Temporal changes in Mirai scores were also associated with mortality risk: women who remained in the high-risk group (HR, 5.92; 95% CI 1.43-24.49) or moved from low to high risk (HR, 5.57; 95% CI 1.31-23.63) had higher mortality than those who remained in the low-risk group. The Mirai model, developed to predict breast cancer incidence, was significantly associated with breast cancer-specific mortality. Changes in Mirai risk scores over time were also linked to breast cancer-specific mortality, supporting AI-based risk models in guiding risk-stratified screening and prevention of breast cancer-related deaths.
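
The core analysis relates baseline risk-score tertiles to breast cancer-specific mortality with Cox regression; below is a schematic sketch using the lifelines package on synthetic placeholder data (the column names, follow-up distribution, and event rate are assumptions, not the cohort).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "mirai_score": rng.uniform(0.0, 1.0, n),                   # stand-in for the baseline Mirai risk score
    "followup_years": rng.exponential(8.0, n).clip(0.1, 12.0), # time to death or censoring
    "bc_death": rng.binomial(1, 0.02, n),                      # breast cancer-specific death indicator
})
df["risk_tertile"] = pd.qcut(df["mirai_score"], 3, labels=False)  # 0 = lowest, 2 = highest tertile

cph = CoxPHFitter()
cph.fit(df[["followup_years", "bc_death", "risk_tertile"]],
        duration_col="followup_years", event_col="bc_death")
cph.print_summary()  # exp(coef) is the hazard ratio per one-tertile increase (trend-style coding)
```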

MRI-based deep learning radiomics in predicting histological differentiation of oropharyngeal cancer: a multicenter cohort study.

Pan Z, Lu W, Yu C, Fu S, Ling H, Liu Y, Zhang X, Gong L

PubMed · Sep 3, 2025
The primary aim of this research was to create and rigorously assess a deep learning radiomics (DLR) framework utilizing magnetic resonance imaging (MRI) to forecast the histological differentiation grades of oropharyngeal cancer. This retrospective analysis encompassed 122 patients diagnosed with oropharyngeal cancer across three medical institutions in China. The participants were divided at random into two groups: a training cohort comprising 85 individuals and a test cohort of 37. Radiomics features derived from MRI scans, along with deep learning (DL) features, were meticulously extracted and carefully refined. These two sets of features were then integrated to build the DLR model, designed to assess the histological differentiation of oropharyngeal cancer. The model's predictive efficacy was gauged through the area under the receiver operating characteristic curve (AUC) and decision curve analysis (DCA). The DLR model demonstrated impressive performance, achieving strong AUC scores of 0.871 in the training cohort and 0.803 in the test cohort, outperforming both the standalone radiomics and DL models. Additionally, the DCA curve highlighted the significance of the DLR model in forecasting the histological differentiation of oropharyngeal cancer. The MRI-based DLR model demonstrated high predictive ability for histological differentiation of oropharyngeal cancer, which might be important for accurate preoperative diagnosis and clinical decision-making.
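
At a high level, a DLR model fuses handcrafted radiomics features with deep features before classification. The sketch below shows simple early fusion by concatenation with a logistic-regression head on synthetic arrays; the feature dimensions and classifier are assumptions standing in for the authors' model, with AUC used for evaluation as in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 122
radiomics = rng.normal(size=(n, 50))      # handcrafted MRI radiomics features (placeholder)
deep = rng.normal(size=(n, 128))          # CNN-derived deep features (placeholder)
y = rng.integers(0, 2, n)                 # histological differentiation grade, binarized here

X = np.hstack([radiomics, deep])          # early fusion by simple concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000, C=0.1))
clf.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```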

MetaPredictomics: A Comprehensive Approach to Predict Postsurgical Non-Small Cell Lung Cancer Recurrence Using Clinicopathologic, Radiomics, and Organomics Data.

Amini M, Hajianfar G, Salimi Y, Mansouri Z, Zaidi H

PubMed · Sep 3, 2025
Non-small cell lung cancer (NSCLC) is a complex disease characterized by diverse clinical, genetic, and histopathologic traits, necessitating personalized treatment approaches. While numerous biomarkers have been introduced for NSCLC prognostication, no single source of information can provide a comprehensive understanding of the disease. However, integrating biomarkers from multiple sources may offer a holistic view of the disease, enabling more accurate predictions. In this study, we present MetaPredictomics, a framework that integrates clinicopathologic data with PET/CT radiomics from the primary tumor and presumed healthy organs (referred to as "organomics") to predict postsurgical recurrence. A fully automated deep learning-based segmentation model was employed to delineate 19 organs, both affected (the whole lung and the affected lobe) and presumed healthy, from the CT images of presurgical PET/CT scans of 145 NSCLC patients sourced from a publicly available data set. Using PyRadiomics, 214 features (107 from CT, 107 from PET) were extracted from the gross tumor volume (GTV) and each segmented organ. In addition, a clinicopathologic feature set was constructed, incorporating clinical characteristics, histopathologic data, gene mutation status, conventional PET imaging biomarkers, and patients' treatment history. The GTV radiomics feature set, each of the organomics feature sets, and the clinicopathologic feature set were each fed to a glmboost-based time-to-event prediction model to establish the first-level models. The risk scores obtained from the first-level models were then used as inputs for meta models developed using a stacked ensemble approach. To optimize performance, we assessed meta models built on all combinations of first-level models with a concordance index (C-index) ≥0.6. The performance of all models was evaluated using the average C-index across a unique 3-fold cross-validation scheme for fair comparison. The clinicopathologic model outperformed the other first-level models with a C-index of 0.67, followed closely by the GTV radiomics model with a C-index of 0.65. Among the organomics models, the whole-lung and aorta models achieved the top performance with a C-index of 0.65, while 12 organomics models achieved C-indices of ≥0.6. Meta models significantly outperformed the first-level models, with the top 100 achieving C-indices between 0.703 and 0.731. The clinicopathologic, whole-lung, esophagus, pancreas, and GTV models were the most frequently present in the top 100 meta models, with frequencies of 98, 71, 69, 62, and 61, respectively. In this study, we highlighted the value of maximizing the use of medical imaging for NSCLC recurrence prognostication by incorporating data from various organs, rather than focusing solely on the tumor and its immediate surroundings. This multisource integration proved particularly beneficial in the meta models, where combining clinicopathologic data with tumor radiomics and organomics models significantly enhanced recurrence prediction.
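
The stacking step can be pictured schematically: risk scores from the first-level models become covariates of a meta survival model that is scored by the concordance index. glmboost is an R package, so the Cox meta-learner and synthetic data below (using lifelines) are stand-ins rather than the authors' pipeline.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n = 145
meta = pd.DataFrame({
    "clinicopathologic_risk": rng.normal(size=n),    # first-level risk scores (placeholders)
    "gtv_radiomics_risk": rng.normal(size=n),
    "whole_lung_risk": rng.normal(size=n),
    "time_to_recurrence": rng.exponential(24.0, n),  # months, synthetic
    "recurrence": rng.binomial(1, 0.4, n),           # 1 = recurred, 0 = censored
})

# Meta model: Cox regression over the stacked first-level risk scores.
cph = CoxPHFitter()
cph.fit(meta, duration_col="time_to_recurrence", event_col="recurrence")
risk = cph.predict_partial_hazard(meta)

# Concordance index (higher predicted risk should mean earlier recurrence, hence the minus sign).
print("meta-model C-index:",
      concordance_index(meta["time_to_recurrence"], -risk, meta["recurrence"]))
```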

AlzFormer: Video-based space-time attention model for early diagnosis of Alzheimer's disease.

Akan T, Akan S, Alp S, Ledbetter CR, Nobel Bhuiyan MA

PubMed · Sep 3, 2025
Early and accurate Alzheimer's disease (AD) diagnosis is critical for effective intervention, but it is still challenging due to neurodegeneration's slow and complex progression. Recent studies in brain imaging analysis have highlighted the crucial roles of deep learning techniques in computer-assisted interventions for diagnosing brain diseases. In this study, we propose AlzFormer, a novel deep learning framework based on a space-time attention mechanism, for multiclass classification of AD, MCI, and CN individuals using structural MRI scans. Unlike conventional deep learning models, we used spatiotemporal self-attention to model inter-slice continuity by treating T1-weighted MRI volumes as sequential inputs, where slices correspond to video frames. Our model was fine-tuned and evaluated using 1.5 T MRI scans from the ADNI dataset. To ensure anatomical consistency, all MRI volumes were pre-processed with skull stripping and spatial normalization to MNI space. AlzFormer achieved an overall accuracy of 94% on the test set, with balanced class-wise F1-scores (AD: 0.94, MCI: 0.99, CN: 0.98) and a macro-average AUC of 0.98. We also utilized attention map analysis to identify clinically significant patterns, particularly emphasizing subcortical structures and medial temporal regions implicated in AD. These findings demonstrate the potential of transformer-based architectures for robust and interpretable classification of brain disorders using structural MRI.
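
To make the "slices as video frames" idea concrete, here is a compact sketch of a divided space-time attention block in PyTorch, in which patch tokens first attend across slices (temporal) and then within each slice (spatial); the widths, head count, and token counts are illustrative assumptions, not the AlzFormer configuration.

```python
import torch
import torch.nn as nn

class DividedSpaceTimeBlock(nn.Module):
    """One divided space-time attention block over (batch, slices, patches, dim) tokens."""
    def __init__(self, dim=192, heads=3):
        super().__init__()
        self.t_norm = nn.LayerNorm(dim)
        self.t_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.s_norm = nn.LayerNorm(dim)
        self.s_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim * 4),
                                 nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, x):                         # x: (batch, slices, patches, dim)
        b, t, p, d = x.shape
        # Temporal attention: each patch position attends across slices.
        xt = x.permute(0, 2, 1, 3).reshape(b * p, t, d)
        h = self.t_norm(xt)
        xt = xt + self.t_attn(h, h, h, need_weights=False)[0]
        x = xt.reshape(b, p, t, d).permute(0, 2, 1, 3)
        # Spatial attention: patches attend within each slice.
        xs = x.reshape(b * t, p, d)
        h = self.s_norm(xs)
        xs = xs + self.s_attn(h, h, h, need_weights=False)[0]
        x = xs.reshape(b, t, p, d)
        return x + self.mlp(x)

block = DividedSpaceTimeBlock()
tokens = torch.randn(2, 32, 49, 192)   # 2 volumes, 32 slices ("frames"), 7x7 patches per slice
print(block(tokens).shape)             # torch.Size([2, 32, 49, 192])
```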

Resting-State Functional MRI: Current State, Controversies, Limitations, and Future Directions: AJR Expert Panel Narrative Review.

Vachha BA, Kumar VA, Pillai JJ, Shimony JS, Tanabe J, Sair HI

PubMed · Sep 3, 2025
Resting-state functional MRI (rs-fMRI), a promising method for interrogating different brain functional networks from a single MRI acquisition, is increasingly used in clinical presurgical and other pretherapeutic brain mapping. However, challenges in standardization of acquisition, preprocessing, and analysis methods across centers and variability in results interpretation complicate its clinical use. Additionally, inherent problems regarding reliability of language lateralization, interpatient variability of cognitive network representation, dynamic aspects of intranetwork and internetwork connectivity, and effects of neurovascular uncoupling on network detection still must be overcome. Although deep learning solutions and further methodologic standardization will help address these issues, rs-fMRI is still generally considered an adjunct to task-based fMRI (tb-fMRI) for clinical presurgical mapping. Nonetheless, in many clinical instances, rs-fMRI may offer valuable additional information that supplements tb-fMRI, especially if tb-fMRI is inadequate due to patient performance or other limitations. Future growth in clinical applications of rs-fMRI is anticipated as these challenges are increasingly addressed. This AJR Expert Panel Narrative Review summarizes the current state and emerging clinical utility of rs-fMRI, focusing on its role in presurgical mapping. Ongoing controversies and limitations in clinical applicability are presented, and future directions are discussed, including the developing role of rs-fMRI in neuromodulation treatment of various neurologic disorders.

RTGMFF: Enhanced fMRI-based Brain Disorder Diagnosis via ROI-driven Text Generation and Multimodal Feature Fusion

Junhao Jia, Yifei Sun, Yunyou Liu, Cheng Yang, Changmiao Wang, Feiwei Qin, Yong Peng, Wenwen Min

arXiv preprint · Sep 3, 2025
Functional magnetic resonance imaging (fMRI) is a powerful tool for probing brain function, yet reliable clinical diagnosis is hampered by low signal-to-noise ratios, inter-subject variability, and the limited frequency awareness of prevailing CNN- and Transformer-based models. Moreover, most fMRI datasets lack textual annotations that could contextualize regional activation and connectivity patterns. We introduce RTGMFF, a framework that unifies automatic ROI-level text generation with multimodal feature fusion for brain-disorder diagnosis. RTGMFF consists of three components: (i) ROI-driven fMRI text generation deterministically condenses each subject's activation, connectivity, age, and sex into reproducible text tokens; (ii) Hybrid frequency-spatial encoder fuses a hierarchical wavelet-mamba branch with a cross-scale Transformer encoder to capture frequency-domain structure alongside long-range spatial dependencies; and (iii) Adaptive semantic alignment module embeds the ROI token sequence and visual features in a shared space, using a regularized cosine-similarity loss to narrow the modality gap. Extensive experiments on the ADHD-200 and ABIDE benchmarks show that RTGMFF surpasses current methods in diagnostic accuracy, achieving notable gains in sensitivity, specificity, and area under the ROC curve. Code is available at https://github.com/BeistMedAI/RTGMFF.
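
The adaptive semantic alignment component can be illustrated with a small sketch: ROI-text and visual embeddings are projected into a shared space and pulled together by a regularized cosine-similarity loss. The projection heads, dimensions, and L2 penalty weight below are assumptions for demonstration, not the RTGMFF implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentHead(nn.Module):
    """Project text and visual features into a shared space and align them."""
    def __init__(self, text_dim=256, vis_dim=512, shared_dim=128, reg=1e-3):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.vis_proj = nn.Linear(vis_dim, shared_dim)
        self.reg = reg

    def forward(self, text_emb, vis_emb):
        t = F.normalize(self.text_proj(text_emb), dim=-1)
        v = F.normalize(self.vis_proj(vis_emb), dim=-1)
        cos = (t * v).sum(dim=-1)                  # per-sample cosine similarity
        align_loss = (1.0 - cos).mean()            # pull paired embeddings together
        l2 = sum(p.pow(2).sum() for p in self.parameters())
        return align_loss + self.reg * l2          # regularized alignment objective

head = AlignmentHead()
loss = head(torch.randn(8, 256), torch.randn(8, 512))  # 8 paired ROI-text / visual embeddings
print(float(loss))
```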

Edge-centric Brain Connectome Representations Reveal Increased Brain Functional Diversity of Reward Circuit in Patients with Major Depressive Disorder.

Qin K, Ai C, Zhu P, Xiang J, Chen X, Zhang L, Wang C, Zou L, Chen F, Pan X, Wang Y, Gu J, Pan N, Chen W

PubMed · Sep 3, 2025
Major depressive disorder (MDD) has been increasingly understood as a disorder of network-level functional dysconnectivity. However, previous brain connectome studies have primarily relied on node-centric approaches, neglecting critical edge-edge interactions that may capture essential features of network dysfunction. This study included resting-state functional MRI data from 838 MDD patients and 881 healthy controls (HC) across 23 sites. We applied a novel edge-centric connectome model to estimate edge functional connectivity and identify overlapping network communities. Regional functional diversity was quantified via normalized entropy based on community overlap patterns. Neurobiological decoding was performed to map brain-wide relationships between functional diversity alterations and patterns of gene expression and neurotransmitter distribution. Comparative machine learning analyses further evaluated the diagnostic utility of edge-centric versus node-centric connectome representations. Compared with HC, MDD patients exhibited significantly increased functional diversity within the prefrontal-striatal-thalamic reward circuit. Neurobiological decoding analysis revealed that functional diversity alterations in MDD were spatially associated with transcriptional patterns enriched for inflammatory processes, as well as distribution of 5-HT1B receptors. Machine learning analyses demonstrated superior classification performance of edge-centric models over traditional node-centric approaches in distinguishing MDD patients from HC at the individual level. Our findings highlighted that abnormal functional diversity within the reward processing system might underlie multi-level neurobiological mechanisms of MDD. The edge-centric connectome approach offers a valuable tool for identifying disease biomarkers, characterizing individual variation and advancing current understanding of complex network configuration in psychiatric disorders.
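
For orientation, the edge-centric construction can be sketched in a few lines: an edge time series is the element-wise product of two z-scored regional signals, edge functional connectivity is the correlation among those edge series, and regional functional diversity can be summarized as a normalized entropy over community affiliations. The synthetic data and the entropy definition below are illustrative assumptions; overlapping community detection itself is omitted.

```python
import numpy as np

def edge_time_series(node_ts):
    """node_ts: (timepoints, nodes) array of regional BOLD signals."""
    z = (node_ts - node_ts.mean(0)) / node_ts.std(0)
    iu, ju = np.triu_indices(node_ts.shape[1], k=1)
    return z[:, iu] * z[:, ju]                    # (timepoints, n_edges) co-fluctuation series

def normalized_entropy(affiliation_counts):
    """Diversity of a region's participation across >= 2 overlapping communities."""
    p = np.asarray(affiliation_counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(len(affiliation_counts))

bold = np.random.default_rng(0).normal(size=(200, 30))  # synthetic: 200 TRs x 30 regions
ets = edge_time_series(bold)
edge_fc = np.corrcoef(ets.T)                             # edge-by-edge functional connectivity
print(ets.shape, edge_fc.shape)
print(normalized_entropy([10, 5, 3, 0]))                 # higher = more diverse community overlap
```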