Multi-modal Integration Analysis of Alzheimer's Disease Using Large Language Models and Knowledge Graphs

Kanan Kiguchi, Yunhao Tu, Katsuhiro Ajito, Fady Alnajjar, Kazuyuki Murase

arXiv preprint, May 21, 2025
We propose a novel framework for integrating fragmented multi-modal data in Alzheimer's disease (AD) research using large language models (LLMs) and knowledge graphs. While traditional multimodal analysis requires matched patient IDs across datasets, our approach demonstrates population-level integration of MRI, gene expression, biomarkers, EEG, and clinical indicators from independent cohorts. Statistical analysis identified significant features in each modality, which were connected as nodes in a knowledge graph. LLMs then analyzed the graph to extract potential correlations and generate hypotheses in natural language. This approach revealed several novel relationships, including a potential pathway linking metabolic risk factors to tau protein abnormalities via neuroinflammation (r>0.6, p<0.001), and unexpected correlations between frontal EEG channels and specific gene expression profiles (r=0.42-0.58, p<0.01). Cross-validation with independent datasets confirmed the robustness of major findings, with consistent effect sizes across cohorts (variance <15%). The reproducibility of these findings was further supported by expert review (Cohen's κ=0.82) and computational validation. Our framework enables cross-modal integration at a conceptual level without requiring patient ID matching, offering new possibilities for understanding AD pathology through fragmented data reuse and generating testable hypotheses for future research.
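
The graph-construction step the abstract describes can be sketched in a few lines: significant features become nodes, population-level correlations become weighted edges, and the graph is serialized into plain text for an LLM to reason over. All feature names and correlation values below are illustrative placeholders, not the paper's findings.

```python
from collections import defaultdict

# Illustrative cross-modal correlations; names and r-values are invented
# placeholders, not the paper's significant features or results.
edges = [
    ("hippocampal_volume", "CSF_tau", 0.61),
    ("frontal_theta_power", "APOE_e4_expression", 0.42),
]

graph = defaultdict(list)
for a, b, r in edges:
    graph[a].append((b, r))
    graph[b].append((a, r))  # undirected: store both directions

def to_prompt(graph):
    """Serialize the graph as natural-language triples for an LLM prompt."""
    lines, seen = [], set()
    for a, nbrs in graph.items():
        for b, r in nbrs:
            key = frozenset((a, b))
            if key not in seen:       # emit each undirected edge once
                seen.add(key)
                lines.append(f"{a} correlates with {b} (r={r}).")
    return "\n".join(lines)

print(to_prompt(graph))
```

The serialized triples would then be handed to an LLM for hypothesis generation; the actual prompting strategy is not described in the abstract.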

Predictive machine learning and multimodal data to develop highly sensitive, composite biomarkers of disease progression in Friedreich ataxia.

Saha S, Corben LA, Selvadurai LP, Harding IH, Georgiou-Karistianis N

PubMed paper, May 21, 2025
Friedreich ataxia (FRDA) is a rare, inherited progressive movement disorder for which there is currently no cure. The field urgently requires more sensitive, objective, and clinically relevant biomarkers to enhance the evaluation of treatment efficacy in clinical trials and to speed up the process of drug discovery. This study pioneers the development of clinically relevant, multidomain, fully objective composite biomarkers of disease severity and progression, using multimodal neuroimaging and background data (i.e., demographic, disease history, genetics). Data from 31 individuals with FRDA and 31 controls from a longitudinal multimodal natural history study, IMAGE-FRDA, were included. Using an elastic-net predictive machine learning (ML) regression model, we derived a weighted combination of background, structural MRI, diffusion MRI, and quantitative susceptibility mapping (QSM) measures that predicted the Friedreich ataxia rating scale (FARS) with high accuracy (R<sup>2</sup> = 0.79, root mean square error (RMSE) = 13.19). This composite also exhibited strong sensitivity to disease progression over two years (Cohen's d = 1.12), outperforming the sensitivity of the FARS score alone (d = 0.88). The approach was validated using the Scale for the Assessment and Rating of Ataxia (SARA), demonstrating the potential and robustness of ML-derived composites to surpass individual biomarkers and act as complementary or surrogate markers of disease severity and progression. Further validation, refinement, and the integration of additional data modalities will open up new opportunities for translating these biomarkers into clinical practice and clinical trials for FRDA, as well as other rare neurodegenerative diseases.
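
The core modeling step, an elastic-net regression mapping multimodal features to a clinical score, can be sketched with scikit-learn. The feature matrix and target below are simulated stand-ins, not IMAGE-FRDA data, and the hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the multimodal predictors (demographics,
# structural MRI, diffusion MRI, QSM): 62 subjects x 20 features.
X = rng.normal(size=(62, 20))
true_w = np.zeros(20)
true_w[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]          # only 5 informative features
y = X @ true_w + rng.normal(scale=0.5, size=62)   # proxy for a FARS-like score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Elastic net combines L1 (sparsity) and L2 (shrinkage) penalties,
# which suits many-correlated-features, few-subjects settings like this.
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
print(f"held-out R^2 = {r2:.2f}")
```

The fitted coefficients play the role of the "weighted combination" the abstract describes; in the study these weights define the composite biomarker.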

An Exploratory Approach Towards Investigating and Explaining Vision Transformer and Transfer Learning for Brain Disease Detection

Shuvashis Sarker, Shamim Rahim Refat, Faika Fairuj Preotee, Shifat Islam, Tashreef Muhammad, Mohammad Ashraful Hoque

arXiv preprint, May 21, 2025
The brain is a highly complex organ that manages many important tasks, including movement, memory and thinking. Brain-related conditions, like tumors and degenerative disorders, can be hard to diagnose and treat. Magnetic Resonance Imaging (MRI) serves as a key tool for identifying these conditions, offering high-resolution images of brain structures. Despite this, interpreting MRI scans can be complicated. This study tackles this challenge by conducting a comparative analysis of Vision Transformer (ViT) and Transfer Learning (TL) models such as VGG16, VGG19, ResNet50V2, and MobileNetV2 for classifying brain diseases using MRI data from a Bangladesh-based dataset. ViTs, known for their ability to capture global relationships in images, are particularly effective for medical imaging tasks. Transfer learning helps to mitigate data constraints by fine-tuning pre-trained models. Furthermore, Explainable AI (XAI) methods such as GradCAM, GradCAM++, LayerCAM, ScoreCAM, and Faster-ScoreCAM are employed to interpret model predictions. The results demonstrate that ViT surpasses the transfer learning models, achieving a classification accuracy of 94.39%. The integration of XAI methods enhances model transparency, offering crucial insights to aid medical professionals in diagnosing brain diseases with greater precision.
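
Of the XAI methods listed, GradCAM is simple enough to sketch directly: the channel weights are the globally averaged gradients of the target score with respect to a convolutional layer's activations, and the heatmap is the ReLU of the weighted activation sum. The tensors below are random stand-ins for what a real backbone would provide via autograd hooks.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap for one image.

    activations, gradients: arrays of shape (K, H, W) taken from a
    convolutional layer and the gradient of the class score w.r.t. it.
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: GAP of grads
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over K
    cam = np.maximum(cam, 0)                          # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam

# Toy example with made-up tensors; a real pipeline would hook these
# out of a CNN (or adapt the idea to ViT attention/token gradients).
rng = np.random.default_rng(0)
A = rng.random((8, 7, 7))
G = rng.normal(size=(8, 7, 7))
heatmap = grad_cam(A, G)
print(heatmap.shape)  # (7, 7)
```

The variants named in the abstract (GradCAM++, ScoreCAM, etc.) differ mainly in how the channel weights are computed; the ReLU-weighted-sum skeleton stays the same.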

Reconsider the Template Mesh in Deep Learning-based Mesh Reconstruction

Fengting Zhang, Boxu Liang, Qinghao Liu, Min Liu, Xiang Chen, Yaonan Wang

arXiv preprint, May 21, 2025
Mesh reconstruction is a cornerstone process across various applications, including in-silico trials, digital twins, surgical planning, and navigation. Recent advancements in deep learning have notably enhanced mesh reconstruction speeds. Yet, traditional methods predominantly rely on deforming a standardised template mesh for individual subjects, which overlooks unique anatomical variations between subjects and may compromise the fidelity of the reconstructions. In this paper, we propose an adaptive-template-based mesh reconstruction network (ATMRN), which generates adaptive templates from the given images for the subsequent deformation, moving beyond the constraints of a singular, fixed template. Our approach, validated on cortical magnetic resonance (MR) images from the OASIS dataset, sets a new benchmark in voxel-to-cortex mesh reconstruction, achieving an average symmetric surface distance of 0.267 mm across four cortical structures. Our proposed method is generic and can be easily transferred to other image modalities and anatomical structures.
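
The reported metric, average symmetric surface distance, can be approximated at the vertex level with a k-d tree. This is a simplification: exact surface distances would also sample points on mesh faces, and the point clouds below are random, not OASIS meshes.

```python
import numpy as np
from scipy.spatial import cKDTree

def assd(points_a, points_b):
    """Average symmetric surface distance between two vertex sets (mm).

    Averages nearest-neighbor distances in both directions, so neither
    surface is privileged.
    """
    d_ab = cKDTree(points_b).query(points_a)[0]  # each A vertex -> nearest B
    d_ba = cKDTree(points_a).query(points_b)[0]  # each B vertex -> nearest A
    return (d_ab.sum() + d_ba.sum()) / (len(points_a) + len(points_b))

pts = np.random.default_rng(0).random((100, 3))
# Identical clouds give 0; a uniform 0.1 shift along one axis gives a value
# of at most 0.1, since each vertex has its own shifted copy nearby.
print(assd(pts, pts), assd(pts, pts + [0.1, 0.0, 0.0]))
```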

Deep learning-based radiomics and machine learning for prognostic assessment in IDH-wildtype glioblastoma after maximal safe surgical resection: a multicenter study.

Liu J, Jiang S, Wu Y, Zou R, Bao Y, Wang N, Tu J, Xiong J, Liu Y, Li Y

PubMed paper, May 20, 2025
Glioblastoma (GBM) is a highly aggressive brain tumor with poor prognosis. This study aimed to construct and validate a radiomics-based machine learning model for predicting overall survival (OS) in IDH-wildtype GBM after maximal safe surgical resection using magnetic resonance imaging. A total of 582 patients were retrospectively enrolled, comprising 301 in the training cohort, 128 in the internal validation cohort, and 153 in the external validation cohort. Volumes of interest (VOIs) from contrast-enhanced T1-weighted imaging (CE-T1WI) were segmented into three regions: contrast-enhancing tumor, necrotic non-enhancing core, and peritumoral edema using a ResNet-based segmentation network. A total of 4,227 radiomic features were extracted and filtered using LASSO-Cox regression to identify signatures. The prognostic model was constructed using the Mime prediction framework, categorizing patients into high- and low-risk groups based on the median OS. Model performance was assessed using the concordance index (CI) and Kaplan-Meier survival analysis. Independent prognostic factors were identified through multivariable Cox regression analysis, and a nomogram was developed for individualized risk assessment. The Step Cox [backward] + RSF model achieved CIs of 0.89, 0.81, and 0.76 in the training, internal, and external validation cohorts. Log-rank tests demonstrated significant survival differences between high- and low-risk groups across all cohorts (P < 0.05). Multivariate Cox analysis identified age (HR: 1.022; 95% CI: 0.979, 1.009, P < 0.05), KPS score (HR: 0.970, 95% CI: 0.960, 0.978, P < 0.05), rad-scores of the necrotic non-enhancing core (HR: 8.164; 95% CI: 2.439, 27.331, P < 0.05), and peritumoral edema (HR: 3.748; 95% CI: 1.212, 11.594, P < 0.05) as independent predictors of OS. A nomogram integrating these predictors provided individualized risk assessment.
This deep learning segmentation-based radiomics model demonstrated robust performance in predicting OS in GBM after maximal safe surgical resection. By incorporating radiomic signatures and advanced machine learning algorithms, it offers a non-invasive tool for personalized prognostic assessment and supports clinical decision-making.
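
The concordance index used to assess the model can be computed from scratch. Below is a minimal Harrell's C implementation on toy survival data (illustrative values, not the study's cohorts): a pair is comparable when the subject with the shorter time had an observed event, and concordant when that subject also carries the higher predicted risk.

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C: fraction of comparable pairs ordered correctly.

    time:  survival or censoring times
    event: 1 if the event (death) was observed, 0 if censored
    risk:  model-predicted risk scores (higher = worse prognosis)
    Ties in risk count as half-concordant.
    """
    n_conc, n_comp = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:  # comparable pair
                n_comp += 1
                if risk[i] > risk[j]:
                    n_conc += 1.0
                elif risk[i] == risk[j]:
                    n_conc += 0.5
    return n_conc / n_comp

# Toy data: risk perfectly anti-correlated with survival time -> C = 1.0.
t = np.array([2.0, 5.0, 8.0, 11.0])
e = np.array([1, 1, 0, 1])
r = np.array([0.9, 0.6, 0.4, 0.1])
print(concordance_index(t, e, r))  # 1.0
```

A C of 0.5 is chance-level ordering; the study's 0.76-0.89 range indicates strong risk discrimination on this scale.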

DCSLK: Combined Large Kernel Shared Convolutional Model with Dynamic Channel Sampling.

Li Z, Luo S, Li H, Li Y

PubMed paper, May 20, 2025
This study centers on the competition between Convolutional Neural Networks (CNNs) with large convolutional kernels and Vision Transformers in computer vision, examining the parameter counts and computational complexity that stem from large convolutional kernels. Even though kernel sizes have been extended up to 51×51, performance gains have plateaued, and striped convolution incurs a performance degradation. Inspired by the hierarchical visual processing mechanism in humans, this work introduces a shared parameter mechanism for large convolutional kernels. It combines the expanded receptive field of large kernels with the fine-grained feature extraction of small kernels. To address the surging number of parameters, a parameter sharing mechanism is employed, featuring fine-grained processing in the central region of the convolutional kernel and wide-ranging parameter sharing in the periphery. This not only curtails the parameter count and mitigates model complexity but also preserves the model's capacity to capture extensive spatial relationships. Additionally, in light of the spatial feature information loss and increased memory access during the 1×1 convolutional channel compression phase, the study further proposes a dynamic channel sampling approach, which markedly improves the accuracy of tumor subregion segmentation. To validate the proposed methodology, a comprehensive evaluation was conducted on three brain tumor segmentation datasets: BraTS2020, BraTS2024, and Medical Segmentation Decathlon Brain 2018.
The experimental results show that the proposed model surpasses current mainstream ConvNet and Transformer architectures across all performance metrics, offering novel research perspectives and techniques for medical image segmentation.
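
The central idea, independent fine-grained weights in the kernel's center with a small shared parameter set tiled over the periphery, can be sketched in one dimension. This is a simplified illustration of the parameter-sharing concept, not the paper's exact layout; all weight values are arbitrary.

```python
import numpy as np

def build_shared_kernel(center, shared, size=51):
    """Assemble a large 1-D kernel from few parameters.

    `center` holds independent weights for the kernel's middle taps;
    the periphery cycles through the small `shared` array, so the
    parameter count is len(center) + len(shared) rather than `size`.
    """
    k = np.empty(size)
    c = len(center)
    start = (size - c) // 2
    k[start:start + c] = center                            # unique central taps
    periph = [shared[i % len(shared)] for i in range(size - c)]
    k[:start] = periph[:start]                             # left periphery
    k[start + c:] = periph[start:]                         # right periphery
    return k

center = np.array([0.5, 1.0, 0.5])  # hypothetical fine-grained center weights
shared = np.array([0.1, 0.2])       # two parameters cover all 48 peripheral taps
kernel = build_shared_kernel(center, shared, size=51)
print(kernel.size, len(center) + len(shared))  # 51 taps from 5 parameters
```

The same trade persists in 2-D: a 51×51 kernel's receptive field is kept while the learnable parameters scale with the small central patch plus the shared set.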

End-to-end Cortical Surface Reconstruction from Clinical Magnetic Resonance Images

Jesper Duemose Nielsen, Karthik Gopinath, Andrew Hoopes, Adrian Dalca, Colin Magdamo, Steven Arnold, Sudeshna Das, Axel Thielscher, Juan Eugenio Iglesias, Oula Puonti

arXiv preprint, May 20, 2025
Surface-based cortical analysis is valuable for a variety of neuroimaging tasks, such as spatial normalization, parcellation, and gray matter (GM) thickness estimation. However, most tools for estimating cortical surfaces work exclusively on scans with at least 1 mm isotropic resolution and are tuned to a specific magnetic resonance (MR) contrast, often T1-weighted (T1w). This precludes application to most clinical MR scans, which are very heterogeneous in terms of contrast and resolution. Here, we use synthetic domain-randomized data to train the first neural network for explicit estimation of cortical surfaces from scans of any contrast and resolution, without retraining. Our method deforms a template mesh to the white matter (WM) surface, which guarantees topological correctness. This mesh is further deformed to estimate the GM surface. We compare our method to recon-all-clinical (RAC), an implicit surface reconstruction method which is currently the only other tool capable of processing heterogeneous clinical MR scans, on ADNI and a large clinical dataset (n=1,332). We show an approximately 50% reduction in cortical thickness error (from 0.50 to 0.24 mm) with respect to RAC and better recovery of the aging-related cortical thinning patterns detected by FreeSurfer on high-resolution T1w scans. Our method enables fast and accurate surface reconstruction of clinical scans, allowing studies (1) with sample sizes far beyond what is feasible in a research setting, and (2) of clinical populations that are difficult to enroll in research studies. The code is publicly available at https://github.com/simnibs/brainnet.
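
The synthetic domain-randomization strategy mentioned can be illustrated in miniature: draw a random mean intensity per tissue label, add noise, and blur, so the same anatomy appears under an arbitrary contrast on every call. This is a toy 2-D sketch of the general idea, not the authors' actual generation pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_image(labels, rng, sigma=1.0):
    """Contrast-randomized synthesis from an integer label map.

    Each tissue label gets a random mean intensity; Gaussian noise and
    smoothing roughly imitate scanner noise and partial-volume effects.
    Training on endless random contrasts is what frees a network from
    any one MR sequence.
    """
    means = rng.uniform(0, 255, size=labels.max() + 1)  # one mean per label
    img = means[labels] + rng.normal(scale=5.0, size=labels.shape)
    return gaussian_filter(img, sigma)

rng = np.random.default_rng(0)
labels = np.zeros((32, 32), dtype=int)
labels[8:24, 8:24] = 1        # hypothetical "tissue" blob inside "background"
a = synth_image(labels, rng)
b = synth_image(labels, rng)  # same anatomy, different random contrast
print(a.shape)
```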

NOVA: A Benchmark for Anomaly Localization and Clinical Reasoning in Brain MRI

Cosmin I. Bercea, Jun Li, Philipp Raffler, Evamaria O. Riedel, Lena Schmitzer, Angela Kurz, Felix Bitzer, Paula Roßmüller, Julian Canisius, Mirjam L. Beyrle, Che Liu, Wenjia Bai, Bernhard Kainz, Julia A. Schnabel, Benedikt Wiestler

arXiv preprint, May 20, 2025
In many real-world applications, deployed models encounter inputs that differ from the data seen during training. Out-of-distribution detection identifies whether an input stems from an unseen distribution, while open-world recognition flags such inputs to ensure the system remains robust as ever-emerging, previously unknown categories appear and must be addressed without retraining. Foundation and vision-language models are pre-trained on large and diverse datasets with the expectation of broad generalization across domains, including medical imaging. However, benchmarking these models on test sets with only a few common outlier types silently collapses the evaluation back to a closed-set problem, masking failures on rare or truly novel conditions encountered in clinical use. We therefore present NOVA, a challenging, real-life, evaluation-only benchmark of ~900 brain MRI scans that span 281 rare pathologies and heterogeneous acquisition protocols. Each case includes rich clinical narratives and double-blinded expert bounding-box annotations. Together, these enable joint assessment of anomaly localisation, visual captioning, and diagnostic reasoning. Because NOVA is never used for training, it serves as an extreme stress-test of out-of-distribution generalisation: models must bridge a distribution gap both in sample appearance and in semantic space. Baseline results with leading vision-language models (GPT-4o, Gemini 2.0 Flash, and Qwen2.5-VL-72B) reveal substantial performance drops across all tasks, establishing NOVA as a rigorous testbed for advancing models that can detect, localize, and reason about truly unknown anomalies.
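
A common baseline for the out-of-distribution detection task NOVA stresses is the maximum softmax probability (MSP): peaked, confident predictions score high, while flat predictions on unfamiliar inputs score low and flag a likely-unseen case. The logits below are hypothetical toy numbers, not NOVA results.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum-softmax-probability OOD baseline: low score = likely OOD."""
    return softmax(logits).max(axis=-1)

# Hypothetical logits: a familiar finding yields a peaked distribution,
# a rare pathology a nearly flat one.
in_dist = np.array([8.0, 0.5, 0.3])
out_dist = np.array([1.1, 1.0, 0.9])
print(msp_score(in_dist), msp_score(out_dist))
```

Thresholding this score turns any classifier into a crude open-set detector, which is exactly the kind of baseline a benchmark like NOVA is designed to expose the limits of.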

Dynadiff: Single-stage Decoding of Images from Continuously Evolving fMRI

Marlène Careil, Yohann Benchetrit, Jean-Rémi King

arXiv preprint, May 20, 2025
Brain-to-image decoding has been recently propelled by the progress in generative AI models and the availability of large ultra-high-field functional Magnetic Resonance Imaging (fMRI) datasets. However, current approaches depend on complicated multi-stage pipelines and preprocessing steps that typically collapse the temporal dimension of brain recordings, thereby limiting time-resolved brain decoders. Here, we introduce Dynadiff (Dynamic Neural Activity Diffusion for Image Reconstruction), a new single-stage diffusion model designed for reconstructing images from dynamically evolving fMRI recordings. Our approach offers three main contributions. First, Dynadiff simplifies training as compared to existing approaches. Second, our model outperforms state-of-the-art models on time-resolved fMRI signals, especially on high-level semantic image reconstruction metrics, while remaining competitive on preprocessed fMRI data that collapse time. Third, this approach allows a precise characterization of the evolution of image representations in brain activity. Overall, this work lays the foundation for time-resolved brain-to-image decoding.
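
The forward (noising) half of a diffusion model like Dynadiff follows the standard DDPM closed form, x_t = sqrt(ᾱ_t)·x_0 + sqrt(1-ᾱ_t)·ε. The sketch below shows only this generic corruption process with a standard linear schedule; the paper's actual contribution, conditioning the learned reverse process on time-resolved fMRI, is not reproduced here.

```python
import numpy as np

def forward_diffuse(x0, t, alphas_bar, rng):
    """Sample x_t from the DDPM forward process in closed form."""
    eps = rng.normal(size=x0.shape)  # fresh Gaussian noise
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # standard linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)     # cumulative signal-retention factor

rng = np.random.default_rng(0)
x0 = rng.normal(size=(16, 16))           # stand-in for a target image
x_late = forward_diffuse(x0, T - 1, alphas_bar, rng)
print(alphas_bar[-1])                    # tiny: late steps are near-pure noise
```

A trained reverse model would iterate from such near-pure noise back to an image; in Dynadiff that denoiser is additionally driven by the evolving fMRI signal.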