
SAMba-UNet: Synergizing SAM2 and Mamba in UNet with Heterogeneous Aggregation for Cardiac MRI Segmentation

Guohao Huo, Ruiting Dai, Hao Tang

arXiv preprint · May 22 2025
To address the challenge of complex pathological feature extraction in automated cardiac MRI segmentation, this study proposes an innovative dual-encoder architecture named SAMba-UNet. The framework achieves cross-modal feature collaborative learning by integrating the vision foundation model SAM2, the state-space model Mamba, and the classical UNet. To mitigate domain discrepancies between medical and natural images, a Dynamic Feature Fusion Refiner is designed, which enhances small lesion feature extraction through multi-scale pooling and a dual-path calibration mechanism across channel and spatial dimensions. Furthermore, a Heterogeneous Omni-Attention Convergence Module (HOACM) is introduced, combining global contextual attention with branch-selective emphasis mechanisms to effectively fuse SAM2's local positional semantics and Mamba's long-range dependency modeling capabilities. Experiments on the ACDC cardiac MRI dataset demonstrate that the proposed model achieves a Dice coefficient of 0.9103 and an HD95 boundary error of 1.0859 mm, significantly outperforming existing methods, particularly in boundary localization for complex pathological structures such as right ventricular anomalies. This work provides an efficient and reliable solution for automated cardiac disease diagnosis, and the code will be open-sourced.
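The Dice coefficient reported above measures the overlap between a predicted and a ground-truth segmentation mask. A minimal NumPy sketch of the metric (not the paper's code; the toy mask is illustrative):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Identical masks give Dice ≈ 1.0; disjoint masks give 0.0.
mask = np.array([[0, 1], [1, 1]])
print(dice_coefficient(mask, mask))
```

HD95, the complementary boundary metric quoted alongside Dice, instead measures the 95th-percentile distance between the two mask surfaces, which is why it is reported in millimetres.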

CMRINet: Joint Groupwise Registration and Segmentation for Cardiac Function Quantification from Cine-MRI

Mohamed S. Elmahdy, Marius Staring, Patrick J. H. de Koning, Samer Alabed, Mahan Salehi, Faisal Alandejani, Michael Sharkey, Ziad Aldabbagh, Andrew J. Swift, Rob J. van der Geest

arXiv preprint · May 22 2025
Accurate and efficient quantification of cardiac function is essential for estimating the prognosis of cardiovascular diseases (CVDs). One of the most commonly used metrics for evaluating cardiac pumping performance is left ventricular ejection fraction (LVEF). However, LVEF can be affected by factors such as inter-observer variability and varying preload and afterload conditions, which reduce its reproducibility. Additionally, cardiac dysfunction does not always manifest as alterations in LVEF, for example in heart failure and cardiotoxicity. Alternative measures that provide a relatively load-independent quantitative assessment of myocardial contractility are myocardial strain and strain rate. Using LVEF in combination with myocardial strain yields a more complete description of cardiac function. Automated estimation of LVEF and other volumetric measures from cine-MRI sequences can be achieved with segmentation models, while strain calculation requires estimating tissue displacement between sequential frames, which can be accomplished with registration models. These tasks are often performed separately, potentially limiting the assessment of cardiac function. To address this, we propose an end-to-end deep learning (DL) model that jointly performs groupwise (GW) registration and segmentation for cardiac cine-MRI images. The proposed anatomically guided Deep GW network was trained and validated on a large dataset of 4-chamber-view cine-MRI series from 374 subjects. A quantitative comparison with conventional GW registration using elastix and with two DL-based methods showed that the proposed model improved performance and substantially reduced computation time.
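LVEF, the metric discussed above, is a simple ratio of the segmented end-diastolic and end-systolic ventricular volumes. A minimal sketch (the volume values are illustrative):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) volumes, e.g. from cine-MRI segmentations."""
    if edv_ml <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# EDV 120 mL, ESV 50 mL → LVEF ≈ 58.3 %
print(ejection_fraction(120.0, 50.0))
```

Strain, by contrast, cannot be read off two volumes: it needs frame-to-frame tissue displacement, which is why the abstract pairs the segmentation model with a registration model.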

DP-MDM: detail-preserving MR reconstruction via multiple diffusion models.

Geng M, Zhu J, Hong R, Liu Q, Liang D, Liu Q

PubMed paper · May 22 2025
Objective. Magnetic resonance imaging (MRI) is critical in medical diagnosis and treatment by capturing detailed features, such as subtle tissue changes, which help clinicians make precise diagnoses. However, the widely used single diffusion model has limitations in accurately capturing more complex details. This study aims to address these limitations by proposing an efficient method to enhance the reconstruction of detailed features in MRI. Approach. We present a detail-preserving reconstruction method that leverages multiple diffusion models (DP-MDM) to extract structural and detailed features in the k-space domain, which complements the image domain. Since high-frequency information in k-space is more systematically distributed around the periphery, compared to the irregular distribution of detailed features in the image domain, this systematic distribution allows for more efficient extraction of detailed features. To further reduce redundancy and enhance model performance, we introduce virtual binary masks with adjustable circular center windows that selectively focus on high-frequency regions. These masks align with the frequency distribution of k-space data, enabling the model to focus more efficiently on high-frequency information. The proposed method employs a cascaded architecture, where the first diffusion model recovers low-frequency structural components and subsequent models enhance high-frequency details during the iterative reconstruction stage. Main results. Experimental results demonstrate that DP-MDM achieves superior performance across multiple datasets. On the T1-GE brain dataset with 2D random sampling at R = 15, DP-MDM achieved 35.14 dB peak signal-to-noise ratio (PSNR) and 0.8891 structural similarity (SSIM), outperforming other methods. The proposed method also showed robust performance on the Fast-MRI and Cardiac MR datasets, achieving the highest PSNR and SSIM values. Significance. DP-MDM significantly advances MRI reconstruction by balancing structural integrity and detail preservation. It not only enhances diagnostic accuracy through improved image quality but also offers a versatile framework that can potentially be extended to other imaging modalities, thereby broadening its clinical applicability.
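The virtual binary masks described above can be pictured as a circular window over centred k-space: zero inside the low-frequency centre, one over the high-frequency periphery. A hedged NumPy sketch (the grid size and window radius are illustrative, not the paper's parameters):

```python
import numpy as np

def highpass_kspace_mask(shape, radius):
    """Binary mask for centred k-space: 0 inside a circular low-frequency
    window at the centre, 1 over the high-frequency periphery."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    dist2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2
    return (dist2 > radius ** 2).astype(np.uint8)

mask = highpass_kspace_mask((8, 8), radius=2)
print(mask[4, 4], mask[0, 0])  # 0 at the centre, 1 at the periphery
```

Multiplying centred k-space data by such a mask keeps only the peripheral high-frequency content, which is the region the later diffusion models in the cascade are meant to refine.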

Multimodal MRI radiomics enhances epilepsy prediction in pediatric low-grade glioma patients.

Tang T, Wu Y, Dong X, Zhai X

PubMed paper · May 22 2025
Determining whether pediatric patients with low-grade gliomas (pLGGs) have glioma-associated epilepsy (GAE) is a crucial aspect of preoperative evaluation. We therefore propose an innovative machine learning- and deep learning-based framework for rapid, non-invasive preoperative assessment of GAE in pediatric patients using magnetic resonance imaging (MRI). Our radiomics-based approach integrates tumor and peritumoral features extracted from preoperative multiparametric MRI scans to predict the occurrence of tumor-related epilepsy. The resulting multimodal MRI radiomics model achieved an AUC of 0.969, and the integration of multi-sequence MRI data significantly improved predictive performance, with the Stochastic Gradient Descent (SGD) classifier showing robust results (sensitivity: 0.882, specificity: 0.956). The model can accurately predict whether pLGG patients have tumor-related epilepsy, which could guide surgical decision-making. Future studies should focus on similarly standardized preoperative evaluations in pediatric epilepsy centers to increase training data and enhance the generalizability of the model.
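The sensitivity and specificity reported for the SGD classifier come directly from the confusion matrix. A minimal plain-Python sketch (the toy labels are illustrative, not study data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)
    for binary ground-truth labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(sens, spec)  # 2/3 of positives and 1/2 of negatives recovered
```

AUC, the third metric quoted, summarizes this trade-off across all decision thresholds rather than at a single operating point.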

Brain age prediction from MRI scans in neurodegenerative diseases.

Papouli A, Cole JH

PubMed paper · May 22 2025
This review explores the use of brain age estimation from MRI scans as a biomarker of brain health. With disorders like Alzheimer's and Parkinson's increasing globally, there is an urgent need for early detection tools that can identify at-risk individuals before cognitive symptoms emerge. Brain age offers a noninvasive, quantitative measure of neurobiological ageing, with applications in early diagnosis, disease monitoring, and personalized medicine. Studies show that individuals with Alzheimer's, mild cognitive impairment (MCI), and Parkinson's have older brain ages than their chronological age. Longitudinal research indicates that the brain-predicted age difference (brain-PAD) rises with disease progression and often precedes cognitive decline. Advances in deep learning and multimodal imaging have improved the accuracy and interpretability of brain age predictions. Moreover, socioeconomic disparities and environmental factors significantly affect brain ageing, highlighting the need for inclusive models. Brain age estimation is thus a promising biomarker for identifying future risk of neurodegenerative disease, monitoring progression, and informing prognosis. Challenges remain around standardization, demographic biases, and interpretability. Future research should integrate brain age with other biomarkers and multimodal imaging to enhance early diagnosis and intervention strategies.
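Brain-PAD, the quantity tracked in the longitudinal studies above, is simply the gap between model-predicted and chronological age. A minimal sketch (the ages are illustrative):

```python
def brain_pad(predicted_age: float, chronological_age: float) -> float:
    """Brain-predicted age difference (brain-PAD), in years.
    Positive values mean the brain appears older than its chronological age."""
    return predicted_age - chronological_age

# A 70-year-old whose MRI-predicted brain age is 74.2 has brain-PAD ≈ +4.2 y
print(brain_pad(74.2, 70.0))
```

The modelling effort in this literature goes into the `predicted_age` term, typically a deep regression model trained on MRI from healthy cohorts; the difference itself is this one subtraction.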

Cross-Scale Texture Supplementation for Reference-based Medical Image Super-Resolution.

Li Y, Hao W, Zeng H, Wang L, Xu J, Routray S, Jhaveri RH, Gadekallu TR

PubMed paper · May 22 2025
Magnetic Resonance Imaging (MRI) is a widely used medical imaging technique, but its resolution is often limited by acquisition time constraints, potentially compromising diagnostic accuracy. Reference-based Image Super-Resolution (RefSR) has shown promising performance in addressing such challenges by leveraging external high-resolution (HR) reference images to enhance the quality of low-resolution (LR) images. The core objective of RefSR is to accurately establish correspondences between the HR reference image and the LR images. To this end, this paper develops a Self-rectified Texture Supplementation network for RefSR (STS-SR) to enhance fine details in MRI images and support the expanding role of autonomous AI in healthcare. Our network comprises a texture-specified self-rectified feature transfer module and a cross-scale texture complementary network. The feature transfer module employs high-frequency filtering to help the network concentrate on fine details. To better exploit the information from both the reference and LR images, the cross-scale texture complementary module incorporates All-ViT and Swin Transformer layers to aggregate features at multiple scales, enabling the high-quality image enhancement that autonomous AI systems in healthcare need to make accurate decisions. Extensive experiments across various benchmark datasets validate the effectiveness of our method and demonstrate state-of-the-art performance compared to existing approaches. This advancement enables autonomous AI systems to use high-quality MRI images for more accurate diagnostics and reliable predictions.
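High-frequency filtering of the kind used in the feature transfer module can be illustrated by subtracting a local average from an image, so that only fine detail (edges, texture) survives. A hedged NumPy sketch, where a box blur stands in for whatever filter the paper actually uses:

```python
import numpy as np

def highpass(image: np.ndarray, kernel: int = 3) -> np.ndarray:
    """High-pass an image by subtracting a box-blurred copy of itself.
    Smooth regions go to ~0; edges and texture are preserved."""
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return image - blurred

# A constant image has no high-frequency content, so the output is all zeros.
flat = np.full((5, 5), 7.0)
print(np.abs(highpass(flat)).max())
```

Feeding such a high-pass response to a texture-matching branch biases it toward exactly the fine structures that super-resolution must recover.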

A Novel Dynamic Neural Network for Heterogeneity-Aware Structural Brain Network Exploration and Alzheimer's Disease Diagnosis.

Cui W, Leng Y, Peng Y, Bai C, Li L, Jiang X, Yuan G, Zheng J

PubMed paper · May 22 2025
Heterogeneity is a fundamental characteristic of brain diseases, reflected not only in variable brain atrophy but also in the complexity of neural connectivity and brain networks. However, existing data-driven methods fail to provide a comprehensive analysis of brain heterogeneity. Recently, dynamic neural networks (DNNs) have shown significant advantages in capturing sample-wise heterogeneity. In this article, we therefore propose a novel dynamic heterogeneity-aware network (DHANet) to identify critical heterogeneous brain regions, explore the heterogeneous connectivity between them, and construct a heterogeneity-aware structural brain network (HGA-SBN) from structural magnetic resonance imaging (sMRI). Specifically, we first develop a 3-D dynamic ConvMixer to extract rich heterogeneous features from sMRI. Subsequently, critical brain atrophy regions are identified by dynamic prototype learning that embeds the hierarchical brain semantic structure. Finally, we employ a joint dynamic edge-correlation (JDE) modeling approach to construct the heterogeneous connectivity between these regions and analyze the HGA-SBN. Elaborate experiments on three public datasets show that DHANet achieves state-of-the-art (SOTA) performance on two classification tasks.

Predictive machine learning and multimodal data to develop highly sensitive, composite biomarkers of disease progression in Friedreich ataxia.

Saha S, Corben LA, Selvadurai LP, Harding IH, Georgiou-Karistianis N

PubMed paper · May 21 2025
Friedreich ataxia (FRDA) is a rare, inherited progressive movement disorder for which there is currently no cure. The field urgently requires more sensitive, objective, and clinically relevant biomarkers to enhance the evaluation of treatment efficacy in clinical trials and to speed up drug discovery. This study pioneers the development of clinically relevant, multidomain, fully objective composite biomarkers of disease severity and progression, using multimodal neuroimaging and background data (i.e., demographics, disease history, genetics). Data from 31 individuals with FRDA and 31 controls from the longitudinal multimodal natural history study IMAGE-FRDA were included. Using an elastic-net predictive machine learning (ML) regression model, we derived a weighted combination of background, structural MRI, diffusion MRI, and quantitative susceptibility mapping (QSM) measures that predicted the Friedreich Ataxia Rating Scale (FARS) with high accuracy (R² = 0.79, root mean square error (RMSE) = 13.19). This composite also exhibited strong sensitivity to disease progression over two years (Cohen's d = 1.12), outperforming the sensitivity of the FARS score alone (d = 0.88). The approach was validated using the Scale for the Assessment and Rating of Ataxia (SARA), demonstrating the potential and robustness of ML-derived composites to surpass individual biomarkers and act as complementary or surrogate markers of disease severity and progression. Further validation, refinement, and the integration of additional data modalities will open up new opportunities for translating these biomarkers into clinical practice and clinical trials for FRDA, as well as other rare neurodegenerative diseases.
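The Cohen's d values quoted above quantify sensitivity to change over the two-year window. For longitudinal change scores, a one-sample d is the mean change divided by the standard deviation of the changes; a minimal sketch (not the study's statistics code, and the change scores are illustrative):

```python
import math

def cohens_d_change(changes):
    """One-sample Cohen's d for longitudinal change scores:
    mean change / sample SD of the changes (larger |d| = more sensitive)."""
    n = len(changes)
    mean = sum(changes) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in changes) / (n - 1))
    return mean / sd

# Mean change 3.0, SD 1.0 → d = 3.0
print(cohens_d_change([2.0, 3.0, 4.0]))
```

On this reading, the composite's d = 1.12 versus FARS's d = 0.88 means the composite's two-year change is larger relative to its between-subject variability, which is what shrinks required sample sizes in trials.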

Right Ventricular Strain as a Key Feature in Interpretable Machine Learning for Identification of Takotsubo Syndrome: A Multicenter CMR-based Study.

Du Z, Hu H, Shen C, Mei J, Feng Y, Huang Y, Chen X, Guo X, Hu Z, Jiang L, Su Y, Biekan J, Lyv L, Chong T, Pan C, Liu K, Ji J, Lu C

PubMed paper · May 21 2025
To develop an interpretable machine learning (ML) model based on cardiac magnetic resonance (CMR) multimodal parameters and clinical data to discriminate Takotsubo syndrome (TTS), acute myocardial infarction (AMI), and acute myocarditis (AM), and to further assess the diagnostic value of right ventricular (RV) strain in TTS. This study analyzed CMR and clinical data of 130 patients from three centers. Key features were selected using least absolute shrinkage and selection operator regression and random forest. Data were split into a training cohort and an internal testing cohort (ITC) in the ratio 7:3, with overfitting avoided using leave-one-out cross-validation and bootstrap methods. Nine ML models were evaluated using standard performance metrics, with Shapley additive explanations (SHAP) analysis used for model interpretation. A total of 11 key features were identified. The extreme gradient boosting model showed the best performance, with an area under the curve (AUC) value of 0.94 (95% CI: 0.85-0.97) in the ITC. Right ventricular basal circumferential strain (RVCS-basal) was the most important feature for identifying TTS. Its absolute value was significantly higher in TTS patients than in AMI and AM patients (-9.93%, -5.21%, and -6.18%, respectively, p < 0.001), with values above -6.55% contributing to a diagnosis of TTS. This study developed an interpretable ternary classification ML model for identifying TTS and used SHAP analysis to elucidate the significant value of RVCS-basal in TTS diagnosis. An online calculator (https://lsszxyy.shinyapps.io/XGboost/) based on this model was developed to provide immediate decision support for clinical use.

Multi-modal Integration Analysis of Alzheimer's Disease Using Large Language Models and Knowledge Graphs

Kanan Kiguchi, Yunhao Tu, Katsuhiro Ajito, Fady Alnajjar, Kazuyuki Murase

arXiv preprint · May 21 2025
We propose a novel framework for integrating fragmented multi-modal data in Alzheimer's disease (AD) research using large language models (LLMs) and knowledge graphs. While traditional multimodal analysis requires matched patient IDs across datasets, our approach demonstrates population-level integration of MRI, gene expression, biomarkers, EEG, and clinical indicators from independent cohorts. Statistical analysis identified significant features in each modality, which were connected as nodes in a knowledge graph. LLMs then analyzed the graph to extract potential correlations and generate hypotheses in natural language. This approach revealed several novel relationships, including a potential pathway linking metabolic risk factors to tau protein abnormalities via neuroinflammation (r > 0.6, p < 0.001), and unexpected correlations between frontal EEG channels and specific gene expression profiles (r = 0.42-0.58, p < 0.01). Cross-validation with independent datasets confirmed the robustness of major findings, with consistent effect sizes across cohorts (variance < 15%). The reproducibility of these findings was further supported by expert review (Cohen's κ = 0.82) and computational validation. Our framework enables cross-modal integration at a conceptual level without requiring patient ID matching, offering new possibilities for understanding AD pathology through fragmented data reuse and for generating testable hypotheses for future research.
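The pipeline above, in which significant per-modality features become nodes and significant cross-modal correlations become edges, can be sketched with plain dictionaries. All feature names, values, and thresholds here are illustrative, not taken from the paper:

```python
# Population-level knowledge-graph construction sketch: no patient ID
# matching is needed because nodes are cohort-level features, not patients.
nodes = {}
edges = []

def add_feature(modality: str, feature: str) -> None:
    """Register a statistically significant feature as a graph node."""
    nodes[(modality, feature)] = {"modality": modality, "feature": feature}

def add_correlation(a, b, r: float, p: float, alpha: float = 0.01) -> None:
    """Link two existing nodes only if the correlation is significant."""
    if p < alpha and a in nodes and b in nodes:
        edges.append((a, b, {"r": r, "p": p}))

add_feature("MRI", "hippocampal_volume")          # hypothetical feature
add_feature("biomarker", "tau")                    # hypothetical feature
add_correlation(("MRI", "hippocampal_volume"),
                ("biomarker", "tau"), r=-0.62, p=0.0005)
print(len(nodes), "nodes,", len(edges), "edges")
```

In the paper's framework an LLM would then traverse a graph like this to propose natural-language hypotheses about the linked features.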