
SAMba-UNet: Synergizing SAM2 and Mamba in UNet with Heterogeneous Aggregation for Cardiac MRI Segmentation

Guohao Huo, Ruiting Dai, Hao Tang

arxiv logopreprintMay 22 2025
To address the challenge of complex pathological feature extraction in automated cardiac MRI segmentation, this study proposes an innovative dual-encoder architecture named SAMba-UNet. The framework achieves cross-modal feature collaborative learning by integrating the vision foundation model SAM2, the state-space model Mamba, and the classical UNet. To mitigate domain discrepancies between medical and natural images, a Dynamic Feature Fusion Refiner is designed, which enhances small lesion feature extraction through multi-scale pooling and a dual-path calibration mechanism across channel and spatial dimensions. Furthermore, a Heterogeneous Omni-Attention Convergence Module (HOACM) is introduced, combining global contextual attention with branch-selective emphasis mechanisms to effectively fuse SAM2's local positional semantics and Mamba's long-range dependency modeling capabilities. Experiments on the ACDC cardiac MRI dataset demonstrate that the proposed model achieves a Dice coefficient of 0.9103 and an HD95 boundary error of 1.0859 mm, significantly outperforming existing methods, particularly in boundary localization for complex pathological structures such as right ventricular anomalies. This work provides an efficient and reliable solution for automated cardiac disease diagnosis, and the code will be open-sourced.
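
As a reading aid (not the authors' code), the sketch below shows how the two reported metrics, the Dice coefficient and the 95th-percentile Hausdorff distance (HD95), are commonly computed from binary segmentation masks. The HD95 convention used here (maximum of the two directional 95th percentiles) and the voxel spacing are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + 1e-8)

def _border(mask: np.ndarray) -> np.ndarray:
    """Voxels on the boundary of a binary mask."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance (in mm if spacing is in mm)."""
    pred_b, gt_b = _border(pred), _border(gt)
    # Distance to the other mask's border, sampled on this mask's border voxels.
    d_pred_to_gt = distance_transform_edt(~gt_b, sampling=spacing)[pred_b]
    d_gt_to_pred = distance_transform_edt(~pred_b, sampling=spacing)[gt_b]
    # One common HD95 convention: max of the two directional 95th percentiles.
    return float(max(np.percentile(d_pred_to_gt, 95), np.percentile(d_gt_to_pred, 95)))
```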

Cross-Scale Texture Supplementation for Reference-based Medical Image Super-Resolution.

Li Y, Hao W, Zeng H, Wang L, Xu J, Routray S, Jhaveri RH, Gadekallu TR

pubmed logopapersMay 22 2025
Magnetic Resonance Imaging (MRI) is a widely used medical imaging technique, but its resolution is often limited by acquisition time constraints, potentially compromising diagnostic accuracy. Reference-based Image Super-Resolution (RefSR) has shown promising performance in addressing such challenges by leveraging external high-resolution (HR) reference images to enhance the quality of low-resolution (LR) images. The core objective of RefSR is to accurately establish correspondences between the reference HR image and the LR images. In pursuit of this objective, this paper develops a Self-rectified Texture Supplementation network for RefSR (STS-SR) to enhance fine details in MRI images and support the expanding role of autonomous AI in healthcare. Our network comprises a texture-specified self-rectified feature transfer module and a cross-scale texture complementary network. The feature transfer module employs high-frequency filtering to help the network concentrate on fine details. To better exploit the information from both the reference and LR images, our cross-scale texture complementary module incorporates All-ViT and Swin Transformer layers to achieve feature aggregation at multiple scales, enabling the high-quality image enhancement that autonomous AI systems in healthcare need to make accurate decisions. Extensive experiments across various benchmark datasets validate the effectiveness of our method and demonstrate that it achieves state-of-the-art performance compared to existing approaches. This advancement enables autonomous AI systems to utilize high-quality MRI images for more accurate diagnostics and reliable predictions.
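
To make the high-frequency filtering idea concrete, here is a minimal sketch of reference-based texture supplementation: a Gaussian-residual high-pass pulls fine detail from an HR reference and adds it to a naively upsampled LR slice. The filter, blend weight, and random arrays are illustrative stand-ins for the paper's learned feature-transfer module, not its implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def highpass(img: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """High-frequency residual: the image minus its Gaussian-smoothed copy."""
    img = img.astype(np.float32)
    return img - gaussian_filter(img, sigma)

# Toy usage: upsample an LR slice and add high-frequency texture from an HR reference.
lr = np.random.rand(64, 64).astype(np.float32)        # placeholder low-resolution MRI slice
ref_hr = np.random.rand(256, 256).astype(np.float32)  # placeholder high-resolution reference
upsampled = zoom(lr, 4, order=3)                       # naive cubic-spline upsampling
supplemented = upsampled + 0.5 * highpass(ref_hr)      # crude texture supplementation
```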

Predicting Depression in Healthy Young Adults: A Machine Learning Approach Using Longitudinal Neuroimaging Data.

Zhang A, Zhang H

pubmed logopapersMay 22 2025
Accurate prediction of depressive symptoms in healthy individuals can enable early intervention and reduce both individual and societal costs. This study aimed to develop predictive models for depression in young adults using machine learning (ML) techniques and longitudinal data from the Beck Depression Inventory, structural MRI (sMRI), and resting-state functional MRI (rs-fMRI). Feature selection methods, including the least absolute shrinkage and selection operator (LASSO), Boruta, and VSURF, were applied to identify MRI features associated with depression. Support vector machine and random forest algorithms were then used to construct prediction models. Eight MRI features were identified as predictive of depression, including brain regions in the Orbital Gyrus, Superior Frontal Gyrus, Middle Frontal Gyrus, Parahippocampal Gyrus, Cingulate Gyrus, and Inferior Parietal Lobule. The overlaps and differences between the selected features and the brain regions showing significant between-group differences in t-tests suggest that ML provides a unique perspective on the neural changes associated with depression. Six pairs of prediction models demonstrated varying performance, with accuracies ranging from 0.68 to 0.85 and areas under the curve (AUC) ranging from 0.57 to 0.81. The best-performing model achieved an accuracy of 0.85 and an AUC of 0.80, highlighting the potential of combining sMRI and rs-fMRI features with ML for early depression detection, while also revealing the risk of overfitting in small-sample, high-dimensional settings. Further research is needed to (1) replicate these findings in independent, larger datasets to address potential overfitting and (2) apply different advanced ML techniques and multimodal data fusion to improve model performance.
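
A minimal scikit-learn sketch of the LASSO-selection-plus-SVM pipeline described above, evaluated with cross-validated accuracy and AUC. The synthetic features, labels, and hyperparameters are placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))    # placeholder sMRI/rs-fMRI features
y = rng.integers(0, 2, size=100)   # placeholder depression labels

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, max_iter=5000)),  # LASSO-based feature selection
    SVC(kernel="rbf", probability=True),            # SVM classifier
)
scores = cross_validate(model, X, y, cv=5, scoring=["accuracy", "roc_auc"])
print(scores["test_accuracy"].mean(), scores["test_roc_auc"].mean())
```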

Denoising of high-resolution 3D UTE-MR angiogram data using lightweight and efficient convolutional neural networks.

Tessema AW, Ambaye DT, Cho H

pubmed logopapersMay 22 2025
High-resolution magnetic resonance angiography (~50 μm³ MRA) data plays a critical role in the accurate diagnosis of various vascular disorders. However, it is challenging to acquire and is susceptible to artifacts and noise, which limit its ability to visualize smaller blood vessels and necessitate substantial noise reduction. Among many techniques, the BM4D filter is a state-of-the-art denoising method but comes with high computational cost, particularly for high-resolution 3D MRA data. In this research, five different optimized convolutional neural networks were used to denoise contrast-enhanced UTE-MRA data in a supervised learning approach. Since noise-free MRA data are difficult to acquire, images denoised with the BM4D filter were used as ground truth; the research focused mainly on reducing the computational cost and inference time of denoising high-resolution UTE-MRA data. All five models generated denoised data closely matching the ground truth, with different computational footprints. Among them, the nested-UNet model produced images closest to the ground truth, achieving SSIM, PSNR, and MSE of 0.998, 46.12 dB, and 3.38e-5, with 3× faster inference than the BM4D filter. In addition, more heavily optimized models such as UNet and attention-UNet produced images nearly identical to those of nested-UNet while running 8.8× and 7.1× faster than the BM4D filter, respectively. In conclusion, using highly optimized networks, we have shown that high-resolution UTE-MRA data can be denoised with significantly shorter inference times, even with limited datasets from animal models. This can make high-resolution 3D UTE-MRA data less computationally burdensome to work with.
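
For reference, the three reported image-quality metrics (SSIM, PSNR, MSE) can be computed against a reference volume (here, the BM4D-denoised output used as ground truth) with scikit-image; the data-range choice below is an assumption.

```python
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

def denoising_metrics(denoised: np.ndarray, reference: np.ndarray) -> dict:
    """PSNR / SSIM / MSE of a denoised volume against a reference (e.g. BM4D output)."""
    data_range = float(reference.max() - reference.min())
    return {
        "psnr": peak_signal_noise_ratio(reference, denoised, data_range=data_range),
        "ssim": structural_similarity(reference, denoised, data_range=data_range),
        "mse": mean_squared_error(reference, denoised),
    }
```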

Brain age prediction from MRI scans in neurodegenerative diseases.

Papouli A, Cole JH

pubmed logopapersMay 22 2025
This review explores the use of brain age estimation from MRI scans as a biomarker of brain health. With disorders like Alzheimer's and Parkinson's increasing globally, there is an urgent need for early detection tools that can identify at-risk individuals before cognitive symptoms emerge. Brain age offers a noninvasive, quantitative measure of neurobiological ageing, with applications in early diagnosis, disease monitoring, and personalized medicine. Studies show that individuals with Alzheimer's, mild cognitive impairment (MCI), and Parkinson's have older brain ages than their chronological age. Longitudinal research indicates that the brain-predicted age difference (brain-PAD) rises with disease progression and often precedes cognitive decline. Advances in deep learning and multimodal imaging have improved the accuracy and interpretability of brain age predictions. Moreover, socioeconomic disparities and environmental factors significantly affect brain ageing, highlighting the need for inclusive models. Brain age estimation is a promising biomarker for identifying future risk of neurodegenerative disease, monitoring progression, and informing prognosis. Challenges remain around standardization, demographic biases, and interpretability. Future research should integrate brain age with other biomarkers and multimodal imaging to enhance early diagnosis and intervention strategies.
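
By definition, brain-PAD is simply the model's predicted brain age minus chronological age. The sketch below illustrates that calculation with a generic ridge regression on placeholder MRI-derived features; the model choice and synthetic data are assumptions, not any specific method from the review.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))        # placeholder MRI-derived features
age = rng.uniform(45, 85, size=200)    # chronological ages (years)

# Out-of-sample predicted brain age via cross-validated predictions
pred_age = cross_val_predict(Ridge(alpha=1.0), X, age, cv=5)

# Brain-predicted age difference (brain-PAD): positive = an "older-looking" brain
brain_pad = pred_age - age
print(brain_pad.mean(), brain_pad.std())
```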

CMRINet: Joint Groupwise Registration and Segmentation for Cardiac Function Quantification from Cine-MRI

Mohamed S. Elmahdy, Marius Staring, Patrick J. H. de Koning, Samer Alabed, Mahan Salehi, Faisal Alandejani, Michael Sharkey, Ziad Aldabbagh, Andrew J. Swift, Rob J. van der Geest

arxiv logopreprintMay 22 2025
Accurate and efficient quantification of cardiac function is essential for estimating the prognosis of cardiovascular diseases (CVDs). One of the most commonly used metrics for evaluating cardiac pumping performance is left ventricular ejection fraction (LVEF). However, LVEF can be affected by factors such as inter-observer variability and varying pre-load and after-load conditions, which can reduce its reproducibility. Additionally, cardiac dysfunction does not always manifest as alterations in LVEF, as in heart failure and cardiotoxicity. Alternative measures that provide a relatively load-independent quantitative assessment of myocardial contractility are myocardial strain and strain rate. Using LVEF in combination with myocardial strain makes it possible to obtain a thorough description of cardiac function. Automated estimation of LVEF and other volumetric measures from cine-MRI sequences can be achieved through segmentation models, while strain calculation requires the estimation of tissue displacement between sequential frames, which can be accomplished using registration models. These tasks are often performed separately, potentially limiting the assessment of cardiac function. To address this issue, in this study we propose an end-to-end deep learning (DL) model that jointly estimates groupwise (GW) registration and segmentation for cardiac cine-MRI images. The proposed anatomically guided Deep GW network was trained and validated on a large dataset of 4-chamber view cine-MRI image series from 374 subjects. A quantitative comparison with conventional GW registration using elastix and two DL-based methods showed that the proposed model improved performance and substantially reduced computation time.
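
To connect registration to strain, here is one way a strain tensor can be derived from a registration-produced displacement field, via the Green-Lagrange tensor E = ½(FᵀF − I) with F = I + ∇u. This is a generic sketch of that relationship, not CMRINet's strain pipeline; the 2D setting and pixel spacing are assumptions.

```python
import numpy as np

def green_lagrange_strain(ux: np.ndarray, uy: np.ndarray, spacing=(1.0, 1.0)) -> np.ndarray:
    """Per-pixel 2D Green-Lagrange strain tensor from a displacement field (ux, uy),
    e.g. obtained by registering one cine frame to another."""
    dy, dx = spacing
    dux_dy, dux_dx = np.gradient(ux, dy, dx)   # gradients along rows (y) then cols (x)
    duy_dy, duy_dx = np.gradient(uy, dy, dx)
    # Deformation gradient F = I + grad(u), per pixel
    F = np.empty(ux.shape + (2, 2))
    F[..., 0, 0] = 1.0 + dux_dx
    F[..., 0, 1] = dux_dy
    F[..., 1, 0] = duy_dx
    F[..., 1, 1] = 1.0 + duy_dy
    # E = 0.5 * (F^T F - I)
    C = np.einsum("...ki,...kj->...ij", F, F)
    return 0.5 * (C - np.eye(2))
```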

DP-MDM: detail-preserving MR reconstruction via multiple diffusion models.

Geng M, Zhu J, Hong R, Liu Q, Liang D, Liu Q

pubmed logopapersMay 22 2025
Objective. Magnetic resonance imaging (MRI) is critical in medical diagnosis and treatment by capturing detailed features, such as subtle tissue changes, which help clinicians make precise diagnoses. However, the widely used single diffusion model has limitations in accurately capturing more complex details. This study aims to address these limitations by proposing an efficient method to enhance the reconstruction of detailed features in MRI. Approach. We present a detail-preserving reconstruction method that leverages multiple diffusion models (DP-MDM) to extract structural and detailed features in the k-space domain, which complements the image domain. Since high-frequency information in k-space is more systematically distributed around the periphery compared to the irregular distribution of detailed features in the image domain, this systematic distribution allows for more efficient extraction of detailed features. To further reduce redundancy and enhance model performance, we introduce virtual binary masks with adjustable circular center windows that selectively focus on high-frequency regions. These masks align with the frequency distribution of k-space data, enabling the model to focus more efficiently on high-frequency information. The proposed method employs a cascaded architecture, where the first diffusion model recovers low-frequency structural components, with subsequent models enhancing high-frequency details during the iterative reconstruction stage. Main results. Experimental results demonstrate that DP-MDM achieves superior performance across multiple datasets. On the T1-GE brain dataset with 2D random sampling at R = 15, DP-MDM achieved 35.14 dB peak signal-to-noise ratio (PSNR) and 0.8891 structural similarity (SSIM), outperforming other methods. The proposed method also showed robust performance on the Fast-MRI and Cardiac MR datasets, achieving the highest PSNR and SSIM values. Significance. DP-MDM significantly advances MRI reconstruction by balancing structural integrity and detail preservation. It not only enhances diagnostic accuracy through improved image quality but also offers a versatile framework that can potentially be extended to other imaging modalities, thereby broadening its clinical applicability.
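
The "virtual binary mask with an adjustable circular center window" can be pictured as a k-space mask that zeroes a centered low-frequency disc and keeps the high-frequency periphery. The sketch below illustrates that idea on a toy image; the radius fraction and the random placeholder slice are assumptions, not the paper's settings.

```python
import numpy as np

def highfreq_kspace_mask(shape, radius_frac: float = 0.15) -> np.ndarray:
    """Binary mask: 0 inside a centered circular window (low frequencies),
    1 in the k-space periphery (high frequencies)."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h / 2.0, w / 2.0
    r = radius_frac * min(h, w)
    return ((yy - cy) ** 2 + (xx - cx) ** 2) > r ** 2

# Toy usage: isolate high-frequency image content via the mask.
img = np.random.rand(256, 256)                       # placeholder MR slice
k = np.fft.fftshift(np.fft.fft2(img))                # centered k-space
detail = np.fft.ifft2(np.fft.ifftshift(k * highfreq_kspace_mask(img.shape))).real
```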

Multimodal MRI radiomics enhances epilepsy prediction in pediatric low-grade glioma patients.

Tang T, Wu Y, Dong X, Zhai X

pubmed logopapersMay 22 2025
Determining whether pediatric patients with low-grade gliomas (pLGGs) have tumor-related epilepsy (GAE) is a crucial aspect of preoperative evaluation. We therefore propose a machine learning- and deep learning-based framework for the rapid, non-invasive preoperative assessment of GAE in pediatric patients using magnetic resonance imaging (MRI). Specifically, we developed a radiomics-based approach that integrates tumor and peritumoral features extracted from preoperative multiparametric MRI scans to accurately and non-invasively predict the occurrence of tumor-related epilepsy. The resulting multimodal MRI radiomics model predicted epilepsy in pLGG patients with an AUC of 0.969. Integrating multi-sequence MRI data significantly improved predictive performance, with the Stochastic Gradient Descent (SGD) classifier showing robust results (sensitivity: 0.882, specificity: 0.956). Our model can accurately predict whether pLGG patients have tumor-related epilepsy, which could guide surgical decision-making. Future studies should focus on similarly standardized preoperative evaluations in pediatric epilepsy centers to increase training data and enhance the generalizability of the model.
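
A minimal sketch of the classification stage: an SGD classifier on (already extracted) radiomics features, evaluated with cross-validated sensitivity, specificity, and AUC. The synthetic feature matrix stands in for the tumor and peritumoral radiomics features; the hyperparameters and decision threshold are assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 300))    # placeholder radiomics features (tumor + peritumoral)
y = rng.integers(0, 2, size=120)   # placeholder epilepsy labels

clf = make_pipeline(StandardScaler(),
                    SGDClassifier(loss="log_loss", max_iter=2000))  # "log" in older scikit-learn
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
pred = (proba >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity", tp / (tp + fn),
      "specificity", tn / (tn + fp),
      "AUC", roc_auc_score(y, proba))
```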

FLAMeS: A Robust Deep Learning Model for Automated Multiple Sclerosis Lesion Segmentation

Dereskewicz, E., La Rosa, F., dos Santos Silva, J., Sizer, E., Kohli, A., Wynen, M., Mullins, W. A., Maggi, P., Levy, S., Onyemeh, K., Ayci, B., Solomon, A. J., Assländer, J., Al-Louzi, O., Reich, D. S., Sumowski, J. F., Beck, E. S.

medrxiv logopreprintMay 22 2025
Background and Purpose: Assessment of brain lesions on MRI is crucial for research in multiple sclerosis (MS). Manual segmentation is time consuming and inconsistent. We aimed to develop an automated MS lesion segmentation algorithm for T2-weighted fluid-attenuated inversion recovery (FLAIR) MRI. Methods: We developed FLAIR Lesion Analysis in Multiple Sclerosis (FLAMeS), a deep learning-based MS lesion segmentation algorithm based on the nnU-Net 3D full-resolution U-Net and trained on 668 FLAIR 1.5 and 3 tesla scans from persons with MS. FLAMeS was evaluated on three external datasets: MSSEG-2 (n=14), MSLesSeg (n=51), and a clinical cohort (n=10), and compared to SAMSEG, LST-LPA, and LST-AI. Performance was assessed qualitatively by two blinded experts and quantitatively by comparing automated and ground truth lesion masks using standard segmentation metrics. Results: In a blinded qualitative review of 20 scans, both raters selected FLAMeS as the most accurate segmentation in 15 cases, with one rater favoring FLAMeS in two additional cases. Across all testing datasets, FLAMeS achieved a mean Dice score of 0.74, a true positive rate of 0.84, and an F1 score of 0.78, consistently outperforming the benchmark methods. For other metrics, including positive predictive value, relative volume difference, and false positive rate, FLAMeS performed similarly or better than benchmark methods. Most lesions missed by FLAMeS were smaller than 10 mm³, whereas the benchmark methods missed larger lesions in addition to smaller ones. Conclusions: FLAMeS is an accurate, robust method for MS lesion segmentation that outperforms other publicly available methods.
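
Lesion-wise true positive rate and F1, as reported above, are typically computed by matching connected components between prediction and ground truth. The sketch below uses a simple any-overlap matching criterion, which is one common convention and not necessarily the one used in the study.

```python
import numpy as np
from scipy.ndimage import label

def lesionwise_scores(pred: np.ndarray, gt: np.ndarray):
    """Lesion-wise TPR and F1: a ground-truth lesion counts as detected if it
    overlaps any predicted voxel; a predicted lesion with no overlap is a false positive."""
    pred_b, gt_b = pred.astype(bool), gt.astype(bool)
    gt_lab, n_gt = label(gt_b)
    pred_lab, n_pred = label(pred_b)
    tp = sum(1 for i in range(1, n_gt + 1) if pred_b[gt_lab == i].any())
    fp = sum(1 for j in range(1, n_pred + 1) if not gt_b[pred_lab == j].any())
    tpr = tp / max(n_gt, 1)
    precision = tp / max(tp + fp, 1)
    f1 = 2 * precision * tpr / max(precision + tpr, 1e-8)
    return tpr, f1
```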

An Interpretable Deep Learning Approach for Autism Spectrum Disorder Detection in Children Using NASNet-Mobile.

K VRP, Hima Bindu C, Devi KRM

pubmed logopapersMay 22 2025
Autism spectrum disorder (ASD) is a multifaceted neurodevelopmental disorder characterized by impaired social interaction and communication, along with restrictive or repetitive behaviours. Though incurable, early detection and intervention can reduce the severity of symptoms. Structural magnetic resonance imaging (sMRI) can improve diagnostic accuracy, facilitating early diagnosis and more tailored care. With the emergence of deep learning (DL), neuroimaging-based approaches for ASD diagnosis have gained attention. However, many existing models lack interpretability of their diagnostic decisions. The prime objective of this work is to classify ASD accurately and to interpret the classification process so as to identify the features most relevant to predicting the disorder. The proposed model employs the neural architecture search network-mobile (NASNet-Mobile) model for ASD detection, integrated with an explainable artificial intelligence (XAI) technique called local interpretable model-agnostic explanations (LIME) for increased transparency of ASD classification. The model is trained on sMRI images of two age groups taken from the Autism Brain Imaging Data Exchange-I (ABIDE-I) dataset. The proposed model yielded an accuracy of 0.9607, F1-score of 0.9614, specificity of 0.9774, sensitivity of 0.9451, negative predictive value (NPV) of 0.9429, positive predictive value (PPV) of 0.9783, and a diagnostic odds ratio of 745.59 for the 2 to 11 years age group, compared to the 12 to 18 years group. These results are superior to those of other state-of-the-art models, Inception v3 and SqueezeNet.
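
For orientation, the snippet below shows one common way to attach LIME image explanations to a Keras classifier built on NASNetMobile. The untrained weights, two-class head, input size, preprocessing, and the random placeholder slice are all assumptions for illustration, not the authors' trained model.

```python
import numpy as np
import tensorflow as tf
from lime import lime_image

# Assumed two-class head on top of NASNetMobile; the real model and weights differ.
base = tf.keras.applications.NASNetMobile(include_top=False, pooling="avg",
                                          input_shape=(224, 224, 3), weights=None)
model = tf.keras.Sequential([base, tf.keras.layers.Dense(2, activation="softmax")])

def predict_fn(images: np.ndarray) -> np.ndarray:
    """Batch of HxWx3 images in [0, 1] -> class probabilities."""
    x = tf.keras.applications.nasnet.preprocess_input(images.astype(np.float32) * 255.0)
    return model.predict(x, verbose=0)

slice_rgb = np.random.rand(224, 224, 3)   # placeholder sMRI slice replicated to 3 channels

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(slice_rgb, predict_fn,
                                         top_labels=1, num_samples=500)
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
```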