Graph Mamba for Efficient Whole Slide Image Understanding

Jiaxuan Lu, Junyan Shi, Yuhui Lin, Fang Yan, Yue Gao, Shaoting Zhang, Xiaosong Wang

arXiv preprint · May 23, 2025
Whole Slide Images (WSIs) in histopathology present a significant challenge for large-scale medical image analysis due to their high resolution, large size, and complex tile relationships. Existing Multiple Instance Learning (MIL) methods, such as Graph Neural Networks (GNNs) and Transformer-based models, face limitations in scalability and computational cost. To bridge this gap, we propose the WSI-GMamba framework, which synergistically combines the relational modeling strengths of GNNs with the efficiency of Mamba, a state space model designed for sequence learning. The proposed GMamba block integrates Message Passing, Graph Scanning & Flattening, and feature aggregation via a Bidirectional State Space Model (Bi-SSM), achieving Transformer-level performance with 7× fewer FLOPs. By leveraging the complementary strengths of lightweight GNNs and Mamba, the WSI-GMamba framework delivers a scalable solution for large-scale WSI analysis, offering both high accuracy and computational efficiency for slide-level classification.
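
To make the block's stated pipeline concrete, here is a minimal structural sketch in PyTorch: one round of message passing over the tile graph, flattening the nodes into a scan-ordered sequence, and bidirectional aggregation. The paper's Mamba/Bi-SSM kernel is not reproduced; a bidirectional GRU stands in for it, and all class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class GMambaBlockSketch(nn.Module):
    """Structural sketch of a GMamba-style block (names are hypothetical)."""

    def __init__(self, dim: int):
        super().__init__()
        self.msg_proj = nn.Linear(dim, dim)  # message-passing projection
        # Stand-in for the paper's Bi-SSM: a bidirectional GRU over the
        # flattened node sequence (a real implementation would use Mamba).
        self.seq_model = nn.GRU(dim, dim // 2, bidirectional=True, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, adj, order):
        # x: (N, dim) tile features; adj: (N, N) normalized adjacency;
        # order: (N,) permutation from a graph scan (e.g. BFS over tiles).
        x = x + adj @ self.msg_proj(x)   # one round of message passing
        seq = x[order].unsqueeze(0)      # graph scanning & flattening
        out, _ = self.seq_model(seq)     # bidirectional feature aggregation
        return self.norm(out.squeeze(0))

# Toy usage: 16 tiles, 64-dim features, identity adjacency and scan order.
block = GMambaBlockSketch(64)
feats = block(torch.randn(16, 64), torch.eye(16), torch.arange(16))
print(feats.shape)  # torch.Size([16, 64])
```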

Novel Deep Learning Framework for Simultaneous Assessment of Left Ventricular Mass and Longitudinal Strain: Clinical Feasibility and Validation in Patients with Hypertrophic Cardiomyopathy

Park, J., Yoon, Y. E., Jang, Y., Jung, T., Jeon, J., Lee, S.-A., Choi, H.-M., Hwang, I.-C., Chun, E. J., Cho, G.-Y., Chang, H.-J.

medRxiv preprint · May 23, 2025
Background: This study presents the Segmentation-based Myocardial Advanced Refinement Tracking (SMART) system, a novel artificial intelligence (AI)-based framework for transthoracic echocardiography (TTE) that incorporates motion tracking and left ventricular (LV) myocardial segmentation for automated LV mass (LVM) and global longitudinal strain (LVGLS) assessment.

Methods: The SMART system performs LV speckle tracking based on motion vector estimation, refined by structural information from endocardial and epicardial segmentation throughout the cardiac cycle. This approach enables automated measurement of LVM-SMART and LVGLS-SMART. Feasibility was validated in 111 hypertrophic cardiomyopathy (HCM) patients (median age 58 years, 69% male) who underwent TTE and cardiac magnetic resonance imaging (CMR).

Results: LVGLS-SMART showed a strong correlation with conventional manual LVGLS measurements (Pearson's correlation coefficient [PCC] 0.851; mean difference 0 [-2 to 0]). With CMR as the reference standard for LVM, the conventional dimension-based TTE method overestimated LVM (PCC 0.652; mean difference 106 [90 to 123]), whereas LVM-SMART demonstrated excellent agreement with CMR (PCC 0.843; mean difference 1 [-11 to 13]). For predicting extensive myocardial fibrosis, LVGLS-SMART and LVM-SMART performed comparably to conventional LVGLS and CMR (AUC 0.72 and 0.66, respectively). Patients identified as high risk for extensive fibrosis by LVGLS-SMART and LVM-SMART had significantly higher rates of adverse outcomes, including heart failure hospitalization, new-onset atrial fibrillation, and defibrillator implantation.

Conclusions: The SMART technique provides comparable LVGLS evaluation and more accurate LVM assessment than conventional TTE, with predictive value for myocardial fibrosis and adverse outcomes. These findings support its utility in HCM management.
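
For reference, the strain quantity the system automates reduces to the relative shortening of the myocardial contour between end-diastole (ED) and end-systole (ES). A minimal illustration of that formula, not the SMART pipeline itself:

```python
import numpy as np

def contour_length(points: np.ndarray) -> float:
    """Arc length of a myocardial contour given as an (N, 2) polyline."""
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

def longitudinal_strain(ed_contour: np.ndarray, es_contour: np.ndarray) -> float:
    """Lagrangian strain (%) from end-diastole to end-systole.

    Negative in a normally contracting ventricle, since the myocardium
    shortens from ED to ES.
    """
    l_ed = contour_length(ed_contour)
    l_es = contour_length(es_contour)
    return (l_es - l_ed) / l_ed * 100.0

# A contour that shortens by 20% yields a strain of -20%.
ed = np.array([[0.0, 0.0], [0.0, 10.0]])
es = np.array([[0.0, 0.0], [0.0, 8.0]])
print(longitudinal_strain(ed, es))  # -20.0
```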

High-Fidelity Functional Ultrasound Reconstruction via A Visual Auto-Regressive Framework

Xuhang Chen, Zhuo Li, Yanyan Shen, Mufti Mahmud, Hieu Pham, Chi-Man Pun, Shuqiang Wang

arXiv preprint · May 23, 2025
Functional ultrasound (fUS) imaging provides exceptional spatiotemporal resolution for neurovascular mapping, yet its practical application is significantly hampered by critical challenges. Foremost among these is data scarcity, arising from ethical considerations and from signal degradation through the cranium, which limits dataset diversity and compromises the fairness of downstream machine learning models.

PDS-UKAN: Subdivision hopping connected to the U-KAN network for medical image segmentation.

Deng L, Wang W, Chen S, Yang X, Huang S, Wang J

PubMed · May 23, 2025
Accurate and efficient segmentation of medical images plays a vital role in clinical tasks such as diagnostic procedures and treatment planning. Traditional U-shaped encoder-decoder architectures, built on convolutional and transformer-based networks, have shown strong performance in medical image processing. However, the simple skip connections commonly used in these networks face limitations such as insufficient nonlinear modeling capacity, weak global multiscale context modeling, and limited interpretability. To address these challenges, this study proposes the PDS-UKAN network, an innovative subdivision-based U-KAN architecture designed to improve segmentation accuracy. PDS-UKAN incorporates a PKAN module, comprising partial convolutions and Kolmogorov-Arnold network (KAN) layers, into the encoder bottleneck, enhancing the network's nonlinear modeling capacity and interpretability. Additionally, the proposed Dual-Branch Convolutional Boundary Enhancement (DBE) module focuses on pixel-level boundary refinement, improving edge-detail preservation in the shallow skip connections. Meanwhile, the Skip Connection Channel Spatial Attention (SCCSA) module is applied in the deeper skip connections to strengthen cross-dimensional interactions between channel and spatial features, mitigating the loss of spatial information due to downsampling. Extensive experiments across multiple medical imaging datasets demonstrate that PDS-UKAN consistently outperforms state-of-the-art (SOTA) methods.
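
The SCCSA idea, gating a skip connection with channel attention followed by spatial attention, can be sketched generically. The CBAM-style layer choices below (pooling, reduction ratio, 7x7 spatial convolution) are assumptions, not the paper's exact module:

```python
import torch
import torch.nn as nn

class ChannelSpatialSkipAttention(nn.Module):
    """Generic channel-then-spatial attention gate for a skip connection."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_gate = nn.Sequential(          # squeeze-and-excitation style
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(          # 7x7 conv over pooled maps
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        skip = skip * self.channel_gate(skip)       # reweight channels
        pooled = torch.cat(
            [skip.mean(dim=1, keepdim=True),        # average over channels
             skip.amax(dim=1, keepdim=True)],       # max over channels
            dim=1)
        return skip * self.spatial_gate(pooled)     # reweight spatial locations

x = torch.randn(2, 32, 64, 64)                      # a skip feature map
print(ChannelSpatialSkipAttention(32)(x).shape)     # torch.Size([2, 32, 64, 64])
```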

How We Won the ISLES'24 Challenge by Preprocessing

Tianyi Ren, Juampablo E. Heras Rivera, Hitender Oswal, Yutong Pan, William Henry, Sophie Walters, Mehmet Kurt

arXiv preprint · May 23, 2025
Stroke is among the top three causes of death worldwide, and accurate identification of stroke lesion boundaries is critical for diagnosis and treatment. Supervised deep learning methods have emerged as the leading solution for stroke lesion segmentation but require large, diverse, annotated datasets. The ISLES'24 challenge addresses this need by providing longitudinal stroke imaging data, including CT scans taken on arrival at the hospital and follow-up MRI taken 2-9 days after initial arrival, with annotations derived from the follow-up MRI. Importantly, models submitted to the ISLES'24 challenge are evaluated using only the CT inputs, so they must segment lesion progression that may not yet be visible on CT. Our winning solution shows that a carefully designed preprocessing pipeline, including deep-learning-based skull stripping and custom intensity windowing, is beneficial for accurate segmentation. Combined with a standard large residual nnU-Net architecture for segmentation, this approach achieves a mean test Dice of 28.5 with a standard deviation of 21.27.
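
The custom intensity windowing the authors credit amounts to clipping Hounsfield units to a window and rescaling to [0, 1]. A minimal sketch; the window center/width values are illustrative defaults, not the challenge pipeline's settings:

```python
import numpy as np

def window_ct(volume_hu: np.ndarray, center: float, width: float) -> np.ndarray:
    """Clip a CT volume (in Hounsfield units) to a window and rescale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(volume_hu, lo, hi) - lo) / (hi - lo)

# Illustrative brain window: center 40 HU, width 80 HU.
vol = np.random.uniform(-1000, 1000, size=(8, 64, 64))
windowed = window_ct(vol, center=40, width=80)
print(windowed.min(), windowed.max())  # values lie in [0, 1]
```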

Artificial Intelligence enhanced R1 maps can improve lesion detection in focal epilepsy in children

Doumou, G., D'Arco, F., Figini, M., Lin, H., Lorio, S., Piper, R., O'Muircheartaigh, J., Cross, H., Weiskopf, N., Alexander, D., Carmichael, D. W.

medRxiv preprint · May 23, 2025
Background and purpose: MRI is critical for the detection of subtle cortical pathology in epilepsy surgery assessment. This can be aided by the improved image quality and resolution of ultra-high-field (7T) MRI, but poor access and long scan durations limit widespread use, particularly in a paediatric setting. AI-based learning approaches may provide similar information by enhancing data obtained with conventional MRI (3T). We used a convolutional neural network trained on matched 3T and 7T images to enhance quantitative R1 maps (longitudinal relaxation rate) obtained at 3T in paediatric epilepsy patients, and assessed their potential clinical value for lesion identification.

Materials and methods: A 3D U-Net was trained using paired patches from 3T and 7T R1 maps from n=10 healthy volunteers. The trained network was applied to enhance 3T R1 images of paediatric focal epilepsy patients from a different scanner/site (n=17 MRI lesion-positive / n=14 MR-negative). Radiological review assessed image quality, lesion identification, and visualization on the enhanced maps in comparison to the 3T R1 maps, without clinical information. Lesion appearance was then compared to 3D FLAIR.

Results: AI-enhanced R1 maps were superior in image quality to the original 3T R1 maps while preserving and enhancing the visibility of lesions. After exclusion of 5/31 patients (due to movement artefact or incomplete data), lesions were detected on AI-enhanced R1 maps in 14/15 (93%) MR-positive and 4/11 (36%) MR-negative patients.

Conclusion: AI-enhanced R1 maps improved the visibility of lesions in MR-positive patients and provided higher sensitivity in the MR-negative group than either the original 3T R1 maps or 3D FLAIR. This provides promising initial evidence that 3T quantitative maps, enhanced by an AI model trained on 7T MRI data, can outperform conventional 3T imaging without the need for pathology-specific information.
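
The paired-patch training described in the Methods reduces to supervised regression from 3T patches to matched 7T targets. A minimal sketch of one optimization step, with a toy convolutional stack standing in for the study's 3D U-Net and an assumed L1 loss:

```python
import torch
import torch.nn as nn

# Tiny conv stack standing in for the study's 3D U-Net, so the loop runs.
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # assumed reconstruction loss

# One step on a batch of paired patches: 3T R1 input -> 7T R1 target.
x_3t = torch.randn(4, 1, 32, 32, 32)   # (batch, channel, depth, height, width)
y_7t = torch.randn(4, 1, 32, 32, 32)
loss = loss_fn(model(x_3t), y_7t)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```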

Brain age prediction from MRI scans in neurodegenerative diseases.

Papouli A, Cole JH

PubMed · May 22, 2025
This review explores the use of brain age estimation from MRI scans as a biomarker of brain health. With disorders like Alzheimer's and Parkinson's increasing globally, there is an urgent need for early detection tools that can identify at-risk individuals before cognitive symptoms emerge. Brain age offers a noninvasive, quantitative measure of neurobiological ageing, with applications in early diagnosis, disease monitoring, and personalized medicine. Studies show that individuals with Alzheimer's, mild cognitive impairment (MCI), and Parkinson's have older brain ages than their chronological age. Longitudinal research indicates that the brain-predicted age difference (brain-PAD) rises with disease progression and often precedes cognitive decline. Advances in deep learning and multimodal imaging have improved the accuracy and interpretability of brain age predictions. Moreover, socioeconomic disparities and environmental factors significantly affect brain ageing, highlighting the need for inclusive models. Brain age estimation is a promising biomarker for identifying future risk of neurodegenerative disease, monitoring progression, and informing prognosis. Challenges remain around standardization, demographic biases, and interpretability. Future research should integrate brain age with other biomarkers and multimodal imaging to enhance early diagnosis and intervention strategies.
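
Brain-PAD itself is simple to compute, predicted age minus chronological age, often followed by a linear age-bias correction in a reference group. A minimal sketch:

```python
import numpy as np

def brain_pad(predicted: np.ndarray, chronological: np.ndarray) -> np.ndarray:
    """Brain-predicted age difference: predicted minus chronological age."""
    return predicted - chronological

def bias_corrected_pad(predicted: np.ndarray, chronological: np.ndarray) -> np.ndarray:
    """Linear age-bias correction: regress predictions on age, keep residuals."""
    slope, intercept = np.polyfit(chronological, predicted, deg=1)
    return predicted - (slope * chronological + intercept)

age = np.array([60.0, 70.0, 80.0])
pred = np.array([65.0, 72.0, 88.0])
print(brain_pad(pred, age))           # [5. 2. 8.]
print(bias_corrected_pad(pred, age))  # residuals after the linear fit
```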

Mitigating Overfitting in Medical Imaging: Self-Supervised Pretraining vs. ImageNet Transfer Learning for Dermatological Diagnosis

Iván Matas, Carmen Serrano, Miguel Nogales, David Moreno, Lara Ferrándiz, Teresa Ojeda, Begoña Acha

arXiv preprint · May 22, 2025
Deep learning has transformed computer vision but relies heavily on large labeled datasets and computational resources. Transfer learning, particularly fine-tuning pretrained models, offers a practical alternative; however, models pretrained on natural image datasets such as ImageNet may fail to capture domain-specific characteristics in medical imaging. This study introduces an unsupervised learning framework that extracts high-value dermatological features instead of relying solely on ImageNet-based pretraining. We employ a Variational Autoencoder (VAE) trained from scratch on a proprietary dermatological dataset, allowing the model to learn a structured and clinically relevant latent space. This self-supervised feature extractor is then compared to an ImageNet-pretrained backbone under identical classification conditions, highlighting the trade-offs between general-purpose and domain-specific pretraining. Our results reveal distinct learning patterns. The self-supervised model achieves a final validation loss of 0.110 (-33.33%), while the ImageNet-pretrained model stagnates at 0.100 (-16.67%), indicating overfitting. Accuracy trends confirm this: the self-supervised model improves from 45% to 65% (+44.44%) with a near-zero overfitting gap, whereas the ImageNet-pretrained model reaches 87% in training (+50.00%) but plateaus at 75% on validation (+19.05%), with its overfitting gap widening to +0.060. These findings suggest that while ImageNet pretraining accelerates convergence, it also amplifies overfitting on non-clinically relevant features. In contrast, self-supervised learning achieves steady improvements, stronger generalization, and superior adaptability, underscoring the importance of domain-specific feature extraction in medical imaging.
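
The transfer setup being compared, reusing a pretrained encoder as a frozen feature extractor for a downstream classifier, looks roughly as follows. The toy encoder, latent size, and class count are illustrative, not the study's architecture:

```python
import torch
import torch.nn as nn

class TinyVAEEncoder(nn.Module):
    """Toy VAE encoder; only the latent mean is reused downstream."""

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(32, latent_dim)        # mean head of q(z|x)
        self.logvar = nn.Linear(32, latent_dim)    # log-variance head

    def forward(self, x):
        h = self.features(x)
        return self.mu(h), self.logvar(h)

encoder = TinyVAEEncoder()            # pretrained via reconstruction + KL loss
for p in encoder.parameters():        # freeze the self-supervised features
    p.requires_grad = False

classifier = nn.Linear(64, 5)         # assumed number of lesion classes
mu, _ = encoder(torch.randn(2, 3, 224, 224))
print(classifier(mu).shape)           # torch.Size([2, 5])
```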

FLAMeS: A Robust Deep Learning Model for Automated Multiple Sclerosis Lesion Segmentation

Dereskewicz, E., La Rosa, F., dos Santos Silva, J., Sizer, E., Kohli, A., Wynen, M., Mullins, W. A., Maggi, P., Levy, S., Onyemeh, K., Ayci, B., Solomon, A. J., Assländer, J., Al-Louzi, O., Reich, D. S., Sumowski, J. F., Beck, E. S.

medRxiv preprint · May 22, 2025
Background and Purpose: Assessment of brain lesions on MRI is crucial for research in multiple sclerosis (MS). Manual segmentation is time-consuming and inconsistent. We aimed to develop an automated MS lesion segmentation algorithm for T2-weighted fluid-attenuated inversion recovery (FLAIR) MRI.

Methods: We developed FLAIR Lesion Analysis in Multiple Sclerosis (FLAMeS), a deep learning-based MS lesion segmentation algorithm built on the nnU-Net 3D full-resolution U-Net and trained on 668 1.5 and 3 tesla FLAIR scans from persons with MS. FLAMeS was evaluated on three external datasets: MSSEG-2 (n=14), MSLesSeg (n=51), and a clinical cohort (n=10), and compared to SAMSEG, LST-LPA, and LST-AI. Performance was assessed qualitatively by two blinded experts and quantitatively by comparing automated and ground-truth lesion masks using standard segmentation metrics.

Results: In a blinded qualitative review of 20 scans, both raters selected FLAMeS as the most accurate segmentation in 15 cases, with one rater favoring FLAMeS in two additional cases. Across all testing datasets, FLAMeS achieved a mean Dice score of 0.74, a true positive rate of 0.84, and an F1 score of 0.78, consistently outperforming the benchmark methods. On other metrics, including positive predictive value, relative volume difference, and false positive rate, FLAMeS performed similarly to or better than the benchmarks. Most lesions missed by FLAMeS were smaller than 10 mm³, whereas the benchmark methods missed larger lesions in addition to smaller ones.

Conclusions: FLAMeS is an accurate, robust method for MS lesion segmentation that outperforms other publicly available methods.
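
Two of the reported metrics, Dice and true positive rate, are straightforward to compute voxel-wise from binary masks; the paper's lesion-wise F1 additionally requires matching individual lesions, which this sketch omits:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Voxel-wise Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def true_positive_rate(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sensitivity: fraction of ground-truth voxels the prediction recovers."""
    return (np.logical_and(pred, truth).sum() / truth.sum()) if truth.sum() else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(round(dice_score(pred, truth), 3))          # 0.667
print(round(true_positive_rate(pred, truth), 3))  # 0.667
```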

Radiomics-Based Early Triage of Prostate Cancer: A Multicenter Study from the CHAIMELEON Project

Vraka, A., Marfil-Trujillo, M., Ribas-Despuig, G., Flor-Arnal, S., Cerda-Alberich, L., Jimenez-Gomez, P., Jimenez-Pastor, A., Marti-Bonmati, L.

medRxiv preprint · May 22, 2025
Prostate cancer (PCa) is the most commonly diagnosed malignancy in men worldwide. Accurate triage of patients based on tumor aggressiveness and staging is critical for selecting appropriate management pathways. While magnetic resonance imaging (MRI) has become a mainstay in PCa diagnosis, most predictive models rely on multiparametric imaging or invasive inputs, limiting generalizability in real-world clinical settings. This study aimed to develop and validate machine learning (ML) models using radiomic features extracted from T2-weighted MRI, alone and in combination with clinical variables, to predict ISUP grade (tumor aggressiveness), lymph node involvement (cN), and distant metastasis (cM). A retrospective multicenter cohort from three European sites in the CHAIMELEON project was analyzed. Radiomic features were extracted from prostate zone segmentations and lesion masks, following standardized preprocessing and ComBat harmonization. Feature selection and model optimization were performed using nested cross-validation and Bayesian tuning. Hybrid models were trained using XGBoost and interpreted with SHAP values. The ISUP model achieved an AUC of 0.66, while the cN and cM models reached AUCs of 0.77 and 0.80, respectively. The best-performing models consistently combined prostate zone radiomics with clinical features such as PSA, PI-RADS v2, and ISUP grade. SHAP analysis confirmed the importance of both clinical and texture-based radiomic features, with entropy and non-uniformity measures playing central roles in all tasks. Our results demonstrate the feasibility of using T2-weighted MRI and zonal radiomics for robust prediction of aggressiveness, nodal involvement, and distant metastasis in PCa. This fully automated pipeline offers an interpretable, accessible, and clinically translatable tool for first-line PCa triage, with potential integration into real-world diagnostic workflows.
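
The nested cross-validation scheme can be sketched as an inner tuning loop wrapped by an outer evaluation loop. Grid search stands in for the paper's Bayesian tuning, and the features below are synthetic, not the study's radiomics:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from xgboost import XGBClassifier

# Synthetic stand-in for harmonized radiomic features and a binary target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 2, size=200)

inner = KFold(n_splits=3, shuffle=True, random_state=0)  # hyperparameter tuning
outer = KFold(n_splits=5, shuffle=True, random_state=0)  # unbiased evaluation

search = GridSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_grid={"max_depth": [2, 4], "n_estimators": [100, 300]},
    scoring="roc_auc",
    cv=inner,
)
aucs = cross_val_score(search, X, y, cv=outer, scoring="roc_auc")
print(aucs.mean())  # outer-loop AUC, analogous to the reported model AUCs
```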