
Large-scale Multi-sequence Pretraining for Generalizable MRI Analysis in Versatile Clinical Applications

Zelin Qiu, Xi Wang, Zhuoyao Xie, Juan Zhou, Yu Wang, Lingjie Yang, Xinrui Jiang, Juyoung Bae, Moo Hyun Son, Qiang Ye, Dexuan Chen, Rui Zhang, Tao Li, Neeraj Ramesh Mahboobani, Varut Vardhanabhuti, Xiaohui Duan, Yinghua Zhao, Hao Chen

arXiv preprint, Aug 10 2025
Multi-sequence Magnetic Resonance Imaging (MRI) offers remarkable versatility, enabling the distinct visualization of different tissue types. Nevertheless, the inherent heterogeneity among MRI sequences poses significant challenges to the generalization capability of deep learning models. These challenges undermine model performance when faced with varying acquisition parameters, thereby severely restricting their clinical utility. In this study, we present PRISM, a foundation model PRe-trained with large-scale multI-Sequence MRI. We collected a total of 64 datasets from both public and private sources, encompassing a wide range of whole-body anatomical structures, with scans spanning diverse MRI sequences. Among them, 336,476 volumetric MRI scans from 34 datasets (8 public and 26 private) were curated to construct the largest multi-organ multi-sequence MRI pretraining corpus to date. We propose a novel pretraining paradigm that disentangles anatomically invariant features from sequence-specific variations in MRI, while preserving high-level semantic representations. We established a benchmark comprising 44 downstream tasks, including disease diagnosis, image segmentation, registration, progression prediction, and report generation. These tasks were evaluated on 32 public datasets and 5 private cohorts. PRISM consistently outperformed both non-pretrained models and existing foundation models, achieving first-rank results in 39 out of 44 downstream benchmarks with statistically significant improvements. These results underscore its ability to learn robust and generalizable representations across unseen data acquired under diverse MRI protocols. PRISM provides a scalable framework for multi-sequence MRI analysis, thereby enhancing the translational potential of AI in radiology. It delivers consistent performance across diverse imaging protocols, reinforcing its clinical applicability.

SynMatch: Rethinking Consistency in Medical Image Segmentation with Sparse Annotations

Zhiqiang Shen, Peng Cao, Xiaoli Liu, Jinzhu Yang, Osmar R. Zaiane

arXiv preprint, Aug 10 2025
Label scarcity remains a major challenge in deep learning-based medical image segmentation. Recent studies use strong-weak pseudo supervision to leverage unlabeled data. However, performance is often hindered by inconsistencies between pseudo labels and their corresponding unlabeled images. In this work, we propose SynMatch, a novel framework that sidesteps the need for improving pseudo labels by synthesizing images to match them instead. Specifically, SynMatch synthesizes images using texture and shape features extracted from the same segmentation model that generates the corresponding pseudo labels for unlabeled images. This design enables the generation of highly consistent synthesized-image-pseudo-label pairs without requiring any training parameters for image synthesis. We extensively evaluate SynMatch across diverse medical image segmentation tasks under semi-supervised learning (SSL), weakly-supervised learning (WSL), and barely-supervised learning (BSL) settings with increasingly limited annotations. The results demonstrate that SynMatch achieves superior performance, especially in the most challenging BSL setting. For example, it outperforms the recent strong-weak pseudo supervision-based method by 29.71% and 10.05% on the polyp segmentation task with 5% and 10% scribble annotations, respectively. The code will be released at https://github.com/Senyh/SynMatch.

The eyelid and pupil dynamics underlying stress levels in awake mice.

Zeng, H.

bioRxiv preprint, Aug 10 2025
Stress is a natural response of the body to perceived threats, and it can have both positive and negative effects on brain hemodynamics. Stress-induced changes in pupil and eyelid size/shape have been used as biomarkers in several fMRI studies. However, knowledge of how pupil and eyelid dynamics change remains limited, particularly in animal models. In the present study, pupil and eyelid dynamics were carefully investigated and characterized in a newly developed awake rodent fMRI protocol. Leveraging deep learning techniques, mouse pupil and eyelid diameters were extracted and analyzed during the different training and imaging phases. Our findings demonstrate a consistent downward trend in pupil and eyelid dynamics under a meticulously designed training protocol, suggesting that pupil and eyelid behavior can serve as a reliable indicator of stress levels and motion artifacts in awake fMRI studies. The current recording platform not only facilitates awake animal MRI studies but also holds potential for numerous other research areas, owing to its non-invasive nature and straightforward implementation.

Dendrite cross attention for high-dose-rate brachytherapy distribution planning.

Saini S, Liu X

PubMed, Aug 10 2025
Cervical cancer is a significant global health issue, and high-dose-rate brachytherapy (HDR-BT) is crucial for its treatment. However, manually creating HDR-BT plans is time-consuming and heavily relies on the planner's expertise, making standardization difficult. This study introduces two advanced deep learning models to address this need: Bi-branch Cross-Attention UNet (BiCA-UNet) and Dendrite Cross-Attention UNet (DCA-UNet). BiCA-UNet enhances the correlation between the CT scan and segmentation maps of the clinical target volume (CTV), applicator, bladder, and rectum. It uses two branches: one processes the stacked input of CT scans and segmentations, and the other focuses on the CTV segmentation. A cross-attention mechanism integrates these branches, improving the model's understanding of the CTV region for accurate dose predictions. Building on BiCA-UNet, DCA-UNet further introduces a primary branch of stacked inputs and three secondary branches for the CTV, bladder, and rectum segmentations, forming a dendritic structure. Cross-attention with the bladder and rectum segmentations helps the model understand the regions of organs at risk (OARs), refining dose prediction. Evaluation of these models using multiple metrics indicates that both BiCA-UNet and DCA-UNet significantly improve HDR-BT dose prediction accuracy for various applicator types. The cross-attention mechanisms enhance the feature representation of critical anatomical regions, leading to precise and reliable treatment plans. This research highlights the potential of BiCA-UNet and DCA-UNet in advancing HDR-BT planning, contributing to the standardization of treatment plans and offering promising directions for future research to improve patient outcomes.
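Cross-attention, the mechanism both models rely on, can be illustrated generically: features from one branch form the queries, while features from another branch supply the keys and values. The following is a minimal NumPy sketch of that generic operation; the token counts, dimensions, and weight initialization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, d_k=16, rng=None):
    """Generic cross-attention: one branch's features (queries) attend to
    another branch's features (keys/values). Illustrative sketch only."""
    if rng is None:
        rng = np.random.default_rng(0)
    d_q, d_kv = q_feats.shape[-1], kv_feats.shape[-1]
    # random projection weights stand in for learned parameters
    Wq = rng.standard_normal((d_q, d_k)) / np.sqrt(d_q)
    Wk = rng.standard_normal((d_kv, d_k)) / np.sqrt(d_kv)
    Wv = rng.standard_normal((d_kv, d_k)) / np.sqrt(d_kv)
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n_q, n_kv) attention weights
    return attn @ V                          # queries enriched with kv context

# e.g. 64 CTV-branch tokens attending to 256 stacked-input tokens
out = cross_attention(np.ones((64, 32)), np.ones((256, 32)))
assert out.shape == (64, 16)
```

The dendritic variant described above would apply this operation once per secondary branch (CTV, bladder, rectum) against the primary stacked-input branch.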

Pulmonary diseases accurate recognition using adaptive multiscale feature fusion in chest radiography.

Zhou M, Gao L, Bian K, Wang H, Wang N, Chen Y, Liu S

PubMed, Aug 10 2025
Pulmonary diseases can severely impair respiratory function and be life-threatening. Accurately recognizing them in chest X-ray images is challenging due to overlapping body structures and the complex anatomy of the chest. We introduce an Adaptive Multiscale Fusion Network (AMFNet) for classifying chest X-ray images of pneumonia, tuberculosis, and COVID-19, three common pulmonary diseases. AMFNet consists of a lightweight Multiscale Fusion Network (MFNet) and ResNet50 as the secondary feature extraction network. MFNet employs Fusion Blocks with self-calibrated convolution (SCConv) and Attention Feature Fusion (AFF) to capture multiscale semantic features, and integrates a custom activation function, MFReLU, to reduce the model's memory access time. A fusion module adaptively combines features from both networks. Experimental results show that AMFNet achieves 97.48% accuracy and an F1 score of 0.9781 on public datasets, outperforming models such as ResNet50, DenseNet121, ConvNeXt-Tiny, and Vision Transformer while using fewer parameters.

Prediction of hematoma changes in spontaneous intracerebral hemorrhage using a Transformer-based generative adversarial network to generate follow-up CT images.

Feng C, Jiang C, Hu C, Kong S, Ye Z, Han J, Zhong K, Yang T, Yin H, Lao Q, Ding Z, Shen D, Shen Q

PubMed, Aug 10 2025
To visualize and assess hematoma growth trends by generating follow-up CT images within 24 h based on baseline CT images of spontaneous intracerebral hemorrhage (sICH) using a Transformer-integrated Generative Adversarial Network (GAN). Patients with sICH were retrospectively recruited from two medical centers. The imaging data included baseline non-contrast CT scans taken after onset and follow-up imaging within 24 h. In the test set, the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) were used to quantitatively assess the quality of the predicted images. Pearson's correlation analysis was performed to assess the agreement of semantic features and geometric properties of hematomas between true follow-up CT images and the predicted images. The consistency of hematoma expansion prediction between true and generated images was further examined. The PSNR of the predicted images was 26.73 ± 1.11, and the SSIM was 91.23 ± 1.10. The Pearson correlation coefficients (r) with 95% confidence intervals (CI) for irregularity, satellite sign number, intraventricular or subarachnoid hemorrhage, midline shift, edema expansion, mean CT value, maximum cross-sectional area, and hematoma volume between the predicted and true follow-up images were 0.94 (0.91, 0.96), 0.87 (0.81, 0.91), 0.86 (0.80, 0.91), 0.89 (0.84, 0.92), 0.91 (0.87, 0.94), 0.78 (0.68, 0.84), 0.94 (0.91, 0.96), and 0.94 (0.91, 0.96), respectively. The correlation coefficient (r) for predicting hematoma expansion between predicted and true follow-up images was 0.86 (95% CI: 0.79, 0.90; P < 0.001). The model constructed using a GAN integrated with Transformer modules can accurately visualize early hematoma changes in sICH.
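Two of the headline metrics above, PSNR and Pearson's r, are straightforward to compute. A minimal NumPy sketch follows; the toy images and per-case hematoma volumes are made-up numbers for illustration, not the study's data.

```python
import numpy as np

def psnr(pred, ref, data_range=255.0):
    """Peak signal-to-noise ratio (dB) between a generated and a true image."""
    mse = np.mean((pred.astype(float) - ref.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

# Pearson r between hematoma volumes measured on true vs generated
# follow-up images (toy per-case numbers, illustrative only)
true_vol = np.array([12.0, 30.0, 7.5, 55.0, 21.0])
pred_vol = np.array([11.0, 28.0, 9.0, 52.0, 23.0])
r = np.corrcoef(true_vol, pred_vol)[0, 1]

ref = np.full((8, 8), 100.0)
pred = ref + 5.0                   # uniform error of 5 gray levels
print(round(psnr(pred, ref), 2))   # prints 34.15
```

SSIM is more involved (local means, variances, and covariances over sliding windows); in practice it is typically taken from a library such as scikit-image rather than hand-rolled.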

OctreeNCA: Single-Pass 184 MP Segmentation on Consumer Hardware

Nick Lemke, John Kalkhof, Niklas Babendererde, Anirban Mukhopadhyay

arXiv preprint, Aug 9 2025
Medical applications demand segmentation of large inputs, like prostate MRIs, pathology slices, or videos of surgery. These inputs should ideally be inferred at once to provide the model with proper spatial or temporal context. When segmenting large inputs, the VRAM consumption of the GPU becomes the bottleneck. Architectures like UNets or Vision Transformers scale very poorly in VRAM consumption, resulting in patch- or frame-wise approaches that compromise global consistency and inference speed. The lightweight Neural Cellular Automaton (NCA) is a bio-inspired model that is by construction size-invariant. However, due to its local-only communication rules, it lacks global knowledge. We propose OctreeNCA by generalizing the neighborhood definition using an octree data structure. Our generalized neighborhood definition enables the efficient traversal of global knowledge. Since deep learning frameworks are mainly developed for large multi-layer networks, their implementation does not fully leverage the advantages of NCAs. We implement an NCA inference function in CUDA that further reduces VRAM demands and increases inference speed. Our OctreeNCA segments high-resolution images and videos quickly while occupying 90% less VRAM than a UNet during evaluation. This allows us to segment 184 Megapixel pathology slices or 1-minute surgical videos at once.
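The plain NCA update that OctreeNCA generalizes can be sketched in a few lines: every cell perceives only its immediate neighborhood and applies a shared update rule, which is what makes the model size-invariant by construction. Below is a toy NumPy sketch; the channel counts, random weights, and wrap padding are illustrative assumptions, and the paper's octree neighborhood is not reproduced.

```python
import numpy as np

def nca_step(state, w_perceive, w_update):
    """One NCA update: each cell perceives its 3x3 neighborhood (local-only
    communication) and updates its state via a shared two-layer rule."""
    h, w, c = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    # gather the 9 neighbor states per cell -> (h, w, 9*c) perception vector
    patches = np.stack(
        [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)],
        axis=-2,
    )
    perception = patches.reshape(h, w, 9 * c)
    hidden = np.maximum(perception @ w_perceive, 0)  # shared MLP, ReLU
    return state + hidden @ w_update                 # residual update

rng = np.random.default_rng(0)
c = 4
w1 = rng.standard_normal((9 * c, 8)) * 0.1
w2 = rng.standard_normal((8, c)) * 0.1
# the same weights apply to any grid size: size-invariance by construction
small = nca_step(rng.standard_normal((16, 16, c)), w1, w2)
large = nca_step(rng.standard_normal((64, 64, c)), w1, w2)
assert small.shape == (16, 16, c) and large.shape == (64, 64, c)
```

Because only the per-cell rule is parameterized, the same trained weights run on a 16×16 toy grid or a 184 MP slide; what differs is the activation memory, which is where the paper's CUDA inference function and octree neighborhood come in.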

Spatio-Temporal Conditional Diffusion Models for Forecasting Future Multiple Sclerosis Lesion Masks Conditioned on Treatments

Gian Mario Favero, Ge Ya Luo, Nima Fathi, Justin Szeto, Douglas L. Arnold, Brennan Nichyporuk, Chris Pal, Tal Arbel

arXiv preprint, Aug 9 2025
Image-based personalized medicine has the potential to transform healthcare, particularly for diseases that exhibit heterogeneous progression such as Multiple Sclerosis (MS). In this work, we introduce the first treatment-aware spatio-temporal diffusion model that is able to generate future masks demonstrating lesion evolution in MS. Our voxel-space approach incorporates multi-modal patient data, including MRI and treatment information, to forecast new and enlarging T2 (NET2) lesion masks at a future time point. Extensive experiments on a multi-centre dataset of 2131 patient 3D MRIs from randomized clinical trials for relapsing-remitting MS demonstrate that our generative model is able to accurately predict NET2 lesion masks for patients across six different treatments. Moreover, we demonstrate our model has the potential for real-world clinical applications through downstream tasks such as future lesion count and location estimation, binary lesion activity classification, and generating counterfactual future NET2 masks for several treatments with different efficacies. This work highlights the potential of causal, image-based generative models as powerful tools for advancing data-driven prognostics in MS.

Fusion-Based Brain Tumor Classification Using Deep Learning and Explainable AI, and Rule-Based Reasoning

Melika Filvantorkaman, Mohsen Piri, Maral Filvan Torkaman, Ashkan Zabihi, Hamidreza Moradi

arXiv preprint, Aug 9 2025
Accurate and interpretable classification of brain tumors from magnetic resonance imaging (MRI) is critical for effective diagnosis and treatment planning. This study presents an ensemble-based deep learning framework that combines MobileNetV2 and DenseNet121 convolutional neural networks (CNNs) using a soft voting strategy to classify three common brain tumor types: glioma, meningioma, and pituitary adenoma. The models were trained and evaluated on the Figshare dataset using a stratified 5-fold cross-validation protocol. To enhance transparency and clinical trust, the framework integrates an Explainable AI (XAI) module employing Grad-CAM++ for class-specific saliency visualization, alongside a symbolic Clinical Decision Rule Overlay (CDRO) that maps predictions to established radiological heuristics. The ensemble classifier achieved superior performance compared to individual CNNs, with an accuracy of 91.7%, precision of 91.9%, recall of 91.7%, and F1-score of 91.6%. Grad-CAM++ visualizations revealed strong spatial alignment between model attention and expert-annotated tumor regions, supported by Dice coefficients up to 0.88 and IoU scores up to 0.78. Clinical rule activation further validated model predictions in cases with distinct morphological features. A human-centered interpretability assessment involving five board-certified radiologists yielded high Likert-scale scores for both explanation usefulness (mean = 4.4) and heatmap-region correspondence (mean = 4.0), reinforcing the framework's clinical relevance. Overall, the proposed approach offers a robust, interpretable, and generalizable solution for automated brain tumor classification, advancing the integration of deep learning into clinical neurodiagnostics.
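Soft voting, the ensemble strategy described above, simply averages the member networks' class-probability outputs before taking the argmax. A minimal NumPy sketch with made-up softmax outputs; the class order and equal weights are illustrative assumptions.

```python
import numpy as np

def soft_vote(prob_a, prob_b, weights=(0.5, 0.5)):
    """Soft voting: weighted average of two classifiers' class-probability
    outputs, then argmax (generic sketch, not the paper's code)."""
    combined = weights[0] * prob_a + weights[1] * prob_b
    return combined.argmax(axis=-1), combined

# toy softmax outputs for one scan over (glioma, meningioma, pituitary)
mobilenet_p = np.array([[0.60, 0.30, 0.10]])
densenet_p  = np.array([[0.40, 0.45, 0.15]])
pred, combined = soft_vote(mobilenet_p, densenet_p)
# averaged probs: [0.50, 0.375, 0.125] -> class 0 (glioma)
assert pred[0] == 0
```

The appeal of soft over hard voting is that it preserves each member's confidence: here MobileNetV2's strong glioma vote outweighs DenseNet121's weak meningioma preference.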

LWT-ARTERY-LABEL: A Lightweight Framework for Automated Coronary Artery Identification

Shisheng Zhang, Ramtin Gharleghi, Sonit Singh, Daniel Moses, Dona Adikari, Arcot Sowmya, Susann Beier

arXiv preprint, Aug 9 2025
Coronary artery disease (CAD) remains the leading cause of death globally, with computed tomography coronary angiography (CTCA) serving as a key diagnostic tool. However, coronary arterial analysis using CTCA, such as identifying artery-specific features from computational modelling, is labour-intensive and time-consuming. Automated anatomical labelling of coronary arteries offers a potential solution, yet the inherent anatomical variability of coronary trees presents a significant challenge. Traditional knowledge-based labelling methods fall short in leveraging data-driven insights, while recent deep-learning approaches often demand substantial computational resources and overlook critical clinical knowledge. To address these limitations, we propose a lightweight method that integrates anatomical knowledge with rule-based topology constraints for effective coronary artery labelling. Our approach achieves state-of-the-art performance on benchmark datasets, providing a promising alternative for automated coronary artery labelling.