SAFusion: Scenario-Adaptive Network for Multimodal Medical Image Fusion.

January 12, 2026

Authors

Li W, Jia P, He D, Liu S, Wang G, Huang Y

Abstract

Multimodal medical image fusion aims to integrate complementary information from different modalities to support clinical diagnosis and treatment. Although deep learning has significantly advanced this field, existing methods often overlook the differences between fusion scenarios, so a single network is inadequate for diverse fusion requirements. We therefore propose a novel scenario-adaptive fusion network. The network employs a two-stage training process. In the first stage, an autoencoder is trained for multiscale feature extraction and image reconstruction. In the second stage, the autoencoder parameters are frozen, and a Fusion Layer is trained to integrate multimodal features. The Fusion Layer consists of a Scenario-Specific Fusion Module and a Scenario-General Fusion Module. The former uses a mixture-of-experts model to customize the fusion strategy for each scenario, while the latter employs a dual-path structure, combining standard convolution with a deformable-convolution gating mechanism, to achieve general feature fusion across multiple scenarios. Compared with eleven state-of-the-art methods, our method demonstrates superior information integration and visual consistency, offering a flexible and efficient solution for diverse fusion scenarios. The code is available at https://github.com/PengtaoJia/SAFusion.
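The abstract describes the Fusion Layer only at a high level. Below is a minimal PyTorch-style sketch of the two modules it names; all class names, hyperparameters (e.g., four experts), the offset-prediction design, and the way the two outputs are combined are illustrative assumptions, not the authors' released implementation (see the GitHub repository linked above for that).

```python
# Hedged sketch of the two Fusion Layer modules described in the abstract.
# Names and hyperparameters are assumptions, not the authors' released code.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class ScenarioSpecificFusion(nn.Module):
    """Mixture-of-experts fusion: a router weights per-scenario expert convs."""

    def __init__(self, channels: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Conv2d(2 * channels, channels, 3, padding=1)
            for _ in range(num_experts)
        )
        # Global pooling + 1x1 conv yields one routing logit per expert.
        self.router = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, num_experts, 1),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        x = torch.cat([feat_a, feat_b], dim=1)                   # (N, 2C, H, W)
        weights = torch.softmax(self.router(x), dim=1)           # (N, E, 1, 1)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (N, E, C, H, W)
        return (weights.unsqueeze(2) * outs).sum(dim=1)          # weighted expert mix


class ScenarioGeneralFusion(nn.Module):
    """Dual-path fusion: standard conv and deformable conv, blended by a gate."""

    def __init__(self, channels: int):
        super().__init__()
        self.std_conv = nn.Conv2d(2 * channels, channels, 3, padding=1)
        # Offsets for the 3x3 deformable kernel: 2 * 3 * 3 = 18 channels.
        self.offset_pred = nn.Conv2d(2 * channels, 18, 3, padding=1)
        self.deform_conv = DeformConv2d(2 * channels, channels, 3, padding=1)
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        x = torch.cat([feat_a, feat_b], dim=1)
        std_path = self.std_conv(x)                          # rigid local aggregation
        def_path = self.deform_conv(x, self.offset_pred(x))  # content-adaptive sampling
        g = self.gate(x)                                     # per-pixel weights in [0, 1]
        return g * def_path + (1 - g) * std_path


if __name__ == "__main__":
    a, b = torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128)
    # How the two module outputs are combined is not stated in the abstract;
    # summing them here is purely an assumption for the demo.
    fused = ScenarioGeneralFusion(64)(a, b) + ScenarioSpecificFusion(64)(a, b)
    print(fused.shape)  # torch.Size([1, 64, 128, 128])
```

In the second training stage the abstract describes, these modules would be optimized while the pretrained autoencoder stays frozen, e.g., by calling requires_grad_(False) on the encoder and decoder parameters before training the Fusion Layer.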

Topics

Journal Article
