CMAP-Fusion: A cross-modal feature selection and model pruning framework for laboratory and imaging data.

April 24, 2026

Authors

Liu C, Yang L, Lei J

Affiliations (3)

  • Senior Engineer, Liuzhou Women and Children's HealthCare Hospital, Liuzhou, Guangxi, China.
  • Senior Engineer, Guangzhou Women and Children's Medical Center Liuzhou Hospital, Liuzhou, Guangxi, China.
  • Associate Senior Technologist, Guangzhou Women and Children's Medical Center Liuzhou Hospital, Liuzhou, Guangxi, China.

Abstract

Cross-modal fusion of medical imaging and laboratory data is a key pathway to accurate disease diagnosis, yet it is constrained by the modal heterogeneity gap, accumulated feature redundancy, and imbalanced efficiency. Existing methods struggle to balance precision with clinical adaptability, and some rely on simulated data, which limits their generalization. To address these challenges, we propose the Cross-Modal Alignment-Pruning Fusion model (CMAP-Fusion), which optimizes through the modular pipeline "encoding alignment → redundancy pruning → fusion prediction": ViT-B/16 extracts imaging features and aligns their dimensions, the SmartTrim dynamic pruning module selects key features and reduces redundancy, and the Cross-Modal Transformer (CMT) mines deep associations between the two modalities. Experiments on the COVID-19 Radiography, ISIC Skin Cancer, and ChestX-ray14 datasets show accuracies of 95.3%, 89.7%, and 93.6%, respectively, an improvement of 3.1% to 4.1% over the strongest baselines. Meanwhile, the parameter count is reduced by 44.2%, computational complexity drops by more than 43%, and cross-modal similarity and feature sparsity are significantly better than the baselines'. The model thus jointly optimizes precision, efficiency, and generalization, offering an efficient solution for medical cross-modal fusion. In future work we will extend to multi-source modalities and multi-disease scenarios, strengthen multi-center clinical validation, further improve interpretability and clinical acceptance, and facilitate lightweight deployment of medical AI.
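The three-stage pipeline described in the abstract (encoding alignment → redundancy pruning → fusion prediction) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the linear projections stand in for the ViT-B/16 and lab encoders, the norm-based top-k selection stands in for the SmartTrim pruning module, and a single cross-attention step stands in for the CMT; all shapes, names, and the `keep_ratio` parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: encoding alignment ---
# Placeholder imaging tokens (e.g., ViT-B/16 emits 196 patch tokens of dim 768)
# and placeholder laboratory features, projected into a shared embedding space.
img_feats = rng.standard_normal((196, 768))   # stand-in for ViT-B/16 outputs
lab_feats = rng.standard_normal((32, 64))     # stand-in for encoded lab tests

d_model = 128
W_img = rng.standard_normal((768, d_model)) / np.sqrt(768)
W_lab = rng.standard_normal((64, d_model)) / np.sqrt(64)
img_aligned = img_feats @ W_img               # (196, 128)
lab_aligned = lab_feats @ W_lab               # (32, 128)

# --- Stage 2: redundancy pruning (simple proxy for SmartTrim) ---
def prune_tokens(x, keep_ratio=0.5):
    """Keep the tokens with the largest L2 norm (a crude importance proxy)."""
    k = max(1, int(len(x) * keep_ratio))
    scores = np.linalg.norm(x, axis=1)
    idx = np.argsort(scores)[-k:]
    return x[np.sort(idx)]                    # preserve token order

img_pruned = prune_tokens(img_aligned, keep_ratio=0.5)   # 196 -> 98 tokens

# --- Stage 3: cross-modal fusion (one cross-attention head, proxy for CMT) ---
def cross_attention(q, kv):
    """Scaled dot-product attention: queries from one modality attend to the other."""
    scores = q @ kv.T / np.sqrt(q.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ kv

fused = cross_attention(lab_aligned, img_pruned)  # lab tokens attend to imaging
print(fused.shape)  # (32, 128)
```

Pruning before fusion is what yields the efficiency gains the abstract reports: attention cost scales with the number of imaging tokens, so halving them roughly halves the fusion stage's compute.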

Topics

COVID-19; Journal Article
