
Cogformer: A unified multi-scale brain representation for visual decoding and reconstruction from fMRI.

February 24, 2026

Authors

Yin X, Gan JQ, Wang H

Abstract

With the rapid development of deep generative models (DGMs), the performance of decoding language and reconstructing images from functional magnetic resonance imaging (fMRI) has improved markedly. Nevertheless, accurately representing brain activity remains highly challenging, primarily due to the scarcity of paired samples and the low signal-to-noise ratio of fMRI. To tackle these challenges, we introduce Cogformer, a unified multi-scale brain representation method. It is the first to learn brain representations from multi-scale fMRI activity via self-attention, and it integrates a synchronized decoding and dynamic decoupling strategy for structural and semantic features through cross-attention. We conduct a systematic evaluation of Cogformer on the large-scale Natural Scenes Dataset (NSD) across a broad range of visual decoding tasks, including category classification, multi-label classification, image retrieval, image captioning, and image reconstruction. To the best of our knowledge, this represents the most extensive task coverage reported in related research. Cogformer achieves superior performance compared to a range of transformer-based baselines in category classification, multi-label classification, and image retrieval. Moreover, in the more challenging tasks of image captioning and image reconstruction, Cogformer leverages a prior diffusion module to enhance alignment with image semantics, further improving semantic consistency in caption generation and visual fidelity in image reconstruction. Across multiple evaluation metrics, Cogformer demonstrates competitive performance against existing state-of-the-art (SOTA) methods, highlighting its strong decoding capabilities and generalization potential.
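The core idea the abstract describes — decoupling structural and semantic features by letting two separate query streams cross-attend to a shared set of fMRI token embeddings — can be illustrated with a minimal sketch. This is a hypothetical toy in NumPy, not Cogformer's actual architecture: the token counts, dimensions, and the use of single-head unlearned cross-attention are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # scaled dot-product cross-attention: each query row attends
    # over all key/value rows (here, the fMRI tokens)
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
d = 32
# hypothetical multi-scale fMRI token embeddings (e.g. output of self-attention)
fmri_tokens = rng.standard_normal((64, d))
# two independent query sets, standing in for "structural" vs "semantic" streams
structural_q = rng.standard_normal((16, d))
semantic_q = rng.standard_normal((16, d))

# each stream pools the same fMRI tokens through its own attention pattern,
# yielding two decoupled feature sets of shape (16, 32)
structural_feat = cross_attention(structural_q, fmri_tokens, fmri_tokens)
semantic_feat = cross_attention(semantic_q, fmri_tokens, fmri_tokens)
print(structural_feat.shape, semantic_feat.shape)
```

In a trained model the two query sets would be learned, so each stream specializes in a different aspect of the stimulus (layout vs. content); here they are random and only demonstrate the data flow.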

Topics

Journal Article
