An efficient, scalable, and adaptable plug-and-play temporal attention module for motion-guided cardiac segmentation with sparse temporal labels.

February 9, 2026

Authors

Hasan MK, Yang G, Yap CH

Affiliations (3)

  • Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK. Electronic address: [email protected].
  • Bioengineering Department and Imperial-X, Imperial College London, London, W12 7SL, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; School of Biomedical Engineering & Imaging Sciences, King's College London, London, WC2R 2LS, UK. Electronic address: [email protected].
  • Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK. Electronic address: [email protected].

Abstract

Cardiac anatomy segmentation is essential for the clinical assessment of cardiac function and disease diagnosis to inform treatment and intervention. Deep learning (DL) has improved cardiac anatomy segmentation accuracy, especially when information on cardiac motion dynamics is integrated into the networks. Several methods for incorporating motion information have been proposed; however, existing methods are not yet optimal: adding the time dimension to the input data incurs high computational costs, and incorporating registration into the segmentation network remains computationally costly and can be affected by registration errors, especially with non-DL registration. While attention-based motion modeling is promising, suboptimal design constrains its capacity to learn the complex and coherent temporal interactions inherent in cardiac image sequences. Here, we propose a novel approach to incorporating motion information into DL segmentation networks: a computationally efficient yet robust Temporal Attention Module (TAM), modeled as a small, multi-headed, cross-temporal attention module that can be inserted plug-and-play into a broad range of segmentation networks (CNN, transformer, or hybrid) without drastic architectural modification. Extensive experiments on multiple cardiac imaging datasets, including 2D echocardiography (CAMUS and EchoNet-Dynamic), 3D echocardiography (MITEA), and 3D cardiac MRI (ACDC), confirm that TAM consistently improves segmentation performance across datasets when added to a range of networks, including UNet, FCN8s, UNetR, SwinUNetR, and the recent I²UNet and DT-VNet. Integrating TAM into SAM yields a temporal SAM that reduces the Hausdorff distance (HD) from 3.99 mm to 3.51 mm on the CAMUS dataset, while integrating TAM into a pre-trained MedSAM reduces HD from 3.04 to 2.06 pixels after fine-tuning on the EchoNet-Dynamic dataset. On the 3D ACDC dataset, our TAM-UNet and TAM-DT-VNet achieve substantial reductions in HD, from 7.97 mm to 4.23 mm and from 6.87 mm to 4.74 mm, respectively. Additionally, training TAM does not require ground-truth segmentations for all time frames and can be done with sparse temporal annotations. TAM is thus a robust, generalizable, and adaptable solution for enhancing motion awareness that scales easily from 2D to 3D. The code is available at https://github.com/kamruleee51/TAM.
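To make the abstract's description concrete, below is a minimal, hypothetical sketch of what a "small, multi-headed, cross-temporal attention module" inserted between the stages of a frame-wise segmentation backbone could look like. This is not the authors' implementation (see the GitHub link above for that); the class name, tensor layout, and hyperparameters are illustrative assumptions, and the sketch only uses standard PyTorch components.

```python
# Hypothetical sketch of a cross-temporal multi-head attention module in the
# spirit of TAM as described in the abstract. Names, shapes, and defaults are
# assumptions; the authors' code is at https://github.com/kamruleee51/TAM.
import torch
import torch.nn as nn


class TemporalAttentionSketch(nn.Module):
    """Attend across the time axis at each spatial location of a feature map.

    Input:  (B, T, C, H, W) per-frame encoder features.
    Output: same shape, with each frame's features refined by attention over
    the other frames, so the module can be dropped between the stages of a
    per-frame 2D backbone without changing the rest of the architecture.
    """

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        # Fold the spatial grid into the batch so attention runs only over
        # the T frames at each location: (B, T, C, H, W) -> (B*H*W, T, C).
        seq = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        attended, _ = self.attn(seq, seq, seq)  # cross-temporal attention
        seq = self.norm(seq + attended)         # residual connection + norm
        # Restore the original layout: (B*H*W, T, C) -> (B, T, C, H, W).
        return seq.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)


if __name__ == "__main__":
    # Toy check: 2 clips of 4 frames, each a 32-channel 16x16 feature map.
    feats = torch.randn(2, 4, 32, 16, 16)
    out = TemporalAttentionSketch(channels=32)(feats)
    print(out.shape)  # torch.Size([2, 4, 32, 16, 16])
```

Because the output shape matches the input, a module of this kind preserves the backbone's interfaces, which is one plausible reading of the abstract's "plug-and-play" claim; it also suggests why sparse temporal labels can suffice, since every frame contributes features to the attention even when only some frames carry ground-truth masks.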

Topics

  • Deep Learning
  • Heart
  • Image Processing, Computer-Assisted
  • Image Interpretation, Computer-Assisted
  • Journal Article
