
Subsampled randomized Fourier GaLore for adapting foundation models in depth-driven liver landmark segmentation.

May 6, 2026

Authors

Lin YC, Huang J, Zhang H, Kavtaradze S, Clarkson MJ, Hoque MI

Affiliations (4)

  • UCL Hawkes Institute and Dept of Medical Physics and Biomedical Engineering, University College London, London, UK. [email protected].
  • UCL Hawkes Institute and Dept of Medical Physics and Biomedical Engineering, University College London, London, UK.
  • Visual Understanding Research Group, Dept of Informatics, King's College London, London, UK.
  • Division of Informatics, Imaging and Data Science, The University of Manchester, Manchester, UK.

Abstract

Accurate detection and delineation of anatomical structures in medical imaging are critical for computer-assisted interventions, particularly in laparoscopic liver surgery, where 2D video streams limit depth perception and complicate landmark localization. While recent works have leveraged monocular depth cues for enhanced landmark detection, challenges remain in fusing RGB and depth features and in efficiently adapting large-scale vision models to surgical domains. We propose a depth-guided segmentation framework integrating semantic and geometric cues via dual foundation encoders: SAM2 for RGB features and Depth Anything V2 for depth features. To efficiently adapt SAM2, we introduce SRFT-GaLore, a novel low-rank gradient projection method based on the Subsampled Randomized Fourier Transform (SRFT). This enables efficient fine-tuning of high-dimensional attention layers without sacrificing representational power. A cross-attention fusion module further integrates RGB and depth cues. To assess cross-dataset generalization, we validate on the public L3D dataset and our new LLSD dataset. On L3D, our method achieves a 4.85% improvement in Dice Similarity Coefficient (DSC) and an 11.78-point reduction in Average Symmetric Surface Distance (ASSD) compared to D2GPLand. To further assess generalization capability, we evaluate our model on the LLSD dataset, where it maintains competitive performance and significantly outperforms SAM-based baselines, demonstrating strong cross-dataset robustness and adaptability to unseen surgical environments. The SRFT-GaLore-enhanced dual-encoder framework enables scalable, precise segmentation in depth-constrained surgical settings. Our findings highlight the potential of foundation model adaptation for real-time computer-assisted interventions. While the current cross-modal fusion remains shallow for efficiency, future work will explore transformer-based decoders and deeper attention mechanisms.
Ultimately, this research provides a robust foundation for 3D-2D anatomical registration and AR-guided navigation in complex laparoscopic procedures.
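The abstract does not include implementation details, but the core idea of SRFT-GaLore can be illustrated with a small numpy sketch, assuming a GaLore-style scheme in which each layer's gradient matrix is compressed with an SRFT sketch P = sqrt(m/r) · R F D (random sign flip D, unitary DFT F, row subsampling R) before the optimizer state is updated, then mapped back to full size. The helper names (`srft_apply`, `srft_adjoint`, `galore_step`) and the momentum-style update are illustrative assumptions, not the authors' code:

```python
import numpy as np

def srft_apply(G, signs, rows):
    """Apply the SRFT sketch P = sqrt(m/r) * R F D to an (m x n)
    gradient G without materializing P, via an FFT over axis 0."""
    m, r = G.shape[0], len(rows)
    X = np.fft.fft(signs[:, None] * G, axis=0, norm="ortho")  # F @ (D G)
    return np.sqrt(m / r) * X[rows]                           # subsample rows

def srft_adjoint(Y, signs, rows, m):
    """Apply P^H (the adjoint sketch) to map a compressed (r x n)
    update back to full (m x n) size."""
    r = len(rows)
    Z = np.zeros((m, Y.shape[1]), dtype=complex)
    Z[rows] = Y                                               # R^T: scatter rows
    return np.sqrt(m / r) * signs[:, None] * np.fft.ifft(Z, axis=0, norm="ortho")

def galore_step(G, signs, rows, state, lr=1e-3, beta=0.9):
    """One GaLore-style step (illustrative): project the gradient into
    the r-dimensional sketch space, run momentum there, project back."""
    g_low = srft_apply(G, signs, rows)        # compressed gradient (r x n)
    state = beta * state + (1.0 - beta) * g_low
    update = np.real(srft_adjoint(state, signs, rows, G.shape[0]))
    return -lr * update, state

# Example setup: draw the random sign diagonal and subsampled rows once.
rng = np.random.default_rng(0)
m, n, r = 16, 8, 4
signs = rng.choice([-1.0, 1.0], size=m)
rows = rng.choice(m, size=r, replace=False)
```

Because the sketch is applied with an FFT, projecting a gradient costs O(mn log m) rather than the O(mnr) of a dense projection, and no SVD of the gradient is needed, which is the usual motivation for SRFT-style sketches over SVD-based GaLore projections.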

Topics

Journal Article
