PAM: a propagation-based model for segmenting any 3D objects across multi-modal medical images.

December 2, 2025

Authors

Chen Z, Nan X, Li J, Zhao J, Li H, Lin Z, Li H, Chen H, Liu Y, Tang L, Zhang L, Dong B

Affiliations (10)

  • Center for Machine Learning Research, Peking University, Beijing, China.
  • Center for Data Science, Peking University, Beijing, China.
  • Department of Radiology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Peking University Cancer Hospital and Institute, Beijing, China.
  • National Engineering Laboratory for Big Data Analysis and Applications, Peking University, Beijing, China.
  • Beijing International Center for Mathematical Research, Peking University, Beijing, China.
  • Department of Radiology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Peking University Cancer Hospital and Institute, Beijing, China.
  • Center for Data Science, Peking University, Beijing, China.
  • Center for Machine Learning Research, Peking University, Beijing, China.
  • National Engineering Laboratory for Big Data Analysis and Applications, Peking University, Beijing, China.
  • Beijing International Center for Mathematical Research and the New Cornerstone Science Laboratory, Peking University, Beijing, China.

Abstract

Volumetric segmentation is a major challenge in medical imaging, as current methods require extensive annotations and retraining, limiting transferability across objects. We present PAM, a propagation-based framework that generates 3D segmentations from a minimal 2D prompt. PAM integrates a CNN-based UNet for intra-slice features with Transformer attention for inter-slice propagation, capturing structural and semantic continuity to enable robust cross-object generalization. Across 44 diverse datasets, PAM outperformed MedSAM and SegVol, improving average DSC by 19.3%. It maintained stable performance under variations in prompts (P ≥ 0.5985) and propagation settings (P ≥ 0.6131), while achieving faster inference (P < 0.001) and reducing user interaction time by 63.6%. Gains were strongest for irregular objects, with improvements negatively correlated with object regularity (r < -0.1249). By delivering accurate 3D segmentations from minimal input, PAM lowers reliance on manual annotation and task-specific training, providing an efficient and generalizable tool for automated clinical imaging.
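The core propagation idea, growing a full 3D mask outward from a single annotated 2D slice, can be illustrated with a toy, non-learned analogue. The sketch below is a plain-NumPy assumption of mine, not the paper's method: PAM uses a CNN-based UNet plus Transformer inter-slice attention, whereas this stand-in propagates the mask by a crude nearest-intensity rule from one slice to its neighbor. The function name `propagate_mask` and the heuristic are illustrative only.

```python
import numpy as np

def propagate_mask(volume, seed_mask, seed_idx):
    """Toy slice-to-slice propagation of a 2D seed mask through a 3D volume.

    Each voxel in the next slice is labeled foreground if its intensity is
    closer to the mean foreground intensity of the current slice than to the
    mean background intensity -- a crude, non-learned stand-in for PAM's
    Transformer-based inter-slice propagation.
    """
    depth = volume.shape[0]
    masks = [None] * depth
    masks[seed_idx] = seed_mask.astype(bool)
    for step in (1, -1):  # propagate forward, then backward, from the seed
        i = seed_idx
        while 0 <= i + step < depth:
            prev_slice, next_slice = volume[i], volume[i + step]
            m = masks[i]
            if not m.any():
                # Object has vanished; stop propagating in this direction.
                masks[i + step] = np.zeros_like(m)
            else:
                fg_mean = prev_slice[m].mean()
                bg_mean = prev_slice[~m].mean() if (~m).any() else fg_mean - 1.0
                masks[i + step] = (np.abs(next_slice - fg_mean)
                                   < np.abs(next_slice - bg_mean))
            i += step
    return np.stack(masks)

# Usage: a dim 5x8x8 volume containing a bright cube on slices 1..3;
# a single 2D prompt on the middle slice recovers the full 3D extent.
volume = np.full((5, 8, 8), 0.1)
volume[1:4, 2:6, 2:6] = 1.0
seed = volume[2] > 0.5
mask3d = propagate_mask(volume, seed, seed_idx=2)
```

In this toy setting the propagated mask matches the cube on slices 1 and 3 and stays empty on slices 0 and 4; a learned model replaces the intensity rule with features that also carry structural and semantic continuity across slices.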

Topics

Journal Article
