MPCM-RRG: Multi-modal Prompt Collaboration Mechanism for Radiology Report Generation.

Authors

Yu Y, Huang G, Tan Z, Shi J, Li M, Pun CM, Zheng F, Ma S, Wang S, He L

Affiliations (10)

  • School of Information Engineering, Guangdong University of Technology, Guangzhou, China. Electronic address: [email protected].
  • School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China. Electronic address: [email protected].
  • School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China. Electronic address: [email protected].
  • School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China. Electronic address: [email protected].
  • Zhejiang Institute of Optoelectronics, Jinhua 321004, China; Zhejiang Key Laboratory of Intelligent Education Technology and Application, Zhejiang Normal University, Jinhua 321004, China. Electronic address: [email protected].
  • Faculty of Science and Technology, University of Macau, Macao Special Administrative Region of China. Electronic address: [email protected].
  • Faculty of Science and Technology, University of Macau, Macao Special Administrative Region of China; Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China. Electronic address: [email protected].
  • Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China. Electronic address: [email protected].
  • Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China. Electronic address: [email protected].
  • Department of Otorhinolaryngology Head and Neck Surgery, Guangzhou First People's Hospital, the Second Affiliated Hospital of South China University of Technology, Guangzhou, Guangdong Province, China. Electronic address: [email protected].

Abstract

The task of medical report generation involves automatically creating descriptive text reports from medical images, with the aim of alleviating the workload of physicians and enhancing diagnostic efficiency. Although many existing medical report generation models based on the Transformer framework consider structural information in medical images, they ignore the interference of confounding factors on these structures, which limits their ability to capture rich and critical lesion information. Furthermore, these models often struggle to address the significant imbalance between normal and abnormal content in actual reports, making it difficult to describe abnormalities accurately. To address these limitations, we propose the Multi-modal Prompt Collaboration Mechanism for Radiology Report Generation Model (MPCM-RRG). This model consists of three key components: the Visual Causal Prompting Module (VCP), the Textual Prompt-Guided Feature Enhancement Module (TPGF), and the Visual-Textual Semantic Consistency Module (VTSC). The VCP module uses chest X-ray masks as visual prompts and incorporates causal inference principles to help the model minimize the influence of irrelevant regions. Through causal intervention, the model can learn the causal relationships between the pathological regions in the image and the corresponding findings described in the report. The TPGF module tackles the imbalance between abnormal and normal text by integrating detailed textual prompts, which also guide the model to focus on lesion areas through a multi-head attention mechanism. The VTSC module promotes alignment between the visual and textual representations through a contrastive consistency loss, fostering greater interaction and collaboration between the visual and textual prompts. Experimental results demonstrate that MPCM-RRG outperforms other methods on the IU X-ray and MIMIC-CXR datasets, highlighting its effectiveness in generating high-quality medical reports.
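The abstract does not give the exact form of the VTSC contrastive consistency loss. A minimal sketch of one common choice, a symmetric InfoNCE-style alignment loss over paired visual and textual embeddings, is shown below; the function name, the temperature value, and the specific formulation are illustrative assumptions, not the paper's verified implementation.

```python
import numpy as np

def contrastive_consistency_loss(visual, textual, temperature=0.07):
    """Symmetric InfoNCE-style loss (illustrative sketch, not the paper's exact loss).

    visual, textual: (N, D) arrays where row i of each is a matched
    image/report embedding pair; all other rows serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    t = textual / np.linalg.norm(textual, axis=1, keepdims=True)
    logits = v @ t.T / temperature  # (N, N) pairwise similarity matrix

    def log_softmax(x, axis):
        # numerically stable log-softmax along the given axis
        x = x - x.max(axis=axis, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    n = v.shape[0]
    diag = np.arange(n)
    # image-to-report direction: each image should match its own report
    loss_v2t = -log_softmax(logits, axis=1)[diag, diag].mean()
    # report-to-image direction: each report should match its own image
    loss_t2v = -log_softmax(logits, axis=0)[diag, diag].mean()
    return 0.5 * (loss_v2t + loss_t2v)
```

Pulling the matched pair together in both directions while pushing apart mismatched pairs is what encourages the visual and textual prompt representations to agree; aligned embeddings drive the loss toward zero, while mismatched pairings inflate it.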

Topics

Journal Article