A survey of deep-learning-based radiology report generation using multimodal inputs.

Authors

Wang X, Figueredo G, Li R, Zhang WE, Chen W, Chen X

Affiliations (4)

  • School of Computer Science, The University of Nottingham, Nottingham NG7 2RD, United Kingdom.
  • School of Medicine, The University of Nottingham, Nottingham NG7 2RD, United Kingdom.
  • School of Computer and Mathematical Sciences, The University of Adelaide, Adelaide, SA 5005, Australia.
  • School of Computer Science, The University of Nottingham, Nottingham NG7 2RD, United Kingdom. Electronic address: [email protected].

Abstract

Automatic radiology report generation can alleviate the workload of physicians and reduce regional disparities in medical resources, making it an important topic in the medical image analysis field. It is a challenging task, as the computational model must mimic physicians in extracting information from multi-modal input data (e.g., medical images, clinical information, and medical knowledge) and produce comprehensive and accurate reports. Recently, numerous works have emerged to address this issue using deep-learning-based methods, such as transformers, contrastive learning, and knowledge-base construction. This survey summarizes the key techniques developed in the most recent works and proposes a general workflow for deep-learning-based report generation with five main components: multi-modality data acquisition, data preparation, feature learning, feature fusion and interaction, and report generation. The state-of-the-art methods for each of these components are highlighted. Additionally, we summarize the latest developments in large-model-based methods and model explainability, along with public datasets, evaluation methods, current challenges, and future directions in this field. We have also conducted a quantitative comparison between different methods in the same experimental setting. This is the most up-to-date survey that focuses on multi-modality inputs and data fusion for radiology report generation. The aim is to provide comprehensive and rich information for researchers interested in automatic clinical report generation and medical image analysis, especially when using multimodal inputs, and to assist them in developing new algorithms to advance the field.
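The five-component workflow named in the abstract can be sketched as a simple pipeline. The following is a minimal, illustrative toy in plain Python: every function name, feature computation, and the rule-based "decoder" are hypothetical placeholders standing in for real encoders and generators, not the methods of the survey or of any cited paper.

```python
# Toy sketch of the five-component report-generation workflow.
# All names, shapes, and logic are hypothetical placeholders; real systems
# use neural encoders (e.g., CNNs/transformers) and learned decoders.

def acquire_data():
    """1. Multi-modality data acquisition: an image plus clinical text."""
    image = [[0.1, 0.2], [0.3, 0.4]]          # toy 2x2 "image"
    clinical_info = "patient reports chronic cough"
    return image, clinical_info

def prepare(image, clinical_info):
    """2. Data preparation: normalize the image, tokenize the text."""
    flat = [p for row in image for p in row]
    mx = max(flat) or 1.0
    norm_image = [p / mx for p in flat]
    tokens = clinical_info.lower().split()
    return norm_image, tokens

def learn_features(norm_image, tokens):
    """3. Feature learning: one (toy) encoder per modality."""
    visual_feat = sum(norm_image) / len(norm_image)
    text_feat = len(tokens) / 10.0            # crude length-based feature
    return visual_feat, text_feat

def fuse(visual_feat, text_feat):
    """4. Feature fusion and interaction: concatenate modality features."""
    return [visual_feat, text_feat]

def generate_report(fused):
    """5. Report generation: a rule-based stand-in for a learned decoder."""
    severity = "notable" if sum(fused) > 1.0 else "mild"
    return f"Findings: {severity} opacity pattern; correlate clinically."

image, info = acquire_data()
report = generate_report(fuse(*learn_features(*prepare(image, info))))
print(report)
```

In a real system each stage is replaced by a learned module, and the fusion step is where most of the surveyed architectural variation (attention, contrastive alignment, knowledge injection) occurs.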

Topics

Journal Article, Review