A multimodal vision-language model for generalizable annotation-free pathology localization.
Authors
Affiliations (16)
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
- Pengcheng Laboratory, Shenzhen, China.
- University of Chinese Academy of Sciences, Beijing, China.
- School of Biomedical Engineering, Tsinghua Medicine, Tsinghua University, Beijing, China.
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China.
- State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Institute for Clinical Data Science, Wenzhou Medical University, Wenzhou, China.
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China.
- Chinese Medicine Guangdong Laboratory, Hengqin, China.
- Beijing Chaoyang Hospital, Capital Medical University, Beijing, China.
- Key Lab of Medical Engineering for Cardiovascular Disease, Ministry of Education, Beijing, China.
- Department of Urology, South China Hospital, Medical School, Shenzhen University, Shenzhen, China.
- Faculty of Applied Sciences, Macao Polytechnic University, Macau, China.
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China. [email protected].
- State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Institute for Clinical Data Science, Wenzhou Medical University, Wenzhou, China. [email protected].
- Guangzhou National Laboratory, Guangzhou, China. [email protected].
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China. [email protected].
Abstract
Existing deep learning models for identifying pathologies in clinical imaging data rely on expert annotations and lack generalization capabilities in open clinical environments. Here we present a generalizable vision-language model for Annotation-Free pathology Localization (AFLoc). The core strength of AFLoc is its extensive multilevel semantic structure-based contrastive learning, which comprehensively aligns multigranularity medical concepts with abundant image features, allowing the model to adapt to diverse expressions of pathologies without relying on expert image annotations. We conducted primary experiments on a dataset of 220,000 chest X-ray image-report pairs and performed validation across 8 external datasets encompassing 34 types of chest pathology. The results demonstrate that AFLoc outperforms state-of-the-art methods in both annotation-free localization and classification tasks. In addition, we assessed the generalizability of AFLoc on other modalities, including histopathology and retinal fundus images. We show that AFLoc exhibits robust generalization capabilities, even surpassing human benchmarks in localizing five different types of pathology. These results highlight the potential of AFLoc to reduce annotation requirements and its applicability in complex clinical environments.
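The image-text contrastive alignment at the heart of this approach can be illustrated with a minimal sketch. This is not the authors' implementation (AFLoc aligns text at multiple semantic levels, which is omitted here); it shows only the standard symmetric InfoNCE objective that such vision-language models build on, with hypothetical embedding inputs:

```python
# Illustrative sketch only: a symmetric InfoNCE contrastive loss of the
# kind used to align paired image and report embeddings in a shared space.
# Array shapes and the temperature value are assumptions, not AFLoc's.
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.

    img_emb, txt_emb: (N, D) arrays; row i of each is a matched pair.
    """
    # L2-normalize so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix

    def cross_entropy(l):
        # The correct pairing is the diagonal: target class i for row i.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Matched pairs should score a lower loss than mismatched pairs.
rng = np.random.default_rng(0)
txt = rng.normal(size=(4, 8))
aligned = info_nce_loss(txt + 0.01 * rng.normal(size=(4, 8)), txt)
shuffled = info_nce_loss(txt[::-1].copy(), txt)
print(aligned < shuffled)  # True: alignment lowers the loss
```

Minimizing this loss pulls each image embedding toward the embedding of its own report and away from other reports in the batch, which is what lets pathology descriptions localize image regions without pixel-level annotations.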