Leveraging Vision Transformers in Multimodal Models for Retinal OCT Analysis.
Authors
Affiliations (7)
- School of Science and Technology, Hellenic Open University, Patras, Greece.
- School of Medicine, National and Kapodistrian University of Athens, Athens, Greece.
- Merative, Healthcare, Dublin Docklands, Dublin 2, Ireland.
- Medical Retina Department, Bristol Eye Hospital, Bristol, UK.
- Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, UK.
- Medical School, Humanitas University, Milan, Italy.
- Intensive Care Unit, Sismanogleio General Hospital, Marousi, Greece.
Abstract
Optical Coherence Tomography (OCT) has become an indispensable imaging modality in ophthalmology, providing high-resolution cross-sectional images of the retina. Accurate classification of OCT images is crucial for diagnosing retinal diseases such as Age-related Macular Degeneration (AMD) and Diabetic Macular Edema (DME). This study explores the efficacy of various deep learning models, including convolutional neural networks (CNNs) and Vision Transformers (ViTs), in classifying OCT images. We also investigate the impact of integrating metadata (patient age, sex, eye laterality, and year) into the classification process, even when a significant portion of the metadata is missing. Our results demonstrate that multimodal models leveraging both image and metadata inputs, such as the Multimodal ResNet18, can achieve performance competitive with image-only models such as DenseNet121. Notably, DenseNet121 and the Multimodal ResNet18 achieved the highest accuracy of 95.16%, with DenseNet121 showing a slightly higher F1-score of 0.9313. The multimodal ViT-based model also achieved promising results, with an accuracy of 93.22%, indicating the potential of ViTs in medical image analysis, especially for handling complex multimodal data.
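As a rough illustration of the image–metadata fusion described in the abstract, the sketch below pairs a ResNet18 image encoder with a small MLP over the four metadata fields and concatenates the two feature vectors before classification. This is a minimal sketch assuming PyTorch and torchvision; the class count, hidden sizes, metadata encoding, and treatment of missing values are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torchvision import models


class MultimodalResNet18(nn.Module):
    """ResNet18 image encoder fused with a small MLP over tabular metadata (sketch)."""

    def __init__(self, num_classes: int = 3, metadata_dim: int = 4):
        super().__init__()
        # Image branch: ResNet18 backbone with its classification head removed.
        self.backbone = models.resnet18(weights=None)
        image_feat_dim = self.backbone.fc.in_features  # 512 for ResNet18
        self.backbone.fc = nn.Identity()

        # Metadata branch: age, sex, eye laterality, year (a missingness mask
        # could be appended to metadata_dim when fields are absent).
        self.meta_mlp = nn.Sequential(
            nn.Linear(metadata_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 32),
            nn.ReLU(),
        )

        # Late fusion: concatenate image and metadata features, then classify.
        self.classifier = nn.Linear(image_feat_dim + 32, num_classes)

    def forward(self, image: torch.Tensor, metadata: torch.Tensor) -> torch.Tensor:
        img_feat = self.backbone(image)      # (B, 512)
        meta_feat = self.meta_mlp(metadata)  # (B, 32)
        fused = torch.cat([img_feat, meta_feat], dim=1)
        return self.classifier(fused)


# Example: a batch of 8 OCT B-scans replicated to 3 channels, with 4 metadata
# features per scan (values here are random placeholders).
model = MultimodalResNet18(num_classes=3, metadata_dim=4)
logits = model(torch.randn(8, 3, 224, 224), torch.randn(8, 4))
print(logits.shape)  # torch.Size([8, 3])
```

Concatenation ahead of a single linear head is one common late-fusion choice; appending a per-field missingness indicator to the metadata vector is one simple way to let the model cope with the incomplete records mentioned in the abstract.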