A novel interpreted deep network for Alzheimer's disease prediction based on inverted self attention and vision transformer.

Authors

Ibrar W, Khan MA, Hamza A, Rubab S, Alqahtani O, Alouane MT, Teng S, Nam Y

Affiliations (6)

  • Department of Computer Science, HITEC University, Taxila, Pakistan.
  • Department of Artificial Intelligence, Prince Mohammad Bin Fahd University, Al-Khobar, Saudi Arabia. [email protected].
  • Department of Computer Engineering, College of Computing and Informatics, University of Sharjah, 27272, Sharjah, United Arab Emirates.
  • College of Computer Science, King Khalid University, Abha, Saudi Arabia.
  • Department of ICT Convergence, Soonchunhyang University, Asan, 31538, Korea.
  • Department of ICT Convergence, Soonchunhyang University, Asan, 31538, Korea. [email protected].

Abstract

Alzheimer's disease (AD) is the most common cause of dementia worldwide. AD causes memory loss and progressive impairment of cognitive function in older adults, placing a significant burden on patients and on society. No treatment currently cures AD; however, early diagnosis can slow its progression. Deep learning has shown substantial success in diagnosing AD, but challenges remain due to limited data, improper model selection, and extraction of irrelevant features. In this work, we propose a fully automated framework for AD diagnosis based on the fusion of a vision transformer and a novel inverted residual bottleneck with self-attention (IRBwSA). In the first step, data augmentation is performed to balance the selected dataset. After that, the vision transformer model is designed and modified according to the dataset. Similarly, a new inverted bottleneck self-attention model is developed. The designed models are trained on the augmented dataset, and the extracted features are fused using a novel search-based approach. Moreover, the designed models are interpreted using an explainable artificial intelligence technique, LIME. The fused features are finally classified using a shallow wide neural network and other classifiers. Experiments on an augmented MRI dataset yield 96.1% accuracy and a 96.05% precision rate. Comparison with several recent techniques shows the proposed framework's superior performance.
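For illustration, the sketch below shows one plausible way to pair an inverted residual bottleneck with spatial self-attention in PyTorch. The channel width, expansion factor, and head count are assumptions chosen for demonstration only; this is not the authors' exact IRBwSA design.

```python
# Minimal sketch (assumed hyperparameters, not the paper's IRBwSA):
# an inverted residual bottleneck followed by multi-head self-attention
# over flattened spatial positions.
import torch
import torch.nn as nn


class InvertedBottleneckSelfAttention(nn.Module):
    def __init__(self, channels: int = 64, expansion: int = 4, num_heads: int = 4):
        super().__init__()
        hidden = channels * expansion
        # Inverted residual bottleneck: 1x1 expand -> 3x3 depthwise -> 1x1 project
        self.bottleneck = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Self-attention over the H*W spatial tokens
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection around the inverted bottleneck
        x = x + self.bottleneck(x)
        b, c, h, w = x.shape
        # Flatten the spatial grid into a token sequence of shape (B, H*W, C)
        tokens = self.norm(x.flatten(2).transpose(1, 2))
        attn_out, _ = self.attn(tokens, tokens, tokens)
        # Residual connection around attention, then reshape back to (B, C, H, W)
        tokens = tokens + attn_out
        return tokens.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    block = InvertedBottleneckSelfAttention(channels=64)
    dummy_features = torch.randn(2, 64, 28, 28)  # stand-in MRI feature maps
    print(block(dummy_features).shape)  # torch.Size([2, 64, 28, 28])
```

In this sketch the depthwise convolution captures local structure cheaply, while the attention layer models long-range dependencies across the feature map; the fusion, LIME interpretation, and final classification stages described in the abstract would sit downstream of blocks like this.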

Topics

Journal Article
