Alzheimer's and Parkinson's Detection with Video-Based Hybrid Deep Learning from Brain MRI

April 14, 2026

Authors

Sunnetci KM, Balci M, Ekersular MN, Oguz FE, Alkan A

Affiliations (5)

  • Department of Electrical and Electronics Engineering, Osmaniye Korkut Ata University, Osmaniye, Turkey.
  • Department of Electronics and Automation, Gaziantep University, Naci Topçuoğlu Vocational School, Gaziantep, Turkey.
  • Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, Turkey.
  • Department of Biomedical Device Technology, Hatay Mustafa Kemal University, Hassa Vocational School, Hatay, Turkey.
  • Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, Turkey. [email protected].

Abstract

Dementia refers to disorders of memory and cognition that significantly affect the brain; a person with dementia can experience difficulties in both physical and mental activities. The most common and fatal types of dementia are Alzheimer's Disease (AD) and Parkinson's Disease (PD). In this study, AD, PD, and control labels are therefore detected from videos created from brain Magnetic Resonance Imaging (MRI). A public dataset including these three classes is used. After preprocessing, a video is created for each class, and short video clips are randomly sampled from these videos. The clips are randomly split into 50% training and 50% validation sets. Features are extracted from the training and validation clips using a Convolutional Neural Network (CNN)-based architecture, and Long Short-Term Memory (LSTM), LSTM + Gated Recurrent Unit (GRU), and Deeper LSTM architectures are trained on these features. In addition, a user-friendly Graphical User Interface (GUI) application incorporating all three developed models is designed for AD, PD, and control detection. Notably, these video-based architectures achieve high performance from relatively few short video clips, even though only 50% of the data is used for training. The highest accuracy and specificity achieved by the developed models are 99.67% and 99.83%, respectively.
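The clip-sampling and 50/50 split described above can be sketched as follows. This is a minimal illustration, not the authors' code: the clip length (16 frames), clip count (20), and 100-frame toy "video" of 64x64 slices are all illustrative assumptions, and random data stands in for the per-class MRI videos.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_clips(video, clip_len=16, n_clips=20, rng=rng):
    """Randomly sample fixed-length clips from a (frames, H, W) video.
    clip_len and n_clips are illustrative assumptions, not paper values."""
    starts = rng.integers(0, video.shape[0] - clip_len + 1, size=n_clips)
    return np.stack([video[s:s + clip_len] for s in starts])

# Toy stand-in for one class's MRI-slice video: 100 frames of 64x64 pixels.
video = rng.random((100, 64, 64))
clips = sample_clips(video)

# Random 50% training / 50% validation split of the clips, as in the study.
perm = rng.permutation(len(clips))
half = len(clips) // 2
train, val = clips[perm[:half]], clips[perm[half:]]
print(train.shape, val.shape)  # (10, 16, 64, 64) (10, 16, 64, 64)
```

In the paper's pipeline, each training/validation clip would then be passed through the CNN feature extractor, and the resulting per-frame feature sequences would feed the LSTM, LSTM + GRU, and Deeper LSTM classifiers.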

Topics

Journal Article
