
S²A-RConvNet: standalone self-attention enabled deep learning model for brain tumor classification with MRI images.

April 25, 2026 · PubMed

Authors

Waghmode U, Naik A, Deone J, Choudhury SR, Dhawale D, Puri D, Solanki S

Affiliations (4)

  • Ramrao Adik Institute of Technology, Navi Mumbai, India.
  • Lokmanya Tilak College of Engineering, Koparkhairane, Navi Mumbai, India.
  • Department of Information Technology, Vidhyalankar Institute of Technology, Wadala, Mumbai, India.
  • Department of Artificial Intelligence and Machine Learning, Manipal University Jaipur, Jaipur, 303007, Rajasthan, India. [email protected].

Abstract

Globally, one of the main contributors to rising mortality is the development of abnormal cell growth in the brain, which leads to a Brain Tumor (BT). Classifying BTs by type is therefore essential for timely diagnosis and for reducing the death rate. Several models have been introduced for BT classification, but they suffer from drawbacks such as poor accuracy, long training times, computational complexity, and overfitting. Hence, the Standalone Self-Attention based Repeated Convolutional Network (S²A-RConvNet) model is developed to classify BT types accurately and address the limitations of conventional approaches. The incorporation of the Standalone Self-Attention (S²A) module enables the RConvNet to focus on the tumor region, which improves the model's accuracy in BT categorization. Furthermore, the extraction of Structured ResNet Attention Gray-level (SRAG) features reduces training time and computational complexity, leading to better classification performance. With a 90% training split on the BraTS 2021 dataset, the S²A-RConvNet model attained a sensitivity of 97.61%, precision of 98.71%, F1-score of 98.16%, specificity of 98.43%, and accuracy of 97.98%.
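The abstract does not give implementation details of the S²A module, but the general stand-alone self-attention idea it builds on (each pixel's query attending over keys and values in a local neighborhood of the feature map, replacing a plain convolution) can be sketched roughly as below. This is a minimal illustration, not the authors' layer: the function name, window size, and projection matrices are assumptions.

```python
import numpy as np

def standalone_self_attention(x, wq, wk, wv, k=3):
    """Sketch of local stand-alone self-attention over a feature map.

    x: (H, W, C) feature map; wq, wk, wv: (C, C) projection matrices.
    Each output pixel is a softmax-weighted sum of projected values
    from its k x k neighborhood (hypothetical layout, not the paper's).
    """
    H, W, C = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))  # zero-pad borders
    q = x @ wq                                        # one query per pixel
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            window = xp[i:i + k, j:j + k].reshape(-1, C)  # k*k neighbors
            keys = window @ wk
            values = window @ wv
            logits = keys @ q[i, j] / np.sqrt(C)          # scaled scores
            attn = np.exp(logits - logits.max())
            attn /= attn.sum()                            # softmax weights
            out[i, j] = attn @ values                     # weighted sum
    return out
```

Because the attention weights sum to one, the layer reweights local evidence rather than blurring it uniformly, which is the mechanism by which such a module can concentrate on the tumor region.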

Topics

Journal Article
