Page 74 of 75745 results

Neurovision: A deep learning driven web application for brain tumour detection using weight-aware decision approach.

Santhosh TRS, Mohanty SN, Pradhan NR, Khan T, Derbali M

PubMed · Jan 1, 2025
Accurate diagnosis of brain tumours is a crucial task in modern medical systems, yet identifying a potential tumour is challenging owing to the complex behaviour and structure of the human brain. To address this issue, a deep learning-driven framework consisting of four pre-trained models, viz. DenseNet169, VGG-19, Xception, and EfficientNetV2B2, is developed to classify potential brain tumours from magnetic resonance images. First, the deep learning models are trained and fine-tuned on the training dataset, and the validation scores they obtain are used as model-wise weights. The trained models are then evaluated on the test dataset to generate model-specific predictions. In the weight-aware decision module, the class-bucket of a probable output class is updated with the weights of the deep models whose predictions match that class. Finally, the bucket with the highest aggregated value is selected as the final output class for the input image. This novel weight-aware decision mechanism is a key feature of the framework: it effectively resolves tie situations in multi-class classification, unlike conventional majority-voting techniques. The developed framework obtained promising accuracies of 98.7%, 97.52%, and 94.94% on three different datasets. The entire framework is seamlessly integrated into an end-to-end web application for user convenience. The source code, dataset and other particulars are publicly released at https://github.com/SaiSanthosh1508/Brain-Tumour-Image-classification-app [Rishik Sai Santhosh, "Brain Tumour Image Classification Application," https://github.com/SaiSanthosh1508/Brain-Tumour-Image-classification-app] for academic, research and other non-commercial usage.
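The weight-aware decision mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the model names match the paper, but the weights and predictions are made-up toy values.

```python
# Sketch of a weight-aware decision module: each model votes for its
# predicted class with a weight equal to its validation score, and the
# class-bucket with the highest aggregated weight wins.
from collections import defaultdict

def weight_aware_decision(predictions, weights):
    """predictions: {model_name: predicted_class};
    weights: {model_name: validation score used as the model's weight}."""
    buckets = defaultdict(float)
    for model, cls in predictions.items():
        # Add this model's weight to the bucket of the class it predicted.
        buckets[cls] += weights[model]
    # Highest aggregated weight wins; a tie in raw vote *counts* is
    # resolved by the weights themselves, unlike plain majority voting.
    return max(buckets, key=buckets.get)

# Hypothetical 2-2 vote split across the four backbones:
preds = {"DenseNet169": "glioma", "VGG19": "meningioma",
         "Xception": "glioma", "EfficientNetV2B2": "meningioma"}
w = {"DenseNet169": 0.96, "VGG19": 0.93,
     "Xception": 0.95, "EfficientNetV2B2": 0.97}
print(weight_aware_decision(preds, w))
```

With these toy weights, "glioma" (0.96 + 0.95 = 1.91) narrowly beats "meningioma" (0.93 + 0.97 = 1.90) even though the vote count is tied, which is exactly the tie-breaking behaviour the paper credits to the weight-aware module.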

Convolutional neural network using magnetic resonance brain imaging to predict outcome from tuberculosis meningitis.

Dong THK, Canas LS, Donovan J, Beasley D, Thuong-Thuong NT, Phu NH, Ha NT, Ourselin S, Razavi R, Thwaites GE, Modat M

PubMed · Jan 1, 2025
Tuberculous meningitis (TBM) leads to high mortality, especially amongst individuals with HIV. Predicting the incidence of disease-related complications is challenging, and the value of brain magnetic resonance imaging (MRI) for this purpose has not been well investigated. We used a convolutional neural network (CNN) to explore the complementary contribution of brain MRI to conventional prognostic determinants. We pooled data from two randomised controlled trials of HIV-positive and HIV-negative adults with clinical TBM in Vietnam to predict the occurrence of death or new neurological complications in the first two months after the subject's first MRI session. We developed and compared three models: a logistic regression with clinical, demographic and laboratory data as reference, a CNN that utilised only T1-weighted MRI volumes, and a model that fused all available information. All models were fine-tuned using two repetitions of 5-fold cross-validation. The final evaluation was based on a random 70/30 training/test split, stratified by outcome and HIV status. We then explored the interpretability maps derived from the selected model. 215 patients were included, with an event prevalence of 22.3%. On the test set, our non-imaging model had a higher AUC (71.2% ± 1.1%) than the imaging-only model (67.3% ± 2.6%). The fused model was superior to both, with an average AUC of 77.3% ± 4.0% on the test set. The non-imaging variables were more informative in the HIV-positive group, while the imaging features were more predictive in the HIV-negative group. All three models performed better in the HIV-negative cohort. The interpretability maps show the model's focus on the lateral fissures, the corpus callosum, the midbrain, and peri-ventricular tissues. Imaging information can provide added value in predicting unwanted outcomes of TBM; however, a larger dataset is needed to confirm this finding.

Same-model and cross-model variability in knee cartilage thickness measurements using 3D MRI systems.

Katano H, Kaneko H, Sasaki E, Hashiguchi N, Nagai K, Ishijima M, Ishibashi Y, Adachi N, Kuroda R, Tomita M, Masumoto J, Sekiya I

PubMed · Jan 1, 2025
Magnetic Resonance Imaging (MRI) based three-dimensional analysis of knee cartilage has evolved to become fully automatic. However, when implementing these measurements across multiple clinical centers, scanner variability becomes a critical consideration. Our purposes were to quantify and compare same-model variability (between repeated scans on the same MRI system) and cross-model variability (across different MRI systems) in knee cartilage thickness measurements using MRI scanners from five manufacturers, as analyzed with a specific 3D volume analysis software. Ten healthy volunteers (eight males and two females, aged 22-60 years) underwent two scans of their right knee on 3T MRI systems from five manufacturers (Canon, Fujifilm, GE, Philips, and Siemens). The imaging protocol included fat-suppressed spoiled gradient echo and proton density weighted sequences. Cartilage regions were automatically segmented into 7 subregions using a specific deep learning-based 3D volume analysis software. This resulted in 350 measurements for same-model variability and 2,800 measurements for cross-model variability. For same-model variability, 82% of measurements showed variability ≤0.10 mm, and 98% showed variability ≤0.20 mm. For cross-model variability, 51% showed variability ≤0.10 mm, and 84% showed variability ≤0.20 mm. The mean same-model variability (0.06 ± 0.05 mm) was significantly lower than cross-model variability (0.11 ± 0.09 mm) (p < 0.001). This study demonstrates that knee cartilage thickness measurements exhibit significantly higher variability across different MRI systems compared to repeated measurements on the same system, when analyzed using this specific software. This finding has important implications for multi-center studies and longitudinal assessments using different MRI systems and highlights the software-dependent nature of such variability assessments.
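The same-model versus cross-model comparison above can be illustrated with a small sketch. The thickness values below are invented toy numbers for a single cartilage subregion, not study data, and the exact variability definition is an assumption based on the abstract's description of repeated scans.

```python
# Toy sketch: same-model variability = |scan1 - scan2| on one scanner;
# cross-model variability = difference between scanners' mean thicknesses.
import itertools
import statistics

# thickness[scanner] = (scan1_mm, scan2_mm) for one subregion (toy values)
thickness = {"Canon": (2.10, 2.14), "Fujifilm": (2.22, 2.20),
             "GE": (2.05, 2.09), "Philips": (2.18, 2.16),
             "Siemens": (2.11, 2.13)}

# Same-model variability: repeated scans on the same MRI system.
same_model = [abs(a - b) for a, b in thickness.values()]

# Cross-model variability: pairwise differences across different systems.
means = {k: statistics.mean(v) for k, v in thickness.items()}
cross_model = [abs(means[i] - means[j])
               for i, j in itertools.combinations(means, 2)]

print(round(statistics.mean(same_model), 3),
      round(statistics.mean(cross_model), 3))
```

Even in this toy setup, the cross-model differences exceed the same-model ones, mirroring the study's finding (0.06 ± 0.05 mm versus 0.11 ± 0.09 mm).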

Radiomic Model Associated with Tumor Microenvironment Predicts Immunotherapy Response and Prognosis in Patients with Locoregionally Advanced Nasopharyngeal Carcinoma.

Sun J, Wu X, Zhang X, Huang W, Zhong X, Li X, Xue K, Liu S, Chen X, Li W, Liu X, Shen H, You J, He W, Jin Z, Yu L, Li Y, Zhang S, Zhang B

PubMed · Jan 1, 2025
<b>Background:</b> No robust biomarkers have been identified to predict the efficacy of programmed cell death protein 1 (PD-1) inhibitors in patients with locoregionally advanced nasopharyngeal carcinoma (LANPC). We aimed to develop radiomic models using pre-immunotherapy MRI to predict the response to PD-1 inhibitors and the patient prognosis. <b>Methods:</b> This study included 246 LANPC patients (training cohort, <i>n</i> = 117; external test cohort, <i>n</i> = 129) from 10 centers. The best-performing machine learning classifier was employed to create the radiomic models. A combined model was constructed by integrating clinical and radiomic data. A radiomic interpretability study was performed with whole slide images (WSIs) stained with hematoxylin and eosin (H&E) and immunohistochemistry (IHC). A total of 150 patient-level nuclear morphological features (NMFs) and 12 cell spatial distribution features (CSDFs) were extracted from WSIs. The correlation between the radiomic and pathological features was assessed using Spearman correlation analysis. <b>Results:</b> The radiomic model outperformed the clinical and combined models in predicting treatment response (area under the curve: 0.760 vs. 0.559 vs. 0.652). For overall survival estimation, the combined model performed comparably to the radiomic model but outperformed the clinical model (concordance index: 0.858 vs. 0.812 vs. 0.664). Six treatment response-related radiomic features correlated with 50 H&E-derived NMFs (146 pairs, |<i>r</i>| = 0.31 to 0.46) and 2 to 26 IHC-derived NMFs, particularly for CD45RO (69 pairs, |<i>r</i>| = 0.31 to 0.48), CD8 (84 pairs, |<i>r</i>| = 0.30 to 0.59), PD-L1 (73 pairs, |<i>r</i>| = 0.32 to 0.48), and CD163 (53 pairs, |<i>r</i>| = 0.32 to 0.59). Eight prognostic radiomic features correlated with 11 H&E-derived NMFs (16 pairs, |<i>r</i>| = 0.48 to 0.61) and 2 to 31 IHC-derived NMFs, particularly for PD-L1 (80 pairs, |<i>r</i>| = 0.44 to 0.64), CD45RO (65 pairs, |<i>r</i>| = 0.42 to 0.67), CD19 (35 pairs, |<i>r</i>| = 0.44 to 0.58), CD66b (61 pairs, |<i>r</i>| = 0.42 to 0.67), and FOXP3 (21 pairs, |<i>r</i>| = 0.41 to 0.71). In contrast, fewer CSDFs exhibited correlations with specific radiomic features. <b>Conclusion:</b> The radiomic model and combined model are feasible in predicting immunotherapy response and outcomes in LANPC patients. The radiology-pathology correlation suggests a potential biological basis for the predictive models.
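The radiology-pathology correlation step relies on Spearman's rank correlation between radiomic features and WSI-derived features across patients. A minimal self-contained sketch (synthetic feature values with no ties; the feature names in comments are illustrative, not from the study):

```python
# Spearman's rank correlation from scratch: rank both variables, then
# apply the tie-free formula rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
def rankdata(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        ranks[i] = float(rank)
    return ranks

def spearman(x, y):
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

radiomic = [0.12, 0.48, 0.33, 0.91, 0.57, 0.74]  # e.g. one texture feature
nmf = [14.1, 18.9, 16.2, 23.5, 21.0, 20.3]       # e.g. mean nuclear area
print(round(spearman(radiomic, nmf), 3))
```

Because Spearman's rho is rank-based, it captures monotonic radiomic-pathology relationships without assuming linearity, which is why it is the usual choice for this kind of cross-modality correlation analysis.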

3D-MRI brain glioma intelligent segmentation based on improved 3D U-net network.

Wang T, Wu T, Yang D, Xu Y, Lv D, Jiang T, Wang H, Chen Q, Xu S, Yan Y, Lin B

PubMed · Jan 1, 2025
To enhance glioma segmentation, a deep learning-based 3D-MRI intelligent glioma segmentation method is introduced, offering significant guidance for medical diagnosis, grading, and treatment-strategy selection. Glioma case data were sourced from the BraTS2023 public dataset. First, we preprocess the dataset, including 3D clipping, resampling, artifact elimination, and normalization. Second, to enhance the network's perception of features at different scales, we introduce a spatial pyramid pooling module. Third, to make the model focus on glioma details while suppressing irrelevant background information, we propose a multi-scale fusion attention mechanism. Finally, to address class imbalance and enhance learning of misclassified voxels, a combination of Dice and Focal loss functions is employed; this combined loss maintains segmentation accuracy while improving recognition of challenging samples, thereby improving the model's accuracy and generalization in glioma segmentation. Experimental findings reveal that the enhanced 3D U-Net model stabilizes training loss at 0.1 after 150 training iterations. The refined model demonstrates superior performance, with the highest DSC, Recall, and Precision values of 0.7512, 0.7064, and 0.77451, respectively. In Whole Tumor (WT) segmentation, the Dice Similarity Coefficient (DSC), Recall, and Precision scores are 0.9168, 0.9426, and 0.9375, respectively. For Core Tumor (TC) segmentation, these scores are 0.8954, 0.9014, and 0.9369, respectively. In Enhanced Tumor (ET) segmentation, the method achieves DSC, Recall, and Precision values of 0.8674, 0.9045, and 0.9011, respectively. The DSC, Recall, and Precision indices in the WT, TC, and ET segments using this method are the highest recorded, significantly enhancing glioma segmentation. This improvement bolsters the accuracy and reliability of diagnoses, ultimately providing a scientific foundation for clinical diagnosis and treatment.
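The Dice + Focal combination mentioned above can be sketched in a few lines. This is a hedged illustration of the general idea (binary case, pure NumPy); the paper's exact formulation, weighting, and hyperparameters are not given in the abstract, so `lam` and `gamma` here are assumptions.

```python
# Combined Dice + Focal loss sketch: Dice handles class imbalance at the
# region-overlap level, while Focal down-weights easy voxels so learning
# concentrates on misclassified ones.
import numpy as np

def dice_loss(p, t, eps=1e-6):
    inter = np.sum(p * t)
    return 1 - (2 * inter + eps) / (np.sum(p) + np.sum(t) + eps)

def focal_loss(p, t, gamma=2.0, eps=1e-6):
    p = np.clip(p, eps, 1 - eps)
    # (1 - p)^gamma shrinks the loss on well-classified voxels.
    return np.mean(-t * (1 - p) ** gamma * np.log(p)
                   - (1 - t) * p ** gamma * np.log(1 - p))

def combined_loss(p, t, lam=0.5):  # lam is an assumed weighting
    return lam * dice_loss(p, t) + (1 - lam) * focal_loss(p, t)

pred = np.array([0.9, 0.8, 0.3, 0.1])    # predicted tumour probabilities
target = np.array([1.0, 1.0, 1.0, 0.0])  # ground-truth voxel labels
print(round(combined_loss(pred, target), 4))
```

A perfect prediction drives both terms towards zero, while hard, misclassified voxels (like the 0.3 above) dominate the Focal term, which is the behaviour the authors cite for handling class imbalance.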

Brain tumor classification using MRI images and deep learning techniques.

Wong Y, Su ELM, Yeong CF, Holderbaum W, Yang C

PubMed · Jan 1, 2025
Brain tumors pose a significant medical challenge, necessitating early detection and precise classification for effective treatment. This study aims to address this challenge by introducing an automated brain tumor classification system that utilizes deep learning (DL) and Magnetic Resonance Imaging (MRI) images. The main purpose of this research is to develop a model that can accurately detect and classify different types of brain tumors, including glioma, meningioma, pituitary tumors, and normal brain scans. A convolutional neural network (CNN) architecture with pretrained VGG16 as the base model is employed, and diverse public datasets are utilized to ensure comprehensive representation. Data augmentation techniques are employed to enhance the training dataset, resulting in a total of 17,136 brain MRI images across the four classes. The accuracy of this model was 99.24%, a higher accuracy than other similar works, demonstrating its potential clinical utility. This higher accuracy was achieved mainly due to the utilization of a large and diverse dataset, the improvement of network configuration, the application of a fine-tuning strategy to adjust pretrained weights, and the implementation of data augmentation techniques in enhancing classification performance for brain tumor detection. In addition, a web application was developed by leveraging HTML and Dash components to enhance usability, allowing for easy image upload and tumor prediction. By harnessing artificial intelligence (AI), the developed system addresses the need to reduce human error and enhance diagnostic accuracy. The proposed approach provides an efficient and reliable solution for brain tumor classification, facilitating early diagnosis and enabling timely medical interventions. This work signifies a potential advancement in brain tumor classification, promising improved patient care and outcomes.

MRI based early Temporal Lobe Epilepsy detection using DGWO based optimized HAETN and Fuzzy-AAL Segmentation Framework (FASF).

Khan H, Alutaibi AI, Tejani GG, Sharma SK, Khan AR, Ahmad F, Mousavirad SJ

PubMed · Jan 1, 2025
This work aims to promote early and accurate diagnosis of Temporal Lobe Epilepsy (TLE) by developing state-of-the-art deep learning techniques, with the goal of minimizing the consequences of epilepsy for individuals and society. Current approaches to TLE detection have drawbacks, including applicability to only particular MRI sequences, moderate ability to determine the side of the onset zone, and weak cross-validation across different patient groups, which hampers their practical use. To overcome these difficulties, a new Hybrid Attention-Enhanced Transformer Network (HAETN) is introduced for early TLE diagnosis. This approach uses the newly developed Fuzzy-AAL Segmentation Framework (FASF), which combines the Fuzzy Possibilistic C-Means (FPCM) algorithm for tissue segmentation with AAL-based tissue labelling. Furthermore, an effective feature selection method based on the Dipper-grey wolf optimization (DGWO) algorithm is proposed to improve the performance of the model. The performance of the proposed method is thoroughly assessed in terms of accuracy, sensitivity, and F1-score. Evaluated on the Temporal Lobe Epilepsy-UNAM MRI Dataset, the approach attains an accuracy of 98.61%, a sensitivity of 99.83%, and an F1-score of 99.82%, indicating its efficiency and applicability in clinical practice.

Investigating methods to enhance interpretability and performance in cardiac MRI for myocardial scarring diagnosis using convolutional neural network classification and One Match.

Udin MH, Armstrong S, Kai A, Doyle ST, Pokharel S, Ionita CN, Sharma UC

PubMed · Jan 1, 2025
Machine learning (ML) classification of myocardial scarring in cardiac MRI is often hindered by limited explainability, particularly with convolutional neural networks (CNNs). To address this, we developed One Match (OM), an algorithm that builds on template matching to improve both the explainability and performance of ML myocardial scarring classification. By incorporating OM, we aim to foster trust in AI models for medical diagnostics and demonstrate that improved interpretability does not have to compromise classification accuracy. Using a cardiac MRI dataset from 279 patients, this study evaluates One Match, which classifies myocardial scarring in images by matching each image to a set of labeled template images. It uses the highest correlation score from these matches for classification and is compared to a traditional sequential CNN. Enhancements such as autodidactic enhancement (AE) and patient-level classifications (PLCs) were applied to improve the predictive accuracy of both methods. Results are reported as accuracy, sensitivity, specificity, precision, and F1-score. The highest classification performance was observed with the OM algorithm when enhanced by both AE and PLCs: 95.3% accuracy, 92.3% sensitivity, 96.7% specificity, 92.3% precision, and 92.3% F1-score, marking a significant improvement over the base configurations. AE alone had a positive impact on OM, increasing accuracy from 89.0% to 93.2%, but decreased the accuracy of the CNN from 85.3% to 82.9%. In contrast, PLCs improved accuracy for both the CNN and OM, raising the CNN's accuracy by 4.2% and OM's by 7.4%. This study demonstrates the effectiveness of OM in classifying myocardial scars, particularly when enhanced with AE and PLCs. The interpretability of OM also enabled the examination of misclassifications, providing insights that could accelerate development and foster greater trust among clinical stakeholders.
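The core "highest correlation score" idea behind One Match can be sketched as follows. This is a toy illustration with 1-D arrays standing in for images and Pearson correlation as the match score; OM's actual templates, preprocessing, and correlation measure are not specified in the abstract.

```python
# Template-matching classifier sketch: correlate the query against every
# labeled template and return the label of the best-matching one. The
# winning template itself is what makes the decision explainable.
import numpy as np

def classify_by_best_match(image, templates):
    """templates: list of (label, template_array) pairs."""
    best_label, best_score = None, -np.inf
    for label, tmpl in templates:
        # Pearson correlation between flattened image and template.
        r = np.corrcoef(image.ravel(), tmpl.ravel())[0, 1]
        if r > best_score:
            best_label, best_score = label, r
    return best_label

scar = np.array([0.1, 0.9, 0.8, 0.2])    # toy "scar" template
normal = np.array([0.4, 0.5, 0.5, 0.4])  # toy "no scar" template
templates = [("scar", scar), ("no_scar", normal)]

query = np.array([0.15, 0.85, 0.75, 0.25])
print(classify_by_best_match(query, templates))
```

Unlike a CNN's opaque activations, the matched template can be shown to a clinician alongside the query image, which is the explainability advantage the study attributes to OM.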

Recognition of flight cadets brain functional magnetic resonance imaging data based on machine learning analysis.

Ye L, Weng S, Yan D, Ma S, Chen X

PubMed · Jan 1, 2025
The rapid advancement of the civil aviation industry has attracted significant attention to research on pilots. However, the brain changes experienced by flight cadets following their training remain, to some extent, unexplored territory compared to those of the general population. The aim of this study was to examine the impact of flight training on brain function by employing machine learning (ML) techniques. We collected resting-state functional magnetic resonance imaging (resting-state fMRI) data from 79 flight cadets and ground program cadets, extracting blood oxygenation level-dependent (BOLD) signal, amplitude of low-frequency fluctuation (ALFF), regional homogeneity (ReHo), and functional connectivity (FC) metrics as feature inputs for ML models. After conducting feature selection using a two-sample t-test, we established various ML classification models, including Extreme Gradient Boosting (XGBoost), Logistic Regression (LR), Random Forest (RF), Support Vector Machine (SVM), and Gaussian Naive Bayes (GNB). Comparative analysis of the model results revealed that the LR classifier based on BOLD signals could accurately distinguish flight cadets from the general population, achieving an AUC of 83.75% and an accuracy of 0.93. Furthermore, an analysis of the features contributing most to the ML classification models indicated that these features were predominantly located in brain regions associated with auditory-visual processing, motor function, emotional regulation, and cognition, primarily within the Default Mode Network (DMN), Visual Network (VN), and SomatoMotor Network (SMN). These findings suggest that flight-trained cadets may exhibit enhanced functional dynamics and cognitive flexibility.
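The two-sample t-test feature-selection step can be illustrated with a small sketch. The feature names and values below are synthetic, and the selection threshold is an arbitrary choice for the toy example; the study's actual significance criterion is not stated in the abstract.

```python
# Feature screening via Welch's two-sample t statistic: keep only the
# features that differ strongly between the two cadet groups before
# feeding them to a classifier.
import statistics

def two_sample_t(a, b):
    # Welch's t statistic for two independent samples (unequal variances).
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# features[name] = (values in flight cadets, values in ground cadets)
features = {
    "ALFF_precuneus": ([1.9, 2.1, 2.0, 2.2], [1.2, 1.1, 1.3, 1.2]),
    "ReHo_motor":     ([0.9, 1.1, 1.0, 1.2], [1.0, 0.9, 1.1, 1.0]),
}
# Keep features whose |t| exceeds an (arbitrary) cutoff of 2.0.
selected = [name for name, (a, b) in features.items()
            if abs(two_sample_t(a, b)) > 2.0]
print(selected)
```

Only the group-discriminative feature survives the filter; in the study, the retained fMRI features then serve as inputs to the XGBoost, LR, RF, SVM, and GNB classifiers.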

Fully automated MRI-based analysis of the locus coeruleus in aging and Alzheimer's disease dementia using ELSI-Net.

Dünnwald M, Krohn F, Sciarra A, Sarkar M, Schneider A, Fliessbach K, Kimmich O, Jessen F, Rostamzadeh A, Glanz W, Incesoy EI, Teipel S, Kilimann I, Goerss D, Spottke A, Brustkern J, Heneka MT, Brosseron F, Lüsebrink F, Hämmerer D, Düzel E, Tönnies K, Oeltze-Jafra S, Betts MJ

PubMed · Jan 1, 2025
The locus coeruleus (LC) is linked to the development and pathophysiology of neurodegenerative diseases such as Alzheimer's disease (AD). Magnetic resonance imaging-based LC features have shown potential to assess LC integrity in vivo. We present a deep learning-based LC segmentation and feature extraction method called Ensemble-based Locus Coeruleus Segmentation Network (ELSI-Net) and apply it to healthy aging and AD dementia datasets. Agreement to expert raters and previously published LC atlases were assessed. We aimed to reproduce previously reported differences in LC integrity in aging and AD dementia and correlate extracted features to cerebrospinal fluid (CSF) biomarkers of AD pathology. ELSI-Net demonstrated high agreement to expert raters and published atlases. Previously reported group differences in LC integrity were detected and correlations to CSF biomarkers were found. Although we found excellent performance, further evaluations on more diverse datasets from clinical cohorts are required for a conclusive assessment of ELSI-Net's general applicability. We provide a thorough evaluation of a fully automatic locus coeruleus (LC) segmentation method termed Ensemble-based Locus Coeruleus Segmentation Network (ELSI-Net) in aging and Alzheimer's disease (AD) dementia. ELSI-Net outperforms previous work and shows high agreement with manual ratings and previously published LC atlases. ELSI-Net replicates previously shown LC group differences in aging and AD. ELSI-Net's LC mask volume correlates with cerebrospinal fluid biomarkers of AD pathology.
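The "ensemble" in ELSI-Net's name suggests aggregating several models' segmentations. A hedged sketch of the generic idea, since the network's actual architecture and aggregation rule are not described in this summary: average the members' voxelwise probability maps, then threshold to obtain a final mask.

```python
# Generic segmentation-ensemble sketch (toy 1-D "volumes"): mean the
# probability maps from the ensemble members, then binarize.
import numpy as np

def ensemble_segmentation(prob_maps, threshold=0.5):
    # Mean over ensemble members, then threshold to a binary LC mask.
    return (np.mean(prob_maps, axis=0) >= threshold).astype(np.uint8)

# Hypothetical voxelwise probabilities from three ensemble members:
member_outputs = [np.array([0.9, 0.6, 0.2, 0.4]),
                  np.array([0.8, 0.4, 0.1, 0.6]),
                  np.array([0.7, 0.7, 0.3, 0.3])]
print(ensemble_segmentation(member_outputs))
```

Averaging smooths out individual members' disagreements, which is a common reason ensembles segment small, low-contrast structures like the LC more robustly than a single model.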
