
Deep learning strategies for semantic segmentation of pediatric brain tumors in multiparametric MRI.

Cariola A, Sibilano E, Guerriero A, Bevilacqua V, Brunetti A

PubMed · Jul 2, 2025
Automated segmentation of pediatric brain tumors (PBTs) can support precise diagnosis and treatment monitoring, but it remains poorly investigated in the literature. This study proposes two different Deep Learning approaches for semantic segmentation of tumor regions in PBTs from MRI scans. Two pipelines were developed for segmenting enhanced tumor (ET), tumor core (TC), and whole tumor (WT) in pediatric gliomas from the BraTS-PEDs 2024 dataset. First, a pre-trained SegResNet model was retrained with a transfer learning approach and tested on the pediatric cohort. Then, two novel multi-encoder architectures leveraging the attention mechanism were designed and trained from scratch. To enhance the performance on ET regions, an ensemble paradigm and post-processing techniques were implemented. Overall, the 3-encoder model achieved the best Dice Score on TC and WT when trained with Dice Loss, and on ET when trained with Generalized Dice Focal Loss. SegResNet showed higher recall on TC and WT, and higher precision on ET. After post-processing, we reached Dice Scores of 0.843, 0.869, and 0.757 with the pre-trained model and 0.852, 0.876, and 0.764 with the ensemble model for TC, WT, and ET, respectively. Both strategies yielded state-of-the-art performance, although the ensemble demonstrated significantly superior results. Segmentation of the ET region was improved after post-processing, which increased test metrics while maintaining the integrity of the data.
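
For readers who want to try the first pipeline, here is a minimal sketch of the transfer-learning setup in MONAI, assuming four input modalities, three output channels (ET/TC/WT), and a hypothetical adult-glioma checkpoint path; the per-run loss choice mirrors the reported findings (Dice Loss for TC/WT, Generalized Dice Focal Loss for ET).

```python
# Minimal sketch, assuming 4 input modalities and 3 output channels (ET/TC/WT);
# the checkpoint path is hypothetical.
import torch
from monai.networks.nets import SegResNet
from monai.losses import DiceLoss, GeneralizedDiceFocalLoss

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SegResNet(spatial_dims=3, in_channels=4, out_channels=3,
                  init_filters=32).to(device)

# Transfer learning: initialise from adult-glioma weights (hypothetical file).
model.load_state_dict(torch.load("adult_brats_segresnet.pt"), strict=False)

# One loss per training run, mirroring the reported findings:
# Dice Loss for TC/WT, Generalized Dice Focal Loss for ET.
dice_loss = DiceLoss(sigmoid=True)
gdf_loss = GeneralizedDiceFocalLoss(sigmoid=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images, labels, loss_fn=dice_loss):
    optimizer.zero_grad()
    loss = loss_fn(model(images.to(device)), labels.to(device))
    loss.backward()
    optimizer.step()
    return loss.item()
```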

Lightweight convolutional neural networks using nonlinear Lévy chaotic moth flame optimisation for brain tumour classification via efficient hyperparameter tuning.

Dehkordi AA, Neshat M, Khosravian A, Thilakaratne M, Safaa Sadiq A, Mirjalili S

PubMed · Jul 2, 2025
Deep convolutional neural networks (CNNs) have seen significant growth in medical image classification applications due to their ability to automate feature extraction, leverage hierarchical learning, and deliver high classification accuracy. However, deep CNNs require substantial computational power and memory, particularly for large datasets and complex architectures. Additionally, optimising the hyperparameters of deep CNNs, although critical for enhancing model performance, is challenging due to the high computational costs involved, making it difficult without access to high-performance computing resources. To address these limitations, this study presents a fast and efficient model that aims to achieve superior classification performance compared to popular deep CNNs by developing lightweight CNNs combined with the nonlinear Lévy chaotic moth flame optimiser (NLCMFO) for automatic hyperparameter optimisation. NLCMFO integrates Lévy flight, chaotic parameters, and nonlinear control mechanisms to enhance the exploration capabilities of the moth flame optimiser during the search phase, while also leveraging the Lévy flight theorem to improve the exploitation phase. To assess the efficiency of the proposed model, empirical analyses were performed using a dataset of 2314 brain tumour detection images (1245 images of brain tumours and 1069 normal brain images). The evaluation results indicate that the CNN_NLCMFO outperformed a non-optimised CNN (92.40% accuracy) by 5% and surpassed established models such as DarkNet19 (96.41%), EfficientNetB0 (96.32%), Xception (96.41%), ResNet101 (92.15%), and InceptionResNetV2 (95.63%) by margins ranging from 1% to 5.25%. The findings demonstrate that the lightweight CNN combined with NLCMFO provides a computationally efficient yet highly accurate solution for medical image classification, addressing the challenges associated with traditional deep CNNs.
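
To illustrate the Lévy-flight ingredient, here is a sketch of a moth-flame-style hyperparameter search with Mantegna-style Lévy steps; the actual NLCMFO's chaotic and nonlinear control terms are not reproduced, and `objective`, the bounds, and the three tuned hyperparameters are placeholders.

```python
# A sketch of a moth-flame-style search with Mantegna Lévy steps; the actual
# NLCMFO's chaotic/nonlinear controls are not reproduced. objective(), the
# bounds, and the tuned hyperparameters are placeholders.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for Lévy-distributed step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def objective(x):
    # Placeholder: in practice, train the lightweight CNN with these
    # hyperparameters and return 1 - validation accuracy.
    return float(np.sum(x ** 2))

dim, n_moths, iters = 3, 20, 50                 # e.g. [log_lr, dropout, width]
lb = np.array([-4.0, 0.0, 0.5])
ub = np.array([-1.0, 0.5, 2.0])
moths = lb + np.random.rand(n_moths, dim) * (ub - lb)

for t in range(iters):
    order = np.argsort([objective(m) for m in moths])
    flames = moths[order]                       # best solutions guide the rest
    for i in range(n_moths):
        d = np.abs(flames[i] - moths[i])
        r = np.random.uniform(-1, 1, dim)
        # Logarithmic spiral toward the flame, plus a Lévy perturbation.
        moths[i] = d * np.exp(r) * np.cos(2 * pi * r) + flames[i]
        moths[i] = np.clip(moths[i] + 0.01 * levy_step(dim), lb, ub)

best = min(moths, key=objective)
```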

Classifying and diagnosing Alzheimer's disease with deep learning using 6735 brain MRI images.

Mousavi SM, Moulaei K, Ahmadian L

PubMed · Jul 2, 2025
Traditional diagnostic methods for Alzheimer's disease often suffer from low accuracy and lengthy processing times, delaying crucial interventions and patient care. Deep convolutional neural networks trained on MRI data can enhance diagnostic precision. This study aims to utilize deep convolutional neural networks (CNNs) trained on MRI data for Alzheimer's disease diagnosis and classification. In this study, the Alzheimer MRI Preprocessed Dataset was used, which includes 6735 brain structural MRI scan images. After data preprocessing and normalization, four models (Xception, VGG19, VGG16, and InceptionResNetV2) were utilized. Generalization techniques and hyperparameter tuning were applied to improve training. Early stopping and a dynamic learning rate were used to prevent overfitting. Model performance was evaluated based on accuracy, F-score, recall, and precision. The InceptionResNetV2 model showed superior performance in predicting Alzheimer's patients, with an accuracy, F-score, recall, and precision of 0.99. The Xception model followed, excelling in precision, recall, and F-score with values of 0.97 and an accuracy of 96.89%. Notably, InceptionResNetV2 and VGG19 demonstrated faster learning, reaching convergence sooner and requiring fewer training iterations than the other models. The InceptionResNetV2 model achieved the highest performance, with precision, recall, and F-score of 100% for both the mild and moderate dementia classes. The Xception model also performed well, attaining 100% for the moderate dementia class and 99-100% for the mild dementia class. Additionally, the VGG16 and VGG19 models showed strong results, with VGG16 reaching 100% precision, recall, and F-score for the moderate dementia class. Deep convolutional neural networks enhance Alzheimer's diagnosis, surpassing traditional methods with improved precision and efficiency. Models like InceptionResNetV2 show outstanding performance, potentially speeding up patient interventions.
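
The early-stopping and dynamic-learning-rate setup described above maps directly onto standard Keras callbacks; a sketch with InceptionResNetV2 follows, where the input size, the four-class head, and the callback thresholds are assumptions.

```python
# Sketch of transfer learning with early stopping and a dynamic learning rate;
# input size, class count, and callback thresholds are assumptions.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(4, activation="softmax")(x)  # 4 dementia classes
model = tf.keras.Model(base.input, out)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    # Stop once validation loss stops improving, keeping the best weights.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
    # Halve the learning rate when validation loss plateaus.
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                         patience=2, min_lr=1e-6),
]
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=callbacks)
```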

Heterogeneity Habitats-Derived Radiomics of Gd-EOB-DTPA-Enhanced MRI for Predicting Proliferation of Hepatocellular Carcinoma.

Sun S, Yu Y, Xiao S, He Q, Jiang Z, Fan Y

PubMed · Jul 2, 2025
To construct and validate the optimal model for preoperative prediction of proliferative hepatocellular carcinoma (HCC) based on habitat-derived radiomics features of Gd-EOB-DTPA-enhanced MRI. A total of 187 patients who underwent Gd-EOB-DTPA-enhanced MRI before curative partial hepatectomy were divided into a training cohort (n=130; 50 proliferative and 80 nonproliferative HCC) and a validation cohort (n=57; 25 proliferative and 32 nonproliferative HCC). Habitat subregions were generated using Gaussian Mixture Model (GMM) clustering of all pixels to identify similar subregions within the tumor. Radiomic features were extracted from each tumor subregion in the arterial phase (AP) and hepatobiliary phase (HBP). Independent-sample t-tests, the Pearson correlation coefficient, and the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm were used to select the optimal subregion features. After feature integration and selection, machine-learning classification models were constructed using the scikit-learn library. Receiver Operating Characteristic (ROC) curves and the DeLong test were used to compare the performance of these models in predicting proliferative HCC. The optimal number of clusters was determined to be 3 based on the silhouette coefficient. From the AP, HBP, and combined AP and HBP habitat (subregions 1, 2, 3) radiomics features, 20, 12, and 23 features were retained, respectively. Three models were constructed with the selected AP, HBP, and combined AP and HBP habitat radiomics features. The ROC analysis and DeLong test showed that the Naive Bayes model of AP and HBP habitat radiomics (AP-HBP-Hab-Rad) achieved the best performance. Finally, the combined model using the Light Gradient Boosting Machine (LightGBM) algorithm, incorporating AP-HBP-Hab-Rad, age, and AFP (Alpha-Fetoprotein), was identified as the optimal model for predicting proliferative HCC. For the training and validation cohorts, the accuracy, sensitivity, specificity, and AUC were 0.923, 0.880, 0.950, 0.966 (95% CI: 0.937-0.994) and 0.825, 0.680, 0.937, 0.877 (95% CI: 0.786-0.969), respectively. In the validation cohort, the AUC of the combined model was significantly higher than that of the other models (P<0.01). A combined model including AP-HBP-Hab-Rad, serum AFP, and age, using the LightGBM algorithm, can satisfactorily predict proliferative HCC preoperatively.
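
A condensed sketch of the habitat pipeline follows, using scikit-learn's GaussianMixture with silhouette-based selection of the cluster count, LASSO for feature selection, and LightGBM for the combined model; radiomic feature extraction itself (e.g., via pyradiomics) is omitted, and the array names are placeholders.

```python
# Sketch of the habitat pipeline under stated assumptions; arrays are
# placeholders for real voxel intensities and extracted radiomic features.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
from sklearn.linear_model import LassoCV
from lightgbm import LGBMClassifier

# voxels: (n_voxels, n_channels) AP/HBP intensities within the tumour mask.
def choose_habitats(voxels, k_range=(2, 3, 4, 5)):
    best_k, best_s = None, -1.0
    for k in k_range:
        labels = GaussianMixture(n_components=k, random_state=0).fit_predict(voxels)
        s = silhouette_score(voxels, labels)
        if s > best_s:
            best_k, best_s = k, s
    return best_k  # the paper reports k = 3

# X_rad: (n_patients, n_radiomic_features) from the habitat subregions.
def select_features(X_rad, y):
    lasso = LassoCV(cv=5, random_state=0).fit(X_rad, y)
    return np.flatnonzero(lasso.coef_)  # indices of retained features

# Combined model: selected habitat radiomics plus age and AFP.
def fit_combined(X_rad_sel, age, afp, y):
    X = np.column_stack([X_rad_sel, age, afp])
    return LGBMClassifier(random_state=0).fit(X, y)
```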

A multi-modal graph-based framework for Alzheimer's disease detection.

Mashhadi N, Marinescu R

PubMed · Jul 2, 2025
We propose a compositional graph-based Machine Learning (ML) framework for Alzheimer's disease (AD) detection that constructs complex ML predictors from modular components. In our directed computational graph, datasets are represented as nodes and deep learning (DL) models are represented as directed edges, allowing us to model complex image-processing pipelines as end-to-end DL predictors. Each directed path in the graph functions as a DL predictor, supporting both forward propagation for transforming data representations and backpropagation for model finetuning, saliency map computation, and input data optimization. We demonstrate our model on Alzheimer's disease prediction, a complex problem that requires integrating multimodal data containing scans of different modalities and contrasts, genetic data, and cognitive tests. We built a graph of 11 nodes (data) and 14 edges (ML models), where each model has been trained to handle a specific task (e.g., skull-stripping of MRI scans, AD detection, image-to-image translation, ...). By using a modular and adaptive approach, our framework effectively integrates diverse data types, handles distribution shifts, and scales to arbitrary complexity, offering a practical tool for advancing Alzheimer's disease diagnosis, and potentially other complex medical prediction tasks, that remains accurate even when modalities are missing.
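
The graph abstraction can be sketched in a few lines: nodes are data representations, edges are trained models, and any directed path composes into one end-to-end predictor. The node names and stage functions below are illustrative placeholders, not the paper's actual components.

```python
# Hedged sketch of the graph abstraction: datasets as nodes, trained models as
# directed edges, any path composed into a single predictor.
from typing import Callable, Dict, Tuple

def skull_strip(x):    # placeholder for a trained skull-stripping model
    return x

def ad_classifier(x):  # placeholder for a trained AD detection model
    return 0.5

Edge = Tuple[str, str]  # (source data node, target data node)
graph: Dict[Edge, Callable] = {
    ("raw_mri", "stripped_mri"): skull_strip,
    ("stripped_mri", "ad_score"): ad_classifier,
}

def compose_path(path):
    """Chain the models along a node path into one end-to-end predictor."""
    def predictor(x):
        for src, dst in zip(path, path[1:]):
            x = graph[(src, dst)](x)
        return x
    return predictor

predict_ad = compose_path(["raw_mri", "stripped_mri", "ad_score"])
```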

Habitat-Derived Radiomic Features of Planning Target Volume to Determine the Local Recurrence After Radiotherapy in Patients with Gliomas: A Feasibility Study.

Wang Y, Lin L, Hu Z, Wang H

PubMed · Jul 2, 2025
To develop a machine learning-based predictive model for local recurrence after radiotherapy in patients with gliomas, with interpretability enhanced through SHapley Additive exPlanations (SHAP). We retrospectively enrolled 145 patients with pathologically confirmed gliomas who underwent brain radiotherapy (training: validation = 102:43). Physiological and structural magnetic resonance imaging (MRI) were used to define habitat regions. A total of 2153 radiomic features were extracted from each MRI sequence in each habitat region. Relief and Recursive Feature Elimination were used for radiomic feature selection. Support vector machine (SVM) and random forest models incorporating clinical and radiomic features were constructed for each habitat region. The SHAP method was used to explain the predictive model. In the training and validation cohorts, the Physiological_Habitat1 (e-THRIVE)_radiomic SVM model demonstrated the best AUCs of 0.703 (95% CI 0.569-0.836) and 0.670 (95% CI 0.623-0.717), respectively, compared to the other radiomic models. The SHAP summary plot and SHAP force plot were used to interpret this best-performing model. Radiomic features derived from the Physiological_Habitat1 (e-THRIVE) were predictive of local recurrence in glioma patients following radiotherapy. The SHAP method provided insights into how the tumor microenvironment might influence the effectiveness of radiotherapy in postoperative gliomas.
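
For the interpretability step, here is a sketch of how SHAP can explain an SVM over radiomic features, in the spirit of the study's summary and force plots; the synthetic arrays are stand-ins for the selected Physiological_Habitat1 (e-THRIVE) features, sized to the paper's 102/43 split.

```python
# Synthetic stand-ins for the selected habitat radiomics; the SVM and SHAP
# calls mirror the described workflow but are not the authors' exact code.
import numpy as np
import shap
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(102, 10)), rng.integers(0, 2, 102)
X_val = rng.normal(size=(43, 10))

svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X_train, y_train)

# KernelExplainer is model-agnostic; explain the recurrence probability.
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(lambda X: svm.predict_proba(X)[:, 1], background)
shap_values = explainer.shap_values(X_val)      # (n_val, n_features)

shap.summary_plot(shap_values, X_val)           # global feature importance
shap.force_plot(explainer.expected_value, shap_values[0], X_val[0])  # one case
```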

Optimizing the early diagnosis of neurological disorders through the application of machine learning for predictive analytics in medical imaging.

Sadu VB, Bagam S, Naved M, Andluru SKR, Ramineni K, Alharbi MG, Sengan S, Khadhar Moideen R

PubMed · Jul 2, 2025
Early diagnosis of Neurological Disorders (ND) such as Alzheimer's disease (AD) and Brain Tumors (BT) can be highly challenging since these diseases cause only minor changes in the brain's anatomy. Magnetic Resonance Imaging (MRI) is a vital tool for diagnosing and visualizing these ND; however, standard techniques that depend on human analysis can be inaccurate, time-consuming, and liable to miss the early-stage symptoms necessary for effective treatment. Spatial Feature Extraction (FE) has been improved by Convolutional Neural Networks (CNN) and hybrid models, both advances in Deep Learning (DL). However, these analysis methods frequently fail to capture temporal dynamics, which are significant for a complete assessment. The present investigation introduces STGCN-ViT, a hybrid model that integrates CNN, Spatial-Temporal Graph Convolutional Network (STGCN), and Vision Transformer (ViT) components to address these gaps. The model uses EfficientNet-B0 for spatial FE, STGCN for temporal FE, and ViT for attention-based FE. On the Open Access Series of Imaging Studies (OASIS) and Harvard Medical School (HMS) benchmark datasets, the recommended approach proved effective, with Group A attaining an accuracy of 93.56%, a precision of 94.41%, and an Area Under the Receiver Operating Characteristic Curve (AUC-ROC) score of 94.63%. Compared with standard and transformer-based models, the model attains better results for Group B, with an accuracy of 94.52%, precision of 95.03%, and AUC-ROC score of 95.24%. These results support the model's use in real-time medical applications by demonstrating the feasibility of accurate, early-stage ND diagnosis.
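
A rough sketch of the hybrid idea (not the paper's exact STGCN-ViT): EfficientNet-B0 extracts per-slice spatial features and a transformer encoder attends across the slice sequence; the dimensions, layer counts, and mean-pooled head are assumptions.

```python
# Hedged sketch of a CNN + transformer hybrid under stated assumptions; the
# paper's STGCN branch is approximated here by sequence attention alone.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class CNNTransformer(nn.Module):
    def __init__(self, n_classes=2, d_model=1280, n_heads=8, n_layers=2):
        super().__init__()
        backbone = efficientnet_b0(weights="IMAGENET1K_V1")
        self.features = backbone.features            # spatial feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                             # x: (B, T, 3, H, W)
        b, t = x.shape[:2]
        f = self.features(x.flatten(0, 1))            # (B*T, 1280, h, w)
        f = self.pool(f).flatten(1).view(b, t, -1)    # (B, T, 1280)
        f = self.encoder(f)                           # attention across slices
        return self.head(f.mean(dim=1))               # mean-pooled classifier
```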

Multi-modal models using fMRI, urine and serum biomarkers for classification and risk prognosis in diabetic kidney disease.

Shao X, Xu H, Chen L, Bai P, Sun H, Yang Q, Chen R, Lin Q, Wang L, Li Y, Lin Y, Yu P

PubMed · Jul 2, 2025
Functional magnetic resonance imaging (fMRI) is a powerful tool for non-invasive evaluation of micro-changes in the kidneys. This study aims to develop classification and prognostic models based on multi-modal data. A total of 172 participants were included, and high-resolution multi-parameter fMRI technology was employed to obtain T2-weighted imaging (T2WI), blood oxygen level dependent (BOLD), and diffusion tensor imaging (DTI) sequence images. Based on clinical indicators, fMRI markers, and serum and urine biomarkers (CD300LF, CST4, MMRN2, SERPINA1, l-glutamic acid dimethyl ester, and phosphatidylcholine), machine learning algorithms were applied to establish and validate classification diagnosis models (Models 1-6) and risk-prognostic models (Models A-E). Accuracy, sensitivity, specificity, precision, area under the curve (AUC), and recall were used to evaluate the predictive performance of the models. A total of six classification models were established. Model 5 (fMRI + clinical indicators) exhibited superior performance, with an accuracy of 0.833 (95% confidence interval [CI]: 0.653-0.944). Notably, the multi-modal model incorporating imaging, serum and urine multi-omics, and clinical indicators (Model 6) demonstrated higher predictive performance, achieving an accuracy of 0.923 (95% CI: 0.749-0.991). Furthermore, five prognostic models at 2-year and 3-year follow-up were established. Model E exhibited superior performance, achieving AUC values of 0.975 at the 2-year follow-up and 0.932 at the 3-year follow-up, and can identify patients with a high-risk prognosis. In clinical practice, the multi-modal models presented in this study demonstrate potential to enhance clinical decision-making capabilities regarding patient classification and prognosis prediction.
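
The fusion in a Model 6-style classifier amounts to concatenating feature blocks before classification; a minimal sketch with synthetic stand-ins for the fMRI, biomarker, and clinical blocks, reporting the study's metrics via cross-validation.

```python
# Minimal sketch of multi-modal feature fusion; the arrays are synthetic
# stand-ins and the classifier choice is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X_fmri, X_biomarkers, X_clinical = (rng.normal(size=(172, k)) for k in (12, 6, 8))
y = rng.integers(0, 2, 172)

X = np.hstack([X_fmri, X_biomarkers, X_clinical])   # Model 6-style fusion
scores = cross_validate(RandomForestClassifier(random_state=0), X, y, cv=5,
                        scoring=["accuracy", "roc_auc", "precision", "recall"])
print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})
```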

A deep learning model for early diagnosis of Alzheimer's disease combined with 3D CNN and video Swin Transformer.

Zhou J, Wei Y, Li X, Zhou W, Tao R, Hua Y, Liu H

PubMed · Jul 2, 2025
Alzheimer's disease (AD) constitutes a neurodegenerative disorder predominantly observed in the geriatric population. Early diagnosis of AD is highly beneficial to patients, in terms of both prevention and treatment. Our team therefore proposed a novel deep learning model named 3D-CNN-VSwinFormer. The model consists of two components: the first is a 3D CNN equipped with a 3D Convolutional Block Attention Module (3D CBAM), and the second is a fine-tuned Video Swin Transformer. Our investigation extracts features from subject-level 3D magnetic resonance imaging (MRI) data, retaining only a single 3D MRI image per participant. This method circumvents data leakage and addresses the issue of 2D slices failing to capture global spatial information. We utilized the ADNI dataset to validate our proposed model. In differentiating between AD patients and cognitively normal (CN) individuals, we achieved accuracy and AUC values of 92.92% and 0.9660, respectively. Compared to other studies on AD and CN recognition, our model yielded superior results, enhancing the efficiency of AD diagnosis.
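
A sketch of what a 3D CBAM block can look like in PyTorch: channel attention from pooled descriptors followed by spatial attention from channel-wise statistics. The reduction ratio and kernel size are assumptions, and this is not the authors' exact implementation.

```python
# Hedged sketch of a 3D CBAM block under stated assumptions.
import torch
import torch.nn as nn

class CBAM3D(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared channel MLP
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                              # x: (B, C, D, H, W)
        b, c = x.shape[:2]
        avg = self.mlp(x.mean(dim=(2, 3, 4)))          # channel attention
        mx = self.mlp(x.amax(dim=(2, 3, 4)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1, 1)
        stats = torch.cat([x.mean(1, keepdim=True),    # spatial attention
                           x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(stats))
```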

Towards reliable WMH segmentation under domain shift: An application study using maximum entropy regularization to improve uncertainty estimation.

Matzkin F, Larrazabal A, Milone DH, Dolz J, Ferrante E

PubMed · Jul 2, 2025
Accurate segmentation of white matter hyperintensities (WMH) is crucial for clinical decision-making, particularly in the context of multiple sclerosis. However, domain shifts, such as variations in MRI machine types or acquisition parameters, pose significant challenges to model calibration and uncertainty estimation. This comparative study investigates the impact of domain shift on WMH segmentation, proposing maximum-entropy regularization techniques to enhance model calibration and uncertainty estimation. The purpose is to identify errors appearing after model deployment in clinical scenarios using predictive uncertainty as a proxy measure, since it does not require ground-truth labels to be computed. We conducted experiments using a classic U-Net architecture and evaluated maximum entropy regularization schemes to improve model calibration under domain shift on two publicly available datasets: the WMH Segmentation Challenge and the 3D-MR-MS dataset. Performance is assessed with Dice coefficient, Hausdorff distance, expected calibration error, and entropy-based uncertainty estimates. Entropy-based uncertainty estimates can anticipate segmentation errors, both in-distribution and out-of-distribution, with maximum-entropy regularization further strengthening the correlation between uncertainty and segmentation performance, while also improving model calibration under domain shift. Maximum-entropy regularization improves uncertainty estimation for WMH segmentation under domain shift. By strengthening the relationship between predictive uncertainty and segmentation errors, these methods allow models to better flag unreliable predictions without requiring ground-truth annotations. Additionally, maximum-entropy regularization contributes to better model calibration, supporting more reliable and safer deployment of deep learning models in multi-center and heterogeneous clinical environments.
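
The maximum-entropy regularization described here amounts to subtracting a weighted predictive-entropy term from the segmentation loss, so overconfident predictions are discouraged; a sketch follows, with the weight `lam` as a tunable assumption.

```python
# Hedged sketch of maximum-entropy regularization for segmentation training.
import torch
import torch.nn.functional as F

def max_entropy_loss(logits, target, lam=0.1):
    """Cross-entropy minus a weighted mean predictive entropy."""
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1).mean()
    return ce - lam * entropy  # subtracting entropy rewards higher entropy

# At test time, the same per-voxel entropy serves as the uncertainty proxy
# used to flag likely segmentation errors without ground-truth labels.
```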