
Deep learning-based CAD system for Alzheimer's diagnosis using deep downsized KPLS.

Neffati S, Mekki K, Machhout M

PubMed · May 27, 2025
Alzheimer's disease (AD) is the most prevalent type of dementia. It is linked with a gradual decline in various brain functions, such as memory. Many research efforts are now directed toward non-invasive procedures for early diagnosis, because early detection greatly benefits patient care and treatment outcomes. Beyond providing an accurate diagnosis and reducing the misdiagnosis rate, Computer-Aided Diagnosis (CAD) systems are built to give a definitive diagnosis. This paper presents a novel CAD system to determine the stages of AD. Initially, deep learning techniques are used to extract features from AD brain MRIs. The extracted features are then reduced using a proposed feature reduction technique named Deep Downsized Kernel Partial Least Squares (DDKPLS). The approach selects a reduced number of samples from the initial information matrix; the chosen samples give rise to a new data matrix that is further processed by KPLS to deal with the high dimensionality. The reduced feature space is finally classified using an Extreme Learning Machine (ELM). The implementation is named DDKPLS-ELM. Benchmark tests performed on the Kaggle MRI dataset demonstrate the efficacy of the DDKPLS-based classifier, which achieves accuracy of up to 95.4% and an F1 score of 95.1%.
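
To make the pipeline concrete, here is a loose sketch of the reduction-and-classification stages, assuming an RBF kernel, random row subsampling as the "downsizing" step, one-hot class targets, and a ridge-regularized ELM readout; none of these specifics are taken from the paper.

```python
# Loose sketch of a DDKPLS-ELM-style pipeline (assumed details: RBF kernel,
# random row subsampling as the "downsizing", one-hot targets, ridge readout).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

def fit_ddkpls(X, y_onehot, n_ref=100, n_components=20, gamma=1e-3):
    """Subsample reference rows, kernelize against them, reduce with PLS."""
    ref = X[rng.choice(len(X), size=min(n_ref, len(X)), replace=False)]
    K = rbf_kernel(X, ref, gamma=gamma)  # downsized kernel data matrix
    pls = PLSRegression(n_components=n_components).fit(K, y_onehot)
    return pls, ref

def ddkpls_transform(pls, ref, X, gamma=1e-3):
    """Project new data into the reduced KPLS feature space."""
    return pls.transform(rbf_kernel(X, ref, gamma=gamma))

class ELM:
    """Minimal extreme learning machine: fixed random hidden layer,
    ridge-regularized least-squares output weights."""
    def __init__(self, n_hidden=256, alpha=1e-2):
        self.n_hidden, self.alpha = n_hidden, alpha
    def fit(self, Z, y_onehot):
        self.W = rng.normal(size=(Z.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(Z @ self.W + self.b)
        self.beta = np.linalg.solve(
            H.T @ H + self.alpha * np.eye(self.n_hidden), H.T @ y_onehot)
        return self
    def predict(self, Z):
        return np.argmax(np.tanh(Z @ self.W + self.b) @ self.beta, axis=1)
```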

Deep Learning Auto-segmentation of Diffuse Midline Glioma on Multimodal Magnetic Resonance Images.

Fernández-Patón M, Montoya-Filardi A, Galiana-Bordera A, Martínez-Gironés PM, Veiga-Canuto D, Martínez de Las Heras B, Cerdá-Alberich L, Martí-Bonmatí L

PubMed · May 27, 2025
Diffuse midline glioma (DMG) H3 K27M-altered is a rare pediatric brainstem cancer with poor prognosis. Advancing predictive models for a deeper understanding of DMG requires seamlessly integrated, automatic, and highly accurate tumor segmentation. Only one published method attempts this task for this cancer; this study therefore develops a modified CNN-based 3D U-Net tool to segment DMG automatically and accurately in magnetic resonance (MR) images. The dataset consisted of 52 DMG patients and 70 images, each with T1W and T2W or FLAIR sequences. Three datasets were created: T1W images only, T2W or FLAIR images only, and a combined set of T1W and T2W/FLAIR images. Denoising, bias field correction, spatial resampling, and normalization were applied to the MR images as preprocessing steps, and patching techniques were used to enlarge the dataset. For tumor segmentation, a 3D U-Net architecture with residual blocks was used. The best results were obtained for the dataset composed of all T1W and T2W/FLAIR images, reaching an average Dice Similarity Coefficient (DSC) of 0.883 on the test dataset. These results are comparable to other brain tumor segmentation models and to state-of-the-art results in DMG segmentation while using fewer sequences. Our results demonstrate the effectiveness of the proposed 3D U-Net architecture for DMG tumor segmentation, an advancement that holds potential for enhancing the precision of diagnostic and predictive models for this challenging pediatric cancer.
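
For a concrete picture of the building block named here, a residual 3D convolution block of the kind used inside such a U-Net might look like the PyTorch sketch below; the instance normalization, ReLU activation, and channel counts are assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a residual block for a 3D U-Net (assumed design choices:
# instance norm, ReLU, identity skip; not the paper's exact configuration).
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.body(x))  # residual (skip) connection

# A multi-sequence patch (e.g., T1W + T2W/FLAIR stacked as channels) would be
# projected to `channels` by the U-Net stem, then pass through such blocks.
x = torch.randn(1, 32, 64, 64, 64)
print(ResBlock3D(32)(x).shape)  # torch.Size([1, 32, 64, 64, 64])
```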

Machine learning decision support model construction for craniotomy approach of pineal region tumors based on MRI images.

Chen Z, Chen Y, Su Y, Jiang N, Wanggou S, Li X

PubMed · May 27, 2025
Pineal region tumors (PRTs) are rare but deep-seated brain tumors, and complete surgical resection is crucial for effective treatment. The choice of surgical approach is often challenging due to the low incidence and deep location. This study combines machine learning and deep learning algorithms with pre-operative MRI images to build a model for recommending PRT surgical approaches, aiming to encode clinical experience for practical reference and education. This retrospective study enrolled a total of 173 patients radiologically diagnosed with PRTs at our hospital. Three traditional surgical approaches were recorded as prediction labels. Clinical and VASARI-related radiological features were selected for machine learning prediction model construction, and MRI images from axial, sagittal, and coronal orientations were used to establish and evaluate deep learning craniotomy approach prediction models. Five machine learning methods were applied to construct predictive classifiers with the clinical and VASARI features, and all achieved area under the receiver operating characteristic (ROC) curve (AUC) values above 0.7. Three deep learning algorithms (ResNet-50, EfficientNetV2-m, and ViT) were also applied to MRI images from the different orientations. EfficientNetV2-m achieved the highest AUC value of 0.89, demonstrating strong predictive performance. Class activation mapping revealed that the tumor itself and its surrounding structures are the crucial regions for model decision-making. In this study, we used machine learning and deep learning to construct surgical approach recommendation models; the deep learning models achieved high predictive performance and can provide efficient, personalized decision support tools for choosing a PRT surgical approach. Clinical trial number: not applicable.
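
As a schematic of how such tabular classifiers are typically compared by AUC, the sketch below trains a handful of scikit-learn models on stand-in clinical/VASARI-style features; the abstract does not name the five methods, so the models, the synthetic data, and the binarized label are illustrative assumptions (for the three-approach setting one would pass multi_class="ovr" to roc_auc_score).

```python
# Illustrative comparison of tabular classifiers by AUC (the study's exact
# five methods are not specified in the abstract; these are stand-ins).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for clinical + VASARI features and a binarized approach label.
X, y = make_classification(n_samples=173, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boost": GradientBoostingClassifier(random_state=0),
    "svm": SVC(probability=True, random_state=0),
}
for name, model in models.items():
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_te, prob):.3f}")
```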

Interpretable Machine Learning Models for Differentiating Glioblastoma From Solitary Brain Metastasis Using Radiomics.

Xia X, Wu W, Tan Q, Gou Q

PubMed · May 27, 2025
To develop and validate interpretable machine learning models for differentiating glioblastoma (GB) from solitary brain metastasis (SBM) using radiomics features from contrast-enhanced T1-weighted MRI (CE-T1WI), and to compare the impact of low-order and high-order features on model performance. A cohort of 434 patients with histopathologically confirmed GB (226 patients) and SBM (208 patients) was retrospectively analyzed. Radiomic features were derived from CE-T1WI, with feature selection conducted through minimum redundancy maximum relevance (mRMR) and least absolute shrinkage and selection operator (LASSO) regression. Machine learning models, including GradientBoost and LightGBM (LGBM), were trained using low-order and high-order features. Model performance was assessed through receiver operating characteristic analysis and the area under the curve (AUC), along with other indicators including accuracy, specificity, and sensitivity. SHapley Additive exPlanations (SHAP) analysis was used to measure the influence of each feature on the model's predictions. The performances of the various machine learning models differed notably between the training and validation datasets. In the training group, the LGBM, CatBoost, multilayer perceptron (MLP), and GradientBoost models achieved the highest AUC scores, all exceeding 0.9, demonstrating strong discriminative power. The LGBM model exhibited the best stability, with a minimal AUC difference of only 0.005 between the training and test sets, suggesting strong generalizability. In the validation group, the GradientBoost classifier achieved the maximum AUC of 0.927, closely followed by random forest at 0.925. GradientBoost also demonstrated high sensitivity (0.911) and negative predictive value (NPV, 0.889), effectively identifying true positives. The LGBM model showed the highest test accuracy (86.2%) and performed excellently in terms of sensitivity (0.911), NPV (0.895), and positive predictive value (PPV, 0.837). The models using high-order features outperformed those based on low-order features on all metrics. SHAP analysis further enhanced model interpretability, providing insight into feature importance and each feature's contribution to classification decisions. Machine learning techniques based on radiomics can effectively distinguish GB from SBM, with gradient-boosted tree models such as LGBM demonstrating superior performance. High-order features significantly improve model accuracy and robustness, and SHAP enhances the interpretability and transparency of brain tumor classification models, providing an intuitive visualization of each radiomic feature's contribution.
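
A minimal sketch of the selection, classification, and explanation stages might look as follows; an L1-penalized logistic regression stands in for the mRMR + LASSO selection, and the synthetic features are placeholders for the CE-T1WI radiomics.

```python
# Sketch of radiomics selection + LightGBM classification + SHAP explanation
# (assumption: L1-penalized selection stands in for mRMR + LASSO).
import numpy as np
import shap
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

# Stand-in for CE-T1WI radiomic features with GB vs. SBM labels.
X, y = make_classification(n_samples=434, n_features=100, n_informative=15,
                           random_state=0)

# L1-penalized selection approximates the LASSO step of the pipeline.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", C=0.1, solver="liblinear")).fit(X, y)
X_sel = selector.transform(X)

model = LGBMClassifier(random_state=0).fit(X_sel, y)

# TreeExplainer attributes each prediction to the selected features.
shap_values = shap.TreeExplainer(model).shap_values(X_sel)
sv = shap_values[1] if isinstance(shap_values, list) else shap_values
print("mean |SHAP| per selected feature:", np.abs(sv).mean(axis=0).round(3))
```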

ScanAhead: Simplifying standard plane acquisition of fetal head ultrasound.

Men Q, Zhao H, Drukker L, Papageorghiou AT, Noble JA

PubMed · May 26, 2025
The fetal standard plane acquisition task aims to detect an ultrasound (US) image characterized by specified anatomical landmarks and appearance for assessing fetal growth. In practice, however, due to variability in operator skill and possible fetal motion, it can be challenging for a human operator to acquire a satisfactory standard plane. To support the operator in this task, this paper first describes an approach to automatically predict the fetal head standard plane from a video segment approaching the standard plane. A transformer-based image predictor is proposed to produce a high-quality standard plane by understanding diverse scales of head anatomy within the US video frames. Because of the visual gap between the video frames and the standard plane image, the predictor is equipped with an offset adaptor that performs domain adaptation to translate off-plane structures into the anatomies that would usually appear in a standard plane view. To enhance the anatomical details of the predicted US image, the approach is extended with a second modality, US probe movement, which provides 3D location information. Quantitative and qualitative studies conducted on two different head biometry planes demonstrate that the proposed US image predictor produces clinically plausible standard planes with performance superior to comparative published methods. The dual-modality solution shows improved visualization with enhanced anatomical details in the predicted US image. Clinical evaluations also demonstrate consistency between the predicted echo textures and the expected echo patterns seen in a typical real standard plane, indicating clinical feasibility for improving the standard plane acquisition process.
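
One way to picture the dual-modality design is a transformer encoder attending over image patch tokens together with a token embedding the probe's 3D pose; everything in the sketch below (token sizes, the single pose token, the patch-reconstruction head) is a guess at the general shape of such a model, not the paper's architecture.

```python
# Loose sketch of fusing US frame features with probe movement via a
# transformer encoder (all sizes and the fusion scheme are assumptions).
import torch
import torch.nn as nn

class PlanePredictor(nn.Module):
    def __init__(self, n_patches=196, dim=256):
        super().__init__()
        self.patch_embed = nn.Linear(16 * 16, dim)   # flattened 16x16 patches
        self.pose_embed = nn.Linear(6, dim)          # 3D translation + rotation
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(dim, 16 * 16)          # reconstruct plane patches

    def forward(self, patches, probe_pose):
        # patches: (B, n_patches, 256); probe_pose: (B, 6)
        pose_tok = self.pose_embed(probe_pose).unsqueeze(1)
        tokens = torch.cat([pose_tok, self.patch_embed(patches)], dim=1)
        out = self.encoder(tokens + self.pos)
        return self.head(out[:, 1:])                 # drop the pose token

model = PlanePredictor()
pred = model(torch.randn(2, 196, 256), torch.randn(2, 6))
print(pred.shape)  # torch.Size([2, 196, 256])
```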

Methodological Challenges in Deep Learning-Based Detection of Intracranial Aneurysms: A Scoping Review.

Joo B

PubMed · May 26, 2025
Artificial intelligence (AI), particularly deep learning, has demonstrated high diagnostic performance in detecting intracranial aneurysms on computed tomography angiography (CTA) and magnetic resonance angiography (MRA). However, the clinical translation of these technologies remains limited due to methodological limitations and concerns about generalizability. This scoping review comprehensively evaluates 36 studies that applied deep learning to intracranial aneurysm detection on CTA or MRA, focusing on study design, validation strategies, reporting practices, and reference standards. Key findings include inconsistent handling of ruptured and previously treated aneurysms, underreporting of coexisting brain or vascular abnormalities, limited use of external validation, and an almost complete absence of prospective study designs. Only a minority of studies employed diagnostic cohorts that reflect real-world aneurysm prevalence, and few reported all essential performance metrics, such as patient-wise and lesion-wise sensitivity, specificity, and false positives per case. These limitations suggest that current studies remain at the stage of technical validation, with high risks of bias and limited clinical applicability. To facilitate real-world implementation, future research must adopt more rigorous designs, representative and diverse validation cohorts, standardized reporting practices, and greater attention to human-AI interaction.
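
For concreteness, two of the metrics the review flags as essential, lesion-wise sensitivity and false positives per case, can be computed from per-case detection counts as in the sketch below; the CaseResult container and the counting convention are illustrative assumptions.

```python
# Hedged sketch: patient-wise sensitivity, lesion-wise sensitivity, and false
# positives per case, from per-case detection counts (counting convention assumed).
from dataclasses import dataclass

@dataclass
class CaseResult:
    n_lesions: int       # ground-truth aneurysms in this case
    n_detected: int      # of those, how many the model found
    n_false_pos: int     # model detections matching no lesion

def summarize(cases: list[CaseResult]) -> dict:
    total_lesions = sum(c.n_lesions for c in cases)
    detected = sum(c.n_detected for c in cases)
    cases_with_lesions = [c for c in cases if c.n_lesions]
    return {
        "lesion_wise_sensitivity": detected / total_lesions,
        "false_positives_per_case": sum(c.n_false_pos for c in cases) / len(cases),
        # A positive case counts as detected if at least one lesion is found.
        "patient_wise_sensitivity":
            sum(1 for c in cases_with_lesions if c.n_detected) / len(cases_with_lesions),
    }

print(summarize([CaseResult(1, 1, 0), CaseResult(2, 1, 3), CaseResult(1, 0, 1)]))
```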

Detecting microcephaly and macrocephaly from ultrasound images using artificial intelligence.

Mengistu AK, Assaye BT, Flatie AB, Mossie Z

PubMed · May 26, 2025
Microcephaly and macrocephaly, abnormal congenital markers, are associated with developmental and neurologic deficits, so early ultrasound imaging is medically imperative. However, resource-limited countries such as Ethiopia face shortages of trained personnel and diagnostic machines that prevent accurate and continuous diagnosis. This study aims to develop a fetal head abnormality detection model from ultrasound images via deep learning. Data were collected from three Ethiopian healthcare facilities to increase model generalizability; the recruitment period ran from November 9 to November 30, 2024. Several preprocessing techniques were applied, such as augmentation, noise reduction, and normalization. SegNet, UNet, FCN, MobileNetV2, and EfficientNet-B0 were applied to segment and measure fetal head structures in ultrasound images. The measurements were classified as microcephaly, macrocephaly, or normal using WHO guidelines for gestational age, and model performance was then compared with that of industry experts. Evaluation metrics included accuracy, precision, recall, the F1 score, and the Dice coefficient. The study demonstrated the feasibility of using SegNet for automatic segmentation, measurement of fetal head abnormalities, and classification of macrocephaly and microcephaly, with an accuracy of 98% and a Dice coefficient of 0.97. Compared with industry experts, the model achieved accuracies of 92.5% and 91.2% for the biparietal diameter (BPD) and head circumference (HC) measurements, respectively. Deep learning models can enhance prenatal diagnosis workflows, especially in resource-constrained settings. Future work should optimize model performance, explore more complex models, and expand datasets to improve generalizability. If adopted, these technologies can support prenatal care delivery. Clinical trial number: not applicable.
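
The classification step can be pictured as a z-score test of head circumference (HC) against gestational-age norms; the ±2 SD cutoff and the tiny norms table in the sketch below are placeholder assumptions, not values from the paper or the WHO tables.

```python
# Illustrative classification of head circumference (HC) by gestational age
# (GA). The norms table and the +/-2 SD cutoffs are placeholder assumptions.
HC_NORMS_MM = {  # GA weeks -> (mean HC, SD); illustrative values only
    20: (175.0, 10.0),
    28: (262.0, 12.0),
    36: (322.0, 14.0),
}

def classify_head(hc_mm: float, ga_weeks: int, z_cut: float = 2.0) -> str:
    mean, sd = HC_NORMS_MM[ga_weeks]
    z = (hc_mm - mean) / sd          # standardize against the GA norm
    if z < -z_cut:
        return "microcephaly"
    if z > z_cut:
        return "macrocephaly"
    return "normal"

print(classify_head(150.0, 20))  # microcephaly (z = -2.5 under these norms)
print(classify_head(265.0, 28))  # normal
```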

Auto-segmentation of cerebral cavernous malformations using a convolutional neural network.

Chou CJ, Yang HC, Lee CC, Jiang ZH, Chen CJ, Wu HM, Lin CF, Lai IC, Peng SJ

PubMed · May 26, 2025
This paper presents a deep learning model for the automated segmentation of cerebral cavernous malformations (CCMs). The model was trained using treatment planning data from 199 Gamma Knife (GK) exams, comprising 171 cases with a single CCM and 28 cases with multiple CCMs. The training data included initial MRI images with target CCM regions manually annotated by neurosurgeons. To extract the brain parenchyma, we employed a mask region-based convolutional neural network (Mask R-CNN); the extracted data were then processed using a 3D convolutional neural network known as DeepMedic. The efficacy of the brain parenchyma extraction model was demonstrated via five-fold cross-validation, yielding an average Dice similarity coefficient of 0.956 ± 0.002. The CCM segmentation models achieved an average Dice similarity coefficient of 0.741 ± 0.028 based solely on T2W images. The Dice similarity coefficients by Zabramski classification were as follows: type I, 0.743; type II, 0.742; and type III, 0.740. We also developed a user-friendly graphical user interface to facilitate the use of these models in clinical analysis. In summary, the proposed deep learning model automates CCM segmentation with sufficient performance across the Zabramski classifications. Clinical trial number: not applicable.
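
Since the Dice similarity coefficient is the headline metric throughout, a minimal reference implementation over binary masks is sketched below.

```python
# Minimal Dice similarity coefficient for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping cubic masks in a 3D volume.
pred = np.zeros((64, 64, 64), dtype=bool); pred[20:40, 20:40, 20:40] = True
gt = np.zeros_like(pred); gt[22:40, 20:40, 20:40] = True
print(round(dice(pred, gt), 3))  # ~0.947
```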

An Explainable Diagnostic Framework for Neurodegenerative Dementias via Reinforcement-Optimized LLM Reasoning

Andrew Zamai, Nathanael Fijalkow, Boris Mansencal, Laurent Simon, Eloi Navet, Pierrick Coupe

arXiv preprint · May 26, 2025
The differential diagnosis of neurodegenerative dementias is a challenging clinical task, mainly because of the overlap in symptom presentation and the similarity of patterns observed in structural neuroimaging. To improve diagnostic efficiency and accuracy, deep learning-based methods such as Convolutional Neural Networks and Vision Transformers have been proposed for the automatic classification of brain MRIs. However, despite their strong predictive performance, these models find limited clinical utility due to their opaque decision-making. In this work, we propose a framework that integrates two core components to enhance diagnostic transparency. First, we introduce a modular pipeline for converting 3D T1-weighted brain MRIs into textual radiology reports. Second, we explore the potential of modern Large Language Models (LLMs) to assist clinicians in the differential diagnosis between frontotemporal dementia subtypes, Alzheimer's disease, and normal aging based on the generated reports. To bridge the gap between predictive accuracy and explainability, we employ reinforcement learning to incentivize diagnostic reasoning in LLMs. Without requiring supervised reasoning traces or distillation from larger models, our approach enables the emergence of structured diagnostic rationales grounded in neuroimaging findings. Unlike post-hoc explainability methods that retrospectively justify model decisions, our framework generates diagnostic rationales as part of the inference process, producing causally grounded explanations that inform and guide the model's decision-making. In doing so, our framework matches the diagnostic performance of existing deep learning methods while offering rationales that support its diagnostic conclusions.
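
The reinforcement signal described here can be pictured as a verifiable outcome reward computed on the generated rationale; the sketch below assumes a final "Diagnosis:" line and an invented label set, and is not the paper's implementation.

```python
# Hedged sketch of a verifiable outcome reward for RL fine-tuning of an LLM
# diagnostician (assumed format: the completion ends with a "Diagnosis:" line).
import re

LABELS = {"AD", "FTD", "normal aging"}  # assumed label set, for illustration

def diagnosis_reward(completion: str, gold: str) -> float:
    """1.0 for a correct final diagnosis, small credit for a valid label."""
    match = re.search(r"Diagnosis:\s*(.+)\s*$", completion.strip(),
                      re.IGNORECASE)
    if not match:
        return 0.0                    # unparseable output earns nothing
    predicted = match.group(1).strip()
    if predicted not in LABELS:
        return 0.0
    return 1.0 if predicted == gold else 0.1  # format bonus for valid label

print(diagnosis_reward("...atrophy pattern suggests AD.\nDiagnosis: AD", "AD"))
```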

Predicting treatment response in individuals with major depressive disorder using structural MRI-based similarity features.

Song S, Wang S, Gao J, Zhu L, Zhang W, Wang Y, Wang D, Zhang D, Wang K

PubMed · May 26, 2025
Major Depressive Disorder (MDD) is a prevalent mental health condition with significant societal impact. Structural magnetic resonance imaging (sMRI) and machine learning have shown promise in psychiatry, offering insights into brain abnormalities in MDD. However, predicting treatment response remains challenging. This study leverages inter-brain similarity from sMRI as a novel feature to enhance prediction accuracy and explore disease mechanisms, and evaluates the method's generalizability across adult and adolescent cohorts. The study included 172 participants; based on remission status, 39 participants from the Hangzhou Dataset and 34 from the Jinan Dataset were selected for further analysis. Three methods were used to extract brain similarity features, followed by a statistical test for feature selection. Six machine learning classifiers were employed to predict treatment response, and their generalizability was tested on the Jinan Dataset. Analyses comparing the remission and non-remission groups were conducted to identify brain regions associated with treatment response. Brain similarity features outperformed traditional metrics in predicting treatment outcomes, with the highest accuracy achieved by the model using these features. Between-group analyses revealed that the remission group had lower gray matter volume and density in the right precentral gyrus but higher white matter volume (WMV). In the Jinan Dataset, significant differences were observed in the right cerebellum and fusiform gyrus, with higher WMV and density in the remission group. This study demonstrates that brain similarity features combined with machine learning can predict treatment response in MDD with moderate success across age groups. These findings emphasize the importance of considering age-related differences in treatment planning to personalize care. Clinical trial number: not applicable.
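
One simple reading of inter-brain similarity features is to represent each subject by their row of a subject-by-subject similarity matrix computed over sMRI-derived feature vectors; the correlation similarity, the linear SVM, and the synthetic data in the sketch below are illustrative assumptions.

```python
# Hedged sketch: represent each subject by their similarity to all subjects
# (correlation over sMRI feature vectors), then classify remission status.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_voxels = 39, 5000
features = rng.normal(size=(n_subjects, n_voxels))  # stand-in sMRI features
remission = rng.integers(0, 2, size=n_subjects)     # stand-in labels

# Subject-by-subject similarity matrix; row i is subject i's feature vector.
similarity = np.corrcoef(features)

scores = cross_val_score(SVC(kernel="linear"), similarity, remission, cv=5)
print("CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```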