Page 117 of 124 · 1236 results

Machine learning-based prognostic subgrouping of glioblastoma: A multicenter study.

Akbari H, Bakas S, Sako C, Fathi Kazerooni A, Villanueva-Meyer J, Garcia JA, Mamourian E, Liu F, Cao Q, Shinohara RT, Baid U, Getka A, Pati S, Singh A, Calabrese E, Chang S, Rudie J, Sotiras A, LaMontagne P, Marcus DS, Milchenko M, Nazeri A, Balana C, Capellades J, Puig J, Badve C, Barnholtz-Sloan JS, Sloan AE, Vadmal V, Waite K, Ak M, Colen RR, Park YW, Ahn SS, Chang JH, Choi YS, Lee SK, Alexander GS, Ali AS, Dicker AP, Flanders AE, Liem S, Lombardo J, Shi W, Shukla G, Griffith B, Poisson LM, Rogers LR, Kotrotsou A, Booth TC, Jain R, Lee M, Mahajan A, Chakravarti A, Palmer JD, DiCostanzo D, Fathallah-Shaykh H, Cepeda S, Santonocito OS, Di Stefano AL, Wiestler B, Melhem ER, Woodworth GF, Tiwari P, Valdes P, Matsumoto Y, Otani Y, Imoto R, Aboian M, Koizumi S, Kurozumi K, Kawakatsu T, Alexander K, Satgunaseelan L, Rulseh AM, Bagley SJ, Bilello M, Binder ZA, Brem S, Desai AS, Lustig RA, Maloney E, Prior T, Amankulor N, Nasrallah MP, O'Rourke DM, Mohan S, Davatzikos C

pubmed · May 15 2025
Glioblastoma (GBM) is the most aggressive adult primary brain cancer, characterized by significant heterogeneity, posing challenges for patient management, treatment planning, and clinical trial stratification. We developed a highly reproducible, personalized prognostication and clinical subgrouping system using machine learning (ML) on routine clinical data, magnetic resonance imaging (MRI), and molecular measures from 2838 demographically diverse patients across 22 institutions and 3 continents. Patients were stratified into favorable, intermediate, and poor prognostic subgroups (I, II, and III) using Kaplan-Meier analysis and a Cox proportional hazards model (hazard ratios [HR]). The ML model stratified patients into distinct prognostic subgroups with HRs between subgroups I-II and I-III of 1.62 (95% CI: 1.43-1.84, P < .001) and 3.48 (95% CI: 2.94-4.11, P < .001), respectively. Analysis of imaging features revealed several tumor properties contributing unique prognostic value, supporting the feasibility of a generalizable prognostic classification system in a diverse cohort. Our ML model demonstrates extensive reproducibility and online accessibility, utilizing routine imaging data rather than complex imaging protocols. This platform offers a unique approach to personalized patient management and clinical trial stratification in GBM.
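The subgroup comparison above rests on Kaplan-Meier survival estimation. As a minimal illustration (not the authors' pipeline), the product-limit estimator can be sketched in a few lines, here applied to hypothetical follow-up data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times  : follow-up times
    events : 1 if the event occurred, 0 if censored
    Returns the distinct event times and S(t) at each of them.
    """
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    s = 1.0
    t_out, surv = [], []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)              # still under observation at t
        d = np.sum((times == t) & (events == 1))  # events occurring at t
        s *= 1.0 - d / at_risk
        t_out.append(t)
        surv.append(s)
    return np.array(t_out), np.array(surv)

# Hypothetical follow-up (months): events at 6 and 12, censoring at 12 and 20.
t, s = kaplan_meier([6, 12, 12, 20], [1, 0, 1, 0])
```

Hazard ratios between subgroups, as in the abstract, would then come from fitting a Cox model on top of such survival data.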

Does Whole Brain Radiomics on Multimodal Neuroimaging Make Sense in Neuro-Oncology? A Proof of Concept Study.

Danilov G, Kalaeva D, Vikhrova N, Shugay S, Telysheva E, Goraynov S, Kosyrkova A, Pavlova G, Pronin I, Usachev D

pubmed · May 15 2025
Employing a whole-brain (WB) mask as a region of interest for extracting radiomic features is a feasible, albeit less common, approach in neuro-oncology research. This study aims to evaluate the relationship between WB radiomic features, derived from various neuroimaging modalities in patients with gliomas, and some key baseline characteristics of patients and tumors such as sex, histological tumor type, WHO Grade (2021), IDH1 mutation status, necrosis lesions, contrast enhancement, T/N peak value and metabolic tumor volume. Forty-one patients (average age 50 ± 15 years, 21 females and 20 males) with supratentorial glial tumors were enrolled in this study. A total of 38,720 radiomic features were extracted. Cluster analysis revealed that whole-brain images of biologically different tumors could be distinguished to a certain extent based on their imaging biomarkers. Machine learning capabilities to detect image properties like contrast-enhanced or necrotic zones validated radiomic features in objectifying image semantics. Furthermore, the predictive capability of imaging biomarkers in determining tumor histology, grade and mutation type underscores their diagnostic potential. Whole-brain radiomics using multimodal neuroimaging data appeared to be informative in neuro-oncology, making research in this area well justified.

Zero-Shot Multi-modal Large Language Model v.s. Supervised Deep Learning: A Comparative Study on CT-Based Intracranial Hemorrhage Subtyping

Yinuo Wang, Yue Zeng, Kai Chen, Cai Meng, Chao Pan, Zhouping Tang

arxiv preprint · May 14 2025
Introduction: Timely identification of intracranial hemorrhage (ICH) subtypes on non-contrast computed tomography is critical for prognosis prediction and therapeutic decision-making, yet remains challenging due to low contrast and blurred boundaries. This study evaluates the performance of zero-shot multi-modal large language models (MLLMs) compared to traditional deep learning methods in ICH binary classification and subtyping. Methods: We utilized a dataset provided by RSNA, comprising 192 NCCT volumes. The study compares various MLLMs, including GPT-4o, Gemini 2.0 Flash, and Claude 3.5 Sonnet V2, with conventional deep learning models, including ResNet50 and Vision Transformer. Carefully crafted prompts were used to guide MLLMs in tasks such as ICH presence, subtype classification, localization, and volume estimation. Results: The results indicate that in the ICH binary classification task, traditional deep learning models comprehensively outperform MLLMs. For subtype classification, MLLMs also exhibit inferior performance compared to traditional deep learning models, with Gemini 2.0 Flash achieving a macro-averaged precision of 0.41 and a macro-averaged F1 score of 0.31. Conclusion: While MLLMs excel in interactive capabilities, their overall accuracy in ICH subtyping is inferior to that of deep networks. However, MLLMs enhance interpretability through language interactions, indicating potential in medical imaging analysis. Future efforts will focus on model refinement and developing more precise MLLMs to improve performance in three-dimensional medical image processing.
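The macro-averaged metrics reported for Gemini 2.0 Flash average per-class scores with equal weight, so rare hemorrhage subtypes count as much as common ones. A minimal sketch of the computation (not the study's evaluation code), run on hypothetical subtype labels:

```python
def macro_precision_f1(y_true, y_pred):
    """Macro-averaged precision and F1: compute per-class scores,
    then average them with equal weight per class."""
    classes = sorted(set(y_true))
    precs, f1s = [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec)
        f1s.append(f1)
    return sum(precs) / len(classes), sum(f1s) / len(classes)

# Hypothetical hemorrhage-subtype labels (EDH/SDH/SAH), not the study's data.
macro_p, macro_f1 = macro_precision_f1(
    ["EDH", "SDH", "SAH", "SDH"],
    ["EDH", "SAH", "SAH", "SDH"],
)
```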

A fully automatic radiomics pipeline for postoperative facial nerve function prediction of vestibular schwannoma.

Song G, Li K, Wang Z, Liu W, Xue Q, Liang J, Zhou Y, Geng H, Liu D

pubmed · May 14 2025
Vestibular schwannoma (VS) is the most prevalent intracranial schwannoma. Surgery is one of the options for the treatment of VS, with the preservation of facial nerve (FN) function being the primary objective. Therefore, postoperative FN function prediction is essential. However, achieving automation for such a method remains a challenge. In this study, we proposed a fully automatic deep learning approach based on multi-sequence magnetic resonance imaging (MRI) to predict FN function after surgery in VS patients. We first developed a segmentation network, 2.5D Trans-UNet, which combined Transformer and U-Net to optimize contour segmentation for radiomic feature extraction. Next, we built a deep learning network integrating a 1D Convolutional Neural Network (1DCNN) and a Gated Recurrent Unit (GRU) to predict postoperative FN function using the extracted features. We trained and tested the 2.5D Trans-UNet segmentation network on public and private datasets, achieving accuracies of 89.51% and 90.66%, respectively, confirming the model's strong performance. Feature extraction and selection were then performed on the private dataset's segmentation results from 2.5D Trans-UNet. The selected features were used to train the 1DCNN-GRU network for classification. The results showed that our proposed fully automatic radiomics pipeline outperformed the traditional radiomics pipeline on the test set, achieving an accuracy of 88.64%, demonstrating its effectiveness in predicting postoperative FN function in VS patients. Our proposed automatic method has the potential to become a valuable decision-making tool in neurosurgery, assisting neurosurgeons in making more informed decisions regarding surgical interventions and improving the treatment of VS patients.

Early detection of Alzheimer's disease progression stages using hybrid of CNN and transformer encoder models.

Almalki H, Khadidos AO, Alhebaishi N, Senan EM

pubmed · May 14 2025
Alzheimer's disease (AD) is a neurodegenerative disorder that affects memory and cognitive functions. Manual diagnosis is prone to human error, often leading to misdiagnosis or delayed detection. MRI techniques help visualize the fine tissues of the brain cells, indicating the stage of disease progression. Artificial intelligence techniques analyze MRI with high accuracy and extract subtle features that are difficult to diagnose manually. In this study, a modern methodology was designed that combines the power of CNN models (ResNet101 and GoogLeNet) to extract local deep features with the power of Vision Transformer (ViT) models to extract global features and find relationships between image patches. First, the MRI images of the Open Access Series of Imaging Studies (OASIS) dataset were improved by two filters: the adaptive median filter (AMF) and the Laplacian filter. The ResNet101 and GoogLeNet models were modified to suit the feature extraction task and reduce computational cost. The ViT architecture was modified to reduce the computational cost while increasing the number of attention heads to further discover global features and relationships between image patches. The enhanced images were fed into the proposed ViT-CNN methodology: the modified ResNet101 and GoogLeNet models extracted deep feature maps, which were then fed into the modified ViT model. The feature maps were partitioned into 32 feature maps using ResNet101 and 16 feature maps using GoogLeNet, both with a size of 64 features. The feature maps were encoded to recognize the spatial arrangement of the patches and preserve the relationships between them, helping the self-attention layers distinguish between patches based on their positions. They were fed to the transformer encoder, which consisted of six blocks with multiple attention heads to focus on different patterns or regions simultaneously. Finally, the MLP classification layers classify each image into one of the four dataset classes. The improved ResNet101-ViT hybrid methodology outperformed the GoogLeNet-ViT hybrid methodology. ResNet101-ViT achieved 98.7% accuracy, 95.05% AUC, 96.45% precision, 99.68% sensitivity, and 97.78% specificity.

A multi-layered defense against adversarial attacks in brain tumor classification using ensemble adversarial training and feature squeezing.

Yinusa A, Faezipour M

pubmed · May 14 2025
Deep learning, particularly convolutional neural networks (CNNs), has proven valuable for brain tumor classification, aiding diagnostic and therapeutic decisions in medical imaging. Despite their accuracy, these models are vulnerable to adversarial attacks, compromising their reliability in clinical settings. In this research, we utilized a VGG16-based CNN model to classify brain tumors, achieving 96% accuracy on clean magnetic resonance imaging (MRI) data. To assess robustness, we exposed the model to Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks, which reduced accuracy to 32% and 13%, respectively. We then applied a multi-layered defense strategy, including adversarial training with FGSM and PGD examples and feature squeezing techniques such as bit-depth reduction and Gaussian blurring. This approach improved model resilience, achieving 54% accuracy on FGSM and 47% on PGD adversarial examples. Our results highlight the importance of proactive defense strategies for maintaining the reliability of AI in medical imaging under adversarial conditions.
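Of the two feature-squeezing defenses mentioned, bit-depth reduction is the simplest to illustrate: quantizing pixel intensities discards the low-amplitude perturbation an attacker relies on. A rough sketch (not the paper's implementation), on a toy signal:

```python
import numpy as np

def squeeze_bit_depth(x, bits):
    """Bit-depth-reduction feature squeezing: quantize inputs in [0, 1]
    to 2**bits levels, discarding low-amplitude perturbations."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

# Toy signal: a +/-0.05 "perturbation" on saturated pixels vanishes
# after a 1-bit squeeze.
clean = np.array([0.0, 1.0, 0.0, 1.0])
adversarial = clean + np.array([0.05, -0.05, 0.05, -0.05])
squeezed = squeeze_bit_depth(adversarial, bits=1)
```

In practice (as in the paper's setup), such squeezing is combined with adversarial training rather than used alone, since aggressive quantization also costs clean accuracy.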

[Radiosurgery of benign intracranial lesions: indications, results, and perspectives].

Danthez N, De Cournuaud C, Pistocchi S, Aureli V, Giammattei L, Hottinger AF, Schiappacasse L

pubmed · May 14 2025
Stereotactic radiosurgery (SRS) is a non-invasive technique that is transforming the management of benign intracranial lesions through its precision and preservation of healthy tissues. It is effective for meningiomas, trigeminal neuralgia (TN), pituitary adenomas, vestibular schwannomas, and arteriovenous malformations. SRS ensures high tumor control rates, particularly for Grade I meningiomas and vestibular schwannomas. For refractory TN, it provides initial pain relief in more than 80% of cases. The advent of technologies such as PET-MRI, hypofractionation, and artificial intelligence is further improving treatment precision, but challenges remain, including the management of late side effects and the standardization of practice.

Fed-ComBat: A Generalized Federated Framework for Batch Effect Harmonization in Collaborative Studies

Silva, S., Lorenzi, M., Altmann, A., Oxtoby, N.

biorxiv preprint · May 14 2025
In neuroimaging research, the utilization of multi-centric analyses is crucial for obtaining sufficient sample sizes and representative clinical populations. Data harmonization techniques are typically part of the pipeline in multi-centric studies to address systematic biases and ensure the comparability of the data. However, most multi-centric studies require centralized data, which may result in exposing individual patient information. This poses a significant challenge in data governance, leading to the implementation of regulations such as the GDPR and the CCPA, which attempt to address these concerns but also hinder data access for researchers. Federated learning offers a privacy-preserving alternative approach in machine learning, enabling models to be collaboratively trained on decentralized data without the need for data centralization or sharing. In this paper, we present Fed-ComBat, a federated framework for batch effect harmonization on decentralized data. Fed-ComBat extends existing centralized linear methods such as ComBat (and its distributed counterpart, d-ComBat) and nonlinear approaches like ComBat-GAM to account for potentially nonlinear and multivariate covariate effects. By doing so, Fed-ComBat enables the preservation of nonlinear covariate effects without requiring centralization of data and without prior knowledge of which variables should be considered nonlinear or their interactions, differentiating it from ComBat-GAM. We assessed Fed-ComBat and existing approaches on simulated data and multiple cohorts comprising healthy controls (CN) and subjects with various disorders such as Parkinson's disease (PD), Alzheimer's disease (AD), and autism spectrum disorder (ASD). The results of our study show that Fed-ComBat performs better than centralized ComBat when dealing with nonlinear effects and is on par with centralized methods like ComBat-GAM.
Through experiments using synthetic data, Fed-ComBat demonstrates a superior ability to reconstruct the target unbiased function, achieving a 35% improvement (RMSE=0.5952) over d-ComBat (RMSE=0.9162) and a 12% improvement over d-ComBat-GAM (RMSE=0.6751), our federated extension of ComBat-GAM. Additionally, Fed-ComBat achieves results comparable to centralized methods like ComBat-GAM for MRI-derived phenotypes without requiring prior knowledge of potential nonlinearities.
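At its core, ComBat-style harmonization is a location-scale adjustment per batch. The sketch below is a deliberately simplified illustration (it omits the empirical-Bayes shrinkage and covariate model that ComBat and Fed-ComBat actually use), showing the basic idea on hypothetical two-site data:

```python
import numpy as np

def location_scale_harmonize(values, batches):
    """Toy location-scale harmonization: rescale each batch to the
    pooled mean and standard deviation. Real ComBat additionally
    shrinks batch parameters with empirical Bayes and preserves
    covariate effects; this sketch omits both."""
    values = np.asarray(values, float)
    batches = np.asarray(batches)
    mu, sd = values.mean(), values.std()
    out = np.empty_like(values)
    for b in np.unique(batches):
        m = batches == b
        out[m] = (values[m] - values[m].mean()) / values[m].std() * sd + mu
    return out

# Hypothetical feature from two sites with a strong additive batch effect.
harmonized = location_scale_harmonize(
    [1.0, 2.0, 3.0, 11.0, 12.0, 13.0], [0, 0, 0, 1, 1, 1]
)
```

Fed-ComBat's contribution is to estimate such batch parameters (and nonlinear covariate effects) collaboratively across sites without pooling the raw values.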

Deep learning for cerebral vascular occlusion segmentation: A novel ConvNeXtV2 and GRN-integrated U-Net framework for diffusion-weighted imaging.

Ince S, Kunduracioglu I, Algarni A, Bayram B, Pacal I

pubmed · May 14 2025
Cerebral vascular occlusion is a serious condition that can lead to stroke and permanent neurological damage due to insufficient oxygen and nutrients reaching brain tissue. Early diagnosis and accurate segmentation are critical for effective treatment planning. Due to its high soft-tissue contrast, Magnetic Resonance Imaging (MRI) is commonly used for detecting such occlusions and their consequences, such as ischemic stroke. However, challenges such as low contrast, noise, and heterogeneous lesion structures in MRI images complicate manual segmentation and often lead to misinterpretations. As a result, deep learning-based Computer-Aided Diagnosis (CAD) systems are essential for faster and more accurate diagnosis and treatment, although they can face challenges such as high computational costs and difficulties in segmenting small or irregular lesions. This study proposes a novel U-Net architecture enhanced with ConvNeXtV2 blocks and GRN-based Multi-Layer Perceptrons (MLP) to address these challenges in cerebral vascular occlusion segmentation. This is the first application of ConvNeXtV2 in this domain. The proposed model significantly improves segmentation accuracy, even in low-contrast regions, while maintaining high computational efficiency, which is crucial for real-world clinical applications. To reduce false positives and improve overall accuracy, small lesions (≤5 pixels) were removed in the preprocessing step with the support of expert clinicians. Experimental results on the ISLES 2022 dataset showed superior performance with an Intersection over Union (IoU) of 0.8015 and a Dice coefficient of 0.8894. Comparative analyses indicate that the proposed model achieves higher segmentation accuracy than existing U-Net variants and other methods, offering a promising solution for clinical use.
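The reported IoU (0.8015) and Dice (0.8894) are linked by the identity Dice = 2·IoU / (1 + IoU) for binary masks. A minimal sketch of both metrics (not the paper's evaluation code), on toy masks:

```python
import numpy as np

def iou_dice(pred, target):
    """IoU and Dice for binary segmentation masks.
    For any pair of masks, Dice = 2*IoU / (1 + IoU)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

# Toy 2x2 masks, not ISLES data.
iou, dice = iou_dice(np.array([[1, 1], [0, 0]]), np.array([[1, 0], [1, 0]]))
```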

Comparative performance of large language models in structuring head CT radiology reports: multi-institutional validation study in Japan.

Takita H, Walston SL, Mitsuyama Y, Watanabe K, Ishimaru S, Ueda D

pubmed · May 14 2025
To compare the diagnostic performance of three proprietary large language models (LLMs)-Claude, GPT, and Gemini-in structuring free-text Japanese radiology reports for intracranial hemorrhage and skull fractures, and to assess the impact of three different prompting approaches on model accuracy. In this retrospective study, head CT reports from the Japan Medical Imaging Database between 2018 and 2023 were collected. Two board-certified radiologists established the ground truth regarding intracranial hemorrhage and skull fractures through independent review and consensus. Each radiology report was analyzed by three LLMs using three prompting strategies-Standard, Chain of Thought, and Self Consistency prompting. Diagnostic performance (accuracy, precision, recall, and F1-score) was calculated for each LLM-prompt combination and compared using McNemar's tests with Bonferroni correction. Misclassified cases underwent qualitative error analysis. A total of 3949 head CT reports from 3949 patients (mean age 59 ± 25 years, 56.2% male) were enrolled. Across all institutions, 856 patients (21.6%) had intracranial hemorrhage and 264 patients (6.6%) had skull fractures. All nine LLM-prompt combinations achieved very high accuracy. Claude demonstrated significantly higher accuracy for intracranial hemorrhage than GPT and Gemini, and also outperformed Gemini for skull fractures (p < 0.0001). Gemini's performance improved notably with Chain of Thought prompting. Error analysis revealed common challenges including ambiguous phrases and findings unrelated to intracranial hemorrhage or skull fractures, underscoring the importance of careful prompt design. All three proprietary LLMs exhibited strong performance in structuring free-text head CT reports for intracranial hemorrhage and skull fractures. While the choice of prompting method influenced accuracy, all models demonstrated robust potential for clinical and research applications. Future work should refine the prompts and validate these approaches in prospective, multilingual settings.
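The McNemar's tests used to compare LLM-prompt combinations reduce to counting discordant classifications, i.e., reports one model structured correctly and the other did not. A minimal sketch of the continuity-corrected statistic (with hypothetical counts, not the study's data):

```python
def mcnemar_statistic(b, c):
    """Continuity-corrected McNemar chi-square statistic.
    b, c: counts of the two kinds of discordant pairs (cases one
    model classified correctly and the other incorrectly)."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical discordant counts; the statistic is compared against the
# chi-square(1) critical value of 3.84 for significance at p < 0.05
# (before any Bonferroni correction, as applied in the study).
stat = mcnemar_statistic(25, 10)
```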
