
Multimodal MRI radiomics enhances epilepsy prediction in pediatric low-grade glioma patients.

Tang T, Wu Y, Dong X, Zhai X

pubmed logopapers May 22 2025
Determining whether pediatric patients with low-grade gliomas (pLGGs) have glioma-associated epilepsy (GAE) is a crucial aspect of preoperative evaluation. We therefore propose an innovative machine learning- and deep learning-based framework for rapid, non-invasive preoperative assessment of GAE in pediatric patients using magnetic resonance imaging (MRI). Our radiomics-based approach integrates tumor and peritumoral features extracted from preoperative multiparametric MRI scans to accurately and non-invasively predict the occurrence of tumor-related epilepsy in pediatric patients. The resulting multimodal MRI radiomics model predicted epilepsy in pLGG patients with an AUC of 0.969. Integrating multi-sequence MRI data significantly improved predictive performance, with the Stochastic Gradient Descent (SGD) classifier showing robust results (sensitivity: 0.882, specificity: 0.956). Our model can accurately predict whether pLGG patients have tumor-related epilepsy, which could guide surgical decision-making. Future studies should focus on similarly standardized preoperative evaluations in pediatric epilepsy centers to increase training data and enhance the generalizability of the model.
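A minimal sketch of the SGD classifier stage named in this abstract, with sensitivity and specificity computed from a confusion matrix. The data, feature counts, and results below are synthetic stand-ins, not the study's radiomic features:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for radiomic feature vectors from multiparametric MRI.
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# SGD-trained linear classifier; feature scaling matters for SGD convergence.
clf = make_pipeline(StandardScaler(), SGDClassifier(random_state=0))
clf.fit(X_tr, y_tr)

# Sensitivity and specificity from the binary confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```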

Influence of content-based image retrieval on the accuracy and inter-reader agreement of usual interstitial pneumonia CT pattern classification.

Park S, Hwang HJ, Yun J, Chae EJ, Choe J, Lee SM, Lee HN, Shin SY, Park H, Jeong H, Kim MJ, Lee JH, Jo KW, Baek S, Seo JB

pubmed logopapers May 22 2025
To investigate whether content-based image retrieval (CBIR) of similar chest CT images can help with usual interstitial pneumonia (UIP) CT pattern classification among readers with varying levels of experience. This retrospective study included patients who underwent high-resolution chest CT between 2013 and 2015 for the initial workup of fibrosing interstitial lung disease. UIP classifications were assigned to CT images by three thoracic radiologists, which served as the ground truth. One hundred patients were selected as queries. The CBIR system retrieved the top three similar CT images with UIP classifications using a deep learning algorithm. The diagnostic accuracies and inter-reader agreement of nine readers before and after CBIR were evaluated. Of 587 patients (mean age, 63 years; 356 men), 100 query cases (26 UIP patterns, 26 probable UIP patterns, 5 indeterminate for UIP, and 43 alternative diagnoses) were selected. After CBIR, the mean accuracy (61.3% to 67.1%; p = 0.011) and inter-reader agreement (Fleiss kappa, 0.400 to 0.476; p = 0.003) were slightly improved. The accuracies of the radiologist group for all CT patterns except indeterminate for UIP increased after CBIR; however, the increases did not reach statistical significance. The resident and pulmonologist groups demonstrated mixed results: accuracy decreased for the UIP pattern, increased for alternative diagnoses, and varied for the others. CBIR slightly improved diagnostic accuracy and inter-reader agreement in UIP pattern classification. However, its impact varied with the readers' level of experience, suggesting that the current CBIR system may be most beneficial when used to complement the interpretations of experienced readers.
Question: CT pattern classification is important for the standardized assessment and management of idiopathic pulmonary fibrosis, but it requires radiologic expertise and shows inter-reader variability.
Findings: CBIR slightly improved overall diagnostic accuracy and inter-reader agreement for UIP CT pattern classification.
Clinical relevance: The proposed CBIR system may guide consistent work-up and treatment strategies by enhancing accuracy and inter-reader agreement in UIP CT pattern classification among experienced readers, whose expertise can effectively interact with CBIR results.
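The retrieval step of a CBIR system can be sketched as nearest-neighbor search over deep-learning embeddings of archived cases. The embeddings, archive size, and label categories below are illustrative assumptions, not the paper's system:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Synthetic stand-ins for embeddings of archived chest CTs, each labeled
# with one of four UIP categories.
rng = np.random.default_rng(0)
archive_embeddings = rng.normal(size=(500, 128))   # 500 archived cases
archive_labels = rng.integers(0, 4, size=500)      # 4 UIP categories

# Index the archive; cosine distance is a common choice for embeddings.
index = NearestNeighbors(n_neighbors=3, metric="cosine")
index.fit(archive_embeddings)

# Retrieve the 3 most similar archived cases for a query CT embedding;
# the reader sees those cases together with their reference labels.
query = rng.normal(size=(1, 128))
dist, idx = index.kneighbors(query)
top3_labels = archive_labels[idx[0]]
print(top3_labels)
```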

A Novel Dynamic Neural Network for Heterogeneity-Aware Structural Brain Network Exploration and Alzheimer's Disease Diagnosis.

Cui W, Leng Y, Peng Y, Bai C, Li L, Jiang X, Yuan G, Zheng J

pubmed logopapers May 22 2025
Heterogeneity is a fundamental characteristic of brain diseases, reflected not only in variable brain atrophy but also in the complexity of neural connectivity and brain networks. However, existing data-driven methods fail to provide a comprehensive analysis of brain heterogeneity. Recently, dynamic neural networks (DNNs) have shown significant advantages in capturing sample-wise heterogeneity. In this article, we therefore propose a novel dynamic heterogeneity-aware network (DHANet) to identify critical heterogeneous brain regions, explore the heterogeneous connectivity between them, and construct a heterogeneity-aware structural brain network (HGA-SBN) from structural magnetic resonance imaging (sMRI). Specifically, we first develop a 3-D dynamic ConvMixer to extract rich heterogeneous features from sMRI. Subsequently, critical brain atrophy regions are identified by dynamic prototype learning that embeds the hierarchical brain semantic structure. Finally, we employ a joint dynamic edge-correlation (JDE) modeling approach to construct the heterogeneous connectivity between these regions and analyze the HGA-SBN. To evaluate the effectiveness of DHANet, we conduct extensive experiments on three public datasets; the method achieves state-of-the-art (SOTA) performance on two classification tasks.

Radiomics-Based Early Triage of Prostate Cancer: A Multicenter Study from the CHAIMELEON Project

Vraka, A., Marfil-Trujillo, M., Ribas-Despuig, G., Flor-Arnal, S., Cerda-Alberich, L., Jimenez-Gomez, P., Jimenez-Pastor, A., Marti-Bonmati, L.

medrxiv logopreprint May 22 2025
Prostate cancer (PCa) is the most commonly diagnosed malignancy in men worldwide. Accurate triage of patients based on tumor aggressiveness and staging is critical for selecting appropriate management pathways. While magnetic resonance imaging (MRI) has become a mainstay in PCa diagnosis, most predictive models rely on multiparametric imaging or invasive inputs, limiting generalizability in real-world clinical settings. This study aimed to develop and validate machine learning (ML) models using radiomic features extracted from T2-weighted MRI, alone and in combination with clinical variables, to predict ISUP grade (tumor aggressiveness), lymph node involvement (cN), and distant metastasis (cM). A retrospective multicenter cohort from three European sites in the CHAIMELEON project was analyzed. Radiomic features were extracted from prostate zone segmentations and lesion masks, following standardized preprocessing and ComBat harmonization. Feature selection and model optimization were performed using nested cross-validation and Bayesian tuning. Hybrid models were trained using XGBoost and interpreted with SHAP values. The ISUP model achieved an AUC of 0.66, while the cN and cM models reached AUCs of 0.77 and 0.80, respectively. The best-performing models consistently combined prostate zone radiomics with clinical features such as PSA, PI-RADS v2, and ISUP grade. SHAP analysis confirmed the importance of both clinical and texture-based radiomic features, with entropy and non-uniformity measures playing central roles in all tasks. Our results demonstrate the feasibility of using T2-weighted MRI and zonal radiomics for robust prediction of aggressiveness, nodal involvement, and distant metastasis in PCa. This fully automated pipeline offers an interpretable, accessible, and clinically translatable tool for first-line PCa triage, with potential integration into real-world diagnostic workflows.
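The nested cross-validation pattern described here can be sketched with an inner tuning loop wrapped in an outer evaluation loop. This toy version uses scikit-learn's grid search and gradient boosting as stand-ins for the study's Bayesian tuning and XGBoost; features, grid, and scores are illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Synthetic stand-in for harmonized radiomic + clinical features.
X, y = make_classification(n_samples=200, n_features=30, random_state=0)

# Inner loop tunes hyperparameters; the outer loop scores the tuned model
# on folds the tuning never saw, avoiding optimistic bias.
inner = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [2, 3]},
    scoring="roc_auc", cv=3,
)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(inner, X, y, scoring="roc_auc", cv=outer)
print(f"nested-CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```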

Patient Reactions to Artificial Intelligence-Clinician Discrepancies: Web-Based Randomized Experiment.

Madanay F, O'Donohue LS, Zikmund-Fisher BJ

pubmed logopapers May 22 2025
As the US Food and Drug Administration (FDA)-approved use of artificial intelligence (AI) for medical imaging rises, radiologists are increasingly integrating AI into their clinical practices. In lung cancer screening, diagnostic AI offers a second set of eyes with the potential to detect cancer earlier than human radiologists. Despite AI's promise, a potential problem with its integration is the erosion of patient confidence in clinician expertise when there is a discrepancy between the radiologist's and the AI's interpretation of the imaging findings. We examined how discrepancies between AI-derived recommendations and radiologists' recommendations affect patients' agreement with radiologists' recommendations and satisfaction with their radiologists. We also analyzed how patients' medical maximizing-minimizing preferences moderate these relationships. We conducted a randomized, between-subjects experiment with 1606 US adult participants. Assuming the role of patients, participants imagined undergoing a low-dose computerized tomography scan for lung cancer screening and receiving results and recommendations from (1) a radiologist only, (2) AI and a radiologist in agreement, (3) a radiologist who recommended more testing than AI (ie, radiologist overcalled AI), or (4) a radiologist who recommended less testing than AI (ie, radiologist undercalled AI). Participants rated the radiologist on three criteria: agreement with the radiologist's recommendation, how likely they would be to recommend the radiologist to family and friends, and how good of a provider they perceived the radiologist to be. We measured medical maximizing-minimizing preferences and categorized participants as maximizers (ie, those who seek aggressive intervention), minimizers (ie, those who prefer no or passive intervention), and neutrals (ie, those in the middle). 
Participants' agreement with the radiologist's recommendation was significantly lower when the radiologist undercalled AI (mean 4.01, SE 0.07, P<.001) than in the other 3 conditions, with no significant differences among them (radiologist overcalled AI [mean 4.63, SE 0.06], agreed with AI [mean 4.55, SE 0.07], or had no AI [mean 4.57, SE 0.06]). Similarly, participants were least likely to recommend (P<.001) and positively rate (P<.001) the radiologist who undercalled AI, with no significant differences among the other conditions. Maximizers agreed with the radiologist who overcalled AI (β=0.82, SE 0.14; P<.001) and disagreed with the radiologist who undercalled AI (β=-0.47, SE 0.14; P=.001). However, whereas minimizers disagreed with the radiologist who overcalled AI (β=-0.43, SE 0.18, P=.02), they did not significantly agree with the radiologist who undercalled AI (β=0.14, SE 0.17, P=.41). Radiologists who recommend less testing than AI may face decreased patient confidence in their expertise, but they may not face this same penalty for giving more aggressive recommendations than AI. Patients' reactions may depend in part on whether their general preferences to maximize or minimize align with the radiologists' recommendations. Future research should test communication strategies for radiologists' disclosure of AI discrepancies to patients.
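The moderation effects reported above (maximizing-minimizing preference moderating reactions to undercalling) amount to an interaction term in a regression. A toy ordinary-least-squares version with simulated data; the coefficients and sample are invented for illustration, not the study's estimates:

```python
import numpy as np

# Simulated ratings: agreement ~ undercall + maximizer + interaction.
rng = np.random.default_rng(1)
n = 400
undercall = rng.integers(0, 2, n)      # 1 = radiologist undercalled AI
maximizer = rng.integers(0, 2, n)      # 1 = medical maximizer
agreement = (4.5 - 0.5 * undercall
             - 0.4 * undercall * maximizer   # maximizers penalize undercalling more
             + rng.normal(0, 0.3, n))

# Design matrix with an interaction column; OLS via least squares.
X = np.column_stack([np.ones(n), undercall, maximizer, undercall * maximizer])
beta, *_ = np.linalg.lstsq(X, agreement, rcond=None)
print(beta)  # [intercept, undercall, maximizer, interaction]
```

A negative interaction coefficient is the signature of moderation: the penalty for undercalling is larger among maximizers.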

Mitigating Overfitting in Medical Imaging: Self-Supervised Pretraining vs. ImageNet Transfer Learning for Dermatological Diagnosis

Iván Matas, Carmen Serrano, Miguel Nogales, David Moreno, Lara Ferrándiz, Teresa Ojeda, Begoña Acha

arxiv logopreprint May 22 2025
Deep learning has transformed computer vision but relies heavily on large labeled datasets and computational resources. Transfer learning, particularly fine-tuning pretrained models, offers a practical alternative; however, models pretrained on natural image datasets such as ImageNet may fail to capture domain-specific characteristics in medical imaging. This study introduces an unsupervised learning framework that extracts high-value dermatological features instead of relying solely on ImageNet-based pretraining. We employ a Variational Autoencoder (VAE) trained from scratch on a proprietary dermatological dataset, allowing the model to learn a structured and clinically relevant latent space. This self-supervised feature extractor is then compared to an ImageNet-pretrained backbone under identical classification conditions, highlighting the trade-offs between general-purpose and domain-specific pretraining. Our results reveal distinct learning patterns. The self-supervised model achieves a final validation loss of 0.110 (-33.33%), while the ImageNet-pretrained model stagnates at 0.100 (-16.67%), indicating overfitting. Accuracy trends confirm this: the self-supervised model improves from 45% to 65% (+44.44%) with a near-zero overfitting gap, whereas the ImageNet-pretrained model reaches 87% (+50.00%) but plateaus at 75% (+19.05%), with its overfitting gap increasing to +0.060. These findings suggest that while ImageNet pretraining accelerates convergence, it also amplifies overfitting on non-clinically relevant features. In contrast, self-supervised learning achieves steady improvements, stronger generalization, and superior adaptability, underscoring the importance of domain-specific feature extraction in medical imaging.
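The "overfitting gap" this abstract tracks is simply training accuracy minus validation accuracy, monitored per epoch. A minimal sketch with a linear model standing in for the two backbones; the data and epoch count are synthetic assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an image-feature classification task.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

# Incremental training; the gap between train and validation accuracy
# widening over epochs is the overfitting signal discussed above.
clf = SGDClassifier(random_state=0)
classes = np.unique(y)
gaps = []
for epoch in range(20):
    clf.partial_fit(X_tr, y_tr, classes=classes)
    gaps.append(clf.score(X_tr, y_tr) - clf.score(X_va, y_va))
print(f"final overfitting gap: {gaps[-1]:+.3f}")
```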

A Deep Learning Vision-Language Model for Diagnosing Pediatric Dental Diseases

Pham, T.

medrxiv logopreprint May 22 2025
This study proposes a deep learning vision-language model for the automated diagnosis of pediatric dental diseases, with a focus on differentiating between caries and periapical infections. The model integrates visual features extracted from panoramic radiographs using methods of non-linear dynamics and textural encoding with textual descriptions generated by a large language model. These multimodal features are concatenated and used to train a 1D-CNN classifier. Experimental results demonstrate that the proposed model outperforms conventional convolutional neural networks and standalone language-based approaches, achieving high accuracy (90%), sensitivity (92%), precision (92%), and an AUC of 0.96. This work highlights the value of combining structured visual and textual representations in improving diagnostic accuracy and interpretability in dental radiology. The approach offers a promising direction for the development of context-aware, AI-assisted diagnostic tools in pediatric dental care.
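The fusion step described here, concatenating visual and text-derived feature vectors before classification, can be sketched in a few lines. A logistic regression stands in for the paper's 1D-CNN, and all dimensions and data are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic per-case feature vectors from two modalities.
rng = np.random.default_rng(0)
n = 200
visual = rng.normal(size=(n, 64))    # e.g. textural features from radiographs
textual = rng.normal(size=(n, 32))   # e.g. embeddings of LLM descriptions
y = rng.integers(0, 2, n)            # caries vs periapical infection (toy labels)

# Late fusion: concatenate modalities, then train one classifier on the
# joint representation.
fused = np.concatenate([visual, textual], axis=1)   # shape (n, 96)
clf = LogisticRegression(max_iter=1000).fit(fused, y)
print(fused.shape, clf.score(fused, y))
```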

Deep Learning-Based Multimodal Feature Interaction-Guided Fusion: Enhancing the Evaluation of EGFR in Advanced Lung Adenocarcinoma.

Xu J, Feng B, Chen X, Wu F, Liu Y, Yu Z, Lu S, Duan X, Chen X, Li K, Zhang W, Dai X

pubmed logopapers May 22 2025
The aim of this study is to develop a deep learning-based multimodal feature interaction-guided fusion (DL-MFIF) framework that integrates macroscopic information from computed tomography (CT) images with microscopic information from whole-slide images (WSIs) to predict the epidermal growth factor receptor (EGFR) mutations of primary lung adenocarcinoma in patients with advanced-stage disease. Data from 396 patients with lung adenocarcinoma across two medical institutions were analyzed. The data from 243 cases were divided into a training set (n=145) and an internal validation set (n=98) in a 6:4 ratio, and data from an additional 153 cases from another medical institution were included as an external validation set. All cases included CT scan images and WSIs. To integrate multimodal information, we developed the DL-MFIF framework, which leverages deep learning techniques to capture the interactions between radiomic macrofeatures derived from CT images and microfeatures obtained from WSIs. Compared to other classification models, the DL-MFIF model achieved significantly higher area under the curve (AUC) values. Specifically, the model outperformed others on both the internal validation set (AUC=0.856, accuracy=0.750) and the external validation set (AUC=0.817, accuracy=0.708). Decision curve analysis (DCA) demonstrated that the model provided superior net benefits (range 0.15-0.87). DeLong's test on the external validation set confirmed the statistical significance of the results (P<0.05). The DL-MFIF model demonstrated excellent performance in evaluating EGFR mutation status in patients with advanced lung adenocarcinoma. This model effectively aids radiologists in accurately classifying EGFR mutations in patients with primary lung adenocarcinoma, thereby improving treatment outcomes for this population.
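Decision curve analysis computes, at each threshold probability p_t, the net benefit NB = TP/n - (FP/n) * p_t/(1 - p_t). A small sketch with simulated risk scores (the data and threshold range below are illustrative, not the study's predictions):

```python
import numpy as np

# Simulated outcomes and crude informative risk scores.
rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)
p = np.clip(0.5 * y + rng.normal(0.25, 0.2, n), 0.01, 0.99)

def net_benefit(y_true, risk, pt):
    """Net benefit of treating cases with risk >= pt."""
    treat = risk >= pt
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / len(y_true) - fp / len(y_true) * pt / (1 - pt)

# Evaluate across a range of threshold probabilities.
thresholds = np.arange(0.15, 0.90, 0.05)
nb = [net_benefit(y, p, t) for t in thresholds]
print(f"net benefit at pt=0.15: {nb[0]:.3f}")
```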

Deep learning-based model for difficult transfemoral access prediction compared with human assessment in stroke thrombectomy.

Canals P, Garcia-Tornel A, Requena M, Jabłońska M, Li J, Balocco S, Díaz O, Tomasello A, Ribo M

pubmed logopapers May 22 2025
In mechanical thrombectomy (MT), extracranial vascular tortuosity is among the main determinants of procedure duration and success. Currently, no rapid and reliable method exists to identify the anatomical features precluding fast and stable access to the cervical vessels. A retrospective sample of 513 patients was included in this study. Patients underwent first-line transfemoral MT following anterior circulation large vessel occlusion stroke. Difficult transfemoral access (DTFA) was defined as impossible common carotid catheterization or time from groin puncture to first carotid angiogram >30 min. A machine learning model based on 29 anatomical features automatically extracted from head-and-neck computed tomography angiography (CTA) was developed to predict DTFA. Three experienced raters independently assessed the likelihood of DTFA in a reduced cohort of 116 cases using a Likert scale as a benchmark for the model, rating preprocedural CTA and automatic 3D vascular segmentation separately. Among the study population, 11.5% of procedures (59/513) presented DTFA. Six different features from the aortic, supra-aortic, and cervical regions were included in the model. Cross-validation resulted in an area under the receiver operating characteristic curve (AUROC) of 0.76 (95% CI 0.75 to 0.76) for DTFA prediction, with high sensitivity for impossible access identification (0.90, 95% CI 0.81 to 0.94). The model outperformed human assessment in the reduced cohort [F1-score (95% CI) by experts with CTA: 0.43 (0.37 to 0.50); experts with 3D segmentation: 0.50 (0.46 to 0.54); and model: 0.70 (0.65 to 0.75)]. A fully automatic model for DTFA prediction was developed and validated. The presented method improved expert assessment of difficult access prediction in stroke MT. Derived information could be used to guide decisions regarding arterial access for MT.
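The evaluation protocol here, cross-validated probabilities, AUROC, and an operating point chosen for high sensitivity on a rare positive class, can be sketched as follows. The features are synthetic, not the 29 CTA measurements, and the classifier is a generic stand-in:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import cross_val_predict

# Synthetic imbalanced task mimicking ~11.5% DTFA prevalence over 29 features.
X, y = make_classification(n_samples=513, n_features=29, weights=[0.885],
                           random_state=0)

# Out-of-fold probabilities so the AUROC is estimated on unseen data.
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
auroc = roc_auc_score(y, proba)

# Choose the first ROC threshold reaching >= 0.90 sensitivity, mirroring
# the high-sensitivity operating point reported for impossible access.
fpr, tpr, thr = roc_curve(y, proba)
threshold = thr[np.argmax(tpr >= 0.90)]
print(f"AUROC={auroc:.2f}, operating threshold={threshold:.3f}")
```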

Predicting Depression in Healthy Young Adults: A Machine Learning Approach Using Longitudinal Neuroimaging Data.

Zhang A, Zhang H

pubmed logopapers May 22 2025
Accurate prediction of depressive symptoms in healthy individuals can enable early intervention and reduce both individual and societal costs. This study aimed to develop predictive models for depression in young adults using machine learning (ML) techniques and longitudinal data from the Beck Depression Inventory, structural MRI (sMRI), and resting-state functional MRI (rs-fMRI). Feature selection methods, including the least absolute shrinkage and selection operator (LASSO), Boruta, and VSURF, were applied to identify MRI features associated with depression. Support vector machine and random forest algorithms were then used to construct prediction models. Eight MRI features were identified as predictive of depression, spanning the orbital gyrus, superior frontal gyrus, middle frontal gyrus, parahippocampal gyrus, cingulate gyrus, and inferior parietal lobule. The overlaps and differences between the selected features and the brain regions showing significant between-group differences in t-tests suggest that ML provides a unique perspective on the neural changes associated with depression. Six pairs of prediction models demonstrated varying performance, with accuracies ranging from 0.68 to 0.85 and areas under the curve (AUC) ranging from 0.57 to 0.81. The best-performing model achieved an accuracy of 0.85 and an AUC of 0.80, highlighting the potential of combining sMRI and rs-fMRI features with ML for early depression detection, while also revealing the risk of overfitting in small-sample, high-dimensional settings. These findings warrant further research to (1) replicate the results in independent, larger datasets to address potential overfitting and (2) apply different advanced ML techniques and multimodal data fusion to improve model performance.
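One of the feature-selection/classifier pairings named above, LASSO-style L1 selection feeding a random forest, can be sketched as a single scikit-learn pipeline. The data are synthetic stand-ins for sMRI/rs-fMRI features, and the small-sample, high-dimensional shape echoes the overfitting caveat in the abstract:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Small sample, many features: the regime where overfitting is a real risk.
X, y = make_classification(n_samples=150, n_features=100, n_informative=8,
                           random_state=0)

# L1-penalized logistic regression performs LASSO-style feature selection;
# the surviving features feed a random forest classifier.
pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", C=0.1,
                                       solver="liblinear", random_state=0)),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
print(f"accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```

Running the selection inside the cross-validation pipeline (rather than before it) is what keeps the accuracy estimate honest in this small-sample setting.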
