
Development and validation of a machine learning model for central compartmental lymph node metastasis in solitary papillary thyroid microcarcinoma via ultrasound imaging features and clinical parameters.

Han H, Sun H, Zhou C, Wei L, Xu L, Shen D, Hu W

PubMed · Jul 1 2025
Papillary thyroid microcarcinoma (PTMC) is the most common malignant subtype of thyroid cancer. Preoperative assessment of the risk of central compartment lymph node metastasis (CCLNM) can provide scientific support for personalized treatment decisions prior to microwave ablation of thyroid nodules. The objective of this study was to develop a predictive model for CCLNM in patients with solitary PTMC based on a combination of ultrasound radiomics and clinical parameters. We retrospectively analyzed data from 480 patients diagnosed with PTMC via postoperative pathological examination. The patients were randomly divided into a training set (n = 336) and a validation set (n = 144) at a 7:3 ratio. The cohort was stratified into a metastasis group and a nonmetastasis group on the basis of postoperative pathological results. Ultrasound radiomic features were extracted from routine thyroid ultrasound images, and multiple feature selection methods were applied to construct radiomic models for each group. Independent risk factors, along with radiomics features identified through multivariate logistic regression analysis, were then refined through additional feature selection to develop combined predictive models, and the performance of each model was evaluated. The combined model, which incorporates age, the presence of Hashimoto's thyroiditis (HT), and radiomics features selected via an optimal, percentage-based feature selection approach, exhibited superior predictive efficacy, with AUC values of 0.767 (95% CI: 0.716-0.818) in the training set and 0.729 (95% CI: 0.648-0.810) in the validation set. A machine learning-based model combining ultrasound radiomics and clinical variables shows promise for the preoperative risk stratification of CCLNM in patients with PTMC. However, further validation in larger, more diverse cohorts is needed before clinical application. Trial registration: Not applicable.
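The AUC values reported above have a simple rank-based interpretation: the probability that a randomly chosen metastasis case receives a higher predicted risk than a randomly chosen non-metastasis case. A minimal stdlib sketch, using hypothetical scores rather than the study's data:

```python
def auc(scores, labels):
    """AUC as the Mann-Whitney probability that a positive case
    outranks a negative case; ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one case from each class")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted CCLNM risks and pathology labels (1 = metastasis)
example_auc = auc([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0])
```

With these toy numbers, 8 of the 9 positive/negative pairs are correctly ordered, giving an AUC of 8/9.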

Automated 3D segmentation of the hyoid bone in CBCT using nnU-Net v2: a retrospective study on model performance and potential clinical utility.

Gümüssoy I, Haylaz E, Duman SB, Kalabalik F, Say S, Celik O, Bayrakdar IS

PubMed · Jul 1 2025
This study aimed to identify the hyoid bone (HB) in cone beam computed tomography (CBCT) images using an nnU-Net-based artificial intelligence (AI) model and to assess the model's success in automatic segmentation. CBCT images of 190 patients were randomly selected. The raw data were converted to DICOM format and transferred to the 3D Slicer imaging software (version 4.10.2; MIT, Cambridge, MA, USA), in which the HB was labeled manually. The dataset was divided into training, validation, and test sets at a ratio of 8:1:1. The nnU-Net v2 architecture was used to process the training and test datasets, generating the algorithm weight factors. A confusion matrix was employed to assess the model's accuracy and performance, and the F1-score, Dice coefficient (DC), 95% Hausdorff distance (95% HD), and Intersection over Union (IoU) metrics were calculated to evaluate the results. The model's performance metrics were as follows: DC = 0.9434, IoU = 0.8941, F1-score = 0.9446, and 95% HD = 1.9998. The receiver operating characteristic (ROC) curve was generated, yielding an AUC value of 0.98. These results indicate that the nnU-Net v2 model achieved high precision and accuracy in HB segmentation on CBCT images. Automatic segmentation of the HB can improve the speed and accuracy of clinicians' decision-making in diagnosing and treating various clinical conditions. Trial registration: Not applicable.
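The overlap metrics reported above have compact set definitions. A toy sketch with illustrative masks (sets of voxel coordinates, not CBCT segmentations):

```python
def dice(a, b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks as sets."""
    return 2 * len(a & b) / (len(a) + len(b))

def iou(a, b):
    """Intersection over Union: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

pred  = {(0, 0), (0, 1), (1, 0), (1, 1)}   # model output (toy)
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}   # manual label (toy)

dc = dice(pred, truth)   # 2*3 / (4+4) = 0.75
ji = iou(pred, truth)    # 3 / 5 = 0.6
# The two metrics are monotonically linked: IoU = DC / (2 - DC)
assert abs(ji - dc / (2 - dc)) < 1e-12
```

The link in the last line explains why the paper's DC (0.9434) and IoU (0.8941) move together; the 95% HD is a separate boundary-distance metric not shown here.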

Development and validation of CT-based fusion model for preoperative prediction of invasion and lymph node metastasis in adenocarcinoma of esophagogastric junction.

Cao M, Xu R, You Y, Huang C, Tong Y, Zhang R, Zhang Y, Yu P, Wang Y, Chen W, Cheng X, Zhang L

PubMed · Jul 1 2025
In the context of precision medicine, radiomics has become a key technology for solving medical problems. For adenocarcinoma of the esophagogastric junction (AEG), developing a preoperative CT-based model for predicting invasion depth and lymph node metastasis is crucial. We retrospectively collected data from 256 patients with AEG from two centres. Radiomics features were extracted from preoperative diagnostic CT images, and feature selection and machine learning methods were applied to reduce the feature dimensionality and establish predictive imaging features. Three machine learning methods were compared to select the best radiomics nomogram, with the average AUC obtained over 20 repeats of fivefold cross-validation. The fusion model was constructed by logistic regression combining the nomogram with clinical factors, and its ROC, calibration, and decision curves were then plotted. The fusion model predicted tumour invasion depth better than the radiomics nomogram, with AUCs of 0.764 vs. 0.706 in the test set, 0.752 vs. 0.697 in the internal validation set, and 0.756 vs. 0.687 in the external validation set (all P < 0.001). Likewise, the lymph node metastasis fusion model outperformed the radiomics nomogram, with AUCs of 0.809 vs. 0.732 in the test set, 0.841 vs. 0.718 in the internal validation set, and 0.801 vs. 0.680 in the external validation set (all P < 0.001). We have developed a fusion model combining radiomics and clinical risk factors that supports accurate preoperative diagnosis and treatment of AEG, advancing precision medicine. It may also spark discussion of the imaging feature differences between AEG and gastric cancer (GC).
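The repeated fivefold cross-validation used to compare models can be sketched with only index bookkeeping; the scoring function here is hypothetical (assumed to fit a model on the training indices and return its AUC on the held-out fold):

```python
import random

def kfold_indices(n, k, seed):
    """Shuffle n sample indices with a fixed seed and deal them into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def repeated_cv_auc(n, evaluate, k=5, repeats=20):
    """Mean AUC over `repeats` reshuffles of k-fold cross-validation.
    `evaluate(train_idx, test_idx)` is a placeholder for fit-and-score."""
    aucs = []
    for r in range(repeats):
        for fold in kfold_indices(n, k, seed=r):
            train = [i for i in range(n) if i not in set(fold)]
            aucs.append(evaluate(train, fold))
    return sum(aucs) / len(aucs)
```

Averaging over 20 reshuffled repeats, as the study does, reduces the variance that a single random fold assignment would introduce into the model comparison.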

Differential dementia detection from multimodal brain images in a real-world dataset.

Leming M, Im H

PubMed · Jul 1 2025
Artificial intelligence (AI) models have been applied to differential dementia detection tasks in brain images from curated, high-quality benchmark databases, but not real-world data in hospitals. We describe a deep learning model specially trained for disease detection in heterogeneous clinical images from electronic health records without focusing on confounding factors. It encodes up to 14 multimodal images, alongside age and demographics, and outputs the likelihood of vascular dementia, Alzheimer's, Lewy body dementia, Pick's disease, mild cognitive impairment, and unspecified dementia. We use data from Massachusetts General Hospital (183,018 images from 11,015 patients) for training and external data (125,493 images from 6,662 patients) for testing. Performance ranged between 0.82 and 0.94 area under the curve (AUC) on data from 1003 sites. Analysis shows that the model focused on subcortical brain structures as the basis for its decisions. By detecting biomarkers in real-world data, the presented techniques will help with clinical translation of disease detection AI. Our artificial intelligence (AI) model can detect neurodegenerative disorders in brain imaging electronic health record (EHR) data. It encodes up to 14 brain images and text information from a single patient's EHR. Attention maps show that the model focuses on subcortical brain structures. Performance ranged from 0.82 to 0.94 area under the curve (AUC) on data from 1003 external sites.
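The fusion of a variable number of per-patient images with age and demographics can be illustrated with a toy late-fusion step (this is an assumption-laden sketch with made-up vectors, not the authors' encoder):

```python
def fuse(image_embeddings, covariates):
    """Mean-pool a variable-length list of equal-size image embeddings,
    then append scalar covariates (e.g. age, demographics) for a
    downstream classification head."""
    d = len(image_embeddings[0])
    pooled = [sum(e[j] for e in image_embeddings) / len(image_embeddings)
              for j in range(d)]
    return pooled + list(covariates)

# Three hypothetical 2-D image embeddings for one patient, plus age and sex
features = fuse([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [72, 1])
```

Mean pooling is one simple way to accept "up to 14" images per patient, since the pooled vector has the same size regardless of how many images are present.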

Radiomics and machine learning for osteoporosis detection using abdominal computed tomography: a retrospective multicenter study.

Liu Z, Li Y, Zhang C, Xu H, Zhao J, Huang C, Chen X, Ren Q

PubMed · Jul 1 2025
This study aimed to develop and validate a predictive model to detect osteoporosis using radiomic features and machine learning (ML) approaches applied to lumbar spine computed tomography (CT) images acquired during abdominal CT examinations. A total of 509 patients who underwent both quantitative CT (QCT) and abdominal CT examinations (training group, n = 279; internal validation group, n = 120; external validation group, n = 110) were analyzed in this retrospective two-center study. Radiomic features were extracted from the lumbar spine CT images, and seven radiomic-based ML models were constructed: logistic regression (LR), Bernoulli naive Bayes (NB), Gaussian NB, stochastic gradient descent (SGD), decision tree, support vector machine (SVM), and K-nearest neighbor (KNN). Model performance was assessed using the area under the curve (AUC) of receiver operating characteristic (ROC) analysis and decision curve analysis (DCA). The LR-based radiomic model performed well in the internal and external validation groups, with AUCs of 0.960 and 0.786, respectively, for differentiating osteoporosis from normal BMD and osteopenia. For discriminating normal BMD from osteopenia and osteoporosis, the LR model in the internal validation group and the Gaussian NB model in the external validation group yielded the highest performance, with AUCs of 0.905 and 0.839, respectively. DCA in the internal validation group revealed that the LR model had greater net benefit than the other models in differentiating osteoporosis from normal BMD and osteopenia. Radiomic-based ML approaches may be used to predict osteoporosis from abdominal CT images and serve as a tool for opportunistic osteoporosis screening.
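The decision curve analysis used to compare the models rests on the standard net-benefit formula; a stdlib sketch with hypothetical predictions (not the study's data):

```python
def net_benefit(probs, labels, pt):
    """Net benefit at threshold probability pt:
    TP/N - FP/N * pt/(1 - pt), treating p >= pt as 'screen positive'."""
    n = len(labels)
    tp = sum(1 for p, y in zip(probs, labels) if p >= pt and y == 1)
    fp = sum(1 for p, y in zip(probs, labels) if p >= pt and y == 0)
    return tp / n - (fp / n) * (pt / (1 - pt))

# Hypothetical predicted osteoporosis probabilities and QCT-based labels
nb = net_benefit([0.9, 0.8, 0.2, 0.1], [1, 0, 1, 0], pt=0.15)
```

A decision curve plots this quantity over a range of thresholds pt; the model whose curve sits highest (here reported to be LR internally) offers the greatest clinical net benefit at those thresholds.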

Contrast-enhanced mammography-based interpretable machine learning model for the prediction of molecular subtypes of breast cancer.

Ma M, Xu W, Yang J, Zheng B, Wen C, Wang S, Xu Z, Qin G, Chen W

PubMed · Jul 1 2025
This study aimed to establish a machine learning prediction model to explore the correlation between contrast-enhanced mammography (CEM) imaging features and molecular subtypes of mass-type breast cancer. This retrospective study included women with breast cancer who underwent CEM preoperatively between 2018 and 2021. We included 241 patients, who were randomly assigned to either a training or a test set at a 7:3 ratio. Twenty-one visually assessed features were described: four clinical features and seventeen radiological features extracted from the CEM images. Three binary subtype classifications were performed: Luminal vs. non-Luminal, HER2-enriched vs. non-HER2-enriched, and triple-negative (TNBC) vs. non-triple-negative. A multinomial naive Bayes (MNB) machine learning scheme was employed for classification, and the least absolute shrinkage and selection operator (LASSO) method was used to select the most predictive features for the classifiers. Classification performance was evaluated using the area under the receiver operating characteristic curve. We also used SHapley Additive exPlanation (SHAP) values to explain the prediction model. The model that used a combination of low-energy (LE) and dual-energy subtraction (DES) images achieved the best performance compared with either image type alone, yielding AUCs of 0.798 for Luminal vs. non-Luminal, 0.695 for TNBC vs. non-TNBC, and 0.773 for HER2-enriched vs. non-HER2-enriched. The SHAP analysis shows that "LE_mass_margin_spiculated," "DES_mass_enhanced_margin_spiculated," and "DES_mass_internal_enhancement_homogeneous" have the most significant impact on the model's predictions for Luminal vs. non-Luminal breast cancer, while "mass_calcification_relationship_no," "calcification_type_no," and "LE_mass_margin_spiculated" have a considerable impact on its predictions for HER2-enriched vs. non-HER2-enriched breast cancer. The radiological characteristics of breast tumors extracted from CEM were found to be associated with breast cancer subtypes in our study. Future research is needed to validate these findings.
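The multinomial naive Bayes scheme the study employs can be sketched in a few lines of stdlib Python; the binary descriptor vectors below are toy inputs, not the study's CEM features:

```python
import math

def train_mnb(X, y, alpha=1.0):
    """Multinomial NB: per-class log prior plus Laplace-smoothed
    log likelihoods for each feature count."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, yy in zip(X, y) if yy == c]
        prior = math.log(len(rows) / len(X))
        counts = [sum(col) + alpha for col in zip(*rows)]
        total = sum(counts)
        loglik = [math.log(cnt / total) for cnt in counts]
        model[c] = (prior, loglik)
    return model

def predict_mnb(model, x):
    """Pick the class maximizing log P(c) + sum_i x_i * log theta_{c,i}."""
    return max(model, key=lambda c: model[c][0] +
               sum(xi * li for xi, li in zip(x, model[c][1])))

# Toy binary descriptors (e.g. presence/absence of imaging findings)
X = [[1, 0, 1], [1, 0, 0], [0, 1, 1], [0, 1, 0]]
y = [1, 1, 0, 0]
model = train_mnb(X, y)
```

MNB suits count-like or presence/absence descriptors such as the visually scored radiological features here; LASSO (not shown) would prune the descriptor set before this step.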

Multimodal deep learning-based radiomics for meningioma consistency prediction: integrating T1 and T2 MRI in a multi-center study.

Lin H, Yue Y, Xie L, Chen B, Li W, Yang F, Zhang Q, Chen H

PubMed · Jul 1 2025
Meningioma consistency critically impacts surgical planning, as soft tumors are easier to resect than hard tumors, yet current MRI-based assessments of tumor consistency are subjective and lack quantitative accuracy. Integrating deep learning and radiomics could enhance predictive accuracy. A retrospective study analyzed 204 meningioma patients from two centers: the Second Affiliated Hospital of Guangzhou Medical University and the Southern Theater Command Hospital PLA. Three models were developed: a radiomics model (Rad_Model), a deep learning model (DL_Model), and a combined model (DLR_Model). Model performance was evaluated using AUC, accuracy, sensitivity, specificity, and precision. The DLR_Model outperformed the other models across all cohorts. In the training set, it achieved an AUC of 0.957, an accuracy of 0.908, and a precision of 0.965. In the external test cohort, it maintained superior performance, with an AUC of 0.854, an accuracy of 0.778, and a precision of 0.893, surpassing both the Rad_Model (AUC = 0.768) and the DL_Model (AUC = 0.720). Combining radiomics and deep learning features improved predictive performance and robustness. Our study introduced and evaluated a deep learning radiomics model (DLR_Model) that accurately predicts meningioma consistency and has the potential to improve preoperative assessment and surgical planning.
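The accuracy, sensitivity, specificity, and precision figures above all derive from one binary confusion matrix; a stdlib sketch with hypothetical predictions (1 = hard tumor, say):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), specificity, and precision
    from paired binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy":    (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
    }
```

Reporting all four together, as the study does, matters because accuracy alone can look strong on an imbalanced consistency distribution while sensitivity for the rarer class is poor.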

A hybrid XAI-driven deep learning framework for robust GI tract disease diagnosis.

Dahan F, Shah JH, Saleem R, Hasnain M, Afzal M, Alfakih TM

PubMed · Jul 1 2025
The stomach, one of the main digestive organs in the gastrointestinal (GI) tract, is essential for digestion and nutrient absorption, yet various GI diseases, including gastritis, ulcers, and cancer, severely affect health and quality of life. Precise diagnosis of GI tract diseases is a significant challenge in healthcare, as misclassification leads to delayed treatment and negative consequences for patients. Even with advances in machine learning and explainable AI for medical image analysis, existing methods tend to have high false-negative rates that compromise critical disease cases. This paper presents a hybrid deep learning-based explainable artificial intelligence (XAI) approach to improve the accuracy of diagnosing gastrointestinal disorders, including stomach diseases, from endoscopically acquired images. A Swin Transformer is integrated with deep CNNs (EfficientNet-B3, ResNet-50) to extract robust features and to improve both diagnostic accuracy and model interpretability. Stacked machine learning classifiers with a meta-loss, combined with XAI techniques (Grad-CAM), minimize false negatives and thereby support early, accurate diagnosis in GI tract disease evaluation. The proposed model achieved an accuracy of 93.79% with a low misclassification rate, making it effective for gastrointestinal tract disease classification. Class-wise performance metrics, such as precision, recall, and F1-score, show considerable improvements, with reduced false-negative rates. Grad-CAM makes AI-driven GI tract disease diagnosis more accessible to medical professionals by providing visual explanations of model predictions. This study shows that a synergistic combination of deep learning with XAI can support earlier diagnosis with fewer human errors and guide doctors managing gastrointestinal diseases.
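One simple way to prioritize low false-negative rates at decision time, sketched here as an illustration rather than the paper's meta-loss, is to pick the highest classification threshold whose false-negative rate on validation data stays under a target (assumes at least one positive case):

```python
def threshold_for_fnr(probs, labels, max_fnr):
    """Highest decision threshold whose false-negative rate on the
    given validation scores stays at or below max_fnr."""
    pos = sum(labels)
    for t in sorted(set(probs), reverse=True):
        fn = sum(1 for p, y in zip(probs, labels) if y == 1 and p < t)
        if fn / pos <= max_fnr:
            return t
    return 0.0

# Hypothetical validation scores and disease labels (1 = disease present)
t = threshold_for_fnr([0.9, 0.6, 0.4, 0.2], [1, 1, 1, 0], max_fnr=0.34)
```

Lowering the threshold this way trades precision for sensitivity, which matches the clinical priority the paper states: missing a critical GI disease case is costlier than a false alarm.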

Hybrid model integration with explainable AI for brain tumor diagnosis: a unified approach to MRI analysis and prediction.

Vamsidhar D, Desai P, Joshi S, Kolhar S, Deshpande N, Gite S

PubMed · Jul 1 2025
Effective treatment of brain tumors relies on accurate detection, as this is a crucial health condition, and medical imaging plays a pivotal role in improving early tumor detection and diagnosis. This study presents two approaches to the tumor detection problem in the healthcare domain. The first approach combines image processing, a vision transformer (ViT), and machine learning algorithms to analyze medical images. The second is a parallel model integration technique: two pre-trained deep learning models, ResNet101 and Xception, are integrated, and local interpretable model-agnostic explanations (LIME) are then applied to explain the model. The results show an accuracy of 98.17% for the combination of vision transformer, random forest, and contrast-limited adaptive histogram equalization, and 99.67% for the parallel model integration (ResNet101 and Xception). Based on these results, this paper proposes the deep learning approach, the parallel model integration technique, as the most effective method. Future work aims to extend the model to multi-class classification for tumor type detection and to improve model generalization for broader applicability.

Transformer attention fusion for fine-grained medical image classification.

Badar D, Abbas J, Alsini R, Abbas T, ChengLiang W, Daud A

PubMed · Jul 1 2025
Fine-grained visual classification is fundamental for medical image applications because it detects minor lesions. Diabetic retinopathy (DR) is a preventable cause of blindness that requires exact and timely diagnosis to prevent vision damage. Automated DR classification systems face challenges from irregular lesions, uneven class distributions, and inconsistent image quality, all of which reduce diagnostic accuracy during early detection. Our solution to these problems, MSCAS-Net (Multi-Scale Cross and Self-Attention Network), uses the Swin Transformer as its backbone and extracts features at three resolutions (12 × 12, 24 × 24, 48 × 48), allowing it to detect subtle local features as well as global elements. The model uses self-attention mechanisms to improve spatial connections within single scales and cross-attention to automatically match feature patterns across multiple scales, thereby building a comprehensive information structure. This dual attention mechanism makes the model better at detecting significant lesions. MSCAS-Net achieves the best performance on the APTOS, DDR, and IDRID benchmarks, reaching accuracies of 93.8%, 89.80%, and 86.70%, respectively. The model handles imbalanced datasets and inconsistent image quality without data augmentation because it learns stable features. MSCAS-Net represents a breakthrough in automated DR diagnostics, combining high diagnostic precision with interpretability to serve as an efficient AI-powered clinical decision support system. This research demonstrates how fine-grained visual classification methods benefit the detection and treatment of DR in its early stages.
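The cross-attention step described above, where queries from one scale attend to keys and values from another, is standard scaled dot-product attention. A stdlib sketch with toy 2-D vectors (not MSCAS-Net itself):

```python
import math

def attend(queries, keys, values):
    """softmax(Q K^T / sqrt(d)) V over lists of equal-length vectors.
    In cross-attention, queries come from one feature scale and
    keys/values from another."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                       # shift for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]       # softmax; sums to 1
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

Because the softmax weights sum to one, each output row is a convex combination of the value vectors; when all keys look alike to a query, the output reduces to the mean of the values.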
