
CBCT radiomics features combined with machine learning to diagnose cystic lesions in the jaw.

Sha X, Wang C, Sun J, Qi S, Yuan X, Zhang H, Yang J

PubMed · Jul 1, 2025
The aim of this study was to develop a radiomics model based on cone beam CT (CBCT) to differentiate odontogenic cysts (OCs), odontogenic keratocysts (OKCs), and ameloblastomas (ABs). In this retrospective study, CBCT images were collected from 300 patients with histopathologically confirmed OC, OKC, or AB. These patients were randomly divided into training (70%) and test (30%) cohorts. Radiomics features were extracted from the images, and the optimal features were incorporated into a random forest model, a support vector classifier (SVC) model, a logistic regression model, and a soft voting classifier combining the three algorithms. Model performance was evaluated using receiver operating characteristic (ROC) curves and the area under the curve (AUC). The best-performing model was then used to establish the final radiomics prediction model, whose performance was evaluated using sensitivity, accuracy, precision, specificity, and F1 score in both the training and test cohorts. The 6 optimal radiomics features were incorporated into the soft voting classifier, which performed best overall. Under the one-vs-rest (OvR) multiclass strategy, the AUC values were 0.963 (AB vs. rest), 0.928 (OKC vs. rest), and 0.919 (OC vs. rest) in the training cohort, and 0.814, 0.781, and 0.849, respectively, in the test cohort. The overall accuracy of the model was 0.757 in the training cohort and 0.711 in the test cohort. The voting classifier demonstrated that CBCT radiomics can distinguish multiple lesion types (OC, OKC, and AB) in the jaw and may enable accurate, non-invasive diagnosis.
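The ensemble described above maps directly onto scikit-learn's soft VotingClassifier. Below is a minimal sketch of that setup with one-vs-rest AUC reporting; the synthetic feature matrix stands in for the six selected radiomics features and is not the study's data.

```python
# Minimal sketch of a soft voting ensemble over RF, SVC, and logistic
# regression, evaluated with one-vs-rest AUC. Data are synthetic
# stand-ins, not the study's CBCT radiomics features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for 6 selected radiomics features and 3 lesion classes.
X, y = make_classification(n_samples=300, n_features=6, n_informative=5,
                           n_redundant=0, n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),  # soft voting needs probabilities
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)

# Per-class one-vs-rest AUC, mirroring the AB/OKC/OC-vs-rest reporting.
proba = ensemble.predict_proba(X_test)
for k in range(3):
    auc = roc_auc_score((y_test == k).astype(int), proba[:, k])
    print(f"class{k}-vs-rest AUC = {auc:.3f}")
```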

A Preoperative CT-based Multiparameter Deep Learning and Radiomic Model with Extracellular Volume Parameter Images Can Predict the Tumor Budding Grade in Rectal Cancer Patients.

Tang X, Zhuang Z, Jiang L, Zhu H, Wang D, Zhang L

PubMed · Jul 1, 2025
To investigate a computed tomography (CT)-based multiparameter deep learning-radiomic model (DLRM) for predicting the preoperative tumor budding (TB) grade in patients with rectal cancer. Data from 135 patients with histologically confirmed rectal cancer (85 in the Bd1+2 group and 50 in the Bd3 group) were retrospectively included. Deep learning (DL) features and hand-crafted radiomic (HCR) features were separately extracted and selected from preoperative CT-based extracellular volume (ECV) parameter images and venous-phase images. Six predictive signatures were then constructed from machine learning classification algorithms. Finally, a combined DL and HCR model, the DLRM, was established to predict the TB grade by merging the DL and HCR features from the two image sets. In the training and test cohorts, the AUC values of the DLRM were 0.976 [95% CI: 0.942-0.997] and 0.976 [95% CI: 0.942-1.00], respectively. The DLRM showed good output agreement on calibration curve analysis and clinical applicability on decision curve analysis (DCA). It outperformed the individual DL and HCR signatures in predicting the TB grade (p < 0.05). The DLRM can noninvasively evaluate the TB grade of rectal cancer patients before surgery, thereby supporting clinical treatment decision-making for these patients.
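The heart of a DLRM is feature-level fusion: deep and hand-crafted feature blocks from each image set are concatenated before a single classifier. A hedged sketch of that step, with randomly generated stand-in feature blocks and invented dimensions:

```python
# Sketch of feature-level fusion for a deep learning-radiomics model:
# concatenate DL and hand-crafted blocks from two image sets, then fit
# one classifier. All arrays and dimensions are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 135  # cohort size in the study; the features below are invented
dl_ecv  = rng.normal(size=(n, 64))   # DL features, ECV parameter images
dl_vp   = rng.normal(size=(n, 64))   # DL features, venous-phase images
hcr_ecv = rng.normal(size=(n, 20))   # hand-crafted radiomics, ECV images
hcr_vp  = rng.normal(size=(n, 20))   # hand-crafted radiomics, venous phase
y = rng.integers(0, 2, size=n)       # 0 = Bd1+2, 1 = Bd3 (labels invented)

X = np.hstack([dl_ecv, dl_vp, hcr_ecv, hcr_vp])  # the fusion step
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```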

Hybrid model integration with explainable AI for brain tumor diagnosis: a unified approach to MRI analysis and prediction.

Vamsidhar D, Desai P, Joshi S, Kolhar S, Deshpande N, Gite S

PubMed · Jul 1, 2025
Effective treatment of brain tumors depends on accurate detection, making early diagnosis a crucial health concern. Medical imaging plays a pivotal role in improving early tumor detection and diagnosis. This study presents two approaches to the tumor detection problem in the healthcare domain. The first combines image processing, a vision transformer (ViT), and machine learning algorithms to analyze medical images. The second is a parallel model integration technique, in which two pre-trained deep learning models, ResNet101 and Xception, are integrated, followed by local interpretable model-agnostic explanations (LIME) to explain the model. The first approach achieved an accuracy of 98.17% using the combination of a vision transformer, random forest, and contrast-limited adaptive histogram equalization, while the parallel model integration (ResNet101 and Xception) achieved 99.67%. Based on these results, the parallel model integration technique is proposed as the more effective method. Future work aims to extend the model to multi-class classification for tumor type detection and to improve generalization for broader applicability.
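"Parallel model integration" plausibly means running the two pretrained backbones side by side and merging their pooled features before a shared head. A hedged Keras sketch of that pattern; the input size, head width, and binary output are illustrative assumptions, not the paper's exact design:

```python
# Hedged sketch of parallel integration of two pretrained backbones
# (ResNet101 + Xception) with a shared classification head. Input
# size, head width, and the binary output are illustrative choices.
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import ResNet101, Xception

inputs = layers.Input(shape=(299, 299, 3))  # assumes preprocessed inputs

resnet = ResNet101(include_top=False, weights="imagenet", pooling="avg")
xcept = Xception(include_top=False, weights="imagenet", pooling="avg")
resnet.trainable = False  # freeze backbones for feature extraction
xcept.trainable = False

# Parallel branches merged by concatenation, then a small dense head.
merged = layers.Concatenate()([resnet(inputs), xcept(inputs)])
h = layers.Dense(256, activation="relu")(merged)
h = layers.Dropout(0.5)(h)
outputs = layers.Dense(1, activation="sigmoid")(h)  # tumor vs. no tumor

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```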

A hybrid XAI-driven deep learning framework for robust GI tract disease diagnosis.

Dahan F, Shah JH, Saleem R, Hasnain M, Afzal M, Alfakih TM

PubMed · Jul 1, 2025
The stomach is one of the main digestive organs of the gastrointestinal (GI) tract, essential for digestion and nutrient absorption. However, various GI diseases, including gastritis, ulcers, and cancer, severely affect health and quality of life. Precise diagnosis of GI tract diseases is a significant challenge in healthcare, as misclassification leads to delayed treatment and negative consequences for patients. Even with advances in machine learning and explainable AI for medical image analysis, existing methods tend to have high false-negative rates, which compromises critical disease cases. This paper presents a hybrid deep learning-based explainable artificial intelligence (XAI) approach to improve the accuracy of diagnosing gastrointestinal disorders, including stomach diseases, from endoscopically acquired images. A Swin Transformer is integrated with deep convolutional neural networks (EfficientNet-B3, ResNet-50) to extract robust features and improve both diagnostic accuracy and model interpretability. Stacked machine learning classifiers with a meta-loss are combined with XAI techniques (Grad-CAM) to minimize false negatives, supporting early and accurate diagnosis in GI tract disease evaluation. The proposed model achieved an accuracy of 93.79% with a low misclassification rate, which is effective for GI tract disease classification. Class-wise performance metrics, such as precision, recall, and F1-score, show considerable improvements, with reduced false-negative rates. Grad-CAM makes AI-driven GI disease diagnosis more accessible to medical professionals by providing visual explanations of model predictions. This study demonstrates the promise of combining deep learning with XAI for earlier diagnosis with fewer human errors and for guiding clinicians managing gastrointestinal diseases.
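The stacking step corresponds to scikit-learn's StackingClassifier, where base learners' out-of-fold probabilities train a meta-learner. A minimal sketch on a synthetic stand-in for pooled backbone features (the paper's custom meta-loss is not reproduced):

```python
# Sketch of stacking classifiers over extracted deep features: the
# base learners' out-of-fold probabilities train a meta-learner.
# Feature matrix and learner choices are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in for features pooled from Swin Transformer + CNN backbones.
X, y = make_classification(n_samples=600, n_features=128,
                           n_informative=30, n_classes=4, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    stack_method="predict_proba",  # stack class probabilities
    cv=5,                          # out-of-fold predictions avoid leakage
)
print("CV accuracy:", cross_val_score(stack, X, y, cv=3).mean())
```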

Quantitative ultrasound classification of healthy and chemically degraded ex-vivo cartilage.

Sorriento A, Guachi-Guachi L, Turini C, Lenzi E, Dolzani P, Lisignoli G, Kerdegari S, Valenza G, Canale C, Ricotti L, Cafarelli A

PubMed · Jul 1, 2025
In this study, we explore the potential of ten quantitative (radiofrequency-based) ultrasound parameters to assess the progressive loss of collagen and proteoglycans, mimicking an osteoarthritis condition in ex-vivo bovine cartilage samples. Most analyzed metrics changed significantly as degradation progressed, especially with collagenase treatment. We propose, for the first time, a combination of these ultrasound parameters in machine learning models aimed at automatically identifying healthy and degraded cartilage samples. The random forest model distinguished healthy cartilage from trypsin-treated samples with an accuracy of 60%. The support vector machine demonstrated excellent accuracy (96%) in differentiating healthy cartilage from collagenase-degraded samples. Histological and mechanical analyses further confirmed these findings, with collagenase having a more pronounced impact than trypsin on both mechanical and histological properties. These metrics were obtained using an ultrasound probe with a transmission frequency of 15 MHz, typically used for the diagnosis of musculoskeletal diseases, enabling a fully non-invasive procedure without requiring arthroscopic probes. The proposed quantitative ultrasound assessment thus has the potential to become a new standard for monitoring cartilage health, enabling early detection of cartilage pathologies and timely interventions.
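Classifying cartilage state from ten quantitative ultrasound parameters is a standard tabular ML task. A hedged sketch with a scaled SVM and cross-validation; the parameter matrix below is simulated, not the study's radiofrequency-derived data:

```python
# Sketch: classify healthy vs. degraded cartilage from ten
# quantitative ultrasound parameters with an SVM. The parameter
# matrix is simulated, not the study's RF-derived measurements.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class = 40
healthy = rng.normal(0.0, 1.0, size=(n_per_class, 10))
degraded = rng.normal(0.8, 1.0, size=(n_per_class, 10))  # shifted means
X = np.vstack([healthy, degraded])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Scaling matters for SVMs since the ten parameters have mixed units.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```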

Multimodal deep learning-based radiomics for meningioma consistency prediction: integrating T1 and T2 MRI in a multi-center study.

Lin H, Yue Y, Xie L, Chen B, Li W, Yang F, Zhang Q, Chen H

PubMed · Jul 1, 2025
Meningioma consistency critically impacts surgical planning, as soft tumors are easier to resect than hard tumors. Current MRI-based assessments of tumor consistency are subjective and lack quantitative accuracy. Integrating deep learning and radiomics could enhance the predictive accuracy of meningioma consistency. A retrospective study analyzed 204 meningioma patients from two centers: the Second Affiliated Hospital of Guangzhou Medical University and the Southern Theater Command Hospital PLA. Three models were developed: a radiomics model (Rad_Model), a deep learning model (DL_Model), and a combined model (DLR_Model). Model performance was evaluated using AUC, accuracy, sensitivity, specificity, and precision. The DLR_Model outperformed the other models across all cohorts. In the training set, it achieved an AUC of 0.957, accuracy of 0.908, and precision of 0.965. In the external test cohort, it maintained superior performance, with an AUC of 0.854, accuracy of 0.778, and precision of 0.893, surpassing both the Rad_Model (AUC = 0.768) and the DL_Model (AUC = 0.720). Combining radiomics and deep learning features improved predictive performance and robustness. Our study introduced and evaluated a deep learning radiomics model (DLR_Model) to accurately predict the consistency of meningiomas, with the potential to improve preoperative assessment and surgical planning.
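All of the reported metrics derive from the binary confusion matrix plus the ROC curve. A small sketch of how they are computed; the labels and scores below are invented:

```python
# Sketch: accuracy, sensitivity, specificity, and precision from a
# binary confusion matrix, plus AUC. Labels and scores are invented.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])  # 1 = hard tumor (example)
y_score = np.array([.9, .2, .7, .4, .1, .3, .8, .6, .95, .05])
y_pred  = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:   ", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity:", tp / (tp + fn))  # recall for the positive class
print("specificity:", tn / (tn + fp))
print("precision:  ", tp / (tp + fp))
print("AUC:        ", roc_auc_score(y_true, y_score))
```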

Magnetic resonance image generation using enhanced TransUNet in temporomandibular disorder patients.

Ha EG, Jeon KJ, Lee C, Kim DH, Han SS

PubMed · Jul 1, 2025
Temporomandibular disorder (TMD) patients experience a variety of clinical symptoms, and MRI is the most effective tool for diagnosing temporomandibular joint (TMJ) disc displacement. This study aimed to develop a transformer-based deep learning model to generate T2-weighted (T2w) images from proton density-weighted (PDw) images, reducing MRI scan time for TMD patients. A dataset of 7226 images from 178 patients who underwent TMJ MRI examinations was used. The proposed model employed a generative adversarial network framework with a TransUNet architecture as the generator for image translation. Additionally, a disc segmentation decoder was integrated to improve image quality in the TMJ disc region. Model performance was evaluated using the structural similarity index measure (SSIM), learned perceptual image patch similarity (LPIPS), and Fréchet inception distance (FID). Three experienced oral radiologists also performed a qualitative assessment using the mean opinion score (MOS). The model demonstrated high performance in generating T2w images from PDw images, achieving average SSIM, LPIPS, and FID values of 82.28%, 2.46, and 23.85, respectively, in the disc region. The model also obtained an average MOS of 4.58, surpassing other models, and showed robust segmentation of the TMJ disc. The proposed model, integrating a transformer and a disc segmentation task, demonstrated strong performance in MR image generation, both quantitatively and qualitatively, suggesting clinical potential for reducing MRI scan times in TMD patients while maintaining high image quality.
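The quantitative metrics here are scriptable with standard packages; below is a sketch computing SSIM (scikit-image) and LPIPS (the lpips package) for one generated/reference pair. The random arrays stand in for a generated T2w slice and its ground truth; FID is omitted because it is defined over image sets rather than single pairs:

```python
# Sketch: SSIM and LPIPS between a generated and a reference image.
# Random arrays stand in for the generated T2w slice and ground truth.
import numpy as np
import torch
import lpips  # pip install lpips
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
real = rng.random((256, 256)).astype(np.float32)  # ground-truth T2w stand-in
fake = np.clip(real + 0.05 * rng.standard_normal((256, 256)),
               0, 1).astype(np.float32)           # "generated" version

# SSIM on the 2D grayscale pair.
print("SSIM:", ssim(real, fake, data_range=1.0))

# LPIPS expects NCHW tensors in [-1, 1] with 3 channels.
to_t = lambda a: torch.from_numpy(a).mul(2).sub(1).expand(1, 3, 256, 256)
loss_fn = lpips.LPIPS(net="alex")
print("LPIPS:", loss_fn(to_t(real), to_t(fake)).item())
```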

Contrast-enhanced mammography-based interpretable machine learning model for predicting the molecular subtypes of breast cancer.

Ma M, Xu W, Yang J, Zheng B, Wen C, Wang S, Xu Z, Qin G, Chen W

PubMed · Jul 1, 2025
This study aims to establish a machine learning prediction model to explore the correlation between contrast-enhanced mammography (CEM) imaging features and molecular subtypes of mass-type breast cancer. This retrospective study included women with breast cancer who underwent CEM preoperatively between 2018 and 2021. We included 241 patients, who were randomly assigned to either a training or a test set in a 7:3 ratio. Twenty-one features were visually assessed: four clinical features and seventeen radiological features extracted from the CEM images. Three binary subtype classifications were performed: Luminal vs. non-Luminal, HER2-enriched vs. non-HER2-enriched, and triple-negative (TNBC) vs. non-triple-negative. A multinomial naive Bayes (MNB) machine learning scheme was employed for classification, and the least absolute shrinkage and selection operator (LASSO) method was used to select the most predictive features for the classifiers. Classification performance was evaluated using the area under the receiver operating characteristic curve (AUC). We also used SHapley Additive exPlanations (SHAP) values to explain the prediction model. The model combining low-energy (LE) and dual-energy subtraction (DES) images achieved the best performance compared with either image type alone, yielding AUCs of 0.798 for Luminal vs. non-Luminal, 0.695 for TNBC vs. non-TNBC, and 0.773 for HER2-enriched vs. non-HER2-enriched. The SHAP analysis shows that "LE_mass_margin_spiculated," "DES_mass_enhanced_margin_spiculated," and "DES_mass_internal_enhancement_homogeneous" have the greatest impact on the model's predictions for Luminal vs. non-Luminal breast cancer, while "mass_calcification_relationship_no," "calcification_type_no," and "LE_mass_margin_spiculated" have a considerable impact on predictions for HER2-enriched vs. non-HER2-enriched breast cancer. The radiological characteristics of breast tumors extracted from CEM were found to be associated with breast cancer subtypes; future research is needed to validate these findings.
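The described pipeline, LASSO-style selection feeding a multinomial naive Bayes classifier, sketches out naturally in scikit-learn; note that MNB requires non-negative inputs, hence the min-max scaling. Data and dimensions below are illustrative:

```python
# Sketch: L1 (LASSO-style) feature selection followed by a multinomial
# naive Bayes classifier, as in the CEM subtype pipeline. Data are
# synthetic; MNB needs non-negative features, hence MinMaxScaler.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Stand-in for 21 visually scored clinical + radiological features.
X, y = make_classification(n_samples=241, n_features=21, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

pipe = make_pipeline(
    MinMaxScaler(),  # keep features non-negative for MNB
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear",
                                       C=0.5)),  # L1 sparsity picks features
    MultinomialNB(),
)
pipe.fit(X_tr, y_tr)
print("AUC (e.g., Luminal vs. non-Luminal):",
      roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1]))
```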

Automated Fast Prediction of Bone Mineral Density From Low-dose Computed Tomography.

Zhou K, Xin E, Yang S, Luo X, Zhu Y, Zeng Y, Fu J, Ruan Z, Wang R, Geng D, Yang L

PubMed · Jul 1, 2025
Low-dose chest CT (LDCT) is commonly employed for early lung cancer screening but has rarely been utilized for assessing volumetric bone mineral density (vBMD) or diagnosing osteoporosis (OP). This study investigated the feasibility of using deep learning to establish a system for vBMD prediction and OP classification based on LDCT scans. The study included 551 subjects who underwent both LDCT and QCT examinations. First, a U-Net was developed to automatically segment lumbar vertebrae from single 2D LDCT slices near the mid-vertebral level. Then, a prediction model was proposed to estimate vBMD, which was subsequently employed for detecting OP and osteopenia (OA). Two input modalities were constructed for the prediction model, and performance metrics were calculated and evaluated. The segmentation model exhibited strong agreement with manual segmentation, achieving a mean Dice similarity coefficient (DSC) of 0.974, sensitivity of 0.964, positive predictive value (PPV) of 0.985, and Hausdorff distance of 3.261 in the test set. Linear regression and Bland-Altman analysis demonstrated strong agreement between the vBMD predicted from two-channel inputs and QCT-derived vBMD, with a root mean square error of 8.958 mg/mm³ and an R² of 0.944. The areas under the curve for detecting OP and OA were 0.800 and 0.878, respectively, with an overall accuracy of 94.2%. The average processing time was 1.5 s. This system can automatically estimate vBMD and detect OP and OA on LDCT scans, showing great potential for osteoporosis screening.
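Two of the quantitative checks, segmentation overlap via the Dice similarity coefficient and vBMD agreement via RMSE and R², are simple to reproduce. A sketch on toy masks and values:

```python
# Sketch: Dice similarity coefficient for segmentation masks, and
# RMSE / R^2 agreement between predicted and QCT-derived vBMD.
# The masks and values below are toy examples.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2 * |A & B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

pred_mask = np.zeros((64, 64), dtype=bool)
pred_mask[20:44, 20:44] = True
truth_mask = np.zeros((64, 64), dtype=bool)
truth_mask[22:46, 21:45] = True
print("Dice:", dice(pred_mask, truth_mask))

# Agreement between predicted and reference vBMD (invented numbers).
vbmd_qct  = np.array([80.0, 120.0, 95.0, 140.0, 60.0, 110.0])
vbmd_pred = np.array([84.0, 117.0, 99.0, 133.0, 66.0, 108.0])
rmse = mean_squared_error(vbmd_qct, vbmd_pred) ** 0.5
print("RMSE:", rmse, " R^2:", r2_score(vbmd_qct, vbmd_pred))
```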

Deep Learning Estimation of Small Airway Disease from Inspiratory Chest Computed Tomography: Clinical Validation, Repeatability, and Associations with Adverse Clinical Outcomes in Chronic Obstructive Pulmonary Disease.

Chaudhary MFA, Awan HA, Gerard SE, Bodduluri S, Comellas AP, Barjaktarevic IZ, Barr RG, Cooper CB, Galban CJ, Han MK, Curtis JL, Hansel NN, Krishnan JA, Menchaca MG, Martinez FJ, Ohar J, Vargas Buonfiglio LG, Paine R, Bhatt SP, Hoffman EA, Reinhardt JM

PubMed · Jul 1, 2025
Rationale: Quantifying functional small airway disease (fSAD) requires an additional expiratory computed tomography (CT) scan, limiting clinical applicability. Artificial intelligence (AI) could enable fSAD quantification from chest CT scans at total lung capacity (TLC) alone (fSAD-TLC). Objectives: To evaluate an AI model for estimating fSAD-TLC, compare it with dual-volume parametric response mapping fSAD (fSAD-PRM), and assess its clinical associations and repeatability in chronic obstructive pulmonary disease (COPD). Methods: We analyzed 2,513 participants from SPIROMICS (the Subpopulations and Intermediate Outcome Measures in COPD Study). Using a randomly sampled subset (n = 1,055), we developed a generative model to produce virtual expiratory CT scans for estimating fSAD-TLC in the remaining 1,458 SPIROMICS participants. We compared fSAD-TLC with dual-volume fSAD-PRM. We investigated univariate and multivariable associations of fSAD-TLC with FEV1, FEV1/FVC ratio, 6-minute-walk distance, St. George's Respiratory Questionnaire (SGRQ) score, and FEV1 decline. The results were validated in a subset of patients from the COPDGene (Genetic Epidemiology of COPD) study (n = 458). Multivariable models were adjusted for age, race, sex, body mass index, baseline FEV1, smoking pack-years, smoking status, and percent emphysema. Measurements and Main Results: Inspiratory fSAD-TLC correlated strongly with fSAD-PRM in the SPIROMICS (Pearson's R = 0.895) and COPDGene (R = 0.897) cohorts. Higher fSAD-TLC was significantly associated with lower lung function, including lower post-bronchodilator FEV1 (in liters) and FEV1/FVC ratio, and with poorer quality of life reflected by higher total SGRQ scores, independent of percent CT emphysema. In SPIROMICS, individuals with higher fSAD-TLC experienced an additional FEV1 decline of 1.156 ml/yr (relative decrease; 95% confidence interval [CI], 0.613-1.699; P < 0.001) for every 1% increase in fSAD-TLC. The rate of decline in the COPDGene cohort was slightly lower, at 0.866 ml/yr (relative decrease; 95% CI, 0.345-1.386; P < 0.001) per 1% increase in fSAD-TLC. Inspiratory fSAD-TLC demonstrated greater consistency between repeated measurements, with a higher intraclass correlation coefficient of 0.99 (95% CI, 0.98-0.99) compared with fSAD-PRM (0.83; 95% CI, 0.76-0.88). Conclusions: Small airway disease can be reliably assessed from a single inspiratory CT scan using generative AI, eliminating the need for an additional expiratory CT scan. fSAD estimation from inspiratory CT correlates strongly with fSAD-PRM, demonstrates a significant association with FEV1 decline, and offers greater repeatability.
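The headline agreement statistics, Pearson correlation against fSAD-PRM and an intraclass correlation coefficient for repeatability, can be sketched with scipy and pingouin on simulated repeat measurements (all values below are invented):

```python
# Sketch: Pearson correlation between AI-estimated fSAD-TLC and
# dual-volume fSAD-PRM, plus an ICC for scan-rescan repeatability.
# All measurements below are simulated, not study data.
import numpy as np
import pandas as pd
import pingouin as pg  # pip install pingouin
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 100
fsad_prm = rng.uniform(0, 40, size=n)           # reference measure (%)
fsad_tlc = fsad_prm + rng.normal(0, 3, size=n)  # AI estimate (%)
r, p = pearsonr(fsad_tlc, fsad_prm)
print(f"Pearson R = {r:.3f} (p = {p:.1e})")

# Repeatability: two repeated fSAD-TLC measurements per subject.
repeat = fsad_tlc + rng.normal(0, 1, size=n)
long = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "visit":   np.repeat(["scan1", "scan2"], n),
    "fsad":    np.concatenate([fsad_tlc, repeat]),
})
icc = pg.intraclass_corr(data=long, targets="subject",
                         raters="visit", ratings="fsad")
print(icc[["Type", "ICC", "CI95%"]])
```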
