Page 11 of 205 · 2045 results

CBCT radiomics features combined with machine learning to diagnose cystic lesions in the jaw.

Sha X, Wang C, Sun J, Qi S, Yuan X, Zhang H, Yang J

PubMed · Jul 1, 2025
The aim of this study was to develop a radiomics model based on cone beam CT (CBCT) to differentiate odontogenic cysts (OCs), odontogenic keratocysts (OKCs), and ameloblastomas (ABs). In this retrospective study, CBCT images were collected from 300 patients with histopathologically diagnosed OC, OKC, or AB. These patients were randomly divided into training (70%) and test (30%) cohorts. Radiomics features were extracted from the images, and the optimal features were incorporated into a random forest model, a support vector classifier (SVC) model, a logistic regression model, and a soft VotingClassifier built on these 3 algorithms. The performance of the models was evaluated using receiver operating characteristic (ROC) curves and the area under the curve (AUC). The best-performing model was then used to establish the final radiomics prediction model, whose performance was evaluated using sensitivity, accuracy, precision, specificity, and F1 score in both the training and test cohorts. The 6 optimal radiomics features were incorporated into a soft VotingClassifier, which showed the best overall performance. The AUC values under the One-vs-Rest (OvR) multi-classification strategy were AB-vs-Rest 0.963, OKC-vs-Rest 0.928, and OC-vs-Rest 0.919 in the training cohort, and AB-vs-Rest 0.814, OKC-vs-Rest 0.781, and OC-vs-Rest 0.849 in the test cohort. The overall accuracy of the model was 0.757 in the training cohort and 0.711 in the test cohort. The VotingClassifier model demonstrated the ability of CBCT radiomics to distinguish multiple disease types (OC, OKC, and AB) in the jaw and may enable accurate, noninvasive diagnosis.
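The soft-voting ensemble described in this abstract can be sketched with scikit-learn. This is a minimal illustration, not the authors' code: the synthetic data stands in for the 6 selected radiomics features and the 3 lesion classes, and hyperparameters are left at defaults.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the 6 selected radiomics features, 3 classes (OC/OKC/AB)
X, y = make_classification(n_samples=300, n_features=6, n_informative=5,
                           n_redundant=0, n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0)),  # soft voting needs probabilities
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")  # averages class probabilities across the 3 base models
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)
auc_ovr = roc_auc_score(y_te, proba, multi_class="ovr")  # one-vs-rest AUC, as in the abstract
```

Soft voting averages the three models' predicted probabilities, which is what allows a class-wise one-vs-rest AUC to be computed on the ensemble output.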

A Preoperative CT-based Multiparameter Deep Learning and Radiomic Model with Extracellular Volume Parameter Images Can Predict the Tumor Budding Grade in Rectal Cancer Patients.

Tang X, Zhuang Z, Jiang L, Zhu H, Wang D, Zhang L

PubMed · Jul 1, 2025
To investigate a computed tomography (CT)-based multiparameter deep learning-radiomic model (DLRM) for predicting the preoperative tumor budding (TB) grade in patients with rectal cancer. Data from 135 patients with histologically confirmed rectal cancer (85 in the Bd1+2 group and 50 in the Bd3 group) were retrospectively included. Deep learning (DL) features and hand-crafted radiomic (HCR) features were separately extracted and selected from preoperative CT-based extracellular volume (ECV) parameter images and venous-phase images. Six predictive signatures were subsequently constructed using machine learning classification algorithms. Finally, a combined DL and HCR model, the DLRM, was established to predict the TB grade of rectal cancer patients by merging the DL and HCR features from the two image sets. In the training and test cohorts, the AUC values of the DLRM were 0.976 [95% CI: 0.942-0.997] and 0.976 [95% CI: 0.942-1.00], respectively. The DLRM had good output agreement and clinical applicability according to calibration curve analysis and decision curve analysis (DCA), respectively. The DLRM outperformed the individual DL and HCR signatures in terms of predicting the TB grade of rectal cancer patients (p < 0.05). The DLRM can be used to evaluate the TB grade of rectal cancer patients in a noninvasive manner before surgery, thereby providing support for clinical treatment decision-making for these patients.
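The core idea of merging DL and HCR features into one model can be sketched as simple feature concatenation followed by a classifier. Everything below is a placeholder: the feature dimensions (32 DL, 20 HCR) and the logistic-regression head are assumptions for illustration, only the cohort sizes (85 Bd1+2, 50 Bd3) come from the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 135                                  # cohort size from the abstract
dl = rng.normal(size=(n, 32))            # placeholder deep-learning features
hcr = rng.normal(size=(n, 20))           # placeholder hand-crafted radiomic features
y = (np.arange(n) < 50).astype(int)      # 50 Bd3 vs 85 Bd1+2 labels

X = np.hstack([dl, hcr])                 # merged feature vector per patient
model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]      # predicted probability of Bd3
```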

Magnetic resonance image generation using enhanced TransUNet in temporomandibular disorder patients.

Ha EG, Jeon KJ, Lee C, Kim DH, Han SS

PubMed · Jul 1, 2025
Temporomandibular disorder (TMD) patients experience a variety of clinical symptoms, and MRI is the most effective tool for diagnosing temporomandibular joint (TMJ) disc displacement. This study aimed to develop a transformer-based deep learning model to generate T2-weighted (T2w) images from proton density-weighted (PDw) images, reducing MRI scan time for TMD patients. A dataset of 7226 images from 178 patients who underwent TMJ MRI examinations was used. The proposed model employed a generative adversarial network framework with a TransUNet architecture as the generator for image translation. Additionally, a disc segmentation decoder was integrated to improve image quality in the TMJ disc region. The model performance was evaluated using metrics such as the structural similarity index measure (SSIM), learned perceptual image patch similarity (LPIPS), and Fréchet inception distance (FID). Three experienced oral radiologists also performed a qualitative assessment through the mean opinion score (MOS). The model demonstrated high performance in generating T2w images from PDw images, achieving average SSIM, LPIPS, and FID values of 82.28%, 2.46, and 23.85, respectively, in the disc region. The model also obtained an average MOS score of 4.58, surpassing other models. Additionally, the model showed robust segmentation capabilities for the TMJ disc. The proposed model, integrating a transformer and a disc segmentation task, demonstrated strong performance in MR image generation, both quantitatively and qualitatively. This suggests its potential clinical significance in reducing MRI scan times for TMD patients while maintaining high image quality.
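The SSIM metric reported above can be illustrated with a minimal single-window implementation in NumPy (practical SSIM averages this statistic over sliding windows; this global variant only conveys the formula). The images and noise level below are synthetic stand-ins, not TMJ data.

```python
import numpy as np

def ssim_global(a, b, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two images scaled to [0, data_range]."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    # Luminance, contrast, and structure terms combined into one ratio
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

rng = np.random.default_rng(0)
reference = rng.random((64, 64))                                  # stand-in T2w slice
generated = np.clip(reference + rng.normal(0, 0.05, (64, 64)), 0, 1)
score = ssim_global(reference, generated)
```

Identical images score exactly 1; the added noise pulls the score below 1, mirroring how generated T2w images are compared against acquired ones.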

Automated Fast Prediction of Bone Mineral Density From Low-dose Computed Tomography.

Zhou K, Xin E, Yang S, Luo X, Zhu Y, Zeng Y, Fu J, Ruan Z, Wang R, Geng D, Yang L

PubMed · Jul 1, 2025
Low-dose chest CT (LDCT) is commonly employed for the early screening of lung cancer. However, it has rarely been utilized in the assessment of volumetric bone mineral density (vBMD) and the diagnosis of osteoporosis (OP). This study investigated the feasibility of using deep learning to establish a system for vBMD prediction and OP classification based on LDCT scans. This study included 551 subjects who underwent both LDCT and QCT examinations. First, a U-Net was developed to automatically segment lumbar vertebrae from single 2D LDCT slices near the mid-vertebral level. Then, a prediction model was proposed to estimate vBMD, which was subsequently employed for detecting OP and osteopenia (OA). Specifically, two input modalities were constructed for the prediction model. The performance metrics of the models were calculated and evaluated. The segmentation model exhibited a strong correlation with manual segmentation, achieving a mean Dice similarity coefficient (DSC) of 0.974, sensitivity of 0.964, positive predictive value (PPV) of 0.985, and Hausdorff distance of 3.261 in the test set. Linear regression and Bland-Altman analysis demonstrated strong agreement between the predicted vBMD from two-channel inputs and QCT-derived vBMD, with a root mean square error of 8.958 mg/mm<sup>3</sup> and an R<sup>2</sup> of 0.944. The areas under the curve for detecting OP and OA were 0.800 and 0.878, respectively, with an overall accuracy of 94.2%. The average processing time for this system was 1.5 s. This prediction system could automatically estimate vBMD and detect OP and OA on LDCT scans, offering great potential for osteoporosis screening.
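The Dice similarity coefficient used to validate the segmentation model is straightforward to compute from binary masks; a minimal sketch with toy masks (not vertebral data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Toy example: ground-truth square vs a prediction shifted by one column
truth = np.zeros((10, 10), dtype=bool); truth[3:7, 3:7] = True
pred = np.zeros((10, 10), dtype=bool);  pred[3:7, 4:8] = True
dsc = dice_coefficient(pred, truth)
```

Here each mask covers 16 pixels and 12 overlap, so the DSC is 24/32 = 0.75; a perfect match yields 1.0, as approached by the 0.974 reported above.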

Deep Learning Estimation of Small Airway Disease from Inspiratory Chest Computed Tomography: Clinical Validation, Repeatability, and Associations with Adverse Clinical Outcomes in Chronic Obstructive Pulmonary Disease.

Chaudhary MFA, Awan HA, Gerard SE, Bodduluri S, Comellas AP, Barjaktarevic IZ, Barr RG, Cooper CB, Galban CJ, Han MK, Curtis JL, Hansel NN, Krishnan JA, Menchaca MG, Martinez FJ, Ohar J, Vargas Buonfiglio LG, Paine R, Bhatt SP, Hoffman EA, Reinhardt JM

PubMed · Jul 1, 2025
<b>Rationale:</b> Quantifying functional small airway disease (fSAD) requires additional expiratory computed tomography (CT) scans, limiting clinical applicability. Artificial intelligence (AI) could enable fSAD quantification from chest CT scans at total lung capacity (TLC) alone (fSAD<sup>TLC</sup>). <b>Objectives:</b> To evaluate an AI model for estimating fSAD<sup>TLC</sup>, compare it with dual-volume parametric response mapping fSAD (fSAD<sup>PRM</sup>), and assess its clinical associations and repeatability in chronic obstructive pulmonary disease (COPD). <b>Methods:</b> We analyzed 2,513 participants from SPIROMICS (the Subpopulations and Intermediate Outcome Measures in COPD Study). Using a randomly sampled subset (<i>n</i> = 1,055), we developed a generative model to produce virtual expiratory CT scans for estimating fSAD<sup>TLC</sup> in the remaining 1,458 SPIROMICS participants. We compared fSAD<sup>TLC</sup> with dual-volume fSAD<sup>PRM</sup>. We investigated univariate and multivariable associations of fSAD<sup>TLC</sup> with FEV<sub>1</sub>, FEV<sub>1</sub>/FVC ratio, 6-minute-walk distance, St. George's Respiratory Questionnaire score, and FEV<sub>1</sub> decline. The results were validated in a subset of patients from the COPDGene (Genetic Epidemiology of COPD) study (<i>n</i> = 458). Multivariable models were adjusted for age, race, sex, body mass index, baseline FEV<sub>1</sub>, smoking pack-years, smoking status, and percent emphysema. <b>Measurements and Main Results:</b> Inspiratory fSAD<sup>TLC</sup> showed a strong correlation with fSAD<sup>PRM</sup> in the SPIROMICS (Pearson's <i>R</i> = 0.895) and COPDGene (<i>R</i> = 0.897) cohorts. Higher fSAD<sup>TLC</sup> levels were significantly associated with lower lung function, including lower postbronchodilator FEV<sub>1</sub> (in liters) and FEV<sub>1</sub>/FVC ratio, and poorer quality of life reflected by higher total St. George's Respiratory Questionnaire scores independent of percent CT emphysema. In SPIROMICS, individuals with higher fSAD<sup>TLC</sup> experienced an annual decline in FEV<sub>1</sub> of 1.156 ml (relative decrease; 95% confidence interval [CI], 0.613-1.699; <i>P</i> < 0.001) per year for every 1% increase in fSAD<sup>TLC</sup>. The rate of decline in the COPDGene cohort was slightly lower at 0.866 ml/yr (relative decrease; 95% CI, 0.345-1.386; <i>P</i> < 0.001) per 1% increase in fSAD<sup>TLC</sup>. Inspiratory fSAD<sup>TLC</sup> demonstrated greater consistency between repeated measurements, with a higher intraclass correlation coefficient of 0.99 (95% CI, 0.98-0.99) compared with fSAD<sup>PRM</sup> (0.83; 95% CI, 0.76-0.88). <b>Conclusions:</b> Small airway disease can be reliably assessed from a single inspiratory CT scan using generative AI, eliminating the need for an additional expiratory CT scan. fSAD estimation from inspiratory CT correlates strongly with fSAD<sup>PRM</sup>, demonstrates a significant association with FEV<sub>1</sub> decline, and offers greater repeatability.
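The repeatability claim above rests on the intraclass correlation coefficient. A minimal NumPy sketch of ICC(2,1) (two-way random effects, absolute agreement, single measures, one common formulation; the paper's exact ICC variant is not stated in the abstract) applied to made-up repeated fSAD measurements:

```python
import numpy as np

def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    data has shape (n_subjects, k_repeated_measurements)."""
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between-subject SS
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between-scan SS
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols # residual SS
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

true = np.linspace(5.0, 40.0, 30)              # made-up fSAD percentages, 30 subjects
repeats = np.column_stack([true, true + 0.2])  # near-identical repeat scans
icc = icc2_1(repeats)
```

Because the two repeats differ only by a tiny systematic offset relative to the between-subject spread, the ICC lands close to 1, the regime reported for fSAD estimates from inspiratory CT.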

Dual-Modality Virtual Biopsy System Integrating MRI and MG for Noninvasive Predicting HER2 Status in Breast Cancer.

Wang Q, Zhang ZQ, Huang CC, Xue HW, Zhang H, Bo F, Guan WT, Zhou W, Bai GJ

PubMed · Jul 1, 2025
Accurate determination of human epidermal growth factor receptor 2 (HER2) expression is critical for guiding targeted therapy in breast cancer. This study aimed to develop and validate a deep learning (DL)-based decision-making visual biomarker system (DM-VBS) for predicting HER2 status using radiomics and DL features derived from magnetic resonance imaging (MRI) and mammography (MG). Radiomics features were extracted from MRI, and DL features were derived from MG. Four submodels were constructed: Model I (MRI-radiomics) and Model III (mammography-DL) for distinguishing HER2-zero/low from HER2-positive cases, and Model II (MRI-radiomics) and Model IV (mammography-DL) for differentiating HER2-zero from HER2-low/positive cases. These submodels were integrated into an XGBoost model for ternary classification of HER2 status. Radiologists assessed imaging features associated with HER2 expression, and model performance was validated using two independent datasets from The Cancer Imaging Archive. A total of 550 patients were divided into training, internal validation, and external validation cohorts. Models I and III achieved an area under the curve (AUC) of 0.800-0.850 for distinguishing HER2-zero/low from HER2-positive cases, while Models II and IV demonstrated AUC values of 0.793-0.847 for differentiating HER2-zero from HER2-low/positive cases. The DM-VBS achieved average accuracies of 85.42%, 80.4%, and 89.68% for HER2-zero, -low, and -positive patients in the validation cohorts, respectively. Imaging features such as lesion size, number of lesions, enhancement type, and microcalcifications significantly differed across HER2 statuses, except between the HER2-zero and -low groups. DM-VBS can predict HER2 status and assist clinicians in making treatment decisions for breast cancer.
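Stacking binary submodels into a ternary classifier, as the DM-VBS does, can be sketched as follows. This is an assumed setup on synthetic data: logistic regressions stand in for the radiomics/DL submodels, and scikit-learn's GradientBoostingClassifier stands in for XGBoost so the sketch needs no extra dependency.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic 3-class target: 0 = HER2-zero, 1 = HER2-low, 2 = HER2-positive
X, y = make_classification(n_samples=550, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Binary submodels mirroring the abstract's two groupings
m_a = LogisticRegression(max_iter=1000).fit(X_tr, (y_tr == 2).astype(int))  # zero/low vs positive
m_b = LogisticRegression(max_iter=1000).fit(X_tr, (y_tr > 0).astype(int))   # zero vs low/positive

def stack(models, X):
    # Meta-features: each submodel's positive-class probability
    return np.column_stack([m.predict_proba(X)[:, 1] for m in models])

meta = GradientBoostingClassifier(random_state=0)  # stand-in for XGBoost
meta.fit(stack([m_a, m_b], X_tr), y_tr)
pred = meta.predict(stack([m_a, m_b], X_te))       # final ternary HER2 prediction
```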

CT Differentiation and Prognostic Modeling in COVID-19 and Influenza A Pneumonia.

Chen X, Long Z, Lei Y, Liang S, Sima Y, Lin R, Ding Y, Lin Q, Ma T, Deng Y

PubMed · Jul 1, 2025
This study aimed to compare CT features of COVID-19 and Influenza A pneumonia, develop a diagnostic differential model, and explore a prognostic model for lesion resolution. A total of 446 patients diagnosed with COVID-19 and 80 with Influenza A pneumonia underwent baseline chest CT evaluation. Logistic regression analysis was conducted after multivariate analysis, and the results were presented as nomograms. Machine learning models were also evaluated for their diagnostic performance. Prognostic factors for lesion resolution were analyzed using Cox regression after excluding patients lost to follow-up, and a nomogram was created. COVID-19 patients more frequently showed features such as thickening of bronchovascular bundles, the crazy paving sign, and traction bronchiectasis, whereas Influenza A patients more frequently exhibited features such as consolidation, coarse banding, and pleural effusion (P < 0.05). The logistic regression model achieved AUC values of 0.937 (training) and 0.931 (validation). Machine learning models exhibited area under the curve values ranging from 0.8486 to 0.9017. COVID-19 patients showed better lesion resolution. Independent prognostic factors for resolution at baseline included age, sex, lesion distribution, morphology, coarse banding, and widening of the main pulmonary artery. Distinct imaging features can differentiate COVID-19 from Influenza A pneumonia. Both the logistic discriminative model and each machine learning model constructed in this study demonstrated efficacy, and the nomogram for the logistic discriminative model exhibited high utility. Patients with COVID-19 may exhibit better resolution of lesions, and certain baseline characteristics may act as independent prognostic factors for complete resolution.
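A nomogram built from a logistic regression is just a graphical rendering of the model's linear predictor mapped through the logistic function. The sketch below uses entirely illustrative coefficients (the published model's coefficients are not given in the abstract) for three of the CT signs mentioned:

```python
import math

# Illustrative, not published, coefficients for three CT signs
coefs = {"crazy_paving": 1.4, "traction_bronchiectasis": 0.9, "pleural_effusion": -1.1}
intercept = -0.3

def covid_probability(signs):
    """Map a patient's binary CT signs through the linear predictor,
    then through the logistic function, the computation a nomogram encodes."""
    logit = intercept + sum(coefs[k] * signs.get(k, 0) for k in coefs)
    return 1 / (1 + math.exp(-logit))

# A patient with crazy paving only: logit = -0.3 + 1.4 = 1.1
p = covid_probability({"crazy_paving": 1})
```

On a printed nomogram, each sign contributes "points" proportional to its coefficient, and the total-points axis is exactly this logit-to-probability mapping.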

The value of machine learning based on spectral CT quantitative parameters in distinguishing benign from malignant thyroid micro-nodules.

Song Z, Liu Q, Huang J, Zhang D, Yu J, Zhou B, Ma J, Zou Y, Chen Y, Tang Z

PubMed · Jul 1, 2025
More cases of thyroid micro-nodules have been diagnosed annually in recent years owing to advancements in diagnostic technologies and increased public health awareness. This study explored the application value of various machine learning (ML) algorithms based on dual-layer spectral computed tomography (DLCT) quantitative parameters in distinguishing benign from malignant thyroid micro-nodules. All 338 thyroid micro-nodules (177 malignant and 161 benign) were randomly divided into a training cohort (n = 237) and a testing cohort (n = 101) at a ratio of 7:3. Four typical radiological features and 19 DLCT quantitative parameters in the arterial and venous phases were measured. Recursive feature elimination was employed for variable selection. Three ML algorithms-support vector machine (SVM), logistic regression (LR), and naive Bayes (NB)-were implemented to construct predictive models. Predictive performance was evaluated via receiver operating characteristic (ROC) curve analysis. A variable set containing 6 key variables selected with the "one standard error" rule was identified in the SVM model, which performed well in the training and testing cohorts (area under the ROC curve (AUC): 0.924 and 0.931, respectively). A variable set containing 2 key variables was identified in the NB model, which performed well in the training and testing cohorts (AUC: 0.882 and 0.899, respectively). A variable set containing 8 key variables was identified in the LR model, which performed well in the training and testing cohorts (AUC: 0.924 and 0.925, respectively). In addition, nine ML models were developed with varying variable sets (2, 6, or 8 variables), all of which consistently achieved AUC values above 0.85 in the training, cross-validation (CV)-training, CV-validation, and testing cohorts. Artificial intelligence-based DLCT quantitative parameters are promising for distinguishing benign from malignant thyroid micro-nodules.
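Recursive feature elimination with a linear SVM, the variable-selection step named above, looks like this in scikit-learn. The data are synthetic stand-ins for the 23 candidate variables (4 radiological features plus 19 DLCT parameters), and the target of 6 retained variables mirrors the SVM model's reported set:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in for the 23 candidate variables across 338 nodules
X, y = make_classification(n_samples=338, n_features=23, n_informative=6,
                           random_state=0)

# RFE repeatedly fits the model and drops the weakest-coefficient feature
selector = RFE(SVC(kernel="linear"), n_features_to_select=6).fit(X, y)
kept = selector.support_   # boolean mask of the 6 surviving variables
```

A linear kernel is required here because RFE ranks features by the fitted coefficient magnitudes (`coef_`), which a nonlinear SVM does not expose.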

Denoising Diffusion Probabilistic Model to Simulate Contrast-enhanced spinal MRI of Spinal Tumors: A Multi-Center Study.

Wang C, Zhang S, Xu J, Wang H, Wang Q, Zhu Y, Xing X, Hao D, Lang N

PubMed · Jul 1, 2025
To generate virtual T1 contrast-enhanced (T1CE) sequences from plain spinal MRI sequences using the denoising diffusion probabilistic model (DDPM) and to compare its performance against one baseline model (pix2pix) and three advanced models. A total of 1195 consecutive spinal tumor patients who underwent contrast-enhanced MRI at two hospitals were divided into a training set (n = 809, 49 ± 17 years, 437 men), an internal test set (n = 203, 50 ± 16 years, 105 men), and an external test set (n = 183, 52 ± 16 years, 94 men). Input sequences were T1-weighted, T2-weighted, and T2 fat-saturation images; the output was T1CE images. In the test set, one radiologist read the virtual images and marked all visible enhancing lesions. Results were evaluated using sensitivity (SE) and false discovery rate (FDR). We compared differences in lesion size and enhancement degree between reference and virtual images, and calculated the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) for image quality assessment. In the external test set, the mean squared error was 0.0038±0.0065 and the structural similarity index was 0.78±0.10. Upon evaluation by the reader, the overall SE of the generated T1CE images was 94%, with an FDR of 2%. There was no difference in lesion size or signal intensity ratio between the reference and generated images. The CNR was higher in the generated images than in the reference images (9.241 vs. 4.021; P<0.001). The proposed DDPM demonstrates potential as an alternative to gadolinium contrast in spinal MRI examinations of oncologic patients.
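The SNR and CNR metrics used above reduce to simple ROI statistics. A minimal sketch, with made-up ROI values in place of real lesion, background, and noise regions (exact ROI placement conventions vary between studies):

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal over noise standard deviation."""
    return signal_roi.mean() / noise_roi.std()

def cnr(lesion_roi, background_roi, noise_roi):
    """Contrast-to-noise ratio: ROI mean difference over noise standard deviation."""
    return abs(lesion_roi.mean() - background_roi.mean()) / noise_roi.std()

lesion = np.array([10.0, 10.0, 10.0, 10.0])   # enhancing-lesion ROI intensities
background = np.array([6.0, 6.0, 6.0, 6.0])   # adjacent background ROI
noise = np.array([0.0, 1.0, 0.0, 1.0])        # noise ROI, std = 0.5
value = cnr(lesion, background, noise)        # (10 - 6) / 0.5 = 8.0
```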

An AI-based tool for prosthetic crown segmentation serving automated intraoral scan-to-CBCT registration in challenging high artifact scenarios.

Elgarba BM, Ali S, Fontenele RC, Meeus J, Jacobs R

PubMed · Jul 1, 2025
Accurately registering intraoral and cone beam computed tomography (CBCT) scans in patients with metal artifacts poses a significant challenge. Whether a cloud-based platform trained for artificial intelligence (AI)-driven segmentation can improve registration is unclear. The purpose of this clinical study was to validate a cloud-based platform trained for the AI-driven segmentation of prosthetic crowns on CBCT scans and subsequent multimodal intraoral scan-to-CBCT registration in the presence of high metal artifact expression. A dataset consisting of 30 time-matched maxillary and mandibular CBCT and intraoral scans, each containing at least 4 prosthetic crowns, was collected. CBCT acquisition involved placing cotton rolls between the cheeks and teeth to facilitate soft tissue delineation. Segmentation and registration were compared using either a semi-automated (SA) method or an AI-automated (AA) method. SA served as the clinical reference, in which prosthetic crowns and their radicular parts (natural roots or implants) were segmented by thresholding with point surface-based registration. The AA method included fully automated segmentation and registration based on AI algorithms. Quantitative assessment compared AA's median surface deviation (MSD) and root mean square (RMS) in crown segmentation and subsequent intraoral scan-to-CBCT registration with those of SA. Additionally, segmented crown STL files were voxel-wise analyzed for comparison between AA and SA. A qualitative assessment of AA-based crown segmentation evaluated the need for refinement, while the AA-based registration assessment scrutinized the alignment of the registered intraoral scan with the CBCT teeth and soft tissue contours. Ultimately, the study compared the time efficiency and consistency of both methods. Quantitative outcomes were analyzed with the Kruskal-Wallis, Mann-Whitney, and Student t tests, and qualitative outcomes with the Wilcoxon test (all α=.05). Consistency was evaluated by using the intraclass correlation coefficient (ICC). Quantitatively, the AA method excelled, with a Dice similarity coefficient of 0.91 for crown segmentation and an MSD of 0.03 ±0.05 mm for intraoral scan-to-CBCT registration. Additionally, AA achieved 91% clinically acceptable matches of teeth and gingiva on CBCT scans, surpassing the SA method's 80%. Furthermore, AA was significantly faster than SA (P<.05), being 200 times faster in segmentation and 4.5 times faster in registration. Both AA and SA exhibited excellent consistency in segmentation and registration, with ICC values of 0.99 and 1 for AA and 0.99 and 0.96 for SA, respectively. The novel cloud-based platform demonstrated accurate, consistent, and time-efficient prosthetic crown segmentation, as well as intraoral scan-to-CBCT registration in scenarios with high artifact expression.
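The MSD and RMS registration metrics above can be computed from nearest-neighbour distances between a registered surface and its reference. A minimal sketch on toy point clouds (real use would operate on the mesh vertices of the STL surfaces):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(moving, fixed):
    """Median surface deviation (MSD) and RMS of nearest-neighbour distances
    from each point of a registered surface to the reference surface."""
    d, _ = cKDTree(fixed).query(moving)   # distance to nearest reference point
    return np.median(d), np.sqrt((d ** 2).mean())

# Toy surfaces: a reference point set and the same set with 0.05 mm residual shift
fixed = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
moving = fixed + np.array([0.05, 0.0, 0.0])
msd, rms = surface_deviation(moving, fixed)
```

A uniform 0.05 mm residual misalignment yields MSD = RMS = 0.05 mm, the scale on which the 0.03 ±0.05 mm registration result above sits.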
