Page 78 of 142 · 1416 results

AI-based CT assessment of 3117 vertebrae reveals significant sex-specific vertebral height differences.

Palm V, Thangamani S, Budai BK, Skornitzke S, Eckl K, Tong E, Sedaghat S, Heußel CP, von Stackelberg O, Engelhardt S, Kopytova T, Norajitra T, Maier-Hein KH, Kauczor HU, Wielpütz MO

PubMed · Jul 1, 2025
Predicting vertebral height is complex due to individual factors. AI-based medical imaging analysis offers new opportunities for vertebral assessment, and these novel methods may contribute to sex-adapted nomograms and vertebral height prediction models, aiding the diagnosis of spinal conditions such as compression fractures and supporting individualized, sex-specific medicine. In this study, an AI-based CT imaging spine analysis of 262 subjects (mean age 32.36 years, range 20-54 years) was conducted, covering a total of 3117 vertebrae, to assess sex-associated anatomical variations. Automated segmentations provided anterior, central, and posterior vertebral heights. Regression analysis with a cubic spline linear mixed-effects model was adjusted for age, sex, and spinal segment. Measurement reliability was confirmed by two readers, with an intraclass correlation coefficient (ICC) of 0.94-0.98. Female vertebral heights were consistently smaller than male heights (p < 0.05). The largest differences were found in the upper thoracic spine (T1-T6), with mean differences of 7.9-9.0%; specifically, T1 and T2 showed differences of 8.6% and 9.0%, respectively. The strongest height increase between consecutive vertebrae was observed from T9 to L1 (mean slope of 1.46; 6.63% for females and 1.53; 6.48% for males). This study highlights significant sex-based differences in vertebral heights, yielding sex-adapted nomograms that can enhance diagnostic accuracy and support individualized patient assessments.
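
The inter-reader reliability reported above (ICC 0.94-0.98) can be reproduced with a standard two-way ICC formula. Below is a minimal numpy sketch of ICC(3,1) (two-way mixed effects, consistency, single measurement); the reader measurements are hypothetical, not data from the study:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    ratings: (n_subjects, n_raters) array, e.g. one vertebral height per
    subject measured independently by each reader.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-reader means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical anterior-height measurements (mm) from two readers.
reader1 = np.array([18.2, 19.5, 20.1, 21.4, 22.0, 23.3])
reader2 = np.array([18.0, 19.7, 20.0, 21.6, 21.9, 23.5])
print(round(icc_3_1(np.column_stack([reader1, reader2])), 3))
```

With between-subject variability much larger than the reader disagreement, the ICC lands close to 1, as in the abstract's 0.94-0.98 range.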

Determination of oral carcinoma and sarcoma in contrast-enhanced CT images using deep convolutional neural networks.

Warin K, Limprasert W, Paipongna T, Chaowchuen S, Vicharueang S

PubMed · Jul 1, 2025
Oral cancer is a hazardous disease and a major cause of morbidity and mortality worldwide. The purpose of this study was to develop deep convolutional neural network (CNN)-based multiclass classification and object detection models for distinguishing and detecting oral carcinoma and sarcoma in contrast-enhanced CT images. This study included 3,259 slices of CT images of oral cancer cases from the cancer hospital and two regional hospitals from 2016 to 2020. Multiclass classification models were constructed using DenseNet-169, ResNet-50, EfficientNet-B0, ConvNeXt-Base, and ViT-Base-Patch16-224 to differentiate between oral carcinoma and sarcoma. Additionally, multiclass object detection models, including Faster R-CNN, YOLOv8, and YOLOv11, were designed to autonomously identify and localize lesions by placing bounding boxes on CT images. Performance evaluation on a test dataset showed that the best classification model achieved an accuracy of 0.97, while the best detection models yielded a mean average precision (mAP) of 0.87. In conclusion, CNN-based multiclass models hold great promise for accurately identifying and distinguishing oral carcinoma and sarcoma in CT imaging, potentially enhancing early detection and informing treatment strategies.
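
Classification accuracy like the 0.97 reported here is typically computed from a confusion matrix, from which per-class precision and recall also follow. A minimal numpy sketch, assuming integer class labels (the label encoding 0 = carcinoma, 1 = sarcoma and the toy predictions are illustrative):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] = number of samples with true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def summarize(cm):
    accuracy = np.trace(cm) / cm.sum()
    precision = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)  # per predicted class
    recall = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)     # per true class
    return accuracy, precision, recall

# Hypothetical test-set labels: 0 = carcinoma, 1 = sarcoma.
y_true = [0, 0, 0, 1, 1, 1, 1, 0]
y_pred = [0, 0, 1, 1, 1, 1, 0, 0]
acc, prec, rec = summarize(confusion_matrix(y_true, y_pred, 2))
print(acc, prec, rec)  # 0.75, [0.75 0.75], [0.75 0.75]
```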

Muscle-driven prognostication in gastric cancer: a multicenter deep learning framework integrating iliopsoas and erector spinae radiomics for 5-year survival prediction.

Hong Y, Zhang P, Teng Z, Cheng K, Zhang Z, Cheng Y, Cao G, Chen B

PubMed · Jul 1, 2025
This study developed a 5-year survival prediction model for gastric cancer patients by combining radiomics and deep learning, focusing on CT-based 2D and 3D features of the iliopsoas and erector spinae muscles. Retrospective data from 705 patients across two centers were analyzed, with clinical variables assessed via Cox regression and radiomic features extracted using deep learning. The 2D model outperformed the 3D approach, leading to feature fusion across five dimensions, optimized via logistic regression. Results showed no significant association between clinical baseline characteristics and survival, but the 2D model demonstrated strong prognostic performance (AUC ~ 0.8), with attention heatmaps emphasizing spinal muscle regions. The 3D model underperformed due to irrelevant data. The final integrated model achieved stable predictive accuracy, confirming the link between muscle mass and survival. This approach advances precision medicine by enabling personalized prognosis and exploring 3D imaging feasibility, offering insights for gastric cancer research.
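
The fusion step described above (deep-learning radiomic features from different views combined with clinical variables and optimized via logistic regression) can be sketched as a simple late-fusion pipeline. A minimal numpy version with gradient-descent logistic regression; the feature block names, dimensions, and synthetic data are all illustrative, not from the paper:

```python
import numpy as np

def zscore(x):
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def fit_logistic(x, y, lr=0.5, n_iter=2000):
    """Plain gradient-descent logistic regression (bias folded into weights)."""
    xb = np.hstack([x, np.ones((len(x), 1))])
    w = np.zeros(xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-xb @ w))
        w -= lr * xb.T @ (p - y) / len(y)
    return w

def predict(x, w):
    xb = np.hstack([x, np.ones((len(x), 1))])
    return (1.0 / (1.0 + np.exp(-xb @ w)) >= 0.5).astype(int)

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)          # 1 = survived 5 years (synthetic)
# Hypothetical feature blocks: 2D muscle radiomics, 3D radiomics, clinical.
feats_2d = rng.normal(0, 1, (n, 4)) + y[:, None] * 1.5  # informative block
feats_3d = rng.normal(0, 1, (n, 4))                     # uninformative block
clinical = rng.normal(0, 1, (n, 2)) + y[:, None] * 0.5
fused = np.hstack([zscore(feats_2d), zscore(feats_3d), zscore(clinical)])
w = fit_logistic(fused, y)
print((predict(fused, w) == y).mean())
```

Because the "3D" block carries no signal here, the fitted weights concentrate on the 2D and clinical columns, mirroring the paper's finding that the 2D features dominate.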

Radiomics and machine learning for osteoporosis detection using abdominal computed tomography: a retrospective multicenter study.

Liu Z, Li Y, Zhang C, Xu H, Zhao J, Huang C, Chen X, Ren Q

PubMed · Jul 1, 2025
This study aimed to develop and validate a predictive model to detect osteoporosis using radiomic features and machine learning (ML) approaches from lumbar spine computed tomography (CT) images during an abdominal CT examination. A total of 509 patients who underwent both quantitative CT (QCT) and abdominal CT examinations (training group, n = 279; internal validation group, n = 120; external validation group, n = 110) were analyzed in this retrospective study from two centers. Radiomic features were extracted from the lumbar spine CT images. Seven radiomic-based ML models, including logistic regression (LR), Bernoulli, Gaussian NB, SGD, decision tree, support vector machine (SVM), and K-nearest neighbor (KNN) models, were constructed. The performance of the models was assessed using the area under the curve (AUC) of receiver operating characteristic (ROC) curve analysis and decision curve analysis (DCA). The radiomic model based on LR in the internal validation group and external validation group had excellent performance, with an AUC of 0.960 and 0.786 for differentiating osteoporosis from normal BMD and osteopenia, respectively. The radiomic model based on LR in the internal validation group and Gaussian NB model in the external validation group yielded the highest performance, with an AUC of 0.905 and 0.839 for discriminating normal BMD from osteopenia and osteoporosis, respectively. DCA in the internal validation group revealed that the LR model had greater net benefit than the other models in differentiating osteoporosis from normal BMD and osteopenia. Radiomic-based ML approaches may be used to predict osteoporosis from abdominal CT images and as a tool for opportunistic osteoporosis screening.
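
The AUCs used above to compare the radiomic models can be computed directly from predicted scores via the Mann–Whitney interpretation of ROC AUC: the probability that a randomly chosen positive case is scored above a randomly chosen negative case. A minimal sketch (labels and scores are illustrative):

```python
def roc_auc(y_true, scores):
    """ROC AUC as the normalized Mann-Whitney U statistic; ties count 0.5."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores from an osteoporosis classifier (1 = osteoporosis).
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(labels, scores))  # 0.75
```

The quadratic pairwise loop is fine for small validation sets; for large cohorts a rank-based implementation is equivalent and faster.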

Development and validation of CT-based fusion model for preoperative prediction of invasion and lymph node metastasis in adenocarcinoma of esophagogastric junction.

Cao M, Xu R, You Y, Huang C, Tong Y, Zhang R, Zhang Y, Yu P, Wang Y, Chen W, Cheng X, Zhang L

PubMed · Jul 1, 2025
In the context of precision medicine, radiomics has become a key technology for solving medical problems. For adenocarcinoma of the esophagogastric junction (AEG), developing a preoperative CT-based prediction model for AEG invasion and lymph node metastasis is crucial. We retrospectively collected 256 patients with AEG from two centres. Radiomics features were extracted from the preoperative diagnostic CT images, and feature selection and machine learning methods were applied to reduce the feature dimensionality and establish the predictive imaging features. By comparing three machine learning methods, the best radiomics nomogram was selected, and the average AUC was obtained by 20 repeats of fivefold cross-validation for comparison. The fusion model was constructed by logistic regression combined with clinical factors. On this basis, the ROC curve, calibration curve, and decision curve of the fusion model were constructed. The predictive efficacy of the fusion model for tumour invasion depth was higher than that of the radiomics nomogram, with AUCs of 0.764 vs. 0.706 in the test set, 0.752 vs. 0.697 in the internal validation set, and 0.756 vs. 0.687 in the external validation set (all P < 0.001). The predictive efficacy of the lymph node metastasis fusion model was likewise higher than that of the radiomics nomogram, with AUCs of 0.809 vs. 0.732 in the test set, 0.841 vs. 0.718 in the internal validation set, and 0.801 vs. 0.680 in the external validation set (all P < 0.001). We have developed a fusion model combining radiomics and clinical risk factors that is crucial for accurate preoperative diagnosis and treatment of AEG, advancing precision medicine. It may also spark discussion of the imaging-feature differences between AEG and gastric cancer (GC).
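
The decision curves used to compare the fusion model and the radiomics nomogram plot net benefit against threshold probability, where net benefit = TP/n − (FP/n)·(pt/(1 − pt)). A minimal sketch with illustrative counts (not values from the study):

```python
def net_benefit(tp, fp, n, pt):
    """Decision-curve net benefit at threshold probability pt."""
    return tp / n - (fp / n) * (pt / (1 - pt))

def net_benefit_treat_all(prevalence, pt):
    """Reference 'treat everyone' line on the same decision curve."""
    return prevalence - (1 - prevalence) * (pt / (1 - pt))

# Hypothetical confusion counts at a 20% threshold probability:
# 0.30 - 0.10 * (0.2/0.8) = 0.275
print(round(net_benefit(tp=30, fp=10, n=100, pt=0.20), 3))
print(round(net_benefit_treat_all(prevalence=0.35, pt=0.20), 3))
```

A model is clinically useful over the thresholds where its curve sits above both the treat-all line and zero (treat none).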

Automated 3D segmentation of the hyoid bone in CBCT using nnU-Net v2: a retrospective study on model performance and potential clinical utility.

Gümüssoy I, Haylaz E, Duman SB, Kalabalik F, Say S, Celik O, Bayrakdar IS

PubMed · Jul 1, 2025
This study aimed to identify the hyoid bone (HB) on cone beam computed tomography (CBCT) images using an nnU-Net-based artificial intelligence (AI) model and to assess the model's success in automatic segmentation. CBCT images of 190 patients were randomly selected. The raw data were converted to DICOM format and transferred to the 3D Slicer imaging software (version 4.10.2; MIT, Cambridge, MA, USA), in which the HB was labeled manually. The dataset was divided into training, validation, and test sets in a ratio of 8:1:1. The nnU-Net v2 architecture was used to process the training and test datasets, generating the algorithm weight factors. To assess the model's accuracy and performance, a confusion matrix was employed, and the F1-score, Dice coefficient (DC), 95% Hausdorff distance (95% HD), and Intersection over Union (IoU) metrics were calculated. The model's performance metrics were as follows: DC = 0.9434, IoU = 0.8941, F1-score = 0.9446, and 95% HD = 1.9998. The receiver operating characteristic (ROC) curve was generated, yielding an AUC value of 0.98. The results indicated that the nnU-Net v2 model achieved high precision and accuracy in HB segmentation on CBCT images. Automatic segmentation of the HB can enhance clinicians' decision-making speed and accuracy in diagnosing and treating various clinical conditions.
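
The overlap metrics reported for the hyoid segmentations are simple voxel-set computations, and Dice and IoU are linked by IoU = DC / (2 − DC). A minimal numpy sketch on boolean masks (the toy masks are illustrative):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice coefficient and IoU for boolean segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dice, iou

# Toy 2D "slices": prediction shifted one column right of ground truth.
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 0:2] = True      # 4 voxels
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True  # 4 voxels, 2 overlap
dice, iou = dice_iou(pred, gt)
print(dice, iou)  # 0.5 0.3333...
```

The same functions apply unchanged to 3D CBCT volumes, since numpy sums over all axes; only the boundary-distance metric (95% HD) needs a separate surface-distance computation.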

Computed tomography-based radiomics predicts prognostic and treatment-related levels of immune infiltration in the immune microenvironment of clear cell renal cell carcinoma.

Song S, Ge W, Qi X, Che X, Wang Q, Wu G

PubMed · Jul 1, 2025
The composition of the tumour microenvironment is highly complex, and measuring the extent of immune cell infiltration can provide important guidance for clinically significant cancer treatments such as immune checkpoint inhibition and targeted therapy. We used multiple machine learning (ML) models to predict differences in immune infiltration in clear cell renal cell carcinoma (ccRCC), with computed tomography (CT) images serving as the model input. We also statistically analysed and compared the results of multiple typing models, seeking a non-invasive and convenient prediction method for ccRCC patients. The study included 539 ccRCC samples with clinicopathological and associated genetic information from The Cancer Genome Atlas (TCGA) database. The Single Sample Gene Set Enrichment Analysis (ssGSEA) algorithm was used to obtain immune cell infiltration levels and cluster analysis results, and the Boruta algorithm was then used to reduce the dimensionality of the resulting positive/negative gene sets and derive immune infiltration level groupings. Multifactor Cox regression analysis was used to calculate the immunotherapy response of subgroups according to the Tumor Immune Dysfunction and Exclusion (TIDE) algorithm and the subgraph algorithm, detecting differences in survival time and immunotherapy response among ccRCC patients by immune infiltration. Radiomics features were screened using LASSO analysis. Eight ML algorithms were selected for diagnostic analysis of the test set. Receiver operating characteristic (ROC) curves were used to evaluate model performance, and decision curve analysis (DCA) was performed to evaluate the clinical personalized-medicine value of the predictive model.
The high/low immune infiltration subtypes obtained by Boruta-based optimisation differed significantly in the survival analysis of ccRCC patients. Multifactorial immune infiltration level combined with clinical factors better predicted survival of ccRCC patients, and ccRCC with high immune infiltration may benefit more from anti-PD-1 therapy. Among the eight machine learning models, ExtraTrees had the highest training and test set ROC AUCs of 1.000 and 0.753; in the test set, LR and LightGBM had the highest sensitivity of 0.615; LR, SVM, ExtraTrees, LightGBM and MLP had higher specificities of 0.789, 1.000, 0.842, 0.789 and 0.789, respectively; and LR, ExtraTrees and LightGBM had the highest accuracies of 0.719, 0.688 and 0.719, respectively. CT-based ML therefore achieved good results in predicting immune infiltration in ccRCC, with the ExtraTrees algorithm performing best. A radiomics model based on renal CT images can thus non-invasively predict the immune infiltration level of ccRCC, be combined with clinical information to build nomograms predicting overall survival, and predict responsiveness to ICI therapy. These findings may be useful for stratifying the prognosis of ccRCC patients and guiding clinicians in developing individualized treatment regimens.
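
The ssGSEA step above scores each sample by how concentrated a gene set's members are near the top of that sample's expression ranking. A heavily simplified, unweighted running-sum sketch (real ssGSEA weights the increments by expression rank; the gene names and set are illustrative):

```python
def enrichment_score(ranked_genes, gene_set):
    """Unweighted KS-like enrichment score: max deviation of the running sum.

    ranked_genes: genes sorted from most to least expressed in one sample.
    Hits step the sum up by 1/n_hit; misses step it down by 1/n_miss.
    """
    gene_set = set(gene_set)
    n_hit = sum(g in gene_set for g in ranked_genes)
    n_miss = len(ranked_genes) - n_hit
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += 1.0 / n_hit if g in gene_set else -1.0 / n_miss
        if abs(running) > abs(best):
            best = running
    return best

# Hypothetical immune gene set sitting at the top of a sample's ranking.
ranked = ["CD8A", "GZMB", "G1", "G2", "G3", "G4", "G5", "G6"]
print(enrichment_score(ranked, {"CD8A", "GZMB"}))  # 1.0 (maximally enriched)
```

A positive score means the set is enriched at the top of the ranking (high infiltration); a set clustered at the bottom yields a negative score.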

Enhanced pulmonary nodule detection with U-Net, YOLOv8, and Swin Transformer.

Wang X, Wu H, Wang L, Chen J, Li Y, He X, Chen T, Wang M, Guo L

PubMed · Jul 1, 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, emphasizing the critical need for early pulmonary nodule detection to improve patient outcomes. Current methods struggle to detect small nodules and exhibit high false-positive rates, placing an additional diagnostic burden on radiologists. This study aimed to develop a two-stage deep learning model integrating U-Net, YOLOv8s, and the Swin Transformer to enhance pulmonary nodule detection in computed tomography (CT) images, particularly for small nodules, with the goal of improving detection accuracy and reducing false positives. We utilized the LUNA16 dataset (888 CT scans) and an additional 308 CT scans from Tianjin Chest Hospital; images were preprocessed for consistency. The proposed model first employs U-Net for precise lung segmentation, followed by YOLOv8s augmented with the Swin Transformer for nodule detection. The Shape-aware IoU (SIoU) loss function was implemented to improve bounding-box predictions. On the LUNA16 dataset, the model achieved a precision of 0.898, a recall of 0.851, and a mean average precision at 50% IoU (mAP50) of 0.879, outperforming state-of-the-art models. On the Tianjin Chest Hospital dataset, it achieved a precision of 0.855, a recall of 0.872, and an mAP50 of 0.862. This study presents a two-stage deep learning model that leverages U-Net, YOLOv8s, and the Swin Transformer for enhanced pulmonary nodule detection in CT images. The model demonstrates high accuracy and a reduced false-positive rate, suggesting its potential as a useful tool for early lung cancer diagnosis and treatment.
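
The precision, recall, and mAP50 figures above all rest on IoU matching between predicted and ground-truth boxes: a prediction counts as a true positive when it overlaps an as-yet-unmatched ground-truth box with IoU ≥ 0.5. A minimal greedy-matching sketch (boxes as [x1, y1, x2, y2]; confidence ordering and the precision-recall-curve averaging behind mAP are omitted, and the toy boxes are illustrative):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as [x1, y1, x2, y2]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predictions to ground-truth boxes."""
    matched = set()
    tp = 0
    for p in preds:
        best_iou, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j in matched:
                continue
            iou = box_iou(p, g)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is not None and best_iou >= thr:
            matched.add(best_j)
            tp += 1
    return tp / len(preds), tp / len(gts)  # precision, recall

gts = [[0, 0, 10, 10], [30, 30, 40, 40]]    # two true nodules
preds = [[1, 1, 11, 11], [60, 60, 70, 70]]  # one good hit, one false alarm
print(precision_recall(preds, gts))  # (0.5, 0.5)
```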

Evaluation of a fusion model combining deep learning models based on enhanced CT images with radiological and clinical features in distinguishing lipid-poor adrenal adenoma from metastatic lesions.

Wang SC, Yin SN, Wang ZY, Ding N, Ji YD, Jin L

PubMed · Jul 1, 2025
To evaluate the diagnostic performance of a machine learning model combining deep learning models based on enhanced CT images with radiological and clinical features in differentiating lipid-poor adrenal adenomas from metastatic tumors, and to explain the model's predictions through SHAP (Shapley Additive Explanations) analysis. A retrospective analysis was conducted on abdominal contrast-enhanced CT images and clinical data from 416 pathologically confirmed adrenal tumor patients at our hospital from July 2019 to December 2024. Patients were randomly divided into training and testing sets in a 7:3 ratio. Six convolutional neural network (CNN)-based deep learning models were employed, and the model with the highest diagnostic performance was selected based on the area under the ROC curve (AUC). Subsequently, multiple machine learning models incorporating clinical and radiological features were developed and evaluated using various indicators and the AUC. The best-performing machine learning model was further analyzed using SHAP plots to enhance interpretability and quantify feature contributions. All six deep learning models demonstrated excellent diagnostic performance, with AUC values exceeding 0.8, among which ResNet50 achieved the highest AUC. Among the 10 machine learning models incorporating clinical and imaging features, the extreme gradient boosting (XGBoost) model exhibited the best accuracy (ACC), sensitivity, and AUC, indicating superior diagnostic performance. SHAP analysis revealed contributions from ResNet50, RPW, age, and other key features to model predictions. Machine learning models based on contrast-enhanced CT combined with clinical and imaging features exhibit outstanding diagnostic performance in differentiating lipid-poor adrenal adenomas from metastases.
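
Exact SHAP attributions of the kind described here require the `shap` library, but a lighter, model-agnostic way to gauge how much each input (e.g. a deep-learning score, RPW, age) drives a fitted model is permutation importance: shuffle one feature column and measure the drop in score. A minimal numpy sketch with a stand-in rule-based "model" (everything here is illustrative, not the study's XGBoost pipeline):

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean accuracy drop when each feature column is independently shuffled."""
    rng = np.random.default_rng(seed)
    base = (model(X) == y).mean()
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops[j] += base - (model(Xp) == y).mean()
    return drops / n_repeats

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))  # col 0 plays the "deep-learning score" role
y = (X[:, 0] > 0).astype(int)  # label depends only on feature 0
model = lambda X: (X[:, 0] > 0).astype(int)  # stand-in "fitted" model
imp = permutation_importance(model, X, y)
print(imp)  # feature 0 dominates; features 1 and 2 contribute nothing
```

Unlike SHAP, this yields one global score per feature rather than per-patient attributions, but it identifies the same dominant features in the common case.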
