Page 36 of 161 (1610 results)

Evaluation of a fusion model combining deep learning models based on enhanced CT images with radiological and clinical features in distinguishing lipid-poor adrenal adenoma from metastatic lesions.

Wang SC, Yin SN, Wang ZY, Ding N, Ji YD, Jin L

PubMed · Jul 1, 2025
To evaluate the diagnostic performance of a machine learning model that combines deep learning models based on contrast-enhanced CT images with radiological and clinical features in differentiating lipid-poor adrenal adenomas from metastatic tumors, and to explain the model's predictions through SHAP (Shapley Additive Explanations) analysis. A retrospective analysis was conducted on abdominal contrast-enhanced CT images and clinical data from 416 patients with pathologically confirmed adrenal tumors treated at our hospital from July 2019 to December 2024. Patients were randomly divided into training and testing sets at a 7:3 ratio. Six convolutional neural network (CNN)-based deep learning models were trained, and the model with the highest diagnostic performance was selected based on the area under the receiver operating characteristic curve (AUC). Subsequently, multiple machine learning models incorporating clinical and radiological features were developed and evaluated using various metrics, including AUC. The best-performing machine learning model was further analyzed using SHAP plots to enhance interpretability and quantify feature contributions. All six deep learning models demonstrated excellent diagnostic performance, with AUC values exceeding 0.8, among which ResNet50 achieved the highest AUC. Among the 10 machine learning models incorporating clinical and imaging features, the extreme gradient boosting (XGBoost) model exhibited the best accuracy (ACC), sensitivity, and AUC, indicating superior diagnostic performance. SHAP analysis revealed contributions from ResNet50, RPW, age, and other key features to the model's predictions. Machine learning models based on contrast-enhanced CT combined with clinical and imaging features exhibit outstanding diagnostic performance in differentiating lipid-poor adrenal adenomas from metastases.
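Model selection in this and the following abstracts hinges on the AUC. As a minimal illustration (not the authors' code), AUC can be computed directly from its rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted as half.

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U formulation: the probability that a
    random positive outscores a random negative (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form is O(P·N) but makes the statistic transparent; production code would sort once and use ranks.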

Preoperative MRI-based deep learning reconstruction and classification model for assessing rectal cancer.

Yuan Y, Ren S, Lu H, Chen F, Xiang L, Chamberlain R, Shao C, Lu J, Shen F, Chen L

PubMed · Jul 1, 2025
To determine whether deep learning reconstruction (DLR) could improve the image quality of rectal MR images, and to explore the discrimination of the TN stage of rectal cancer by different readers and by deep learning classification models, compared with conventional MR images without DLR. High-resolution T2-weighted images, diffusion-weighted imaging (DWI), and contrast-enhanced T1-weighted imaging (CE-T1WI) from patients with pathologically diagnosed rectal cancer were retrospectively processed with and without DLR and assessed by five readers. The first two readers measured the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the lesions. Overall image quality and lesion display performance for each sequence with and without DLR were independently scored on a five-point scale, and the TN stage of rectal cancer lesions was evaluated by the other three readers. Fifty patients were randomly selected for a further comparison between DLR and a traditional denoising filter. Deep learning classification models were developed and compared for TN staging. Receiver operating characteristic (ROC) curve analysis and decision curve analysis (DCA) were used to evaluate the diagnostic performance of the proposed model. Overall, 178 patients were evaluated. The SNR and CNR of lesions on images with DLR were significantly higher than those without DLR for T2WI, DWI, and CE-T1WI (all p < 0.0001). A significant difference was observed in overall image quality and lesion display performance between images with and without DLR (p < 0.0001). The image quality scores, SNR, and CNR values of the DLR image set were significantly higher than those of the original and filter-enhanced image sets (all p < 0.05) for all three sequences.
The deep learning classification models with DLR achieved good discrimination of the TN stage, with areas under the curve (AUC) of 0.937 (95% CI 0.839-0.977) and 0.824 (95% CI 0.684-0.913) in the test sets, respectively. Deep learning reconstruction and classification models can improve the image quality of rectal MR images and enhance diagnostic performance for determining the TN stage of patients with rectal cancer.
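The SNR and CNR comparisons above depend on how the ROIs are defined, which the abstract does not spell out. A common convention — and this is only a hedged sketch, not the authors' measurement protocol — takes SNR as mean lesion signal over the standard deviation of a background-noise ROI, and CNR as the absolute lesion/reference contrast over that same noise SD:

```python
from statistics import mean, stdev

def snr(lesion_roi, noise_roi):
    # SNR: mean lesion signal divided by background-noise SD
    return mean(lesion_roi) / stdev(noise_roi)

def cnr(lesion_roi, reference_roi, noise_roi):
    # CNR: absolute contrast between lesion and reference tissue,
    # normalised by the same background-noise SD
    return abs(mean(lesion_roi) - mean(reference_roi)) / stdev(noise_roi)
```

Under this convention, a reconstruction that suppresses noise (smaller `stdev(noise_roi)`) raises both metrics even when mean signal is unchanged — which is exactly the effect DLR is reported to have.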

Enhanced pulmonary nodule detection with U-Net, YOLOv8, and Swin Transformer.

Wang X, Wu H, Wang L, Chen J, Li Y, He X, Chen T, Wang M, Guo L

PubMed · Jul 1, 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, emphasizing the critical need for early pulmonary nodule detection to improve patient outcomes. Current methods encounter challenges in detecting small nodules and exhibit high false positive rates, placing an additional diagnostic burden on radiologists. This study aimed to develop a two-stage deep learning model integrating U-Net, YOLOv8s, and the Swin Transformer to enhance pulmonary nodule detection in computed tomography (CT) images, particularly for small nodules, with the goal of improving detection accuracy and reducing false positives. We utilized the LUNA16 dataset (888 CT scans) and an additional 308 CT scans from Tianjin Chest Hospital. Images were preprocessed for consistency. The proposed model first employs U-Net for precise lung segmentation, followed by YOLOv8s augmented with the Swin Transformer for nodule detection. The Shape-aware IoU (SIoU) loss function was implemented to improve bounding box predictions. On the LUNA16 dataset, the model achieved a precision of 0.898, a recall of 0.851, and a mean average precision at 50% IoU (mAP50) of 0.879, outperforming state-of-the-art models. On the Tianjin Chest Hospital dataset, it achieved a precision of 0.855, a recall of 0.872, and an mAP50 of 0.862. This study presents a two-stage deep learning model that leverages U-Net, YOLOv8s, and the Swin Transformer for enhanced pulmonary nodule detection in CT images. The model demonstrates high accuracy and a reduced false positive rate, suggesting its potential as a useful tool for early lung cancer diagnosis and treatment.
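The mAP50 metric reported here counts a predicted box as a true positive when its intersection-over-union with a ground-truth box is at least 0.5. A minimal box-IoU helper (illustrative only; the `(x1, y1, x2, y2)` corner format is an assumption, not taken from the paper):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

The SIoU loss the authors use extends this overlap term with angle, distance, and shape penalties; plain IoU is only the matching criterion inside mAP50.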

Computed tomography-based radiomics predicts prognostic and treatment-related levels of immune infiltration in the immune microenvironment of clear cell renal cell carcinoma.

Song S, Ge W, Qi X, Che X, Wang Q, Wu G

PubMed · Jul 1, 2025
The composition of the tumour microenvironment is highly complex, and measuring the extent of immune cell infiltration can provide important guidance for clinically significant cancer treatments such as immune checkpoint inhibition and targeted therapy. We used multiple machine learning (ML) models, with computed tomography (CT) images as input, to predict differences in immune infiltration in clear cell renal cell carcinoma (ccRCC). We also statistically analysed and compared the results of multiple classification models, seeking a non-invasive and convenient prediction method for ccRCC patients. The study included 539 ccRCC samples with clinicopathological and genetic information from The Cancer Genome Atlas (TCGA) database. The Single Sample Gene Set Enrichment Analysis (ssGSEA) algorithm was used to obtain immune cell infiltration levels and cluster analysis results, and the Boruta algorithm was then used to reduce the dimensionality of the resulting positive/negative gene sets and obtain immune infiltration level groupings. Multifactor Cox regression analysis was used, and the immunotherapy response of subgroups was calculated according to the Tumor Immune Dysfunction and Exclusion (TIDE) algorithm and the subgraph algorithm, to detect differences in survival time and immunotherapy response across immune infiltration levels in ccRCC patients. Radiomics features were screened using LASSO analysis. Eight ML algorithms were selected for diagnostic analysis of the test set. Receiver operating characteristic (ROC) curves were used to evaluate model performance, and decision curve analysis (DCA) was performed to evaluate the clinical personalized-medicine value of the predictive model.
The high/low immune infiltration subtypes obtained via Boruta-based optimisation differed significantly in the survival analysis of ccRCC patients. Immune infiltration level combined with clinical factors better predicted survival of ccRCC patients, and ccRCC with high immune infiltration may benefit more from anti-PD-1 therapy. Among the eight machine learning models, ExtraTrees had the highest training and test set ROC AUCs, at 1.000 and 0.753; in the test set, LR and LightGBM had the highest sensitivity of 0.615; SVM had the highest specificity of 1.000, with LR, ExtraTrees, LightGBM, and MLP at 0.789, 0.842, 0.789, and 0.789, respectively; and LR, ExtraTrees, and LightGBM had the highest accuracies, of 0.719, 0.688, and 0.719, respectively. CT-based ML therefore achieved good predictive results for immune infiltration in ccRCC, with the ExtraTrees algorithm performing best. A radiomics model based on renal CT images can be used noninvasively to predict the immune infiltration level of ccRCC and, combined with clinical information, to build nomograms predicting overall survival and responsiveness to immune checkpoint inhibitor (ICI) therapy. These findings may be useful in stratifying the prognosis of patients with ccRCC and guiding clinicians in developing individualized treatment regimens.
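The sensitivity, specificity, and accuracy figures compared across the eight models all derive from the same 2×2 confusion matrix. A minimal sketch of those definitions (illustrative, not the study's evaluation code):

```python
def classification_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels (0/1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "accuracy": (tp + tn) / len(y_true),
    }
```

These threshold-dependent metrics explain why models can tie on sensitivity yet separate on specificity and accuracy, as in the results above.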

Development and validation of a machine learning model for central compartmental lymph node metastasis in solitary papillary thyroid microcarcinoma via ultrasound imaging features and clinical parameters.

Han H, Sun H, Zhou C, Wei L, Xu L, Shen D, Hu W

PubMed · Jul 1, 2025
Papillary thyroid microcarcinoma (PTMC) is the most common malignant subtype of thyroid cancer. Preoperative assessment of the risk of central compartment lymph node metastasis (CCLNM) can provide scientific support for personalized treatment decisions prior to microwave ablation of thyroid nodules. The objective of this study was to develop a predictive model for CCLNM in patients with solitary PTMC based on a combination of ultrasound radiomics and clinical parameters. We retrospectively analyzed data from 480 patients diagnosed with PTMC via postoperative pathological examination. The patients were randomly divided into a training set (n = 336) and a validation set (n = 144) at a 7:3 ratio. The cohort was stratified into a metastasis group and a nonmetastasis group based on postoperative pathological results. Ultrasound radiomic features were extracted from routine thyroid ultrasound images, and multiple feature selection methods were applied to construct radiomic models for each group. Independent risk factors, along with radiomics features identified through multivariate logistic regression analysis, were then refined through additional feature selection techniques to develop combined predictive models, and the performance of each model was evaluated. The combined model, incorporating age, the presence of Hashimoto's thyroiditis (HT), and radiomics features selected via an optimal (percentage-based) feature selection approach, exhibited superior predictive efficacy, with AUC values of 0.767 (95% CI: 0.716-0.818) in the training set and 0.729 (95% CI: 0.648-0.810) in the validation set. A machine learning-based model combining ultrasound radiomics and clinical variables shows promise for preoperative risk stratification of CCLNM in patients with PTMC. However, further validation in larger, more diverse cohorts is needed before clinical application. Trial registration: Not applicable.
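The 7:3 split used here (and in the adrenal study above) is a single reproducible random partition: 480 patients become 336 for training and 144 for validation. A minimal sketch, assuming patients are identified by arbitrary IDs and that a fixed seed is used for reproducibility (the seed value is our assumption, not the study's):

```python
import random

def split_70_30(patient_ids, seed=42):
    """Shuffle patient IDs reproducibly, then split 7:3 into
    training and validation lists. `seed` is an assumed convention."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)
    cut = round(0.7 * len(ids))
    return ids[:cut], ids[cut:]
```

Splitting at the patient level, before any feature extraction, is what keeps images from the same patient out of both sets.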

Automated 3D segmentation of the hyoid bone in CBCT using nnU-Net v2: a retrospective study on model performance and potential clinical utility.

Gümüssoy I, Haylaz E, Duman SB, Kalabalik F, Say S, Celik O, Bayrakdar IS

PubMed · Jul 1, 2025
This study aimed to identify the hyoid bone (HB) in cone beam computed tomography (CBCT) images using an nnU-Net-based artificial intelligence (AI) model and to assess the model's success in automatic segmentation. CBCT images of 190 patients were randomly selected. The raw data were converted to DICOM format and transferred to the 3D Slicer imaging software (version 4.10.2; MIT, Cambridge, MA, USA), in which the HB was labeled manually. The dataset was divided into training, validation, and test sets at an 8:1:1 ratio. The nnU-Net v2 architecture was used to process the training and test datasets, generating the algorithm weight factors. A confusion matrix was employed to assess the model's accuracy and performance, and the F1-score, Dice coefficient (DC), 95% Hausdorff distance (95% HD), and Intersection over Union (IoU) were calculated to evaluate the results. The model's performance metrics were as follows: DC = 0.9434, IoU = 0.8941, F1-score = 0.9446, and 95% HD = 1.9998. The receiver operating characteristic (ROC) curve was generated, yielding an AUC of 0.98. These results indicate that the nnU-Net v2 model achieved high precision and accuracy in HB segmentation on CBCT images. Automatic segmentation of the HB can improve the speed and accuracy of clinicians' decision-making when diagnosing and treating various clinical conditions. Trial registration: Not applicable.
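The Dice coefficient and IoU reported above are overlap measures on binary masks, related for a single mask pair by IoU = Dice / (2 − Dice). A minimal sketch (illustrative only; masks are assumed flattened to 0/1 sequences of equal length):

```python
def dice_and_iou(mask_a, mask_b):
    """Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B|,
    for binary masks given as flat 0/1 sequences of equal length."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    dice = 2 * inter / (size_a + size_b)
    iou = inter / (size_a + size_b - inter)
    return dice, iou
```

The study's DC (0.9434) and IoU (0.8941) are consistent with this relation up to averaging order across cases.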

Accelerating brain T2-weighted imaging using artificial intelligence-assisted compressed sensing combined with deep learning-based reconstruction: a feasibility study at 5.0T MRI.

Wen Y, Ma H, Xiang S, Feng Z, Guan C, Li X

PubMed · Jul 1, 2025
T2-weighted imaging (T2WI), renowned for its sensitivity to edema and lesions, faces clinical limitations due to prolonged scanning times, which increase patient discomfort and motion artifacts. Individually, artificial intelligence-assisted compressed sensing (ACS) and deep learning-based reconstruction (DLR) have demonstrated effectiveness in accelerated scanning; however, the synergistic potential of ACS combined with DLR at 5.0T remains unexplored. This study systematically evaluates the diagnostic efficacy of the integrated ACS-DLR technique for T2WI at 5.0T, comparing it with conventional parallel imaging (PI) protocols. A prospective analysis was performed on 98 participants who underwent brain T2WI scans using the ACS, DLR, and PI techniques. Two observers evaluated overall image quality, truncation artifacts, motion artifacts, cerebrospinal fluid flow artifacts, vascular pulsation artifacts, and lesion conspicuity, and subjective rating differences among the three sequences were compared. Objective assessment involved the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) in gray matter, white matter, and cerebrospinal fluid for each sequence, and the SNR, CNR, and acquisition time of each sequence were compared. The acquisition time for ACS and DLR was reduced by 78%. The overall image quality of DLR was higher than that of ACS (P < 0.001) and equivalent to PI (P > 0.05). The SNR of the DLR sequence was the highest, and the CNR of DLR was higher than that of the ACS sequence (P < 0.001) and equivalent to PI (P > 0.05). The integration of ACS and DLR enables ultrafast acquisition of brain T2WI while maintaining superior SNR and comparable CNR relative to PI sequences. Trial registration: Not applicable.

Development and validation of CT-based fusion model for preoperative prediction of invasion and lymph node metastasis in adenocarcinoma of esophagogastric junction.

Cao M, Xu R, You Y, Huang C, Tong Y, Zhang R, Zhang Y, Yu P, Wang Y, Chen W, Cheng X, Zhang L

PubMed · Jul 1, 2025
In the context of precision medicine, radiomics has become a key technology for solving medical problems. For adenocarcinoma of the esophagogastric junction (AEG), a preoperative CT-based model predicting invasion depth and lymph node metastasis is crucial. We retrospectively collected 256 patients with AEG from two centres. Radiomics features were extracted from preoperative diagnostic CT images, and feature selection and machine learning methods were applied to reduce the feature set and establish predictive imaging features. Three machine learning methods were compared to select the best radiomics nomogram, with the mean AUC obtained from 20 repeats of fivefold cross-validation. The fusion model was constructed by logistic regression, combining the radiomics signature with clinical factors, and its ROC curve, calibration curve, and decision curve were generated. The predictive efficacy of the fusion model for tumour invasion depth was higher than that of the radiomics nomogram (AUC 0.764 vs. 0.706 in the test set, 0.752 vs. 0.697 in the internal validation set, and 0.756 vs. 0.687 in the external validation set; all P < 0.001). The predictive efficacy of the lymph node metastasis fusion model was likewise higher than that of the radiomics nomogram (AUC 0.809 vs. 0.732 in the test set, 0.841 vs. 0.718 in the internal validation set, and 0.801 vs. 0.680 in the external validation set; all P < 0.001). We have developed a fusion model combining radiomics and clinical risk factors that supports accurate preoperative diagnosis and treatment of AEG, advancing precision medicine. It may also spark discussion of the imaging-feature differences between AEG and gastric cancer (GC).
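The repeated fivefold cross-validation used for model selection here partitions the cohort into five disjoint folds, trains on four, and tests on the held-out fold, cycling through all five; repeating with different shuffles and averaging the AUC reduces partition noise. A minimal fold-generation sketch (illustrative only; the study's exact partitioning code is not published in the abstract):

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Yield (train_indices, test_indices) pairs for k-fold
    cross-validation over n samples, shuffled with a fixed seed."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k nearly-equal disjoint folds
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]
```

Calling this with a different `seed` for each of the 20 repeats and averaging the per-fold AUCs would reproduce the reported evaluation scheme in outline.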

Differential dementia detection from multimodal brain images in a real-world dataset.

Leming M, Im H

PubMed · Jul 1, 2025
Artificial intelligence (AI) models have been applied to differential dementia detection in brain images from curated, high-quality benchmark databases, but not to real-world hospital data. We describe a deep learning model trained specifically for disease detection in heterogeneous clinical images from electronic health records, without curation to remove confounding factors. It encodes up to 14 multimodal images, alongside age and demographics, and outputs the likelihood of vascular dementia, Alzheimer's disease, Lewy body dementia, Pick's disease, mild cognitive impairment, and unspecified dementia. We used data from Massachusetts General Hospital (183,018 images from 11,015 patients) for training and external data (125,493 images from 6,662 patients) for testing. Performance ranged between 0.82 and 0.94 area under the curve (AUC) on data from 1,003 sites. Analysis shows that the model focused on subcortical brain structures as the basis for its decisions. By detecting biomarkers in real-world data, the presented techniques will help with the clinical translation of disease detection AI. Key points: our AI model can detect neurodegenerative disorders in brain imaging electronic health record (EHR) data; it encodes up to 14 brain images and text information from a single patient's EHR; attention maps show that the model focuses on subcortical brain structures; performance ranged from 0.82 to 0.94 AUC on data from 1,003 external sites.