Page 35 of 2182174 results

Deep Learning and Radiomics Discrimination of Coronary Chronic Total Occlusion and Subtotal Occlusion using CTA.

Zhou Z, Bo K, Gao Y, Zhang W, Zhang H, Chen Y, Chen Y, Wang H, Zhang N, Huang Y, Mao X, Gao Z, Zhang H, Xu L

PubMed · Jul 1, 2025
Coronary chronic total occlusion (CTO) and subtotal occlusion (STO) pose diagnostic challenges and call for different treatment strategies. Artificial intelligence and radiomics are promising tools for accurate discrimination. This study aimed to develop deep learning (DL) and radiomics models using coronary computed tomography angiography (CCTA) to differentiate CTO from STO lesions and to compare their performance with that of the conventional method. CTO and STO lesions were retrospectively identified at a tertiary hospital and served as the training and validation sets for developing and validating the DL and radiomics models. An external test cohort was recruited from two additional tertiary hospitals with identical eligibility criteria. All participants underwent CCTA within 1 month before invasive coronary angiography. A total of 581 participants (mean age, 50 years ± 11 [SD]; 474 [81.6%] men) with 600 lesions were enrolled, including 403 CTO and 197 STO lesions. The DL and radiomics models showed better discrimination than the conventional method, with areas under the curve of 0.908 and 0.860, respectively, vs. 0.794 in the validation set (all p<0.05), and 0.893 and 0.827, respectively, vs. 0.746 in the external test set (all p<0.05). The proposed CCTA-based DL and radiomics models achieved efficient and accurate discrimination of coronary CTO and STO.
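A library-free sketch of the headline metric: the area under the ROC curve equals the Mann-Whitney probability that a randomly chosen positive case outranks a randomly chosen negative one. The labels and scores below are toy values, not the study's data:

```python
def roc_auc(y_true, y_score):
    """AUC as the Mann-Whitney statistic: the probability that a
    random positive case scores higher than a random negative case
    (ties count half)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: the model ranks 3 of the 4 positive/negative pairs correctly.
labels = [1, 1, 0, 0]
scores = [0.9, 0.3, 0.4, 0.1]
print(roc_auc(labels, scores))  # 0.75
```

For real data one would normally use a library routine (e.g. scikit-learn's `roc_auc_score`), which implements the same quantity efficiently.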

Stratifying trigeminal neuralgia and characterizing an abnormal property of brain functional organization: a resting-state fMRI and machine learning study.

Wu M, Qiu J, Chen Y, Jiang X

PubMed · Jul 1, 2025
Increasing evidence suggests that primary trigeminal neuralgia (TN), including classical TN (CTN) and idiopathic TN (ITN), share biological, neuropsychological, and clinical features, despite differing diagnostic criteria. Neuroimaging studies have shown neurovascular compression (NVC) differences in these disorders. However, changes in brain dynamics across these two TN subtypes remain unknown. The authors aimed to examine the functional connectivity differences in CTN, ITN, and pain-free controls. A total of 93 subjects, 50 TN patients and 43 pain-free controls, underwent resting-state functional magnetic resonance imaging (rs-fMRI). All TN patients underwent surgery, and the NVC type was verified. Functional connectivity and spontaneous brain activity were analyzed, and the significant alterations in rs-fMRI indices were selected to train classification models. The patients with TN showed increased connectivity between several brain regions, such as the medial prefrontal cortex (mPFC) and left planum temporale, and decreased connectivity between the mPFC and left superior frontal gyrus. CTN patients exhibited a further reduction in connectivity between the left insular lobe and left occipital pole. Compared to controls, TN patients had heightened neural activity in the frontal regions. The CTN patients showed reduced activity in the right temporal pole compared to that in the ITN patients. These patterns effectively distinguished TN patients from controls, with an accuracy of 74.19% and an area under the receiver operating characteristic curve of 0.80. This study revealed alterations in rs-fMRI metrics in TN patients compared to those in controls and is the first to show differences between CTN and ITN. The support vector machine model of rs-fMRI indices exhibited moderate performance in discriminating TN patients from controls.
These findings have unveiled potential biomarkers for TN and its subtypes, which can be used for additional investigation of the pathophysiology of the disease.

A Novel Visual Model for Predicting Prognosis of Resected Hepatoblastoma: A Multicenter Study.

He Y, An C, Dong K, Lyu Z, Qin S, Tan K, Hao X, Zhu C, Xiu W, Hu B, Xia N, Wang C, Dong Q

PubMed · Jul 1, 2025
This study aimed to evaluate the application of a contrast-enhanced CT-based visual model in predicting postoperative prognosis in patients with hepatoblastoma (HB). We analyzed data from 224 patients across three centers (178 in the training cohort, 46 in the validation cohort). Visual features were extracted from contrast-enhanced CT images, and key features, along with clinicopathological data, were identified using LASSO Cox regression. Visual (DINOv2_score) and clinical (Clinical_score) models were developed, and a combined model integrating DINOv2_score and clinical risk factors was constructed. Nomograms were created for personalized risk assessment, with calibration curves and decision curve analysis (DCA) used to evaluate model performance. The DINOv2_score was recognized as a key prognostic indicator for HB. In both the training and validation cohorts, the combined model demonstrated superior performance in predicting disease-free survival (DFS) [C-index (95% CI): 0.886 (0.879-0.895) and 0.873 (0.837-0.909), respectively] and overall survival (OS) [C-index (95% CI): 0.887 (0.877-0.897) and 0.882 (0.858-0.906), respectively]. Calibration curves showed strong alignment between predicted and observed outcomes, while DCA demonstrated that the combined model provided greater clinical net benefit than the clinical or visual models alone across a range of threshold probabilities. The contrast-enhanced CT-based visual model serves as an effective tool for predicting postoperative prognosis in HB patients. The combined model, integrating the DINOv2_score and clinical risk factors, demonstrated superior performance in survival prediction, offering more precise guidance for personalized treatment strategies.
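The C-index reported for the DFS and OS models can be illustrated with Harrell's concordance index: among all comparable patient pairs, the fraction in which the model assigns the higher risk to the patient with the earlier observed event. The survival times, event flags, and risk scores below are invented for illustration:

```python
def c_index(times, events, risk):
    """Harrell's concordance index. A pair (i, j) is comparable when
    subject i had an observed event before time j; the pair is
    concordant when i also has the higher predicted risk (ties 0.5)."""
    conc, total = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                total += 1
                if risk[i] > risk[j]:
                    conc += 1.0
                elif risk[i] == risk[j]:
                    conc += 0.5
    return conc / total

# Toy cohort: one misranked comparable pair drags the index below 1.
print(c_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.2, 0.3, 0.7]))  # 0.6
```

Survival libraries (e.g. lifelines) provide the same statistic, typically with an efficient implementation and proper handling of censoring conventions.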

2.5D deep learning radiomics and clinical data for predicting occult lymph node metastasis in lung adenocarcinoma.

Huang X, Huang X, Wang K, Bai H, Lu X, Jin G

PubMed · Jul 1, 2025
Occult lymph node metastasis (OLNM) refers to lymph node involvement that remains undetectable by conventional imaging techniques, posing a significant challenge in the accurate staging of lung adenocarcinoma. This study aims to investigate the potential of combining 2.5D deep learning radiomics with clinical data to predict OLNM in lung adenocarcinoma. Retrospective contrast-enhanced CT images were collected from 1,099 patients diagnosed with lung adenocarcinoma across two centers. Multivariable analysis was performed to identify independent clinical risk factors for constructing clinical signatures. Radiomics features were extracted from the enhanced CT images to develop radiomics signatures. A 2.5D deep learning approach was used to extract deep learning features from the images, which were then aggregated using multi-instance learning (MIL) to construct MIL signatures. Deep learning radiomics (DLRad) signatures were developed by integrating the deep learning features with radiomic features. These were subsequently combined with clinical features to form the combined signatures. The performance of the resulting signatures was evaluated using the area under the curve (AUC). The clinical model achieved AUCs of 0.903, 0.866, and 0.785 in the training, validation, and external test cohorts, respectively. The radiomics model yielded AUCs of 0.865, 0.892, and 0.796 in the training, validation, and external test cohorts, respectively. The MIL model demonstrated AUCs of 0.903, 0.900, and 0.852 in the training, validation, and external test cohorts, respectively. The DLRad model showed AUCs of 0.910, 0.908, and 0.875 in the training, validation, and external test cohorts. Notably, the combined model consistently outperformed all other models, achieving AUCs of 0.940, 0.923, and 0.898 in the training, validation, and external test cohorts.
The integration of 2.5D deep learning radiomics with clinical data demonstrates strong capability for predicting OLNM in lung adenocarcinoma, potentially aiding clinicians in developing more personalized treatment strategies.
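A minimal sketch of the MIL aggregation step, assuming the 2.5D pipeline yields one feature vector per CT slice around the lesion. The slice count, feature dimension, and "attention" weights here are random stand-ins, not the study's trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bag: 7 slices around the nodule, 16 features per slice.
slice_features = rng.normal(size=(7, 16))

# Mean pooling: the simplest MIL aggregation into one lesion vector.
bag_mean = slice_features.mean(axis=0)

# Attention-style pooling: in a trained model W is learned; here it is
# a fixed random matrix purely for illustration.
W = rng.normal(size=(16, 1))
logits = slice_features @ W                       # one score per slice
weights = np.exp(logits) / np.exp(logits).sum()   # softmax over slices
bag_attn = (weights * slice_features).sum(axis=0)

print(bag_mean.shape, bag_attn.shape)  # (16,) (16,)
```

Either pooled vector can then be concatenated with radiomics and clinical features to form the combined signature the abstract describes.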

Evaluation of a fusion model combining deep learning models based on enhanced CT images with radiological and clinical features in distinguishing lipid-poor adrenal adenoma from metastatic lesions.

Wang SC, Yin SN, Wang ZY, Ding N, Ji YD, Jin L

PubMed · Jul 1, 2025
To evaluate the diagnostic performance of a machine learning model combining deep learning models based on enhanced CT images with radiological and clinical features in differentiating lipid-poor adrenal adenomas from metastatic tumors, and to explain the model's prediction results through SHAP (Shapley Additive Explanations) analysis. A retrospective analysis was conducted on abdominal contrast-enhanced CT images and clinical data from 416 pathologically confirmed adrenal tumor patients at our hospital from July 2019 to December 2024. Patients were randomly divided into training and testing sets in a 7:3 ratio. Six convolutional neural network (CNN)-based deep learning models were employed, and the model with the highest diagnostic performance was selected based on the area under the ROC curve (AUC). Subsequently, multiple machine learning models incorporating clinical and radiological features were developed and evaluated using various indicators and AUC. The best-performing machine learning model was further analyzed using SHAP plots to enhance interpretability and quantify feature contributions. All six deep learning models demonstrated excellent diagnostic performance, with AUC values exceeding 0.8, among which ResNet50 achieved the highest AUC. Among the 10 machine learning models incorporating clinical and imaging features, the extreme gradient boosting (XGBoost) model exhibited the best accuracy (ACC), sensitivity, and AUC, indicating superior diagnostic performance. SHAP analysis revealed contributions from ResNet50, RPW, age, and other key features in model predictions. Machine learning models based on contrast-enhanced CT combined with clinical and imaging features exhibit outstanding diagnostic performance in differentiating lipid-poor adrenal adenomas from metastases.
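The study attributes predictions with SHAP, which requires the trained XGBoost model and the shap library. As a deterministic, library-free stand-in for the same idea, the sketch below uses mean-ablation importance: the accuracy lost when one feature is replaced by its mean. The linear "model", its weights, and the data (a deep-learning score, RPW, age) are all invented:

```python
# Toy stand-in for SHAP-style attribution: mean-ablation importance.
# The scorer and its weights are hypothetical, not the study's model.

def model_score(x):
    return 0.6 * x[0] + 0.3 * x[1] + 0.1 * x[2]

def accuracy(X, y):
    return sum((model_score(x) > 0.5) == t for x, t in zip(X, y)) / len(y)

def ablation_importance(X, y, j):
    """Accuracy drop when feature j is neutralised to its mean -
    a crude, deterministic cousin of SHAP's per-feature attribution."""
    mean_j = sum(x[j] for x in X) / len(X)
    X_abl = [list(x) for x in X]
    for row in X_abl:
        row[j] = mean_j
    return accuracy(X, y) - accuracy(X_abl, y)

X = [(1, 0, 0), (1, 1, 0), (0, 0, 1), (0, 1, 1)]
y = [1, 1, 0, 0]
print([ablation_importance(X, y, j) for j in range(3)])  # [0.5, 0.0, 0.0]
```

Here feature 0 carries all the discriminative signal, so only its ablation hurts accuracy; SHAP refines this idea by averaging contributions over feature coalitions.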

Knowledge Graph-Based Few-Shot Learning for Label of Medical Imaging Reports.

Li T, Zhang Y, Su D, Liu M, Ge M, Chen L, Li C, Tang J

PubMed · Jul 1, 2025
The application of artificial intelligence (AI) in the field of automatic imaging report labeling faces the challenge of manually labeling large datasets. To propose a data augmentation method by using knowledge graph (KG) and few-shot learning. A KG of lumbar spine X-ray images was constructed, and 2000 data were annotated based on the KG, which were divided into training, validation, and test sets in a ratio of 7:2:1. The training dataset was augmented based on the synonym/replacement attributes of the KG, and the augmented data was input into the BERT (Bidirectional Encoder Representations from Transformers) model for automatic annotation training. The performance of the model under different augmentation ratios (1:10, 1:100, 1:1000) and augmentation methods (synonyms only, replacements only, combination of synonyms and replacements) was evaluated using the precision and F1 scores. In addition, with the augmentation ratio fixed, iterative experiments were performed by supplementing data for nodes that performed poorly in the validation set to further improve the model's performance. Prior to data augmentation, the precision was 0.728 and the F1 score was 0.666. By adjusting the augmentation ratio, the precision increased from 0.912 at a 1:10 augmentation ratio to 0.932 at a 1:100 augmentation ratio (P<.05), while F1 score improved from 0.853 at a 1:10 augmentation ratio to 0.881 at a 1:100 augmentation ratio (P<.05). Additionally, the effectiveness of various augmentation methods was compared at a 1:100 augmentation ratio. The augmentation method that combined synonyms and replacements (F1=0.881) was superior to the methods that only used synonyms (F1=0.815) and only used replacements (F1=0.753) (P<.05). For nodes that exhibited suboptimal performance on the validation set, supplementing the training set with target data improved model performance, increasing the average F1 score to 0.979 (P<.05).
Based on the KG, this study trained an automatic labeling model for radiology reports using a few-shot dataset. This method effectively reduces the workload of manual labeling, improves the efficiency and accuracy of image data labeling, and provides an important research strategy for applying AI to the automatic labeling of imaging reports.
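A minimal sketch of the KG-driven augmentation idea, assuming a tiny hand-made synonym table standing in for the KG's synonym/replacement attributes (the terms below are illustrative, not the study's ontology):

```python
# Hypothetical synonym table; in the study these mappings come from
# the knowledge graph's synonym/replacement attributes.
SYNONYMS = {
    "vertebral": ["spinal"],
    "decreased": ["reduced", "diminished"],
}

def augment(report):
    """Generate report variants by swapping each known term for each
    of its synonyms, one substitution per variant."""
    variants = []
    for term, alts in SYNONYMS.items():
        if term in report:
            for alt in alts:
                variants.append(report.replace(term, alt))
    return variants

print(augment("decreased vertebral body height"))
```

Each variant keeps the original label, so a handful of annotated reports can be multiplied toward the 1:10 to 1:1000 ratios the abstract evaluates.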

Preoperative MRI-based deep learning reconstruction and classification model for assessing rectal cancer.

Yuan Y, Ren S, Lu H, Chen F, Xiang L, Chamberlain R, Shao C, Lu J, Shen F, Chen L

PubMed · Jul 1, 2025
To determine whether deep learning reconstruction (DLR) could improve the image quality of rectal MR images, and to explore the discrimination of the TN stage of rectal cancer by different readers and deep learning classification models, compared with conventional MR images without DLR. Images of high-resolution T2-weighted, diffusion-weighted imaging (DWI), and contrast-enhanced T1-weighted imaging (CE-T1WI) from patients with pathologically diagnosed rectal cancer were retrospectively processed with and without DLR and assessed by five readers. The first two readers measured the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the lesions. The overall image quality and lesion display performance for each sequence with and without DLR were independently scored using a five-point scale, and the TN stage of rectal cancer lesions was evaluated by the other three readers. Fifty patients were randomly selected to further compare DLR with a traditional denoising filter. Deep learning classification models were developed and compared for the TN stage. Receiver operating characteristic (ROC) curve analysis and decision curve analysis (DCA) were used to evaluate the diagnostic performance of the proposed model. Overall, 178 patients were evaluated. The SNR and CNR of the lesion on images with DLR were significantly higher than those without DLR, for T2WI, DWI and CE-T1WI, respectively (p < 0.0001). A significant difference was observed in overall image quality and lesion display performance between images with and without DLR (p < 0.0001). The image quality scores, SNR, and CNR values of the DLR image set were significantly larger than those of the original and filter-enhanced image sets (all p values < 0.05) for all three sequences, respectively.
The deep learning classification models with DLR achieved good discrimination of the TN stage, with area under the curve (AUC) values of 0.937 (95% CI 0.839-0.977) and 0.824 (95% CI 0.684-0.913) in the test sets, respectively. Deep learning reconstruction and classification models could improve the image quality of rectal MRI images and enhance the diagnostic performance for determining the TN stage of patients with rectal cancer.
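The SNR and CNR measured by the first two readers reduce to ROI statistics: mean lesion signal over noise standard deviation, and lesion-background mean difference over the same noise. A sketch with synthetic pixel values (the ROI means and noise level below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated ROI pixel samples: lesion, adjacent background, and a
# noise ROI (e.g. signal-free air) - all values are synthetic.
lesion = rng.normal(200, 10, 500)
background = rng.normal(120, 10, 500)
noise_roi = rng.normal(0, 8, 500)

sd_noise = noise_roi.std()
snr = lesion.mean() / sd_noise                        # signal-to-noise ratio
cnr = (lesion.mean() - background.mean()) / sd_noise  # contrast-to-noise ratio

print(round(float(snr), 1), round(float(cnr), 1))
```

With these synthetic ROIs the SNR lands around 25 and the CNR around 10; a reconstruction that suppresses `sd_noise` raises both, which is exactly the effect the abstract reports for DLR.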

Enhanced pulmonary nodule detection with U-Net, YOLOv8, and swin transformer.

Wang X, Wu H, Wang L, Chen J, Li Y, He X, Chen T, Wang M, Guo L

PubMed · Jul 1, 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, emphasizing the critical need for early pulmonary nodule detection to improve patient outcomes. Current methods encounter challenges in detecting small nodules and exhibit high false positive rates, placing an additional diagnostic burden on radiologists. This study aimed to develop a two-stage deep learning model integrating U-Net, Yolov8s, and the Swin transformer to enhance pulmonary nodule detection in computed tomography (CT) images, particularly for small nodules, with the goal of improving detection accuracy and reducing false positives. We utilized the LUNA16 dataset (888 CT scans) and an additional 308 CT scans from Tianjin Chest Hospital. Images were preprocessed for consistency. The proposed model first employs U-Net for precise lung segmentation, followed by Yolov8s augmented with the Swin transformer for nodule detection. The Shape-aware IoU (SIoU) loss function was implemented to improve bounding box predictions. For the LUNA16 dataset, the model achieved a precision of 0.898, a recall of 0.851, and a mean average precision at 50% IoU (mAP50) of 0.879, outperforming state-of-the-art models. On the Tianjin Chest Hospital dataset, the model achieved a precision of 0.855, a recall of 0.872, and an mAP50 of 0.862. This study presents a two-stage deep learning model that leverages U-Net, Yolov8s, and the Swin transformer for enhanced pulmonary nodule detection in CT images. The model demonstrates high accuracy and a reduced false positive rate, suggesting its potential as a useful tool for early lung cancer diagnosis and treatment.
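The mAP50 figure rests on box IoU: a detection counts as a true positive when its overlap with a ground-truth box reaches 0.5 (the SIoU loss extends this with shape and angle terms, which are omitted here). A minimal sketch with toy boxes:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2). mAP50 counts a detection correct when its IoU
    with a ground-truth box is at least 0.5."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A predicted nodule box shifted halfway against the ground truth:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ~0.333, below the 0.5 cutoff
```

A half-width shift already drops IoU to one third, which is why localisation quality, not just classification, drives the mAP50 numbers above.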

Computed tomography-based radiomics predicts prognostic and treatment-related levels of immune infiltration in the immune microenvironment of clear cell renal cell carcinoma.

Song S, Ge W, Qi X, Che X, Wang Q, Wu G

PubMed · Jul 1, 2025
The composition of the tumour microenvironment is very complex, and measuring the extent of immune cell infiltration can provide an important guide to clinically significant cancer treatments, such as immune checkpoint inhibition therapy and targeted therapy. We used multiple machine learning (ML) models to predict differences in immune infiltration in clear cell renal cell carcinoma (ccRCC), with computed tomography (CT) images as model input. We also statistically analysed and compared the results of multiple typing models to explore a better, non-invasive, and convenient prediction method for ccRCC patients. The study included 539 ccRCC samples with clinicopathological and associated genetic information from The Cancer Genome Atlas (TCGA) database. The Single Sample Gene Set Enrichment Analysis (ssGSEA) algorithm was used to obtain immune cell infiltration levels and cluster analysis results, and the Boruta algorithm was further used to reduce the dimensionality of the obtained positive/negative gene sets to derive the immune infiltration level groupings. Multifactor Cox regression analysis was used to calculate the immunotherapy response of subgroups according to the Tumor Immune Dysfunction and Exclusion (TIDE) and subgraph algorithms, and to detect differences in survival time and immunotherapy response among ccRCC patients by immune infiltration level. Radiomics features were screened using LASSO analysis. Eight ML algorithms were selected for diagnostic analysis of the test set. Receiver operating characteristic (ROC) curves were used to evaluate the performance of the models, and decision curve analysis (DCA) was performed to evaluate the clinical value of the predictive model for personalized medicine.
The high/low immune infiltration subtypes obtained by Boruta-based optimisation differed significantly in the survival analysis of ccRCC patients. Multifactorial immune infiltration level combined with clinical factors better predicted survival of ccRCC patients, and ccRCC with high immune infiltration may benefit more from anti-PD-1 therapy. Among the eight machine learning models, ExtraTrees had the highest test and training set ROC AUCs of 1.000 and 0.753; in the test set, LR and LightGBM had the highest sensitivity of 0.615; LR, SVM, ExtraTrees, LightGBM and MLP had higher specificities of 0.789, 1.000, 0.842, 0.789 and 0.789, respectively; and LR, ExtraTrees and LightGBM had the highest accuracies of 0.719, 0.688 and 0.719, respectively. Therefore, CT-based ML achieved good predictive results for immune infiltration in ccRCC, with the ExtraTrees algorithm performing best. A radiomics model based on renal CT images can noninvasively predict the immune infiltration level of ccRCC and, combined with clinical information, can be used to build nomograms predicting overall survival and responsiveness to ICI therapy. These findings may be useful for stratifying the prognosis of ccRCC patients and guiding clinicians in developing individualized treatment regimens.
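The sensitivity, specificity, and accuracy figures quoted for the eight classifiers come straight from the binary confusion matrix. A sketch with hypothetical test-set predictions for a high/low infiltration split (the labels below are invented):

```python
def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels -
    the per-model metrics reported for the ccRCC classifiers."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "accuracy": (tp + tn) / len(y_true),
    }

# Hypothetical predictions: one missed positive, one false alarm.
m = confusion_metrics([1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 1, 0])
print(m)
```

Note how a model can trade sensitivity for specificity (as SVM does above with specificity 1.000 but lower sensitivity), which is why all three metrics are reported side by side.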
