A systematic review on deep learning-enabled coronary CT angiography for plaque and stenosis quantification and cardiac risk prediction.

Shrivastava P, Kashikar S, Parihar PH, Kasat P, Bhangale P, Shrivastava P

Jun 1, 2025
Coronary artery disease (CAD) is a major worldwide health concern, contributing significantly to the global burden of cardiovascular diseases (CVDs). According to the 2023 World Health Organization (WHO) report, CVDs account for approximately 17.9 million deaths annually. This emphasizes the need for advanced diagnostic tools such as coronary computed tomography angiography (CCTA). The incorporation of deep learning (DL) technologies could significantly improve CCTA analysis by automating the quantification of plaque and stenosis, thus enhancing the precision of cardiac risk assessments. A recent meta-analysis highlights the evolving role of CCTA in patient management, showing that CCTA-guided diagnosis and management reduced adverse cardiac events and improved event-free survival in patients with stable and acute coronary syndromes. An extensive literature search was carried out across electronic databases including MEDLINE, Embase, and the Cochrane Library, using a strategy that combined Medical Subject Headings (MeSH) terms and pertinent keywords. The review adhered to PRISMA guidelines and focused on studies published between 2019 and 2024 that employed DL for CCTA in patients aged 18 years or older. After applying specific inclusion and exclusion criteria, a total of 10 articles were selected for systematic evaluation of quality and bias. The 10 included studies demonstrated the high diagnostic performance and predictive capabilities of various DL models compared with different imaging modalities, highlighting their effectiveness in enhancing diagnostic accuracy. Notably, strong correlations were observed between DL-derived measurements and intravascular ultrasound findings, enhancing clinical decision-making and risk stratification for CAD.
Deep learning-enabled CCTA represents a promising advancement in the quantification of coronary plaques and stenosis, facilitating improved cardiac risk prediction and enhancing clinical workflow efficiency. Despite variability in study designs and potential biases, the findings support the integration of DL technologies into routine clinical practice for better patient outcomes in CAD management.

Computed Tomography Radiomics-based Combined Model for Predicting Thymoma Risk Subgroups: A Multicenter Retrospective Study.

Liu Y, Luo C, Wu Y, Zhou S, Ruan G, Li H, Chen W, Lin Y, Liu L, Quan T, He X

Jun 1, 2025
Accurately distinguishing histological subtypes and risk categorization of thymomas is difficult. To differentiate the histologic risk categories of thymomas, we developed a combined radiomics model based on non-enhanced and contrast-enhanced computed tomography (CT) radiomics, clinical, and semantic features. In total, 360 patients with pathologically confirmed thymomas who underwent CT examinations were retrospectively recruited from three centers. Patients were classified using improved pathological classification criteria as low-risk (LRT: types A and AB) or high-risk (HRT: types B1, B2, and B3). The training and external validation sets comprised 274 (from centers 1 and 2) and 86 (center 3) patients, respectively. A clinical-semantic model was built using clinical and semantic variables. Radiomics features were filtered using intraclass correlation coefficients, correlation analysis, and univariate logistic regression. An optimal radiomics model (Rad_score) was constructed using the AutoML algorithm, while a combined model was constructed by integrating Rad_score with clinical and semantic features. The predictive and clinical performances of the models were evaluated using receiver operating characteristic/calibration curve analyses and decision-curve analysis, respectively. The radiomics and combined models (area under the curve: training set, 0.867 and 0.884; external validation set, 0.792 and 0.766, respectively) exhibited performance superior to the clinical-semantic model. The combined model had higher accuracy than the radiomics model (0.79 vs. 0.78, p < 0.001) in the entire cohort. The original_firstorder_median of the venous phase had the highest relative importance among features in the radiomics model. Radiomics and combined radiomics models may serve as noninvasive discrimination tools to differentiate thymoma risk classifications.
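The feature-filtering pipeline described above (intraclass correlation screening, correlation analysis, univariate logistic regression) can be illustrated by its correlation-pruning stage. This is a minimal sketch, not the study's code; the 0.9 threshold, feature names, and toy data are illustrative assumptions:

```python
import numpy as np

def prune_correlated_features(X, names, threshold=0.9):
    """Drop features whose absolute pairwise Pearson correlation with an
    already-kept feature exceeds `threshold`; earlier features win."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

# Toy data: "f1" is a near-copy of "f0" and should be pruned.
rng = np.random.default_rng(0)
f0 = rng.normal(size=100)
X = np.column_stack([f0, 2.0 * f0 + 0.01 * rng.normal(size=100), rng.normal(size=100)])
X_kept, kept_names = prune_correlated_features(X, ["f0", "f1", "f2"])
```

In a real radiomics pipeline this step would follow ICC filtering and precede the univariate logistic-regression screen.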

Deep Learning-Assisted Diagnosis of Malignant Cerebral Edema Following Endovascular Thrombectomy.

Song Y, Hong J, Liu F, Liu J, Chen Y, Li Z, Su J, Hu S, Fu J

Jun 1, 2025
Malignant cerebral edema (MCE) is a significant complication following endovascular thrombectomy (EVT) in the treatment of acute ischemic stroke. This study aimed to develop and validate a deep learning-assisted diagnosis model based on the hyperattenuated imaging marker (HIM), characterized by hyperattenuation on head non-contrast computed tomography immediately after thrombectomy, to assist radiologists in predicting MCE in patients receiving EVT. This study included 271 patients, with 168 in the training cohort, 43 in the validation cohort, and 60 in the prospective internal test cohort. Deep learning models including ResNet 50, ResNet 101, ResNeXt50_32×4d, ResNeXt101_32×8d, and DenseNet 121 were constructed. The performance of senior and junior radiologists with and without optimal model assistance was compared. ResNeXt101_32×8d had the best predictive performance: receiver operating characteristic curve analysis indicated an area under the curve (AUC) of 0.897 for the prediction of MCE in the validation group and an AUC of 0.889 in the test group. Moreover, with model assistance, radiologists exhibited a significant improvement in diagnostic performance: the AUC increased by 0.137 for the junior radiologist and by 0.096 for the senior radiologist. Our study utilized the ResNeXt-101 neural network, combined with HIM, to validate a deep learning model for predicting MCE post-EVT. The developed deep learning model demonstrated high discriminative ability and can serve as a valuable adjunct to radiologists in clinical practice.
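The reported AUCs can be read as a rank statistic: the probability that a randomly chosen positive case scores above a randomly chosen negative case. A minimal sketch with toy labels and scores (all values below are illustrative, not from the study):

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank identity: the probability that a randomly chosen
    positive case scores above a randomly chosen negative case (ties = 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]                  # 1 = developed MCE (toy labels)
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1] # toy model probabilities
auc = roc_auc(y, scores)                # 8 of 9 positive-negative pairs ranked correctly
```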

Intra-Individual Reproducibility of Automated Abdominal Organ Segmentation-Performance of TotalSegmentator Compared to Human Readers and an Independent nnU-Net Model.

Abel L, Wasserthal J, Meyer MT, Vosshenrich J, Yang S, Donners R, Obmann M, Boll D, Merkle E, Breit HC, Segeroth M

Jun 1, 2025
The purpose of this study is to assess the segmentation reproducibility of the artificial intelligence-based algorithm TotalSegmentator across 34 anatomical structures using multiphasic abdominal CT scans, comparing unenhanced, arterial, and portal venous phases in the same patients. A total of 1252 multiphasic abdominal CT scans acquired at our institution between January 1, 2012, and December 31, 2022, were retrospectively included. TotalSegmentator was used to derive volumetric measurements of 34 abdominal organs and structures from the total of 3756 CT series. Reproducibility was evaluated across the three contrast phases per CT and compared to two human readers and an independent nnU-Net trained on the BTCV dataset. Relative deviations in segmented volumes and absolute volume deviations (AVD) were reported. A volume deviation within 5% was considered reproducible; thus, non-inferiority testing was conducted using a 5% margin. Twenty-nine out of 34 structures had volume deviations within 5% and were considered reproducible. Volume deviations for the adrenal glands, gallbladder, spleen, and duodenum were above 5%. The highest reproducibility was observed for bones (- 0.58% [95% CI: - 0.58, - 0.57]) and muscles (- 0.33% [- 0.35, - 0.32]). Among abdominal organs, the volume deviation was 1.67% (1.60, 1.74). TotalSegmentator outperformed the reproducibility of the nnU-Net trained on the BTCV dataset, with an AVD of 6.50% (6.41, 6.59) vs. 10.03% (9.86, 10.20; p < 0.0001), most notably in cases with pathologic findings. Similarly, TotalSegmentator's AVD between different contrast phases was superior to the interreader AVD for the same contrast phase (p = 0.036). TotalSegmentator demonstrated high intra-individual reproducibility for most abdominal structures in multiphasic abdominal CT scans. Although reproducibility was lower in pathologic cases, it outperformed both human readers and an nnU-Net trained on the BTCV dataset.
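Two of the metrics above, Dice overlap and signed relative volume deviation against the 5% reproducibility margin, can be sketched as follows; the toy masks and volumes are illustrative assumptions, not study data:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def relative_volume_deviation(vol_ref, vol_other):
    """Signed deviation of `vol_other` from `vol_ref`, in percent."""
    return 100.0 * (vol_other - vol_ref) / vol_ref

# Toy organ: a 3x3x3 cube, and the same cube shifted one voxel along z.
ref = np.zeros((5, 5, 5), dtype=bool)
ref[1:4, 1:4, 1:4] = True
shifted = np.zeros_like(ref)
shifted[1:4, 1:4, 2:5] = True
d = dice(ref, shifted)                        # 18 overlapping voxels -> 2/3
dev = relative_volume_deviation(100.0, 95.0)  # -5%, right at the reproducibility margin
```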

Effect of Deep Learning Image Reconstruction on Image Quality and Pericoronary Fat Attenuation Index.

Mei J, Chen C, Liu R, Ma H

Jun 1, 2025
To compare the image quality and fat attenuation index (FAI) of coronary CT angiography (CCTA) under different tube voltages between deep learning image reconstruction (DLIR) and adaptive statistical iterative reconstruction V (ASIR-V). Three hundred one patients who underwent CCTA with automatic tube current modulation were prospectively enrolled and divided into two groups: a 120 kV group and a low tube voltage group. Images were reconstructed using ASIR-V level 50% (ASIR-V50%) and high-strength DLIR (DLIR-H). In the low tube voltage group, the voltage was selected according to the Chinese BMI classification: 70 kV (BMI < 24 kg/m<sup>2</sup>), 80 kV (24 kg/m<sup>2</sup> ≤ BMI < 28 kg/m<sup>2</sup>), 100 kV (BMI ≥ 28 kg/m<sup>2</sup>). At the same tube voltage, the subjective and objective image quality, edge rise distance (ERD), and FAI were compared between the two algorithms. Across the different tube voltages, DLIR-H images were compared for subjective and objective image quality and ERD. Compared with the 120 kV group, the DLIR-H image noise of the 70 kV, 80 kV, and 100 kV groups increased by 36%, 25%, and 12%, respectively (all P < 0.001); contrast-to-noise ratio (CNR), subjective score, and ERD were similar (all P > 0.05). In the 70 kV, 80 kV, 100 kV, and 120 kV groups, compared with ASIR-V50%, DLIR-H image noise decreased by 50%, 53%, 47%, and 38-50%, respectively; CNR, subjective score, and FAI value increased significantly (all P < 0.001); and ERD decreased. Compared with 120 kV tube voltage, the combination of DLIR-H and low tube voltage maintains image quality. At the same tube voltage, compared with ASIR-V, DLIR-H improves image quality and FAI value.
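Contrast-to-noise ratio, the objective image-quality metric reported above, is commonly computed as the attenuation difference between two regions of interest divided by the background noise. A minimal sketch with assumed Hounsfield-unit values; the ROI convention here is generic, not necessarily the study's:

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: mean attenuation difference between a signal
    ROI and a background ROI, divided by the background standard deviation."""
    return abs(np.mean(roi_signal) - np.mean(roi_background)) / np.std(roi_background)

lumen_hu = np.array([400.0, 410.0, 390.0, 400.0])   # assumed enhanced-lumen HU samples
fat_hu = np.array([-90.0, -110.0, -100.0, -100.0])  # assumed perivascular-fat HU samples
value = cnr(lumen_hu, fat_hu)                        # 500 / sqrt(50)
```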

A Robust Deep Learning Method with Uncertainty Estimation for the Pathological Classification of Renal Cell Carcinoma Based on CT Images.

Yao N, Hu H, Chen K, Huang H, Zhao C, Guo Y, Li B, Nan J, Li Y, Han C, Zhu F, Zhou W, Tian L

Jun 1, 2025
This study developed and validated a deep learning-based diagnostic model with uncertainty estimation to aid radiologists in the preoperative differentiation of pathological subtypes of renal cell carcinoma (RCC) based on computed tomography (CT) images. Data from 668 consecutive patients with pathologically confirmed RCC were retrospectively collected from Center 1, and the model was trained using fivefold cross-validation to classify RCC subtypes into clear cell RCC (ccRCC), papillary RCC (pRCC), and chromophobe RCC (chRCC). An external validation with 78 patients from Center 2 was conducted to evaluate the performance of the model. In the fivefold cross-validation, the area under the receiver operating characteristic curve (AUC) for the classification of ccRCC, pRCC, and chRCC was 0.868 (95% CI, 0.826-0.923), 0.846 (95% CI, 0.812-0.886), and 0.839 (95% CI, 0.802-0.880), respectively. In the external validation set, the AUCs were 0.856 (95% CI, 0.838-0.882), 0.787 (95% CI, 0.757-0.818), and 0.793 (95% CI, 0.758-0.831) for ccRCC, pRCC, and chRCC, respectively. The model demonstrated robust performance in predicting the pathological subtypes of RCC, while the incorporated uncertainty estimation emphasized the importance of understanding model confidence. The proposed approach, integrated with uncertainty estimation, offers clinicians a dual advantage: accurate RCC subtype predictions complemented by diagnostic confidence metrics, thereby promoting informed decision-making for patients with RCC.
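The fivefold cross-validation used for training can be sketched as a shuffled index split; the seed and the plain (unstratified) splitting scheme below are illustrative assumptions, since the study's exact stratification is not specified:

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds; each fold
    serves once as the validation set while the others form the training set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, val))
    return splits

splits = kfold_indices(668, k=5)  # n = 668 patients, as in the Center 1 cohort
```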

Identification of Bipolar Disorder and Schizophrenia Based on Brain CT and Deep Learning Methods.

Li M, Hou X, Yan W, Wang D, Yu R, Li X, Li F, Chen J, Wei L, Liu J, Wang H, Zeng Q

Jun 1, 2025
With the increasing prevalence of mental illness, accurate clinical diagnosis is crucial. Compared with MRI, CT has the advantages of wide availability, low price, short scanning time, and high patient cooperation. This study aims to construct a deep learning (DL) model based on CT images to identify bipolar disorder (BD) and schizophrenia (SZ). A total of 506 patients (BD = 227, SZ = 279) and 179 healthy controls (HC) were collected from January 2022 to May 2023 at two hospitals and divided into an internal training set and an internal validation set at a ratio of 4:1. An additional 65 patients (BD = 35, SZ = 30) and 40 HC were recruited from different hospitals and served as an external test set. All subjects underwent conventional brain CT examination. The DenseMD model, which identifies BD and SZ using multiple instance learning, was developed and compared with other classical DL models. The results showed that DenseMD performed excellently, with an accuracy of 0.745 in the internal validation set, whereas the accuracies of the ResNet-18, ResNeXt-50, and DenseNet-121 models were 0.672, 0.664, and 0.679, respectively. For the external test set, DenseMD again outperformed the other models with an accuracy of 0.724, whereas the accuracies of the ResNet-18, ResNeXt-50, and DenseNet-121 models were 0.657, 0.638, and 0.676, respectively. Therefore, the potential of DL models for identifying BD and SZ based on brain CT images was established, and the identification ability of the DenseMD model was better than that of other classical DL models.
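Multiple instance learning treats each scan as a bag of slice-level instances and aggregates instance scores into one scan-level prediction. The abstract does not specify DenseMD's aggregation, so the top-k pooling below is an assumed, commonly used choice, shown with toy probabilities:

```python
def mil_bag_score(instance_scores, top_k=1):
    """Aggregate per-slice (instance) probabilities into a scan-level (bag)
    score by averaging the top-k instances; top_k=1 is plain max pooling."""
    ranked = sorted(instance_scores, reverse=True)
    k = min(top_k, len(ranked))
    return sum(ranked[:k]) / k

slice_probs = [0.12, 0.08, 0.91, 0.40, 0.15]  # assumed per-slice probabilities
bag = mil_bag_score(slice_probs)              # max pooling -> 0.91
bag_top3 = mil_bag_score(slice_probs, top_k=3)
```

Top-k averaging is a common compromise between max pooling (sensitive to a single noisy slice) and mean pooling (diluted by many irrelevant slices).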

Children Are Not Small Adults: Addressing Limited Generalizability of an Adult Deep Learning CT Organ Segmentation Model to the Pediatric Population.

Chatterjee D, Kanhere A, Doo FX, Zhao J, Chan A, Welsh A, Kulkarni P, Trang A, Parekh VS, Yi PH

Jun 1, 2025
Deep learning (DL) tools developed on adult datasets may not generalize well to pediatric patients, posing potential safety risks. We evaluated the performance of TotalSegmentator, a state-of-the-art adult-trained CT organ segmentation model, on a subset of organs in a pediatric CT dataset and explored optimization strategies to improve pediatric segmentation performance. TotalSegmentator was retrospectively evaluated on abdominal CT scans from an external adult dataset (n = 300) and an external pediatric dataset (n = 359). Generalizability was quantified by comparing Dice scores between the adult and pediatric external datasets using Mann-Whitney U tests. Two DL optimization approaches were then evaluated: (1) a 3D nnU-Net model trained only on pediatric data, and (2) an adult nnU-Net model fine-tuned on the pediatric cases. Our results show TotalSegmentator had significantly lower overall mean Dice scores on pediatric vs. adult CT scans (0.73 vs. 0.81, P < .001), demonstrating limited generalizability to pediatric CT scans. Stratified by organ, mean pediatric Dice scores were lower for four organs (P < .001 for all): the right and left adrenal glands (right adrenal, 0.41 [0.39-0.43] vs. 0.69 [0.66-0.71]; left adrenal, 0.35 [0.32-0.37] vs. 0.68 [0.65-0.71]), duodenum (0.47 [0.45-0.49] vs. 0.67 [0.64-0.69]), and pancreas (0.73 [0.72-0.74] vs. 0.79 [0.77-0.81]). Performance on pediatric CT scans improved both by developing a pediatric-specific model and by fine-tuning an adult-trained model on pediatric images; both methods significantly improved segmentation accuracy over TotalSegmentator for all organs, especially for smaller anatomical structures (e.g., > 0.2 higher mean Dice for the adrenal glands; P < .001).
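The Mann-Whitney U test used above to compare adult and pediatric Dice scores reduces to a pairwise rank count; a minimal sketch of the statistic only (no p-value), with toy Dice values that are assumptions, not study data:

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for `xs` vs `ys`: the number of pairs in
    which an x outranks a y, with ties counted as 1/2."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in xs for y in ys)

adult_dice = [0.82, 0.80, 0.79, 0.85]       # assumed per-case Dice scores
pediatric_dice = [0.70, 0.74, 0.72, 0.68]
u = mann_whitney_u(adult_dice, pediatric_dice)  # every adult case outranks every pediatric case
```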

Prediction of Malignancy and Pathological Types of Solid Lung Nodules on CT Scans Using a Volumetric SWIN Transformer.

Chen H, Wen Y, Wu W, Zhang Y, Pan X, Guan Y, Qin D

Jun 1, 2025
Lung adenocarcinoma and squamous cell carcinoma are the two most common pathological lung cancer subtypes. Accurate diagnosis and pathological subtyping are crucial for lung cancer treatment. Solitary solid lung nodules with lobulation and spiculation signs are often indicative of lung cancer; however, in some cases, postoperative pathology finds benign solid lung nodules. It is critical to accurately identify solid lung nodules with lobulation and spiculation signs before surgery; however, traditional diagnostic imaging is prone to misdiagnosis, and studies on artificial intelligence-assisted diagnosis are few. Therefore, we introduce a volumetric SWIN Transformer-based method. It is a multi-scale, multi-task, and highly interpretable model for distinguishing between benign solid lung nodules with lobulation and spiculation signs, lung adenocarcinomas, and lung squamous cell carcinomas. The technique's effectiveness was improved by using 3-dimensional (3D) computed tomography (CT) images instead of conventional 2-dimensional (2D) images to incorporate as much information as possible. The model was trained using 352 of the 441 CT image sequences and validated using the rest. The experimental results showed that our model could accurately differentiate between benign lung nodules with lobulation and spiculation signs, lung adenocarcinoma, and squamous cell carcinoma. On the test set, our model achieved an accuracy of 0.9888, precision of 0.9892, recall of 0.9888, and an F1-score of 0.9888, along with class activation mapping (CAM) visualizations of the 3D model. Consequently, our method could be used as a preoperative tool to assist in accurately diagnosing solitary solid lung nodules with lobulation and spiculation signs and provide a theoretical basis for developing appropriate clinical diagnosis and treatment plans for patients.
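The reported accuracy, precision, recall, and F1-score can be sketched for the three-class setting (benign vs. adenocarcinoma vs. squamous) with macro averaging; the class labels and toy predictions below are illustrative, and the study's averaging convention is an assumption:

```python
def macro_report(y_true, y_pred, labels):
    """Accuracy plus macro-averaged precision, recall, and F1-score."""
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precs, recs, f1s = [], [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        precs.append(prec)
        recs.append(rec)
    n = len(labels)
    return acc, sum(precs) / n, sum(recs) / n, sum(f1s) / n

y_true = ["benign", "adeno", "squamous", "adeno", "benign", "squamous"]
y_pred = ["benign", "adeno", "squamous", "adeno", "benign", "adeno"]
acc, prec, rec, f1 = macro_report(y_true, y_pred, ["benign", "adeno", "squamous"])
```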

Foundational Segmentation Models and Clinical Data Mining Enable Accurate Computer Vision for Lung Cancer.

Swinburne NC, Jackson CB, Pagano AM, Stember JN, Schefflein J, Marinelli B, Panyam PK, Autz A, Chopra MS, Holodny AI, Ginsberg MS

Jun 1, 2025
This study aims to assess the effectiveness of integrating Segment Anything Model (SAM) and its variant MedSAM into the automated mining, object detection, and segmentation (MODS) methodology for developing robust lung cancer detection and segmentation models without post hoc labeling of training images. In a retrospective analysis, 10,000 chest computed tomography scans from patients with lung cancer were mined. Line measurement annotations were converted to bounding boxes, excluding boxes < 1 cm or > 7 cm. The You Only Look Once object detection architecture was used for teacher-student learning to label unannotated lesions on the training images. Subsequently, a final tumor detection model was trained and employed with SAM and MedSAM for tumor segmentation. Model performance was assessed on a manually annotated test dataset, with additional evaluations conducted on an external lung cancer dataset before and after detection model fine-tuning. Bootstrap resampling was used to calculate 95% confidence intervals. Data mining yielded 10,789 line annotations, resulting in 5403 training boxes. The baseline detection model achieved an internal F1 score of 0.847, improving to 0.860 after self-labeling. Tumor segmentation using the final detection model attained internal Dice similarity coefficients (DSCs) of 0.842 (SAM) and 0.822 (MedSAM). After fine-tuning, external validation showed an F1 of 0.832 and DSCs of 0.802 (SAM) and 0.804 (MedSAM). Integrating foundational segmentation models into the MODS framework results in high-performing lung cancer detection and segmentation models using only mined clinical data. Both SAM and MedSAM hold promise as foundational segmentation models for radiology images.
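The conversion of mined line (caliper) measurements into bounding boxes, with the < 1 cm / > 7 cm exclusion described above, can be sketched as follows; the padding fraction and helper names are hypothetical, not from the paper:

```python
def line_to_box(x1, y1, x2, y2, pad_frac=0.1):
    """Convert a line (caliper) measurement into an axis-aligned bounding box,
    padded on each side by `pad_frac` of the line length (hypothetical padding)."""
    length = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    pad = pad_frac * length
    box = (min(x1, x2) - pad, min(y1, y2) - pad, max(x1, x2) + pad, max(y1, y2) + pad)
    return box, length

def keep_box(length_mm, lo=10.0, hi=70.0):
    """Size filter from the study: exclude boxes smaller than 1 cm or larger than 7 cm."""
    return lo <= length_mm <= hi

box, length = line_to_box(0.0, 0.0, 30.0, 40.0)  # a 3-4-5 line: 50 mm
```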