
Latent Class Analysis Identifies Distinct Patient Phenotypes Associated With Mistaken Treatment Decisions and Adverse Outcomes in Coronary Artery Disease.

Qi J, Wang Z, Ma X, Wang Z, Li Y, Yang L, Shi D, Zhou Y

Jul 19 2025
This study aimed to identify patient characteristics linked to mistaken treatments and major adverse cardiovascular events (MACE) in percutaneous coronary intervention (PCI) for coronary artery disease (CAD) using deep learning-based fractional flow reserve (DEEPVESSEL-FFR, DVFFR). A retrospective cohort of 3,840 PCI patients was analyzed using latent class analysis (LCA) based on eight factors. Mistaken treatment was defined as negative DVFFR patients undergoing revascularization or positive DVFFR patients not receiving it. MACE included all-cause mortality, rehospitalization for unstable angina, and non-fatal myocardial infarction. Patients were classified into comorbidities (Class 1), smoking-drinking (Class 2), and relatively healthy (Class 3) groups. Mistaken treatment was highest in Class 2 (15.4% vs. 6.7%, P < .001), while MACE was highest in Class 1 (7.0% vs. 4.8%, P < .001). Adjusted analyses showed increased mistaken treatment risk in Class 1 (OR 1.96; 95% CI 1.49-2.57) and Class 2 (OR 1.69; 95% CI 1.28-2.25) compared with Class 3. Class 1 also had higher MACE risk (HR 1.53; 95% CI 1.10-2.12). In conclusion, comorbidities and smoking-drinking classes had higher mistaken treatment and MACE risks compared with the relatively healthy class.
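A minimal sketch of the analysis pattern described above: patients are assigned to latent classes from a small set of factors and class membership is then related to an outcome via adjusted odds ratios. True LCA operates on categorical indicators; here scikit-learn's GaussianMixture stands in for the class-assignment step, and all column names and the synthetic data are assumptions, not values from the study.

```python
# Sketch only: Gaussian mixture as a stand-in for LCA, plus class-wise odds
# ratios for mistaken treatment estimated with statsmodels. Synthetic data.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3840  # cohort size reported in the abstract
df = pd.DataFrame({
    "age": rng.normal(63, 10, n),
    "diabetes": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "smoking": rng.integers(0, 2, n),
    "drinking": rng.integers(0, 2, n),
    "ldl": rng.normal(2.8, 0.8, n),
    "bmi": rng.normal(25, 3, n),
    "prior_mi": rng.integers(0, 2, n),
})

# 1) Assign each patient to one of three latent classes.
gm = GaussianMixture(n_components=3, random_state=0).fit(df)
df["latent_class"] = gm.predict(df)

# 2) Odds ratios for mistaken treatment (synthetic outcome), with the first
#    class as reference via dummy coding.
df["mistaken_tx"] = rng.integers(0, 2, n)
X = sm.add_constant(pd.get_dummies(df["latent_class"], prefix="class",
                                   drop_first=True).astype(float))
fit = sm.Logit(df["mistaken_tx"], X).fit(disp=0)
print(np.exp(fit.params))          # odds ratios
print(np.exp(fit.conf_int()))      # 95% CIs
```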

Automated Quantitative Evaluation of Age-Related Thymic Involution on Plain Chest CT.

Okamura YT, Endo K, Toriihara A, Fukuda I, Isogai J, Sato Y, Yasuoka K, Kagami SI

Jul 19 2025
The thymus is an important immune organ involved in T-cell generation. Age-related involution of the thymus has been linked to various age-related pathologies in recent studies. However, no method has been proposed to quantify age-related thymic involution from clinical images. The purpose of this study was to establish an objective, automated method to quantify age-related thymic involution from plain chest computed tomography (CT) images. We newly defined the thymic region for quantification (TRQ) as the target anatomical region. We manually segmented the TRQ in 135 CT studies, followed by construction of segmentation neural network (NN) models using these data. We developed the estimator of thymic volume (ETV), a quantitative indicator of the thymic tissue volume inside the segmented TRQ, based on simple mathematical modeling. The Hounsfield unit (HU) value and volume of the NN-segmented TRQ were measured, and the ETV was calculated in each CT study from 853 healthy subjects. We investigated how these measures were related to age and sex using quantile additive regression models. A significant correlation between the NN-segmented and manually segmented TRQ was seen for both the HU value and volume (r = 0.996 and r = 0.986, respectively). ETV declined exponentially with age (p < 0.001), consistent with age-related decline in the thymic tissue volume. In conclusion, our method enabled robust quantification of age-related thymic involution. Our method may aid in the prediction and risk classification of pathologies related to thymic involution.
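The abstract states only that ETV is derived from the mean HU and volume of the segmented TRQ via "simple mathematical modeling"; the exact formula is not given. The sketch below assumes a two-compartment linear-mixing model in which the TRQ contains only fat and thymic soft tissue, with illustrative reference HU values; both the model form and the constants are assumptions.

```python
# Illustrative two-compartment sketch of an ETV-like quantity (not the
# authors' formula). Reference HU values are assumptions.
HU_FAT = -100.0       # assumed mean attenuation of fat
HU_THYMUS = 40.0      # assumed mean attenuation of thymic soft tissue

def estimated_thymic_volume(trq_volume_ml: float, trq_mean_hu: float) -> float:
    """Estimate thymic tissue volume (mL) inside the segmented TRQ."""
    fraction = (trq_mean_hu - HU_FAT) / (HU_THYMUS - HU_FAT)
    fraction = min(max(fraction, 0.0), 1.0)   # clamp to a physical range
    return trq_volume_ml * fraction

# Example: a 30 mL TRQ with mean attenuation of -70 HU
print(estimated_thymic_volume(30.0, -70.0))  # ~6.4 mL of thymic tissue
```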

2.5D Deep Learning-Based Prediction of Pathological Grading of Clear Cell Renal Cell Carcinoma Using Contrast-Enhanced CT: A Multicenter Study.

Yang Z, Jiang H, Shan S, Wang X, Kou Q, Wang C, Jin P, Xu Y, Liu X, Zhang Y, Zhang Y

Jul 19 2025
To develop and validate a deep learning model based on arterial phase-enhanced CT for predicting the pathological grading of clear cell renal cell carcinoma (ccRCC). Data from 564 patients diagnosed with ccRCC at five hospitals were retrospectively analyzed. Patients from centers 1 and 2 were randomly divided into a training set (n=283) and an internal test set (n=122). Patients from centers 3, 4, and 5 served as external validation sets 1 (n=60), 2 (n=38), and 3 (n=61), respectively. A 2D model, a 2.5D model (three-slice input), and a radiomics-based multi-layer perceptron (MLP) model were developed. Model performance was evaluated using the area under the curve (AUC), accuracy, and sensitivity. The 2.5D model outperformed the 2D and MLP models. Its AUCs were 0.959 (95% CI: 0.9438-0.9738) for the training set, 0.879 (95% CI: 0.8401-0.9180) for the internal test set, and 0.870 (95% CI: 0.8076-0.9334), 0.862 (95% CI: 0.7581-0.9658), and 0.849 (95% CI: 0.7766-0.9216) for the three external validation sets, respectively. The corresponding accuracy values were 0.895, 0.836, 0.827, 0.825, and 0.839. Compared to the MLP model, the 2.5D model achieved significantly higher AUCs (increases of 0.150 [p<0.05], 0.112 [p<0.05], and 0.088 [p<0.05]) and accuracies (increases of 0.077 [p<0.05], 0.075 [p<0.05], and 0.101 [p<0.05]) in the external validation sets. In conclusion, the 2.5D model, which takes three adjacent CT slices as input, demonstrated improved predictive performance for the WHO/ISUP grading of ccRCC.
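A minimal sketch of the 2.5D idea: three adjacent arterial-phase slices are stacked along the channel dimension of a 2D CNN, giving a standard 2D backbone limited through-plane context. The ResNet-18 backbone and binary grading head below are assumptions; the paper's actual architecture is not specified in the abstract.

```python
# Sketch of a 2.5D classifier: 3 adjacent slices -> 3 input channels.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoPointFiveDClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = resnet18(weights=None)   # 3-channel input matches 3 slices
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) -- slices [z-1, z, z+1] centred on the tumour
        return self.backbone(x)

model = TwoPointFiveDClassifier()
dummy = torch.randn(4, 3, 224, 224)   # 4 lesions, 3 slices each
print(model(dummy).shape)             # torch.Size([4, 2])
```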

Accuracy and Time Efficiency of Artificial Intelligence-Driven Tooth Segmentation on CBCT Images: A Validation Study Using Two Implant Planning Software Programs.

Ntovas P, Sirirattanagool P, Asavanamuang P, Jain S, Tavelli L, Revilla-León M, Galarraga-Vinueza ME

Jul 18 2025
To assess the accuracy and time efficiency of manual versus artificial intelligence (AI)-driven tooth segmentation on cone-beam computed tomography (CBCT) images, using AI tools integrated within implant planning software, and to evaluate the impact of artifacts, dental arch, tooth type, and region. Fourteen patients who underwent CBCT scans were randomly selected for this study. Using the acquired datasets, 67 extracted teeth were segmented using one manual and two AI-driven tools. The segmentation time for each method was recorded. The extracted teeth were scanned with an intraoral scanner to serve as the reference. The virtual models generated by each segmentation method were superimposed with the surface scan models to calculate volumetric discrepancies. The discrepancy between the evaluated AI-driven and manual segmentation methods ranged from 0.10 to 0.98 mm, with a mean RMS of 0.27 (0.11) mm. Manual segmentation resulted in less RMS deviation compared to both AI-driven methods (CDX; BSB) (p < 0.05). Significant differences were observed between all investigated segmentation methods, both for the overall tooth area and each region, with the apical portion of the root showing the lowest accuracy (p < 0.05). Tooth type did not have a significant effect on segmentation (p > 0.05). Both AI-driven segmentation methods reduced segmentation time compared to manual segmentation (p < 0.05). AI-driven segmentation can generate reliable virtual 3D tooth models, with accuracy comparable to that of manual segmentation performed by experienced clinicians, while also significantly improving time efficiency. To further enhance accuracy in cases involving restoration artifacts, continued development and optimization of AI-driven tooth segmentation models are necessary.
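A sketch of the evaluation metric only: once a segmented tooth model has been superimposed on the reference intraoral-scan model, the surface discrepancy can be summarised as the RMS of point-to-surface distances. The nearest-neighbour approximation with SciPy below is an assumption; the study's software pipeline is not described at this level of detail.

```python
# RMS surface deviation between two registered point clouds (illustrative).
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_deviation(segmented_pts: np.ndarray, reference_pts: np.ndarray) -> float:
    """RMS of nearest-neighbour distances (mm) from segmented to reference points."""
    tree = cKDTree(reference_pts)
    distances, _ = tree.query(segmented_pts)   # closest reference point per vertex
    return float(np.sqrt(np.mean(distances ** 2)))

# Toy example with random point clouds standing in for registered meshes
rng = np.random.default_rng(0)
ref = rng.random((5000, 3)) * 10.0            # reference scan vertices (mm)
seg = ref + rng.normal(0, 0.2, ref.shape)     # segmented model with ~0.2 mm noise
print(round(rms_surface_deviation(seg, ref), 3))
```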

Machine learning and discriminant analysis model for predicting benign and malignant pulmonary nodules.

Li Z, Zhang W, Huang J, Lu L, Xie D, Zhang J, Liang J, Sui Y, Liu L, Zou J, Lin A, Yang L, Qiu F, Hu Z, Wu M, Deng Y, Zhang X, Lu J

Jul 18 2025
Pulmonary nodules (PNs) are often regarded as an early manifestation of lung cancer. PNs that remain stable for more than two years, or whose pathological results do not indicate lung cancer, are considered benign PNs (BPNs), while PNs that follow a tumor-like growth pattern, or whose pathological results indicate lung cancer, are considered malignant PNs (MPNs). Currently, more than 90% of PNs detected by screening tests are benign, with a false positive rate of up to 96.4%. Although a range of predictive models have been developed to identify MPNs, distinguishing BPNs from MPNs remains challenging. We included a total of 5197 patients in this case-control study according to the preset exclusion criteria and sample size. Among them, 4735 with BPNs and 2509 with MPNs were randomly divided into training, validation, and test sets at a 7:1.5:1.5 ratio. Three widely used machine learning algorithms (Random Forest, Gradient Boosting Machine, and XGBoost) were used to screen candidate features, the corresponding predictive models were then constructed using discriminant analysis, and the best-performing model was selected as the target model. The model was internally validated with 10-fold cross-validation and compared with the PKUPH and Block models. We collated information from chest CT examinations performed from 2018 to 2021 in the physical examination population and found that the detection rate of PNs was 21.57%, with an overall upward trend. The GMU_D model, constructed by discriminant analysis on machine learning-screened features, showed excellent discriminative performance (AUC = 0.866, 95% CI: 0.858-0.874) and higher accuracy than the PKUPH model (AUC = 0.559, 95% CI: 0.552-0.567) and the Block model (AUC = 0.823, 95% CI: 0.814-0.833). The cross-validation results were similarly strong (AUC = 0.866, 95% CI: 0.858-0.874). In conclusion, the detection rate of PNs was 21.57% in the physical examination population undergoing chest CT, and a prediction tool developed and validated on real-world PN data can accurately distinguish BPNs from MPNs with excellent predictive performance and discrimination.
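A minimal sketch of the modelling pattern the abstract describes: tree-ensemble importances screen candidate features, then a discriminant analysis model is fitted on the retained features and assessed with 10-fold cross-validated AUC. The synthetic data and top-10 cutoff are assumptions, and XGBoost is omitted here to keep the example dependency-free.

```python
# Feature screening with tree ensembles, then LDA with 10-fold CV AUC (sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)

# 1) Screen features by averaging importances from two tree ensembles.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
gbm = GradientBoostingClassifier(random_state=0).fit(X, y)
importance = (rf.feature_importances_ + gbm.feature_importances_) / 2
top_k = np.argsort(importance)[::-1][:10]          # keep the 10 strongest features

# 2) Fit discriminant analysis on the screened features, 10-fold CV AUC.
auc = cross_val_score(LinearDiscriminantAnalysis(), X[:, top_k], y,
                      cv=10, scoring="roc_auc")
print(f"10-fold AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```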

Sex estimation with parameters of the facial canal by computed tomography using machine learning algorithms and artificial neural networks.

Secgin Y, Kaya S, Harmandaoğlu O, Öztürk O, Senol D, Önbaş Ö, Yılmaz N

Jul 18 2025
The skull is highly durable and, as one of the most dimorphic bones, plays a significant role in sex determination. The facial canal (FC), a clinically significant canal within the temporal bone, houses the facial nerve. This study aims to estimate sex using morphometric measurements from the FC through machine learning (ML) and artificial neural networks (ANNs). The study utilized computed tomography (CT) images of 200 individuals (100 females, 100 males) aged 19-65 years. These images were retrospectively retrieved from the Picture Archiving and Communication System (PACS) at Düzce University Faculty of Medicine, Department of Radiology, covering 2021-2024. Bilateral measurements of nine temporal bone parameters were performed in axial, coronal, and sagittal planes. ML algorithms including Quadratic Discriminant Analysis (QDA), Linear Discriminant Analysis (LDA), Decision Tree (DT), Extra Tree Classifier (ETC), Random Forest (RF), Logistic Regression (LR), Gaussian Naive Bayes (GaussianNB), and k-Nearest Neighbors (k-NN) were used, alongside a multilayer perceptron classifier (MLPC) from the ANN family. Except for QDA (accuracy 0.93), all algorithms achieved an accuracy of 0.97. SHapley Additive exPlanations (SHAP) analysis revealed the five most impactful parameters: right SGAs, left SGAs, right TSWs, left TSWs, and the inner mouth width of the left FN. FN-centered morphometric measurements show high accuracy in sex determination and may aid in understanding FN positioning across sexes and populations. These findings may support rapid and reliable sex estimation in forensic investigations, especially in cases with fragmented craniofacial remains, and provide auxiliary diagnostic data for preoperative planning in otologic and skull base surgeries. They are thus relevant for surgeons, anthropologists, and forensic experts.
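A minimal sketch of the analysis pattern: train one of the classifier families named in the abstract on bilateral temporal-bone measurements and rank features by mean absolute SHAP value. The synthetic data and placeholder feature names are assumptions; the study's measured parameters are not reproduced here.

```python
# Classifier + SHAP feature ranking on synthetic stand-in data (sketch only).
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

feature_names = [f"measurement_{i}" for i in range(18)]   # 9 parameters x 2 sides
X, y = make_classification(n_samples=200, n_features=18, n_informative=6,
                           random_state=0)
X = pd.DataFrame(X, columns=feature_names)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

# SHAP values for the positive-class margin; rank features by mean |SHAP|.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
ranking = pd.Series(np.abs(shap_values).mean(axis=0),
                    index=feature_names).sort_values(ascending=False)
print(ranking.head(5))   # five most influential measurements
```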

Deep learning reconstruction enhances image quality in contrast-enhanced CT venography for deep vein thrombosis.

Asari Y, Yasaka K, Kurashima J, Katayama A, Kurokawa M, Abe O

Jul 18 2025
This study aimed to evaluate and compare the diagnostic performance and image quality of deep learning reconstruction (DLR) with hybrid iterative reconstruction (Hybrid IR) and filtered back projection (FBP) in contrast-enhanced CT venography for deep vein thrombosis (DVT). A retrospective analysis was conducted on 51 patients who underwent lower limb CT venography, including 20 with DVT lesions and 31 without DVT lesions. CT images were reconstructed using DLR, Hybrid IR, and FBP. Quantitative image quality metrics, such as contrast-to-noise ratio (CNR) and image noise, were measured. Three radiologists independently assessed DVT lesion detection, depiction of DVT lesions and normal structures, subjective image noise, artifacts, and overall image quality using scoring systems. Diagnostic performance was evaluated using sensitivity and area under the receiver operating characteristic curve (AUC). The paired t-test and Wilcoxon signed-rank test compared the results for continuous variables and ordinal scales, respectively, between DLR and Hybrid IR as well as between DLR and FBP. DLR significantly improved CNR and reduced image noise compared to Hybrid IR and FBP (p < 0.001). AUC and sensitivity for DVT detection were not statistically different across reconstruction methods. Two readers reported improved lesion visualization with DLR. DLR was also rated superior in image quality, normal structure depiction, and noise suppression by all readers (p < 0.001). DLR enhances image quality and anatomical clarity in CT venography. These findings support the utility of DLR in improving diagnostic confidence and image interpretability in DVT assessment.
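A sketch of the quantitative comparison only. CNR is computed here as (ROI mean minus background mean) divided by background SD, one common definition; the paper's exact ROI placement and CNR formula are not stated in the abstract, so treat these details and the synthetic per-patient values as assumptions.

```python
# CNR definition plus the paired tests used to compare reconstructions (sketch).
import numpy as np
from scipy import stats

def cnr(roi_mean: float, bg_mean: float, bg_sd: float) -> float:
    return (roi_mean - bg_mean) / bg_sd

print(cnr(roi_mean=150.0, bg_mean=40.0, bg_sd=12.0))   # single-study example, ~9.2

# Paired per-patient CNR values for two reconstructions (synthetic numbers).
rng = np.random.default_rng(0)
cnr_dlr = rng.normal(8.0, 1.5, 51)                      # 51 patients, as in the cohort
cnr_hybrid_ir = cnr_dlr - rng.normal(1.0, 0.5, 51)

t_stat, p_t = stats.ttest_rel(cnr_dlr, cnr_hybrid_ir)   # continuous metric
w_stat, p_w = stats.wilcoxon(cnr_dlr, cnr_hybrid_ir)    # ordinal reader scores would use this
print(f"paired t-test p = {p_t:.2e}, Wilcoxon p = {p_w:.2e}")
```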

Explainable CT-based deep learning model for predicting hematoma expansion including intraventricular hemorrhage growth.

Zhao X, Zhang Z, Shui J, Xu H, Yang Y, Zhu L, Chen L, Chang S, Du C, Yao Z, Fang X, Shi L

Jul 18 2025
Hematoma expansion (HE), including intraventricular hemorrhage (IVH) growth, significantly affects outcomes in patients with intracerebral hemorrhage (ICH). This study aimed to develop, validate, and interpret a deep learning model, HENet, for predicting three definitions of HE. Using CT scans and clinical data from 718 ICH patients across three hospitals, the multicenter retrospective study focused on revised hematoma expansion (RHE) definitions 1 and 2, and conventional HE (CHE). HENet's performance was compared with 2D models and physician predictions using two external validation sets. Results showed that HENet achieved high AUC values for RHE1, RHE2, and CHE predictions, surpassing physicians' predictions and 2D models in net reclassification index and integrated discrimination index for RHE1 and RHE2 outcomes. The Grad-CAM technique provided visual insights into the model's decision-making process. These findings suggest that integrating HENet into clinical practice could improve prediction accuracy and patient outcomes in ICH cases.
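A minimal Grad-CAM sketch in PyTorch, illustrating the kind of saliency map the authors describe for interpreting HENet. The tiny CNN, chosen target layer, and 2D input are placeholders; HENet's actual architecture and CT input are not reproduced.

```python
# Grad-CAM via forward/backward hooks on the last conv layer (sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
target_layer = model[2]           # last conv layer

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 1, 64, 64)                        # stand-in for a CT slice
logits = model(x)
logits[0, 1].backward()                              # gradient of the "expansion" class score

weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # GAP of gradients per channel
cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)   # (1, 1, 64, 64) heatmap highlighting influential regions
```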

CT derived fractional flow reserve: Part 1 - Comprehensive review of methodologies.

Shaikh K, Lozano PR, Evangelou S, Wu EH, Nurmohamed NS, Madan N, Verghese D, Shekar C, Waheed A, Siddiqui S, Kolossváry M, Almeida S, Coombes T, Suchá D, Trivedi SJ, Ihdayhid AR

Jul 18 2025
Advancements in cardiac computed tomography angiography (CCTA) have enabled the extraction of physiological data from an anatomy-based imaging modality. This review outlines the key methodologies for deriving fractional flow reserve (FFR) from CCTA, with a focus on two primary methods: 1) computational fluid dynamics-based FFR (CT-FFR) and 2) plaque-derived ischemia assessment using artificial intelligence and quantitative plaque metrics. These techniques have expanded the role of CCTA beyond anatomical assessment, allowing for concurrent evaluation of coronary physiology without the need for invasive testing. This review provides an overview of the principles, workflows, and limitations of each technique and aims to inform on the current state and future direction of non-invasive coronary physiology assessment.

Diagnostic Performance of Artificial Intelligence in Detecting and Distinguishing Pancreatic Ductal Adenocarcinoma via Computed Tomography: A Systematic Review and Meta-Analysis.

Harandi H, Gouravani M, Alikarami S, Shahrabi Farahani M, Ghavam M, Mohammadi S, Salehi MA, Reynolds S, Dehghani Firouzabadi F, Huda F

Jul 18 2025
We conducted a systematic review and meta-analysis of the diagnostic performance of studies that used artificial intelligence (AI) algorithms to detect pancreatic ductal adenocarcinoma (PDAC) and distinguish it from other types of pancreatic lesions. We systematically searched for studies on pancreatic lesions and AI published from January 2014 to May 2024. Data were extracted, and a meta-analysis was performed using contingency tables and a random-effects model to calculate pooled sensitivity and specificity. Quality assessment was done using the modified TRIPOD and PROBAST tools. We included 26 studies in this systematic review, 22 of which were included in the meta-analysis. In internal validation, the AI algorithms showed a pooled sensitivity of 93% (95% confidence interval [CI], 90 to 95) and specificity of 95% (95% CI, 92 to 97). Externally validated AI algorithms demonstrated a combined sensitivity of 89% (95% CI, 85 to 92) and specificity of 91% (95% CI, 85 to 95). Subgroup analysis indicated that diagnostic performance differed by comparator group, image contrast, segmentation technique, and algorithm type, with contrast-enhanced imaging and specific AI models (e.g., random forest for sensitivity and CNN for specificity) demonstrating superior accuracy. Although potential biases should be further addressed, the results of this systematic review and meta-analysis show that AI models have the potential to be incorporated into clinical settings for the detection of smaller tumors and early signs of PDAC.
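A sketch of the pooling step only: per-study sensitivities are rebuilt from 2x2 contingency counts, logit-transformed, and combined with a DerSimonian-Laird random-effects model. The counts below are made up, and the review's actual model (often a bivariate one) may differ from this simpler univariate approach.

```python
# Random-effects pooled proportion (DerSimonian-Laird on the logit scale).
import numpy as np

def pooled_proportion(events: np.ndarray, totals: np.ndarray) -> tuple:
    """Pooled proportion with a 95% CI from per-study event counts."""
    p = (events + 0.5) / (totals + 1.0)              # continuity-corrected proportions
    y = np.log(p / (1 - p))                          # logit transform
    v = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)               # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                          # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    lo, hi = y_re - 1.96 * se, y_re + 1.96 * se
    back = lambda z: 1.0 / (1.0 + np.exp(-z))        # back-transform to a proportion
    return back(y_re), back(lo), back(hi)

tp = np.array([90, 45, 120, 60])                     # true positives per study (synthetic)
pos = np.array([100, 50, 130, 64])                   # PDAC cases per study (synthetic)
print(pooled_proportion(tp, pos))                    # pooled sensitivity and 95% CI
```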
