
Comparison of segmentation performance of CNNs, vision transformers, and hybrid networks for paranasal sinuses with sinusitis on CT images.

Song D, Yang S, Han JY, Kim KG, Kim ST, Yi WJ

PubMed · Sep 1 2025
Accurate segmentation of the paranasal sinuses, including the frontal sinus (FS), ethmoid sinus (ES), sphenoid sinus (SS), and maxillary sinus (MS), plays an important role in supporting image-guided surgery (IGS) for sinusitis, facilitating safer intraoperative navigation by identifying anatomical variations and delineating surgical landmarks on CT imaging. To the best of our knowledge, no comparative studies of convolutional neural networks (CNNs), vision transformers (ViTs), and hybrid networks for segmenting each paranasal sinus in patients with sinusitis have been conducted. Therefore, the objective of this study was to compare the segmentation performance of CNNs, ViTs, and hybrid networks for individual paranasal sinuses with varying degrees of anatomical complexity and morphological and textural variations caused by sinusitis on CT images. Performance was compared using the Jaccard index (JI), Dice similarity coefficient (DSC), precision (PR), recall (RC), and 95% Hausdorff distance (HD95) as segmentation accuracy metrics, and the number of parameters (Params) and inference time (IT) as computational efficiency metrics. The Swin UNETR hybrid network outperformed the other networks, achieving the highest segmentation scores (JI of 0.719, DSC of 0.830, PR of 0.935, and RC of 0.758) and the lowest HD95 (10.529), while requiring the fewest model parameters (15.705 M). CoTr, another hybrid network, also demonstrated superior segmentation performance compared to the CNNs and ViTs, and achieved the fastest inference time (0.149). Compared with CNNs and ViTs, hybrid networks significantly reduced false positives and enabled more precise boundary delineation, effectively capturing anatomical relationships among the sinuses and surrounding structures, which resulted in the lowest segmentation errors near critical surgical landmarks. In conclusion, hybrid networks may provide a more balanced trade-off between segmentation accuracy and computational efficiency, with potential applicability in clinical decision support systems for sinusitis.
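
For reference, the overlap metrics reported above (JI, DSC, PR, RC) can be computed directly from binary masks; the NumPy sketch below is illustrative (the function name and mask layout are ours, not the paper's), and HD95 would additionally require surface-distance computation, e.g., via scipy's distance transforms.

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Voxel-wise overlap metrics for binary masks (True = sinus voxel).
    Assumes both masks are non-empty."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.count_nonzero(pred & gt)    # voxels labeled sinus in both masks
    fp = np.count_nonzero(pred & ~gt)   # predicted sinus, actually background
    fn = np.count_nonzero(~pred & gt)   # missed sinus voxels
    return {
        "JI": tp / (tp + fp + fn),           # Jaccard index
        "DSC": 2 * tp / (2 * tp + fp + fn),  # Dice similarity coefficient
        "PR": tp / (tp + fp),                # precision
        "RC": tp / (tp + fn),                # recall
    }
```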

Automated coronary analysis in ultrahigh-spatial resolution photon-counting detector CT angiography: Clinical validation and intra-individual comparison with energy-integrating detector CT.

Kravchenko D, Hagar MT, Varga-Szemes A, Schoepf UJ, Schoebinger M, O'Doherty J, Gülsün MA, Laghi A, Laux GS, Vecsey-Nagy M, Emrich T, Tremamunno G

PubMed · Sep 1 2025
To evaluate a deep-learning algorithm for automated coronary artery analysis on ultrahigh-resolution photon-counting detector coronary computed tomography (CT) angiography and to compare its performance with that of expert readers, using invasive coronary angiography as the reference. Thirty-two patients (mean age 68.6 years; 81% male) underwent both energy-integrating detector and ultrahigh-resolution photon-counting detector CT within 30 days. Expert readers scored each image using the Coronary Artery Disease-Reporting and Data System classification and compared it to invasive angiography. After a three-month wash-out, one reader reanalyzed the photon-counting detector CT images assisted by the algorithm. Sensitivity, specificity, accuracy, inter-reader agreement, and reading times were recorded for each method. Across 401 arterial segments, inter-reader agreement improved from substantial (κ = 0.75) on energy-integrating detector CT to near-perfect (κ = 0.86) on photon-counting detector CT. The algorithm alone achieved 85% sensitivity, 91% specificity, and 90% accuracy on energy-integrating detector CT, and 85%, 96%, and 95% on photon-counting detector CT. Compared with invasive angiography on photon-counting detector CT, manual and automated reads had similar sensitivity (67%), but manual assessment slightly outperformed the algorithm in specificity (85% vs. 79%) and accuracy (84% vs. 78%). When the reader was assisted by the algorithm, specificity rose to 97% (p < 0.001), accuracy to 95%, and reading time decreased by 54% (p < 0.001). This deep-learning algorithm demonstrates high agreement with experts and improved diagnostic performance on photon-counting detector CT. Expert review augmented by the algorithm further increases specificity and dramatically reduces interpretation time.
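
The per-segment metrics above follow from a standard 2x2 confusion table against the invasive-angiography reference, and inter-reader agreement is Cohen's kappa; a minimal sketch (variable names and example categories are illustrative):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def diagnostic_performance(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Binary per-segment calls (1 = obstructive lesion) vs. the reference."""
    tp = np.count_nonzero((pred == 1) & (truth == 1))
    tn = np.count_nonzero((pred == 0) & (truth == 0))
    fp = np.count_nonzero((pred == 1) & (truth == 0))
    fn = np.count_nonzero((pred == 0) & (truth == 1))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Inter-reader agreement on CAD-RADS categories (cf. kappa 0.75 vs. 0.86 above):
reader1 = np.array([0, 1, 2, 2, 3, 1])   # CAD-RADS categories, reader 1
reader2 = np.array([0, 1, 2, 3, 3, 1])   # CAD-RADS categories, reader 2
kappa = cohen_kappa_score(reader1, reader2)
```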

Automatic detection of mandibular fractures on CT scan using deep learning.

Liu Y, Wang X, Tu Y, Chen W, Shi F, You M

PubMed · Sep 1 2025
This study explores the application of artificial intelligence (AI), specifically deep learning, in the detection and classification of mandibular fractures on CT scans. Data from 459 patients were retrospectively obtained from West China Hospital of Stomatology, Sichuan University, spanning 2020 to 2023. The CT scans were divided into training, testing, and independent validation sets. The research focuses on training and validating a deep learning model using the nnU-Net segmentation framework for pixel-level accuracy in identifying fracture locations. Additionally, a 3D-ResNet with pre-trained weights was employed to classify fractures into 3 types based on severity. Performance metrics included sensitivity, precision, specificity, and area under the receiver operating characteristic curve (AUC). The study achieved high diagnostic accuracy in mandibular fracture detection, with sensitivity >0.93, precision >0.79, and specificity >0.80. For mandibular fracture classification, accuracies were all above 0.718, with a mean AUC of 0.86. Detection and classification of mandibular fractures in CT images can be significantly enhanced using the nnU-Net segmentation framework, aiding clinical diagnosis.
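
As a sketch of the classification stage, torchvision's Kinetics-pretrained r3d_18 (a 3D ResNet-18) can stand in for the paper's pre-trained 3D-ResNet; the input size and channel replication below are illustrative assumptions, not details from the study.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

# Kinetics-400-pretrained 3D ResNet-18 with a new 3-class head
# (one class per fracture severity type).
model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
model.fc = nn.Linear(model.fc.in_features, 3)

# The pretrained stem expects 3 channels, so a single-channel CT
# sub-volume is typically replicated along the channel axis.
ct_volume = torch.randn(2, 1, 16, 112, 112)   # (batch, 1, D, H, W)
x = ct_volume.repeat(1, 3, 1, 1, 1)           # -> (batch, 3, D, H, W)
probs = model(x).softmax(dim=1)               # per-class probabilities
```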

An innovative bimodal computed tomography data-driven deep learning model for predicting aortic dissection: a multi-center study.

Li Z, Chen L, Zhang S, Zhang X, Zhang J, Ying M, Zhu J, Li R, Song M, Feng Z, Zhang J, Liang W

PubMed · Sep 1 2025
Aortic dissection (AD) is a lethal emergency requiring prompt diagnosis. Current computed tomography angiography (CTA)-based diagnosis requires contrast agents, which costs time, whereas existing deep learning (DL) models support only single-modality inputs [non-contrast computed tomography (CT) or CTA]. In this study, we propose a bimodal DL framework that processes both types independently, enabling dual-path detection and improving diagnostic efficiency. Patients who underwent non-contrast CT and CTA from February 2016 to September 2021 were retrospectively included from three institutions: the First Affiliated Hospital, Zhejiang University School of Medicine (Center I), Zhejiang Hospital (Center II), and Yiwu Central Hospital (Center III). A two-stage DL model for predicting AD was developed. The first stage used an aorta detection network (AoDN) to localize the aorta in non-contrast CT or CTA images. Image patches containing the detected aorta were cut from the CT images and combined into an image patch sequence, which was input to an aortic dissection diagnosis network (ADDiN) to diagnose AD in the second stage. Performance was assessed using average precision at an intersection-over-union threshold of 0.5 (AP@0.5) for aorta detection and the area under the receiver operating characteristic curve (AUC) for diagnosis. The first cohort, comprising 102 patients (53±15 years; 80 men) from two institutions, was used for the AoDN, whereas the second cohort, consisting of 861 cases (55±15 years; 623 men) from three institutions, was used for the ADDiN. For the aorta detection task, the AoDN achieved an AP@0.5 of 99.14% on the non-contrast CT test set and 99.34% on the CTA test set. For the AD diagnosis task, the ADDiN obtained AUCs of 0.98 on the non-contrast CT test set and 0.99 on the CTA test set. The proposed bimodal CT data-driven DL model accurately diagnoses AD, facilitating prompt hospital diagnosis and treatment of AD.
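
The AP@0.5 detection metric hinges on the intersection-over-union between predicted and ground-truth aorta boxes; a minimal sketch with illustrative coordinates:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A predicted aorta box counts as a true positive for AP@0.5 when its IoU
# with the ground-truth box reaches the 0.5 threshold:
print(box_iou((10, 10, 50, 60), (12, 8, 52, 58)) >= 0.5)  # True
```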

Pulmonary Embolism Survival Prediction Using Multimodal Learning Based on Computed Tomography Angiography and Clinical Data.

Zhong Z, Zhang H, Fayad FH, Lancaster AC, Sollee J, Kulkarni S, Lin CT, Li J, Gao X, Collins S, Greineder CF, Ahn SH, Bai HX, Jiao Z, Atalay MK

PubMed · Sep 1 2025
Pulmonary embolism (PE) is a significant cause of mortality in the United States. The objective of this study is to implement deep learning (DL) models using computed tomography pulmonary angiography (CTPA), clinical data, and PE Severity Index (PESI) scores to predict PE survival. In total, 918 patients (median age 64 y, range 13 to 99 y, 48% male) with 3978 CTPAs were identified via retrospective review across 3 institutions. To predict survival, an AI model was used to extract disease-related imaging features from CTPAs. Imaging features and clinical variables were then incorporated into independent DL models to predict survival outcomes. Cross-modal fusion CoxPH models were used to develop multimodal models from combinations of DL models and calculated PESI scores. Five multimodal models were developed as follows: (1) using CTPA imaging features only, (2) using clinical variables only, (3) using both CTPA and clinical variables, (4) using CTPA and PESI score, and (5) using CTPA, clinical variables, and PESI score. Performance was evaluated using the concordance index (c-index). Kaplan-Meier analysis was performed to stratify patients into high-risk and low-risk groups. Additional factor-risk analysis was conducted to account for right ventricular (RV) dysfunction. For both datasets, the multimodal models incorporating CTPA features, clinical variables, and PESI score achieved higher c-indices than PESI alone. Following stratification of patients into high-risk and low-risk groups by the models, survival outcomes differed significantly (both P < 0.001). A strong correlation was found between high-risk grouping and RV dysfunction. Multimodal DL models incorporating CTPA features, clinical data, and PESI achieved higher c-indices than PESI alone for PE survival prediction.
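
The c-index used throughout is Harrell's concordance index: the fraction of comparable patient pairs whose predicted risk ordering matches the observed survival ordering. A minimal O(n^2) sketch (a library such as lifelines would be used in practice):

```python
import numpy as np

def concordance_index(times, scores, events):
    """times: survival/censoring times; scores: predicted risk (higher = worse);
    events: 1 if death observed, 0 if censored."""
    concordant, tied, comparable = 0, 0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            # A pair is comparable when patient i's death is observed
            # before patient j's event or censoring time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

print(concordance_index(np.array([2.0, 5.0, 9.0]),
                        np.array([0.9, 0.4, 0.1]),
                        np.array([1, 1, 0])))  # 1.0: risk ordering is perfect
```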

Development of an age estimation method for the coxal bone and lumbar vertebrae obtained from post-mortem computed tomography images using a convolutional neural network.

Imaizumi K, Usui S, Nagata T, Hayakawa H, Shiotani S

PubMed · Sep 1 2025
Age estimation plays a major role in the identification of unknown dead bodies, including skeletal remains. We present a novel age estimation method developed by applying a deep-learning network to the coxal bone and lumbar vertebrae on post-mortem computed tomography (PMCT) images. Volume-rendered images of these bones from 1,229 individuals were captured and input to a convolutional neural network based on the Visual Geometry Group 16 (VGG16) network. A transfer learning strategy was employed. The predictive capabilities of the age estimation models were assessed by a 10-fold cross-validation procedure, with the mean absolute error (MAE) and correlation coefficients between chronological and estimated ages calculated for validation. In addition, gradient-weighted class activation mapping (Grad-CAM) was used to visualize the regions of interest in learning. The resulting estimation models showed low MAEs (range, 6.44-7.27 years) and high correlation coefficients (range, 0.84-0.91) in validation. Aging-induced shape changes were grossly observed at the vertebral body, coxal bone surface, and other sites, and the Grad-CAM results identified these as regions of interest in learning. The present method has the potential to become an age estimation tool routinely applied in the examination of unknown dead bodies, including skeletal remains.
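
A sketch of the transfer-learning setup described above: an ImageNet-pretrained VGG16 backbone with its final layer swapped for a single-output age regressor (the frozen layers and L1 objective here are our illustrative choices, not details from the paper):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                 # freeze the convolutional stem
model.classifier[6] = nn.Linear(4096, 1)    # age as a single scalar output

criterion = nn.L1Loss()                     # L1 loss optimizes MAE directly
images = torch.randn(4, 3, 224, 224)        # volume-rendered bone images
ages = torch.tensor([34.0, 61.0, 47.0, 75.0])
loss = criterion(model(images).squeeze(1), ages)
```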

CT-based deep learning radiomics model for predicting proliferative hepatocellular carcinoma: application in transarterial chemoembolization and radiofrequency ablation.

Zhang H, Zhang Z, Zhang K, Gao Z, Shen Z, Shen W

PubMed · Sep 1 2025
Proliferative hepatocellular carcinoma (HCC) is an aggressive tumor whose prognosis varies with disease stage and subsequent treatment. This study aims to develop and validate a deep learning radiomics (DLR) model based on contrast-enhanced CT to predict proliferative HCC and to implement risk prediction in patients treated with transarterial chemoembolization (TACE) and radiofrequency ablation (RFA). 312 patients (mean age, 58 years ± 10 [SD]; 261 men and 51 women) with HCC undergoing surgery at two medical centers were included and divided into a training set (n = 182), an internal test set (n = 46), and an external test set (n = 84). DLR features were extracted from preoperative contrast-enhanced CT images. Multiple machine learning algorithms were used to develop and validate proliferative HCC prediction models in the training and test sets. Subsequently, patients from two independent new sets (RFA and TACE sets) were divided into high- and low-risk groups using the DLR score generated by the optimal model. The risk prediction value of the DLR score for recurrence-free survival (RFS) and time to progression (TTP) was examined separately in the RFA and TACE sets. The DLR proliferative HCC prediction model demonstrated excellent predictive performance, with an AUC of 0.906 (95% CI 0.861-0.952) in the training set, 0.901 (95% CI 0.779-1.000) in the internal test set, and 0.837 (95% CI 0.746-0.928) in the external test set. The DLR score effectively enabled risk prediction for patients in the RFA and TACE sets. In the RFA set, the low-risk group had significantly longer RFS than the high-risk group (P = 0.037). Similarly, in the TACE set, the low-risk group showed a longer TTP than the high-risk group (P = 0.034). The DLR-based contrast-enhanced CT model enables non-invasive prediction of proliferative HCC. Furthermore, DLR risk prediction helps identify high-risk patients undergoing RFA or TACE, providing prognostic insights for personalized management.
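
The risk stratification reported above amounts to dichotomizing the DLR score and comparing survival curves; a minimal sketch using lifelines with hypothetical data (the study's actual cutoff is not stated here, so a median split is assumed):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical DLR scores, follow-up months, and recurrence/progression flags.
score = np.array([0.2, 0.8, 0.4, 0.9, 0.1, 0.7])
months = np.array([30.0, 6.0, 24.0, 4.0, 36.0, 10.0])
event = np.array([0, 1, 0, 1, 0, 1])

high = score >= np.median(score)          # illustrative median cutoff
kmf = KaplanMeierFitter()
kmf.fit(months[high], event[high], label="high risk")  # KM curve per group

# Log-rank comparison of the two groups (cf. the reported P = 0.037 / 0.034):
result = logrank_test(months[high], months[~high], event[high], event[~high])
print(result.p_value)
```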

Impact of pre-test probability on AI-LVO detection: a systematic review of LVO prevalence across clinical contexts.

Olivé-Gadea M, Mayol J, Requena M, Rodrigo-Gisbert M, Rizzo F, Garcia-Tornel A, Simonetti R, Diana F, Muchada M, Pagola J, Rodriguez-Luna D, Rodriguez-Villatoro N, Rubiera M, Molina CA, Tomasello A, Hernandez D, de Dios Lascuevas M, Ribo M

PubMed · Aug 31 2025
Rapid identification of large vessel occlusion (LVO) in acute ischemic stroke (AIS) is essential for reperfusion therapy. Screening tools, including artificial intelligence (AI)-based algorithms, have been developed to accelerate detection, but their yield depends heavily on pre-test LVO prevalence. This study aimed to review LVO prevalence across clinical contexts and analyze its impact on AI-algorithm performance. We systematically reviewed studies reporting consecutive suspected AIS cohorts. Cohorts were grouped into four clinical scenarios based on patient selection criteria: (a) high suspicion of LVO by stroke specialists (direct-to-angiosuite candidates), (b) high suspicion of LVO according to pre-hospital scales, and (c) and (d) any suspected AIS without a severity cut-off, in a hospital or pre-hospital setting, respectively. We analyzed LVO prevalence in each scenario and assessed the false discovery rate (FDR, the proportion of positive calls that are false positives) that would result from applying eight commercially available LVO-detection algorithms. We included 87 cohorts from 80 studies. Median LVO prevalence was (a) 84% (77-87%), (b) 35% (26-42%), (c) 19% (14-25%), and (d) 14% (8-22%). At high prevalence (a), FDR ranged between 0.007 (1 false positive in 142 positives) and 0.023 (1 in 43), whereas in the low-prevalence scenarios (c and d), FDR ranged between 0.168 (1 in 6) and 0.543 (over 1 in 2). To ensure meaningful clinical impact, AI algorithms must be evaluated within the specific populations and care pathways where they are applied.
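
The prevalence dependence is mechanical: for fixed sensitivity and specificity, the false discovery rate FDR = FP / (FP + TP) rises as pre-test prevalence falls. A short sketch with assumed (not reported) operating points of 90% sensitivity and 95% specificity:

```python
def false_discovery_rate(sensitivity, specificity, prevalence):
    """FDR = FP / (FP + TP) per screened patient at a given prevalence."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    return fp / (fp + tp)

# The same algorithm across the review's four median prevalences:
for scenario, prev in [("a", 0.84), ("b", 0.35), ("c", 0.19), ("d", 0.14)]:
    fdr = false_discovery_rate(0.90, 0.95, prev)
    print(f"({scenario}) prevalence {prev:.0%}: "
          f"FDR {fdr:.3f} (~1 false positive per {1 / fdr:.0f} positives)")
```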

Utilisation of artificial intelligence to enhance the detection rates of renal cancer on cross-sectional imaging: protocol for a systematic review and meta-analysis.

Ofagbor O, Bhardwaj G, Zhao Y, Baana M, Arkwazi M, Lami M, Bolton E, Heer R

PubMed · Aug 31 2025
The incidence of renal cell carcinoma has been steadily increasing owing to the growing use of imaging, which identifies incidental masses. Although survival has also improved because of early detection, overdiagnosis and overtreatment of benign renal masses are associated with significant morbidity, as patients with a suspected renal malignancy on imaging undergo invasive and risky procedures for a definitive diagnosis. Therefore, accurately characterising a renal mass as benign or malignant on imaging is paramount to improving patient outcomes. Artificial intelligence (AI) poses an exciting solution to this problem, augmenting traditional radiological diagnosis to increase detection accuracy. This review aims to investigate and summarise the current evidence on the diagnostic accuracy of AI in characterising renal masses on imaging. This will involve systematically searching the PubMed, MEDLINE, Embase, Web of Science, Scopus and Cochrane databases. Publications of research that has evaluated the use of automated AI, fully or to some extent, in cross-sectional imaging for diagnosing or characterising malignant renal tumours will be included if published between July 2016 and June 2025 and in English. The protocol adheres to the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols 2015 checklist. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) score will be used to evaluate the quality and risk of bias across included studies. Furthermore, in line with the Checklist for Artificial Intelligence in Medical Imaging recommendations, studies will be evaluated for inclusion of the minimum necessary information on AI research reporting. Ethical clearance will not be necessary for this systematic review, and results will be disseminated through peer-reviewed publications and presentations at national and international conferences. PROSPERO registration number: CRD42024529929.

Noncontrast CT-based deep learning for predicting intracerebral hemorrhage expansion incorporating growth of intraventricular hemorrhage.

Ning Y, Yu Q, Fan X, Jiang W, Chen X, Jiang H, Xie K, Liu R, Zhou Y, Zhang X, Lv F, Xu X, Peng J

PubMed · Aug 31 2025
Intracerebral hemorrhage (ICH) is a severe form of stroke with high mortality and disability, in which early hematoma expansion (HE) critically influences prognosis. Previous studies suggest that revised hematoma expansion (rHE), defined to include intraventricular hemorrhage (IVH) growth, provides improved prognostic accuracy. This study therefore aimed to develop a deep learning model based on noncontrast CT (NCCT) to predict high-risk rHE in ICH patients, enabling timely intervention. A retrospective dataset of 775 spontaneous ICH patients with baseline and follow-up CT scans was collected from two centers and split into training (n = 389), internal-testing (n = 167), and external-testing (n = 219) cohorts. 2D/3D convolutional neural network (CNN) models based on ResNet-101, ResNet-152, DenseNet-121, and DenseNet-201 were separately developed using baseline NCCT images, and the activation areas of the optimal deep learning model were visualized using gradient-weighted class activation mapping (Grad-CAM). Two baseline logistic regression clinical models, based on the BRAIN score and on independent clinical-radiologic predictors, were also developed, along with combined-logistic and combined-SVM models incorporating handcrafted radiomics features and clinical-radiologic factors. Model performance was assessed using the area under the receiver operating characteristic curve (AUC). The 2D ResNet-101 model outperformed the others, with an AUC of 0.777 (95% CI, 0.716-0.830) in the external-testing set, surpassing the baseline clinical-radiologic model and the BRAIN score (AUC increases of 0.087, p = 0.022, and 0.119, p = 0.003). Compared with the combined-logistic and combined-SVM models, the AUC increased by 0.083 (p = 0.029) and 0.074 (p = 0.058), respectively. The deep learning model can identify ICH patients at high risk of rHE with more favorable predictive performance than traditional baseline models based on clinical-radiologic variables and radiomics features.
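
The Grad-CAM visualization mentioned above weights the last convolutional feature maps by their pooled gradients; a minimal PyTorch sketch over a ResNet-101 (the layer choice, two-class head, and input shape are illustrative, not the study's configuration):

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet101

model = resnet101(num_classes=2).eval()     # 2 classes: rHE vs. no rHE
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)             # NCCT slice stacked to 3 channels
model(x)[0, 1].backward()                   # backprop the "expansion" logit

weights = grads["v"].mean(dim=(2, 3), keepdim=True)       # pooled gradients
cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heat map in [0, 1]
```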