
Fully Automated Evaluation of Condylar Remodeling after Orthognathic Surgery in Skeletal Class II Patients Using Deep Learning and Landmarks.

Jia W, Wu H, Mei L, Wu J, Wang M, Cui Z

PubMed · May 17, 2025
Condylar remodeling is a key prognostic indicator in maxillofacial surgery for skeletal class II patients. This study aimed to develop and validate a fully automated method leveraging landmark-guided segmentation and registration for efficient assessment of condylar remodeling. A V-Net-based deep learning workflow was developed to automatically segment the mandible and localize anatomical landmarks from CT images. Cutting planes were computed based on the landmarks to segment the condylar and ramus volumes from the mandible mask. The stable ramus served as a reference for registering pre- and post-operative condyles using the Iterative Closest Point (ICP) algorithm. Condylar remodeling was subsequently assessed through mesh registration, heatmap visualization, and quantitative metrics of surface distance and volumetric change. Experts also rated the concordance between automated assessments and clinical diagnoses. In the test set, condylar segmentation achieved a Dice coefficient of 0.98, and landmark prediction yielded a mean absolute error of 0.26 mm. The automated evaluation process was completed in 5.22 seconds, approximately 150 times faster than manual assessments. The method accurately quantified condylar volume changes, ranging from 2.74% to 50.67% across patients. Expert ratings for all test cases averaged 9.62. This study introduced a consistent, accurate, and fully automated approach for condylar remodeling evaluation. The well-defined anatomical landmarks guided precise segmentation and registration, while deep learning supported an end-to-end automated workflow. The test results demonstrated its broad clinical applicability across various degrees of condylar remodeling and high concordance with expert assessments. By integrating anatomical landmarks and deep learning, the proposed method improves efficiency by 150 times without compromising accuracy, thereby facilitating an efficient and accurate assessment of orthognathic prognosis. The personalized 3D condylar remodeling models aid in visualizing sequelae, such as joint pain or skeletal relapse, and guide individualized management of TMJ disorders.
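
As a rough illustration of the ramus-referenced registration step described above, the sketch below aligns pre- and post-operative point clouds with Open3D's ICP implementation; the file names, correspondence threshold, and units are assumptions, not details from the paper.

```python
import numpy as np
import open3d as o3d

# Placeholder inputs: surfaces extracted from the pre- and post-operative
# mandible masks (file names are illustrative, not from the study).
pre_ramus = o3d.io.read_point_cloud("pre_op_ramus.ply")
post_ramus = o3d.io.read_point_cloud("post_op_ramus.ply")
pre_condyle = o3d.io.read_point_cloud("pre_op_condyle.ply")
post_condyle = o3d.io.read_point_cloud("post_op_condyle.ply")

# Register the stable rami to estimate the rigid transform between time points.
threshold = 2.0  # max correspondence distance in mm (assumed)
reg = o3d.pipelines.registration.registration_icp(
    post_ramus, pre_ramus, threshold, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the ramus-derived transform to the post-operative condyle, then
# measure point-to-surface distances against the pre-operative condyle.
post_condyle.transform(reg.transformation)
distances = np.asarray(post_condyle.compute_point_cloud_distance(pre_condyle))
print(f"mean surface distance: {distances.mean():.2f} mm")
```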

Intracranial hemorrhage segmentation and classification framework in computer tomography images using deep learning techniques.

Ahmed SN, Prakasam P

PubMed · May 17, 2025
Automated diagnosis and CT (computed tomography) hemorrhage segmentation could benefit neurosurgeons by helping them create treatment strategies that increase the survival rate. Owing to the significance of medical image segmentation and the difficulty of manual delineation, a wide variety of automated techniques have been developed for this purpose, with a primary focus on particular image modalities. In this paper, a MUNet (Multiclass U-Net)-based Intracranial Hemorrhage Segmentation and Classification Framework (IHSNet) is proposed to segment multiple kinds of hemorrhage, while fully connected layers classify the hemorrhage type. With the suggested approach, the segmentation accuracy for hemorrhages is 98.53% and the classification accuracy is 98.71%. Intraventricular hemorrhage (IVH), epidural hemorrhage (EDH), intraparenchymal hemorrhage (IPH), subdural hemorrhage (SDH), and subarachnoid hemorrhage (SAH) are the subtypes of intracranial hemorrhage (ICH); their Dice coefficients are 0.77, 0.84, 0.64, 0.80, and 0.92, respectively. The proposed method has considerable potential for computer-aided diagnosis and can be extended in the future to handle further medical image segmentation problems.
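
For readers who want to reproduce the subtype-level evaluation, below is a minimal sketch of a per-class Dice computation over label maps; the label indices are assumed for illustration, not taken from the paper.

```python
import numpy as np

# Assumed label convention (not from the paper): 0 = background,
# 1 = IVH, 2 = EDH, 3 = IPH, 4 = SDH, 5 = SAH.
SUBTYPES = {1: "IVH", 2: "EDH", 3: "IPH", 4: "SDH", 5: "SAH"}

def dice_per_class(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice coefficient for one hemorrhage subtype in a multiclass label map."""
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom

# Random volumes standing in for a predicted and a ground-truth label map.
pred = np.random.randint(0, 6, size=(64, 256, 256))
gt = np.random.randint(0, 6, size=(64, 256, 256))
for label, name in SUBTYPES.items():
    print(f"{name}: Dice = {dice_per_class(pred, gt, label):.3f}")
```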

Development of a deep-learning algorithm for etiological classification of subarachnoid hemorrhage using non-contrast CT scans.

Chen L, Wang X, Li Y, Bao Y, Wang S, Zhao X, Yuan M, Kang J, Sun S

PubMed · May 17, 2025
This study aims to develop a deep learning algorithm for differentiating aneurysmal subarachnoid hemorrhage (aSAH) from non-aneurysmal subarachnoid hemorrhage (naSAH) using non-contrast computed tomography (NCCT) scans. This retrospective study included 618 patients diagnosed with SAH. The dataset was divided into a training and internal validation cohort (533 cases: aSAH = 305, naSAH = 228) and an external test cohort (85 cases: aSAH = 55, naSAH = 30). Hemorrhage regions were automatically segmented using a U-Net++ architecture. A ResNet-based deep learning model was trained to classify the etiology of SAH. The model achieved robust performance in distinguishing aSAH from naSAH. In the internal validation cohort, it yielded an average sensitivity of 0.898, specificity of 0.877, accuracy of 0.889, Matthews correlation coefficient (MCC) of 0.777, and an area under the curve (AUC) of 0.948 (95% CI: 0.929-0.967). In the external test cohort, the model demonstrated an average sensitivity of 0.891, specificity of 0.880, accuracy of 0.887, MCC of 0.761, and AUC of 0.914 (95% CI: 0.889-0.940), outperforming junior radiologists (average accuracy: 0.836; MCC: 0.660). The study presents a deep learning architecture capable of accurately identifying SAH etiology from NCCT scans. The model's high diagnostic performance highlights its potential to support rapid and precise clinical decision-making in emergency settings. Question: Differentiating aSAH from naSAH is crucial for timely treatment, yet existing imaging modalities are not universally accessible or convenient for rapid diagnosis. Findings: A ResNet-variant-based deep learning model utilizing non-contrast CT scans demonstrated high accuracy in classifying SAH etiology and enhanced junior radiologists' diagnostic performance. Clinical relevance: AI-driven analysis of non-contrast CT scans provides a fast, cost-effective, and non-invasive solution for preoperative SAH diagnosis. This approach facilitates early identification of patients needing aneurysm surgery while minimizing unnecessary angiography in non-aneurysmal cases, enhancing clinical workflow efficiency.
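
A minimal sketch of how the reported discrimination metrics (sensitivity, specificity, accuracy, MCC, AUC) can be computed with scikit-learn; the labels and probabilities below are synthetic stand-ins, not study data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef, roc_auc_score

# Synthetic stand-ins: 1 = aSAH, 0 = naSAH, with classifier output probabilities.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=85)
y_prob = np.clip(y_true * 0.7 + rng.normal(0.2, 0.2, size=85), 0, 1)
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity: {tp / (tp + fn):.3f}")
print(f"specificity: {tn / (tn + fp):.3f}")
print(f"accuracy:    {(tp + tn) / len(y_true):.3f}")
print(f"MCC:         {matthews_corrcoef(y_true, y_pred):.3f}")
print(f"AUC:         {roc_auc_score(y_true, y_prob):.3f}")
```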

Pancreas segmentation using AI developed on the largest CT dataset with multi-institutional validation and implications for early cancer detection.

Mukherjee S, Antony A, Patnam NG, Trivedi KH, Karbhari A, Nagaraj M, Murlidhar M, Goenka AH

PubMed · May 16, 2025
Accurate and fully automated pancreas segmentation is critical for advancing imaging biomarkers in early pancreatic cancer detection and for biomarker discovery in endocrine and exocrine pancreatic diseases. We developed and evaluated a deep learning (DL)-based convolutional neural network (CNN) for automated pancreas segmentation using the largest single-institution dataset to date (n = 3031 CTs). Ground truth segmentations were performed by radiologists and used to train a 3D nnU-Net model through five-fold cross-validation, generating an ensemble of top-performing models. To assess generalizability, the model was externally validated on the multi-institutional AbdomenCT-1K dataset (n = 585), for which volumetric segmentations were newly generated by expert radiologists and will be made publicly available. In the test subset (n = 452), the CNN achieved a mean Dice Similarity Coefficient (DSC) of 0.94 (SD 0.05), demonstrating high spatial concordance with radiologist-annotated volumes (Concordance Correlation Coefficient [CCC]: 0.95). On the AbdomenCT-1K dataset, the model achieved a DSC of 0.96 (SD 0.04) and a CCC of 0.98, confirming its robustness across diverse imaging conditions. The proposed DL model establishes new performance benchmarks for fully automated pancreas segmentation, offering a scalable and generalizable solution for large-scale imaging biomarker research and clinical translation.
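
A minimal sketch of the volumetric agreement check (Lin's concordance correlation coefficient) between model and radiologist volumes; the volume arrays are simulated placeholders, not the study's measurements.

```python
import numpy as np

def concordance_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two volume series."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Simulated pancreas volumes in mL (placeholders, not study data).
rng = np.random.default_rng(1)
radiologist_vol = rng.uniform(40, 120, size=452)
model_vol = radiologist_vol + rng.normal(0, 3, size=452)
print(f"CCC: {concordance_ccc(model_vol, radiologist_vol):.3f}")
```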

Pretrained hybrid transformer for generalizable cardiac substructures segmentation from contrast and non-contrast CTs in lung and breast cancers

Aneesh Rangnekar, Nikhil Mankuzhy, Jonas Willmann, Chloe Choi, Abraham Wu, Maria Thor, Andreas Rimner, Harini Veeraraghavan

arXiv preprint · May 16, 2025
AI-automated segmentations for radiation treatment planning (RTP) can deteriorate when applied to clinical cases with characteristics different from the training dataset. Hence, we refined a pretrained transformer into a hybrid transformer convolutional network (HTN) to segment cardiac substructures in lung and breast cancer patients scanned with varying imaging contrasts and patient positions. Cohort I, consisting of 56 contrast-enhanced (CECT) and 124 non-contrast CT (NCCT) scans from patients with non-small cell lung cancer acquired in the supine position, was used to create an oracle model (all 180 training cases) and a balanced model (CECT: 32, NCCT: 32 training cases). Models were evaluated on a held-out validation set of 60 cohort I patients and 66 patients with breast cancer from cohort II acquired in supine (n=45) and prone (n=21) positions. Accuracy was measured using DSC, HD95, and dose metrics. The publicly available TotalSegmentator served as the benchmark. The oracle and balanced models were similarly accurate (DSC Cohort I: 0.80 ± 0.10 versus 0.81 ± 0.10; Cohort II: 0.77 ± 0.13 versus 0.80 ± 0.12), outperforming TotalSegmentator. The balanced model, using half the training cases of the oracle, produced dose metrics similar to manual delineations for all cardiac substructures. This model was robust to CT contrast in 6 of 8 substructures and to patient scan position variations in 5 of 8 substructures, and showed low correlation of accuracy with patient size and age. An HTN demonstrated robustly accurate cardiac substructure segmentation (geometric and dose metrics) from CTs with varying imaging and patient characteristics, one key requirement for clinical use. Moreover, the model combining pretraining with a balanced distribution of NCCT and CECT scans provided reliably accurate segmentations under varied conditions with far fewer labeled cases than the oracle model.
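
A minimal sketch of one of the geometric accuracy metrics mentioned above (HD95) for a pair of binary substructure masks; the masks and voxel spacing below are synthetic assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(mask_a: np.ndarray, mask_b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance between two binary masks."""
    surf_a = mask_a ^ binary_erosion(mask_a)
    surf_b = mask_b ^ binary_erosion(mask_b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    d_ab = dist_to_b[surf_a]
    d_ba = dist_to_a[surf_b]
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Synthetic cubes standing in for a cardiac substructure mask and its prediction.
gt = np.zeros((64, 64, 64), dtype=bool); gt[20:40, 20:40, 20:40] = True
pred = np.zeros_like(gt); pred[22:42, 20:40, 20:40] = True
print(f"HD95: {hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):.2f} mm")
```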

Patient-Specific Dynamic Digital-Physical Twin for Coronary Intervention Training: An Integrated Mixed Reality Approach

Shuo Wang, Tong Ren, Nan Cheng, Rong Wang, Li Zhang

arXiv preprint · May 16, 2025
Background and Objective: Precise preoperative planning and effective physician training for coronary interventions are increasingly important. Despite advances in medical imaging technologies, transforming static or limited dynamic imaging data into comprehensive dynamic cardiac models remains challenging. Existing training systems lack accurate simulation of cardiac physiological dynamics. This study develops a comprehensive dynamic cardiac model research framework based on 4D-CTA, integrating digital twin technology, computer vision, and physical model manufacturing to provide precise, personalized tools for interventional cardiology. Methods: Using 4D-CTA data from a 60-year-old female with three-vessel coronary stenosis, we segmented cardiac chambers and coronary arteries, constructed dynamic models, and implemented skeletal skinning weight computation to simulate vessel deformation across 20 cardiac phases. Transparent vascular physical models were manufactured using medical-grade silicone. We developed cardiac output analysis and virtual angiography systems, implemented guidewire 3D reconstruction using binocular stereo vision, and evaluated the system through angiography validation and CABG training applications. Results: Morphological consistency between virtual and real angiography reached 80.9%. Dice similarity coefficients for guidewire motion ranged from 0.741 to 0.812, with mean trajectory errors below 1.1 mm. The transparent model demonstrated advantages in CABG training, allowing direct visualization while simulating beating heart challenges. Conclusion: Our patient-specific digital-physical twin approach effectively reproduces both anatomical structures and dynamic characteristics of coronary vasculature, offering a dynamic environment with visual and tactile feedback valuable for education and clinical planning.
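
A minimal sketch of binocular triangulation for guidewire 3D reconstruction using OpenCV; the calibration matrices and matched pixel coordinates are illustrative placeholders, not the system's calibration.

```python
import cv2
import numpy as np

# Illustrative stereo calibration (focal length, principal point, 60 mm baseline);
# these are placeholder values, not the study's calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])

# Matched guidewire points detected in the left and right views (2xN, in pixels).
pts_left = np.array([[352.0, 365.0], [250.0, 244.0]])
pts_right = np.array([[200.0, 213.0], [250.0, 244.0]])

# Triangulate to homogeneous 4D points, then normalise to 3D coordinates (mm).
pts_4d = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
pts_3d = (pts_4d[:3] / pts_4d[3]).T
print(pts_3d)
```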

Evaluation of tumour pseudocapsule using computed tomography-based radiomics in pancreatic neuroendocrine tumours to predict prognosis and guide surgical strategy: a cohort study.

Wang Y, Gu W, Huang D, Zhang W, Chen Y, Xu J, Li Z, Zhou C, Chen J, Xu X, Tang W, Yu X, Ji S

PubMed · May 16, 2025
To date, indications for the surgical approach to small pancreatic neuroendocrine tumours (PanNETs) remain controversial. This cohort study aimed to identify pseudocapsule status preoperatively in order to assess the suitability of enucleation and the survival prognosis of PanNETs, particularly for small tumours. Clinicopathological data were collected from patients with PanNETs who underwent their first pancreatectomy at our hospital (n = 578) between February 2012 and September 2023. Kaplan-Meier curves were constructed to visualise prognostic differences. Five distinct tissue samples were obtained for single-cell RNA sequencing (scRNA-seq) to evaluate variations in the tumour microenvironment. Radiological features were extracted from preoperative arterial-phase contrast-enhanced computed tomography. The performance of the pseudocapsule radiomics model was assessed using the area under the curve (AUC) metric. 475 cases (mean [SD] age, 53.01 [12.20] years; female vs male, 1.24:1) were eligible for this study. The mean pathological diameter of the tumour was 2.99 cm (median: 2.50 cm; interquartile range [IQR]: 1.50-4.00 cm). These cases were stratified into complete (223, 46.95%) and incomplete (252, 53.05%) pseudocapsule groups. A statistically significant difference in aggressive indicators was observed between the two groups (P < 0.001). Through scRNA-seq analysis, we identified that the incomplete group presented a markedly immunosuppressive microenvironment. Regarding recurrence-free survival, the 3-year and 5-year rates were 94.8% and 92.5%, respectively, for the complete pseudocapsule group, compared to 76.7% and 70.4% for the incomplete pseudocapsule group. The radiomics prediction model showed significant discrimination of pseudocapsule status, particularly in small tumours (AUC, 0.744; 95% CI, 0.652-0.837). By combining computed tomography-based radiomics and machine learning, pseudocapsule status can be identified preoperatively, and patients with a complete (intact) pseudocapsule are more likely to benefit from enucleation.
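
A minimal sketch of the recurrence-free survival comparison by pseudocapsule status using the lifelines package; the follow-up times and event indicators are simulated, not the cohort's data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
# Simulated follow-up (months) and recurrence indicators for the two groups
# (placeholders, not the study's data).
t_complete = rng.exponential(120, size=223).clip(max=60)
e_complete = (t_complete < 60) & (rng.random(223) < 0.3)
t_incomplete = rng.exponential(60, size=252).clip(max=60)
e_incomplete = (t_incomplete < 60) & (rng.random(252) < 0.6)

kmf = KaplanMeierFitter()
kmf.fit(t_complete, event_observed=e_complete, label="complete pseudocapsule")
print(kmf.survival_function_at_times([36, 60]))
kmf.fit(t_incomplete, event_observed=e_incomplete, label="incomplete pseudocapsule")
print(kmf.survival_function_at_times([36, 60]))

# Compare the two recurrence-free survival curves with a log-rank test.
result = logrank_test(t_complete, t_incomplete, e_complete, e_incomplete)
print(f"log-rank p = {result.p_value:.4f}")
```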

Automated Real-time Assessment of Intracranial Hemorrhage Detection AI Using an Ensembled Monitoring Model (EMM)

Zhongnan Fang, Andrew Johnston, Lina Cheuy, Hye Sun Na, Magdalini Paschali, Camila Gonzalez, Bonnie A. Armstrong, Arogya Koirala, Derrick Laurel, Andrew Walker Campion, Michael Iv, Akshay S. Chaudhari, David B. Larson

arXiv preprint · May 16, 2025
Artificial intelligence (AI) tools for radiology are commonly unmonitored once deployed. The lack of real-time case-by-case assessments of AI prediction confidence requires users to independently distinguish between trustworthy and unreliable AI predictions, which increases cognitive burden, reduces productivity, and potentially leads to misdiagnoses. To address these challenges, we introduce Ensembled Monitoring Model (EMM), a framework inspired by clinical consensus practices using multiple expert reviews. Designed specifically for black-box commercial AI products, EMM operates independently without requiring access to internal AI components or intermediate outputs, while still providing robust confidence measurements. Using intracranial hemorrhage detection as our test case on a large, diverse dataset of 2919 studies, we demonstrate that EMM successfully categorizes confidence in the AI-generated prediction, suggesting different actions and helping improve the overall performance of AI tools to ultimately reduce cognitive burden. Importantly, we provide key technical considerations and best practices for successfully translating EMM into clinical settings.
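
A minimal sketch of the ensembled-monitoring idea as described: several independent monitors score the same study, and their agreement with the black-box AI's output is mapped to a confidence category. The thresholds and monitor stubs are assumptions for illustration, not the paper's implementation.

```python
from statistics import mean
from typing import Callable, List

def emm_confidence(study, ai_positive: bool,
                   monitors: List[Callable], high=0.75, low=0.25) -> str:
    """Map ensemble agreement with a black-box AI prediction to a confidence label."""
    # Each monitor returns an independent probability of hemorrhage for the study.
    probs = [m(study) for m in monitors]
    agreement = mean(p if ai_positive else 1.0 - p for p in probs)
    if agreement >= high:
        return "high confidence - likely correct"
    if agreement <= low:
        return "low confidence - flag for priority review"
    return "indeterminate - suggest radiologist verification"

# Toy monitors standing in for independently trained hemorrhage detectors.
monitors = [lambda s: 0.9, lambda s: 0.8, lambda s: 0.7]
print(emm_confidence(study=None, ai_positive=True, monitors=monitors))
```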

Research on Machine Learning Models Based on Cranial CT Scan for Assessing Prognosis of Emergency Brain Injury.

Qin J, Shen R, Fu J, Sun J

PubMed · May 16, 2025
To evaluate the prognosis of patients with traumatic brain injury according to computed tomography (CT) findings of skull fracture and cerebral parenchymal hemorrhage. Data were retrospectively collected from adult patients with craniocerebral injuries who received non-surgical or surgical treatment after an initial CT scan between January 2020 and August 2021. Radiomics features were extracted with PyRadiomics. Dimensionality reduction was then performed using the max-relevance and min-redundancy (mRMR) algorithm and the least absolute shrinkage and selection operator (LASSO), with ten-fold cross-validation to select the best radiomics features. Three parsimonious machine learning classifiers, multinomial logistic regression (LR), a support vector machine (SVM), and Gaussian naive Bayes, were used to construct radiomics models. A personalized emergency prognostic nomogram for cranial injuries was constructed using a logistic regression model based on the selected radiomics features and patients' baseline information at emergency admission. The mRMR algorithm and the LASSO regression model ultimately extracted the 22 top-ranked radiomics features, and based on these features, emergency brain injury prediction models were built with the SVM, LR, and naive Bayes classifiers, respectively. The SVM model showed the largest AUC in the training cohort across the three classifications, indicating that it is the most stable and accurate. Moreover, a nomogram model for predicting patients' GOS prognostic scores was constructed. We established a nomogram for predicting patients' prognosis from radiomics features and clinical characteristics, which provides data support and guidance for clinical prediction of brain injury prognosis and intervention.
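
A minimal sketch of the feature-selection-plus-classifier stage with scikit-learn; the mRMR step is replaced here by a single L1-penalised selection for brevity, and the feature matrix and outcomes are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic radiomics matrix: 200 patients x 850 features, binary outcome label.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 850))
y = rng.integers(0, 2, size=200)

# L1-penalised feature selection (a LASSO-style stand-in; the paper also uses mRMR)
# followed by an SVM classifier, evaluated with ten-fold cross-validation.
pipeline = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),
    SVC(kernel="rbf", probability=True),
)
scores = cross_val_score(pipeline, X, y, cv=10, scoring="roc_auc")
print(f"10-fold AUC: {scores.mean():.3f} ± {scores.std():.3f}")
```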