
Automated Thoracolumbar Stump Rib Detection and Analysis in a Large CT Cohort

Hendrik Möller, Hanna Schön, Alina Dima, Benjamin Keinert-Weth, Robert Graf, Matan Atad, Johannes Paetzold, Friederike Jungmann, Rickmer Braren, Florian Kofler, Bjoern Menze, Daniel Rueckert, Jan S. Kirschke

arXiv preprint · May 8, 2025
Thoracolumbar stump ribs are one of the essential indicators of thoracolumbar transitional vertebrae or enumeration anomalies. While some studies manually assess these anomalies and describe the ribs qualitatively, this study aims to automate thoracolumbar stump rib detection and analyze their morphology quantitatively. To this end, we train a high-resolution deep-learning model for rib segmentation and show significant improvements compared to existing models (Dice score 0.997 vs. 0.779, p-value < 0.01). In addition, we use an iterative algorithm and piecewise linear interpolation to assess the length of the ribs, showing a success rate of 98.2%. When analyzing morphological features, we show that stump ribs articulate more posteriorly at the vertebrae (-19.2 ± 3.8 vs. -13.8 ± 2.5, p-value < 0.01), are thinner (260.6 ± 103.4 vs. 563.6 ± 127.1, p-value < 0.01), and are oriented more downwards and sideways within the first centimeters, in contrast to full-length ribs. We show that, with only partially visible ribs, these features can achieve an F1-score of 0.84 in differentiating stump ribs from regular ones. We publish the model weights and masks for public use.
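For readers unfamiliar with the length-measurement step, the sketch below shows the core idea behind measuring a rib via an ordered centerline: with piecewise linear interpolation, the length is the sum of the segment lengths. The function name, synthetic points, and millimeter assumption are illustrative, not the authors' code.

```python
import numpy as np

def rib_length(centerline_points: np.ndarray) -> float:
    """Approximate rib length as the sum of straight-segment lengths
    along an ordered (N, 3) array of centerline points (assumed mm)."""
    diffs = np.diff(centerline_points, axis=0)        # (N-1, 3) segment vectors
    return float(np.linalg.norm(diffs, axis=1).sum())  # sum of Euclidean lengths

# Example with a short synthetic centerline
pts = np.array([[0.0, 0.0, 0.0], [5.0, 1.0, 0.0], [10.0, 3.0, 1.0]])
print(rib_length(pts))
```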

Quantitative analysis and clinical determinants of orthodontically induced root resorption using automated tooth segmentation from CBCT imaging.

Lin J, Zheng Q, Wu Y, Zhou M, Chen J, Wang X, Kang T, Zhang W, Chen X

PubMed paper · May 8, 2025
Orthodontically induced root resorption (OIRR) is difficult to assess accurately with traditional 2D imaging due to distortion and low sensitivity. While CBCT offers more precise 3D evaluation, manual segmentation remains labor-intensive and prone to variability. Recent advances in deep learning enable automatic, accurate tooth segmentation from CBCT images. This study applies deep learning and CBCT to quantify OIRR and analyze its risk factors, aiming to improve assessment accuracy, efficiency, and clinical decision-making. CBCT scans of 108 orthodontic patients were retrospectively analyzed to assess OIRR using deep learning-based tooth segmentation and volumetric analysis. Linear regression was used to evaluate the influence of patient-related factors, with p < 0.05 considered statistically significant. Root volume decreased significantly after orthodontic treatment (p < 0.001). Age, gender, open/deep bite, severe crowding, and other factors significantly influenced root resorption rates at different tooth positions. Multivariable regression analysis showed that these factors can predict root resorption, explaining 3% to 15.4% of the variance. In summary, a deep learning model was applied to accurately assess root volume changes on CBCT, revealing significant root volume reduction after orthodontic treatment; younger patients experienced less root resorption, and factors such as anterior open bite and deep overbite influenced resorption in specific teeth, whereas skeletal pattern, overjet, and underbite were not significant predictors.
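As a rough illustration of the multivariable regression reported above (individual factors explaining 3% to 15.4% of the variance), here is a minimal sketch using statsmodels on synthetic data; the predictors, coefficients, and cohort size are stand-ins, not the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 108                                   # cohort size borrowed from the abstract
X = rng.normal(size=(n, 3))               # synthetic stand-ins for age, gender, crowding
y = 0.2 * X[:, 0] + rng.normal(size=n)    # synthetic root-resorption rate

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.rsquared)   # proportion of variance explained (cf. 3%-15.4%)
print(model.pvalues)    # per-factor significance, judged at p < 0.05
```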

Predicting treatment response to systemic therapy in advanced gallbladder cancer using multiphase enhanced CT images.

Wu J, Zheng Z, Li J, Shen X, Huang B

PubMed paper · May 8, 2025
Accurate estimation of treatment response can help clinicians identify patients who would potentially benefit from systemic therapy. This study aimed to develop and externally validate a model for predicting treatment response to systemic therapy in advanced gallbladder cancer (GBC). We recruited 399 eligible GBC patients across four institutions. Multivariable logistic regression analysis was performed to identify independent clinical factors related to therapeutic efficacy. A deep learning (DL) radiomics signature was developed for predicting treatment response using multiphase enhanced CT images. The DL radiomic-clinical (DLRSC) model was then built by combining the DL signature with significant clinical factors, and its predictive performance was evaluated using the area under the curve (AUC). Gradient-weighted class activation mapping analysis was performed to help clinicians better understand the predictive results. Furthermore, patients were stratified into low- and high-score groups by the DLRSC model, and progression-free survival (PFS) and overall survival (OS) were compared between the two groups. Multivariable analysis revealed that tumor size was a significant predictor of efficacy. The DLRSC model showed strong predictive performance, with AUCs of 0.86 (95% CI, 0.82-0.89) and 0.84 (95% CI, 0.80-0.87) in the internal and external test datasets, respectively, along with good discrimination, calibration, and clinical utility. Moreover, Kaplan-Meier survival analysis revealed that patients in the low-score group, predicted by the DLRSC model to be insensitive to systemic therapy, had worse PFS and OS. The DLRSC model thus allows treatment response to be predicted in advanced GBC patients receiving systemic therapy, and the survival benefit it provides was also assessed. Question: No effective tools exist for identifying patients who would potentially benefit from systemic therapy in clinical practice. Findings: Our combined model allows for predicting treatment response to systemic therapy in advanced gallbladder cancer. Clinical relevance: With the help of this model, clinicians can inform patients of the risk of potentially ineffective treatment, reducing unnecessary adverse events and helping reallocate societal healthcare resources.
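A minimal sketch of how a combined radiomic-clinical model like the DLRSC might be assembled, assuming the DL signature is reduced to a per-patient score and fused with a clinical factor (tumor size) via logistic regression; the data are synthetic and the design is illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 399                                           # cohort size from the abstract
dl_signature = rng.normal(size=(n, 1))            # synthetic per-patient DL score
tumor_size = rng.normal(3.0, 1.0, size=(n, 1))    # synthetic clinical factor (cm)
X = np.hstack([dl_signature, tumor_size])
y = (dl_signature[:, 0] + 0.5 * tumor_size[:, 0]
     + rng.normal(size=n) > 1.5).astype(int)      # synthetic response labels

clf = LogisticRegression().fit(X[:300], y[:300])  # fit on a "training" split
auc = roc_auc_score(y[300:], clf.predict_proba(X[300:])[:, 1])
print(f"held-out AUC: {auc:.2f}")
```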

Automated Emergent Large Vessel Occlusion Detection Using Viz.ai Software and Its Impact on Stroke Workflow Metrics and Patient Outcomes in Stroke Centers: A Systematic Review and Meta-analysis.

Sarhan K, Azzam AY, Moawad MHED, Serag I, Abbas A, Sarhan AE

PubMed paper · May 8, 2025
The implementation of artificial intelligence (AI), particularly Viz.ai software, in stroke care has emerged as a promising tool to enhance the detection of large vessel occlusion (LVO) and to improve stroke workflow metrics and patient outcomes. The aim of this systematic review and meta-analysis was to evaluate the impact of Viz.ai on stroke workflow efficiency in hospitals and on patient outcomes. Following the PRISMA guidelines, we conducted a comprehensive search of electronic databases, including PubMed, Web of Science, and Scopus, for relevant studies up to 25 October 2024. Our primary outcomes were door-to-groin puncture (DTG) time, CT scan-to-start of endovascular treatment (EVT) time, CT scan-to-recanalization time, and door-in-door-out time. Secondary outcomes included symptomatic intracranial hemorrhage (ICH), any ICH, mortality, mRS score < 2 at 90 days, and length of hospital stay. A total of 12 studies involving 15,595 patients were included in our analysis. The pooled analysis demonstrated that implementation of the Viz.ai algorithm was associated with shorter CT scan-to-EVT time (SMD -0.71, 95% CI [-0.98, -0.44], p < 0.001), DTG time (SMD -0.50, 95% CI [-0.66, -0.35], p < 0.001), and CT-to-recanalization time (SMD -0.55, 95% CI [-0.76, -0.33], p < 0.001). Additionally, patients in the post-AI group had significantly shorter door-in-door-out times than the pre-AI group (SMD -0.49, 95% CI [-0.71, -0.28], p < 0.001). Despite these workflow improvements, our analysis did not reveal statistically significant differences in patient clinical outcomes (p > 0.05). Our results suggest that integration of the Viz.ai platform in stroke care holds significant potential for reducing EVT delays in patients with LVO and optimizing stroke workflow metrics in comprehensive stroke centers. Further studies are required to validate its efficacy in improving clinical outcomes in patients with LVO.
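For readers who want to see the pooling arithmetic behind figures like "SMD -0.71, 95% CI [-0.98, -0.44]", below is a minimal fixed-effect inverse-variance sketch; the example inputs are invented, not the review's data, and whether the review used a fixed- or random-effects model is not specified here.

```python
import numpy as np

def pooled_smd(smds, ses):
    """Fixed-effect inverse-variance pooling of standardized mean
    differences; a random-effects model would add a between-study
    tau^2 (e.g., DerSimonian-Laird) to each study's variance."""
    smds, ses = np.asarray(smds), np.asarray(ses)
    w = 1.0 / ses**2                       # weight = 1 / variance
    est = np.sum(w * smds) / np.sum(w)     # weighted mean effect
    se = np.sqrt(1.0 / np.sum(w))          # standard error of pooled effect
    return est, (est - 1.96 * se, est + 1.96 * se)

# Illustrative only: three hypothetical studies reporting DTG-time SMDs
est, ci = pooled_smd([-0.45, -0.60, -0.48], [0.10, 0.15, 0.12])
print(est, ci)
```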

A hybrid AI method for lung cancer classification using explainable AI techniques.

Shivwanshi RR, Nirala NS

PubMed paper · May 8, 2025
The use of Artificial Intelligence (AI) methods for the analysis of CT (computed tomography) images has greatly contributed to the development of effective computer-assisted diagnosis (CAD) systems for lung cancer (LC). However, complex structures, multiple radiographic interrelations, and the dynamic locations of abnormalities within lung CT images make extracting relevant information for LC CAD systems difficult. This paper addresses these problems with a hybrid method for LC malignancy classification that lets researchers and experts observe how the model makes its decisions. The proposed methodology, named IncCat-LCC: Explainer (InceptionNet CatBoost LC Classification: Explainer), consists of handcrafted radiomic feature (HcRdF) extraction, InceptionNet CNN feature (INCF) extraction, Vision Transformer feature (ViTF) extraction, XGBoost (XGB)-based feature selection, and GPU-based CatBoost (CB) classification. The proposed framework achieves its highest performance scores for lung nodule multiclass malignancy classification, with accuracy, precision, recall, F1-score, specificity, and area under the ROC curve of 96.74%, 93.68%, 96.74%, 95.19%, 98.47%, and 99.76%, respectively, for the "highly normal" class. The explainable artificial intelligence (XAI) explanations help readers understand the model's performance and the statistical outcomes of the evaluation parameters. The work presented in this article may improve existing LC CAD systems and helps assess, via XAI, the factors contributing to enhanced performance and reliability.
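A compact sketch of the selection-then-classification stage described above (XGBoost-ranked features feeding a CatBoost classifier), using synthetic fused features; the hyperparameters and top-k cutoff are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))      # synthetic fused handcrafted + CNN + ViT features
y = rng.integers(0, 3, size=500)    # synthetic multiclass malignancy labels

# Rank features with XGBoost, keep the top k, then classify with CatBoost
xgb = XGBClassifier(n_estimators=100).fit(X, y)
top_k = np.argsort(xgb.feature_importances_)[::-1][:16]

cb = CatBoostClassifier(iterations=200, verbose=0)  # task_type="GPU" if available
cb.fit(X[:400][:, top_k], y[:400])
print(cb.score(X[400:][:, top_k], y[400:]))         # held-out accuracy
```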

nnU-Net-based high-resolution CT features quantification for interstitial lung diseases.

Lin Q, Zhang Z, Xiong X, Chen X, Ma T, Chen Y, Li T, Long Z, Luo Q, Sun Y, Jiang L, He W, Deng Y

PubMed paper · May 8, 2025
To develop a new high-resolution CT (HRCT) abnormality quantification tool (CVILDES) for interstitial lung diseases (ILDs) based on the nnU-Net architecture, and to determine whether the quantitative parameters derived from this new software offer a reliable and precise assessment in a clinical setting, in line with expert visual evaluation. HRCT scans from 83 cases of ILDs and 20 cases of other diffuse lung diseases were labeled section by section by multiple radiologists and used as training data for a deep learning model based on nnU-Net in a supervised learning approach. For clinical validation, a cohort including 51 cases of interstitial pneumonia with autoimmune features (IPAF) and 14 cases of idiopathic pulmonary fibrosis (IPF) had CT parenchymal patterns evaluated quantitatively with CVILDES and by visual evaluation. Subsequently, we assessed the correlation between the two methodologies for ILD feature quantification. Furthermore, the correlations between the quantitative results of the two methods and pulmonary function parameters (DLCO%, FVC%, and FEV%) were compared. All CT data were successfully quantified using CVILDES. CVILDES-quantified results (total ILD extent, ground-glass opacity, consolidation, reticular pattern, and honeycombing) showed a strong correlation with visual evaluation and were numerically close to the visual evaluation results (r = 0.64-0.89, p < 0.0001), particularly for the extent of fibrosis (r = 0.82, p < 0.0001). As judged by correlation with pulmonary function parameters, CVILDES quantification was comparable or even superior to visual evaluation. nnU-Net-based CVILDES was comparable to visual evaluation for quantifying ILD abnormalities. Question: Visual assessment of ILD on HRCT is time-consuming and exhibits poor inter-observer agreement, making it challenging to accurately evaluate therapeutic efficacy. Findings: The nnU-Net-based computer vision ILD evaluation system (CVILDES) accurately segmented and quantified the HRCT features of ILD, with results comparable to visual evaluation. Clinical relevance: This study developed a new tool with the potential to be applied in the quantitative assessment of ILD.
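Once a segmentation network such as nnU-Net has labeled each voxel, extent quantification reduces to counting voxels per pattern within the lung. The sketch below illustrates this step; the label map and random arrays are hypothetical stand-ins for real masks, not CVILDES internals.

```python
import numpy as np

# Hypothetical label map; the actual CVILDES class encoding is not published here
LABELS = {1: "ground-glass opacity", 2: "consolidation",
          3: "reticular pattern", 4: "honeycombing"}

def pattern_extents(seg: np.ndarray, lung_mask: np.ndarray) -> dict:
    """Percentage of lung volume occupied by each labeled pattern."""
    lung_voxels = lung_mask.sum()
    return {name: 100.0 * np.logical_and(seg == lab, lung_mask).sum() / lung_voxels
            for lab, name in LABELS.items()}

seg = np.random.randint(0, 5, size=(16, 64, 64))  # stand-in segmentation output
lung = np.ones_like(seg, dtype=bool)              # stand-in lung mask
print(pattern_extents(seg, lung))
```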

Automated Detection of Black Hole Sign for Intracerebral Hemorrhage Patients Using Self-Supervised Learning.

Wang H, Schwirtlich T, Houskamp EJ, Hutch MR, Murphy JX, do Nascimento JS, Zini A, Brancaleoni L, Giacomozzi S, Luo Y, Naidech AM

PubMed paper · May 7, 2025
Intracerebral hemorrhage (ICH) is a devastating form of stroke. Hematoma expansion (HE), growth of the hematoma on interval scans, predicts death and disability, and accurate prediction of HE is crucial for targeted interventions to improve patient outcomes. The black hole sign (BHS) on non-contrast computed tomography (CT) scans is a predictive marker for HE, and an automated method to recognize the BHS and predict HE could speed precise patient selection for treatment. In this paper, we present a novel framework leveraging self-supervised learning (SSL) techniques for BHS identification on head CT images. A ResNet-50 encoder model was pre-trained on over 1.7 million unlabeled head CT images; layers for binary classification were added on top of the pre-trained model, and the resulting model was fine-tuned using the training data and evaluated on a held-out test set with AUC and F1 scores. Evaluations were performed at the scan and slice levels. We ran different panels: one using two multi-center datasets for external validation, and one including parts of them in the pre-training. Our model demonstrated strong performance in identifying the BHS compared with the baseline model, achieving scan-level AUC scores between 0.75-0.89 and F1 scores between 0.60-0.70. Furthermore, it exhibited robustness and generalizability on an external dataset, achieving a scan-level AUC of up to 0.85 and an F1 score of up to 0.60, while performing less well on another dataset with more heterogeneous samples; these negative effects could be mitigated by including parts of the external datasets in the fine-tuning process. This study introduced a novel framework integrating SSL into medical image classification, specifically BHS identification from head CT scans. The resulting pre-trained head CT encoder model shows potential to minimize manual annotation, which would significantly reduce labor, time, and costs. After fine-tuning, the framework demonstrated promising performance on a specific downstream task, identifying the BHS to predict HE, upon comprehensive evaluation on diverse datasets. This approach holds promise for enhancing medical image analysis, particularly in scenarios with limited data availability. ICH = Intracerebral Hemorrhage; HE = Hematoma Expansion; BHS = Black Hole Sign; CT = Computed Tomography; SSL = Self-Supervised Learning; AUC = Area Under the Receiver Operating characteristic Curve; CNN = Convolutional Neural Network; SimCLR = Simple framework for Contrastive Learning of visual Representations; HU = Hounsfield Unit; CLAIM = Checklist for Artificial Intelligence in Medical Imaging; VNA = Vendor Neutral Archive; DICOM = Digital Imaging and Communications in Medicine; NIfTI = Neuroimaging Informatics Technology Initiative; INR = International Normalized Ratio; GPU = Graphics Processing Unit; NIH = National Institutes of Health.
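A minimal sketch of the fine-tuning step described above: take a pre-trained encoder, expose its features, and add a binary classification head. An untrained torchvision ResNet-50 stands in for the paper's SSL-pre-trained head-CT encoder, and the inputs here are random tensors.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Placeholder backbone; the paper's encoder was SSL-pre-trained on head CTs
encoder = resnet50()
encoder.fc = nn.Identity()                 # expose 2048-d features

model = nn.Sequential(encoder, nn.Linear(2048, 1))  # binary BHS head on top

x = torch.randn(2, 3, 224, 224)            # CT slices replicated to 3 channels
logits = model(x)
loss = nn.BCEWithLogitsLoss()(logits.squeeze(1), torch.tensor([1.0, 0.0]))
loss.backward()                            # fine-tune end to end (or freeze encoder)
```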

A deep learning model combining circulating tumor cells and radiological features in the multi-classification of mediastinal lesions in comparison with thoracic surgeons: a large-scale retrospective study.

Wang F, Bao M, Tao B, Yang F, Wang G, Zhu L

PubMed paper · May 7, 2025
CT images and circulating tumor cells (CTCs) are indispensable for diagnosing mediastinal lesions, providing radiological and intra-tumoral information respectively. This study aimed to develop and validate a deep multimodal fusion network (DMFN) combining CTCs and CT images for the multi-classification of mediastinal lesions. In this retrospective diagnostic study, we enrolled 1074 patients with 1500 enhanced CT images and 1074 CTC results between Jan 1, 2020, and Dec 31, 2023. Patients were divided into a training cohort (n = 434), validation cohort (n = 288), and test cohort (n = 352). The DMFN and monomodal convolutional neural network (CNN) models were developed and validated using the CT images and CTC results, with paraffin-embedded pathology from surgical tissues as the diagnostic reference standard. Predictive ability was compared with that of thoracic resident physicians, attending physicians, and chief physicians by the area under the receiver operating characteristic (ROC) curve, and diagnostic results were visualized in heatmaps. For binary classification, the predictive performance of the DMFN (AUC = 0.941, 95% CI 0.901-0.982) was better than that of the monomodal CNN model (AUC = 0.710, 95% CI 0.664-0.756). In addition, the DMFN model achieved better predictive performance than the thoracic chief physicians, attending physicians, and resident physicians (P = 0.054, 0.020, and 0.016, respectively). For multiclassification, the DMFN achieved encouraging predictive ability (AUC = 0.884, 95% CI 0.837-0.931), significantly outperforming the monomodal CNN (AUC = 0.722, 95% CI 0.705-0.739) and also surpassing the chief physicians (AUC = 0.787, 95% CI 0.714-0.862), attending physicians (AUC = 0.632, 95% CI 0.612-0.654), and resident physicians (AUC = 0.541, 95% CI 0.508-0.574). This study showed the feasibility and effectiveness of a CNN model combining CT images and CTC levels in diagnosing mediastinal lesions. It could serve as a useful method to assist thoracic surgeons in improving diagnostic accuracy and has the potential to inform management decisions.
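A toy late-fusion model illustrating the general idea behind a multimodal network like the DMFN: concatenate CNN image features with a tabular CTC branch before a shared classification head. The architecture, feature dimensions, and class count are assumptions for illustration, not the published DMFN.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Toy late-fusion model: image features + tabular CTC features."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.img_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())              # -> 16-d
        self.ctc_branch = nn.Sequential(nn.Linear(1, 8), nn.ReLU())  # CTC count -> 8-d
        self.head = nn.Linear(16 + 8, n_classes)                # shared classifier

    def forward(self, image, ctc):
        fused = torch.cat([self.img_branch(image), self.ctc_branch(ctc)], dim=1)
        return self.head(fused)

model = FusionNet()
logits = model(torch.randn(2, 1, 128, 128), torch.tensor([[12.0], [3.0]]))
print(logits.shape)  # (2, n_classes)
```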

Potential of artificial intelligence for radiation dose reduction in computed tomography -A scoping review.

Bani-Ahmad M, England A, McLaughlin L, Hadi YH, McEntee M

PubMed paper · May 7, 2025
Artificial intelligence (AI) is now transforming medical imaging, with extensive ramifications for nearly every aspect of diagnostic imaging, including computed tomography (CT). This work aims to review, evaluate, and summarise the role of AI in radiation dose optimisation across three fundamental domains in CT: patient positioning, scan range determination, and image reconstruction. A comprehensive scoping review of the literature was performed. Electronic databases including Scopus, Ovid, EBSCOhost, and PubMed were searched between January 2018 and December 2024. Relevant articles were identified by title, had their abstracts evaluated, and those deemed relevant had their full texts reviewed. Data extracted from the selected studies included the application of AI, radiation dose, anatomical region, and any relevant evaluation metrics for the CT parameter to which AI was applied. Ninety articles met the selection criteria. The included studies evaluated the performance of AI for dose optimisation through patient positioning, scan range determination, and reconstruction across various CT examinations, including the abdomen, chest, head, neck, and pelvis, as well as CT angiography. A concise overview of the present state of AI in these three domains is provided, emphasising benefits, limitations, and impact on the transformation of dose reduction in CT scanning. AI methods can help minimise positioning offsets and over-scanning caused by manual errors, and deep learning image reconstruction algorithms can overcome limitations associated with low-dose CT settings. Further clinical integration of AI will continue to allow improvements in CT scan protocols and radiation dose optimisation. This review underscores the significance of AI in optimising radiation dose in CT imaging, focusing on three key areas: patient positioning, scan range determination, and image reconstruction.

Stacking classifiers based on integrated machine learning model: fusion of CT radiomics and clinical biomarkers to predict lymph node metastasis in locally advanced gastric cancer patients after neoadjuvant chemotherapy.

Ling T, Zuo Z, Huang M, Ma J, Wu L

PubMed paper · May 6, 2025
The early prediction of lymph node positivity (LN+) after neoadjuvant chemotherapy (NAC) is crucial for optimizing individualized treatment strategies. This study aimed to integrate radiomic features and clinical biomarkers through machine learning (ML) approaches to enhance prediction accuracy in patients with locally advanced gastric cancer (LAGC). We retrospectively enrolled 277 patients with LAGC and randomly divided them into training (n = 193) and validation (n = 84) sets at a 7:3 ratio. In total, 1,130 radiomics features were extracted from pre-treatment portal venous phase computed tomography scans; these features were linearly combined into a radiomics score (rad score) through feature engineering. Using the rad score and clinical biomarkers as input features, we then applied simple statistical strategies (relying on a single ML model) and integrated statistical strategies (classification-model integration techniques such as hard voting, soft voting, and stacking) to predict LN+ post-NAC. Diagnostic performance was assessed using receiver operating characteristic curves with corresponding areas under the curve (AUC). Of all the ML models, the stacking classifier, an integrated statistical strategy, exhibited the best performance, achieving an AUC of 0.859 for predicting LN+ in patients with LAGC. This predictive model was transformed into a publicly available online risk calculator. In summary, we developed a stacking classifier that integrates radiomics and clinical biomarkers to predict LN+ in patients with LAGC undergoing surgical resection, providing personalized treatment insights.
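A minimal scikit-learn sketch of the stacking strategy the abstract describes: base learners' predictions feed a meta-learner. The base models, synthetic features, and split are illustrative stand-ins for the study's rad score and clinical biomarkers.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the rad score plus clinical biomarkers
X, y = make_classification(n_samples=277, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression())   # meta-learner on base predictions
stack.fit(X_tr, y_tr)
print(roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```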