
Comparing Artificial Intelligence and Traditional Regression Models in Lung Cancer Risk Prediction Using a Systematic Review and Meta-Analysis.

Leonard S, Patel MA, Zhou Z, Le H, Mondal P, Adams SJ

pubmed · Jun 1 2025
Accurately identifying individuals who are at high risk of lung cancer is critical to optimize lung cancer screening with low-dose CT (LDCT). We sought to compare the performance of traditional regression models and artificial intelligence (AI)-based models in predicting future lung cancer risk. A systematic review and meta-analysis were conducted with reporting according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. We searched MEDLINE, Embase, Scopus, and the Cumulative Index to Nursing and Allied Health Literature databases for studies reporting the performance of AI or traditional regression models for predicting lung cancer risk. Two researchers screened articles, and a third researcher resolved conflicts. Model characteristics and predictive performance metrics were extracted. The quality of studies was assessed using the Prediction model Risk of Bias Assessment Tool. A meta-analysis assessed the discrimination performance of models, based on area under the receiver operating characteristic curve (AUC). One hundred forty studies met inclusion criteria and included 185 traditional and 64 AI-based models. Of these, 16 AI models and 65 traditional models have been externally validated. The pooled AUC of external validations of AI models was 0.82 (95% confidence interval [CI], 0.80-0.85), and the pooled AUC for traditional regression models was 0.73 (95% CI, 0.72-0.74). In a subgroup analysis, AI models that included LDCT had a pooled AUC of 0.85 (95% CI, 0.82-0.88). Overall risk of bias was high for both AI and traditional models. AI-based models, particularly those using imaging data, show promise for improving lung cancer risk prediction over traditional regression models. Future research should focus on prospective validation of AI models and direct comparisons with traditional methods in diverse populations.
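The pooled AUCs above come from meta-analytic combination of per-study discrimination estimates. As a rough illustration only (not necessarily the authors' method, which may have been a random-effects model), a fixed-effect inverse-variance pooling of AUCs with a 95% confidence interval can be sketched as:

```python
import math

def pool_auc(aucs, ses):
    """Fixed-effect inverse-variance pooling of per-study AUCs.

    aucs: per-study AUC estimates; ses: their standard errors.
    A random-effects model (e.g., DerSimonian-Laird) would add a
    between-study variance term; this is the simplest sketch.
    Returns the pooled AUC and its 95% CI.
    """
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * a for w, a in zip(weights, aucs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Toy inputs (illustrative, not the studies' data):
auc, ci = pool_auc([0.80, 0.84, 0.83], [0.02, 0.03, 0.025])
```

Studies with smaller standard errors (larger samples) receive proportionally more weight, which is why a few large external validations can dominate the pooled estimate.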

Measurement of adipose body composition using an artificial intelligence-based CT Protocol and its association with severe acute pancreatitis in hospitalized patients.

Cortés P, Mistretta TA, Jackson B, Olson CG, Al Qady AM, Stancampiano FF, Korfiatis P, Klug JR, Harris DM, Dan Echols J, Carter RE, Ji B, Hardway HD, Wallace MB, Kumbhari V, Bi Y

pubmed · Jun 1 2025
The clinical utility of body composition in predicting the severity of acute pancreatitis (AP) remains unclear. We aimed to measure body composition using artificial intelligence (AI) to predict severe AP in hospitalized patients. We performed a retrospective study of patients hospitalized with AP at three tertiary care centers in 2018. Patients with computed tomography (CT) imaging of the abdomen at admission were included. A fully automated and validated abdominal segmentation algorithm was used for body composition analysis. The primary outcome was severe AP, defined as persistent single- or multi-organ failure per the revised Atlanta classification. A total of 352 patients were included; severe AP occurred in 35 (9.9%). In multivariable analysis adjusting for male sex and first episode of AP, intermuscular adipose tissue (IMAT) was associated with severe AP (OR = 1.06 per 5 cm<sup>2</sup>, p = 0.0207). Subcutaneous adipose tissue (SAT) area was not significantly associated with severe AP (OR = 1.05, p = 0.17). Neither visceral adipose tissue (VAT) nor skeletal muscle (SM) was associated with severe AP. In obese patients, higher SM was associated with severe AP in unadjusted analysis (86.7 vs 75.1 and 70.3 cm<sup>2</sup> in moderate and mild AP, respectively; p = 0.009). In this multi-site retrospective study using AI to measure body composition, elevated IMAT was associated with severe AP, whereas SAT, VAT, and SM were not. Further research in larger prospective studies may be beneficial.
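The IMAT association is reported as an odds ratio per 5 cm² increment. As a small illustration (the coefficient below is hypothetical, back-derived from the reported OR rather than taken from the study), a per-cm² logistic-regression coefficient converts to an odds ratio over any increment as OR = exp(beta × increment):

```python
import math

def odds_ratio(beta_per_cm2, increment_cm2):
    """Odds ratio for a given increment of a continuous predictor,
    from its logistic-regression coefficient: exp(beta * increment)."""
    return math.exp(beta_per_cm2 * increment_cm2)

# Hypothetical coefficient implied by OR = 1.06 per 5 cm^2 of IMAT:
beta = math.log(1.06) / 5.0

print(odds_ratio(beta, 5.0))   # ≈ 1.06 (recovers the reported OR)
print(odds_ratio(beta, 10.0))  # ≈ 1.06^2, odds ratios multiply over increments
```

The multiplicative behavior over increments is why per-unit and per-5-unit ORs for the same model look so different numerically.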

Comparison of Sarcopenia Assessment in Liver Transplant Recipients by Computed Tomography Freehand Region-of-Interest versus an Automated Deep Learning System.

Miller W, Fate K, Fisher J, Thul J, Ko Y, Kim KW, Pruett T, Teigen L

pubmed · Jun 1 2025
Sarcopenia, or the loss of muscle quality and quantity, has been associated with poor clinical outcomes in liver transplantation, such as infection, increased length of stay, and increased patient mortality. Abdominal computed tomography (CT) scans are utilized to measure core musculature as a measure of sarcopenia. Core body musculature can be quantified either by freehand region-of-interest (ROI) measurement or by machine learning algorithms that quantitate total muscle within a given area. This study directly compares these two collection methods, leveraging length of stay (LOS) outcomes previously found to be associated with freehand ROI measurements. A total of 50 individuals were included who underwent liver transplantation at our single center between January 1, 2016, and May 30, 2021, and had a non-contrast abdominal CT scan within 6 months of surgery. CT-derived skeletal muscle measures at the third lumbar vertebra (L3) were obtained using freehand ROI and an automated deep learning system. Freehand psoas muscle measures, psoas area index (PAI) and mean Hounsfield units (mHU), were significantly correlated with the automated system's total skeletal muscle measures at L3, skeletal muscle index (SMI) and skeletal muscle density (SMD), respectively (R<sup>2</sup> = 0.4221, p < 0.0001; R<sup>2</sup> = 0.6297, p < 0.0001). The automated deep learning model's SMI predicted ∼20% of the variability in hospital length of stay (R<sup>2</sup> = 0.2013), while PAI predicted only about 10% of the variability in total healthcare length of stay (R<sup>2</sup> = 0.0919).
In contrast, both the freehand ROI mHU and the automated deep learning model's muscle density variables were associated with ∼20% of the variability in the inpatient length of stay (R<sup>2</sup> = 0.2383 and 0.1810, respectively) and total healthcare length of stay variables (R<sup>2</sup> = 0.2190 and 0.1947, respectively). Sarcopenia measurements represent an important risk stratification tool for liver transplantation outcomes. For muscle sarcopenia assessment association with LOS, freehand measures of sarcopenia perform similarly to automated deep learning system measurements.
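The "variability explained" figures above are R² values from regressing length of stay on a single muscle measure. A minimal pure-Python sketch of the coefficient of determination for simple linear regression (toy data, not the study's):

```python
def r_squared(x, y):
    """Coefficient of determination for simple least-squares regression
    of y on x: the fraction of variance in y explained by x. For a
    single predictor this equals the squared Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return (sxy ** 2) / (sxx * syy)

# Perfectly linear toy data explains all variance:
assert abs(r_squared([1, 2, 3, 4], [2, 4, 6, 8]) - 1.0) < 1e-12
```

An R² of 0.20 thus means the muscle measure alone accounts for one fifth of the between-patient spread in LOS, leaving the rest to other factors.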

A Large Convolutional Neural Network for Clinical Target and Multi-organ Segmentation in Gynecologic Brachytherapy with Multi-stage Learning

Mingzhe Hu, Yuan Gao, Yuheng Li, Richard LJ Qiu, Chih-Wei Chang, Keyur D. Shah, Priyanka Kapoor, Beth Bradshaw, Yuan Shao, Justin Roper, Jill Remick, Zhen Tian, Xiaofeng Yang

arxiv preprint · Jun 1 2025
Purpose: Accurate segmentation of clinical target volumes (CTV) and organs-at-risk is crucial for optimizing gynecologic brachytherapy (GYN-BT) treatment planning. However, anatomical variability, low soft-tissue contrast in CT imaging, and limited annotated datasets pose significant challenges. This study presents GynBTNet, a novel multi-stage learning framework designed to enhance segmentation performance through self-supervised pretraining and hierarchical fine-tuning strategies. Methods: GynBTNet employs a three-stage training strategy: (1) self-supervised pretraining on large-scale CT datasets using sparse submanifold convolution to capture robust anatomical representations, (2) supervised fine-tuning on a comprehensive multi-organ segmentation dataset to refine feature extraction, and (3) task-specific fine-tuning on a dedicated GYN-BT dataset to optimize segmentation performance for clinical applications. The model was evaluated against state-of-the-art methods using the Dice Similarity Coefficient (DSC), 95th percentile Hausdorff Distance (HD95), and Average Surface Distance (ASD). Results: Our GynBTNet achieved superior segmentation performance, significantly outperforming nnU-Net and Swin-UNETR. Notably, it yielded a DSC of 0.837 +/- 0.068 for CTV, 0.940 +/- 0.052 for the bladder, 0.842 +/- 0.070 for the rectum, and 0.871 +/- 0.047 for the uterus, with reduced HD95 and ASD compared to baseline models. Self-supervised pretraining led to consistent performance improvements, particularly for structures with complex boundaries. However, segmentation of the sigmoid colon remained challenging, likely due to anatomical ambiguities and inter-patient variability. Statistical significance analysis confirmed that GynBTNet's improvements were significant compared to baseline models.
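The Dice Similarity Coefficient reported above is the standard overlap metric for segmentation. A minimal sketch on flattened binary masks (pure Python; real use would operate on 3-D label volumes):

```python
def dice(pred, gt):
    """Dice Similarity Coefficient between two flat binary masks:
    2*|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    total = sum(pred) + sum(gt)
    return 2.0 * inter / total if total else 1.0

# Two 8-voxel masks with 4 foreground voxels each and 2 in common:
a = [1, 1, 1, 1, 0, 0, 0, 0]
b = [0, 0, 1, 1, 1, 1, 0, 0]
print(dice(a, b))  # 0.5 = 2*2 / (4 + 4)
```

Boundary metrics such as HD95 and ASD complement Dice by penalizing outlier surface errors that volumetric overlap can hide — relevant to the sigmoid colon difficulty noted above.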

External validation and performance analysis of a deep learning-based model for the detection of intracranial hemorrhage.

Nada A, Sayed AA, Hamouda M, Tantawi M, Khan A, Alt A, Hassanein H, Sevim BC, Altes T, Gaballah A

pubmed · Jun 1 2025
Purpose: We aimed to investigate the external validation and performance of an FDA-approved deep learning model in labeling intracranial hemorrhage (ICH) cases on a real-world heterogeneous clinical dataset. Furthermore, we evaluated how patients' risk factors influenced the model's performance and gathered feedback on satisfaction from radiologists of varying ranks. Methods: This prospective IRB-approved study included 5600 non-contrast CT scans of the head in various clinical settings, that is, emergency, inpatient, and outpatient units. The patients' risk factors were collected and tested for impact on the performance of the DL model using univariate and multivariate regression analyses. The performance of the DL model was compared with the radiologists' interpretation to determine the presence or absence of ICH, with subsequent classification into subcategories of ICH. Key metrics, including accuracy, sensitivity, specificity, positive predictive value, and negative predictive value, were calculated. The receiver operating characteristic curve, along with the area under the curve, was determined. Additionally, a questionnaire was conducted with radiologists of varying ranks to assess their experience with the model. Results: The model exhibited outstanding performance, achieving a high sensitivity of 89% and specificity of 96%. Additional performance metrics, including positive predictive value (82%), negative predictive value (97%), and overall accuracy (94%), underscore its robust capabilities. The area under the ROC curve further demonstrated the model's efficacy, reaching 0.954. Multivariate logistic regression revealed statistical significance for age, sex, history of trauma, operative intervention, HTN, and smoking. Conclusion: Our study highlights the satisfactory performance of the DL model on a diverse real-world dataset, garnering positive feedback from radiology trainees.
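All the diagnostic metrics above derive from a 2×2 confusion matrix against the radiologists' reference standard. A sketch (the counts below are illustrative, not the study's):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # recall on positives
        "specificity": tn / (tn + fp),          # recall on negatives
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts chosen to roughly match the reported profile:
m = diagnostic_metrics(tp=89, fp=18, tn=432, fn=11)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on ICH prevalence in the cohort — a reason external validations on real-world case mixes matter.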

HResFormer: Hybrid Residual Transformer for Volumetric Medical Image Segmentation.

Ren S, Li X

pubmed · Jun 1 2025
Vision Transformers show great promise in medical image segmentation due to their ability to learn long-range dependencies. For medical image segmentation from 3-D data, such as computed tomography (CT), existing methods can be broadly classified into 2-D-based and 3-D-based methods. One key limitation of 2-D-based methods is that inter-slice information is ignored, while the limitation of 3-D-based methods is high computation cost and memory consumption, resulting in a limited feature representation of inner-slice information. During clinical examination, radiologists primarily use the axial plane and then routinely review both axial and coronal planes to form a 3-D understanding of anatomy. Motivated by this fact, our key insight is to design a hybrid model that can first learn fine-grained inner-slice information and then generate a 3-D understanding of anatomy by incorporating inter-slice information. We present a novel Hybrid Residual TransFormer (HResFormer) for 3-D medical image segmentation. Building upon standard 2-D and 3-D Transformer backbones, HResFormer involves two novel key designs: 1) a Hybrid Local-Global fusion Module (HLGM) to effectively and adaptively fuse inner-slice information from the 2-D Transformer and inter-slice information from 3-D volumes for the 3-D Transformer, with local fine-grained and global long-range representation, and 2) residual learning of the hybrid model, which can effectively leverage inner-slice and inter-slice information for a better 3-D understanding of anatomy. Experiments show that HResFormer outperforms prior art on widely used medical image segmentation benchmarks. This article sheds light on an important but neglected way to design Transformers for 3-D medical image segmentation.

Automated Coronary Artery Segmentation with 3D PSPNET using Global Processing and Patch Based Methods on CCTA Images.

Chachadi K, Nirmala SR, Netrakar PG

pubmed · Jun 1 2025
Coronary artery disease (CAD) has become a major cause of death across the world in recent years. Accurate segmentation of the coronary arteries is important in the clinical diagnosis and treatment of CAD, such as stenosis detection and plaque analysis. Deep learning (DL) techniques have been shown to assist medical experts in diagnosing diseases using biomedical imaging, and many methods employ 2D DL models for medical image segmentation. The 2D Pyramid Scene Parsing Network (PSPNet) has potential in this domain but has not been explored for segmenting coronary arteries from 3D Coronary Computed Tomography Angiography (CCTA) images. The contribution of the present work is a modification of 2D PSPNet into 3D PSPNet for segmenting the coronary arteries from 3D CCTA images, with network performance evaluated using both global processing and patch-based processing methods. The experimental results achieved a Dice Similarity Coefficient (DSC) of 0.76 for the global method and 0.73 for the patch-based method using a subset of 200 images from the ImageCAS dataset.
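Patch-based processing divides the CCTA volume into fixed-size subvolumes that are segmented independently, trading global context for memory efficiency — consistent with the slightly lower DSC reported for the patch-based method. A 1-D stand-in for the patch split (real patches are 3-D subvolumes; the zero-padding scheme is an assumption):

```python
def split_patches(volume, size):
    """Split a sequence into fixed-size patches, zero-padding the last
    patch — a 1-D stand-in for 3-D patch-based inference."""
    patches = []
    for start in range(0, len(volume), size):
        patch = list(volume[start:start + size])
        patch += [0] * (size - len(patch))  # pad the trailing edge
        patches.append(patch)
    return patches

print(split_patches(list(range(10)), 4))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 0, 0]]
```

In practice, overlapping patches with blended predictions are often used at the seams, since vessels crossing patch boundaries are where patch-based methods lose accuracy.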

Evaluation of a Deep Learning Denoising Algorithm for Dose Reduction in Whole-Body Photon-Counting CT Imaging: A Cadaveric Study.

Dehdab R, Brendel JM, Streich S, Ladurner R, Stenzl B, Mueck J, Gassenmaier S, Krumm P, Werner S, Herrmann J, Nikolaou K, Afat S, Brendlin A

pubmed · Jun 1 2025
Photon-counting CT (PCCT) offers advanced imaging capabilities with potential for substantial radiation dose reduction; however, achieving this without compromising image quality remains a challenge due to increased noise at lower doses. This study aims to evaluate the effectiveness of a deep learning (DL)-based denoising algorithm in maintaining diagnostic image quality in whole-body PCCT imaging at reduced radiation levels, using real intraindividual cadaveric scans. Twenty-four cadaveric human bodies underwent whole-body CT scans on a PCCT scanner (NAEOTOM Alpha, Siemens Healthineers) at four different dose levels (100%, 50%, 25%, and 10% mAs). Each scan was reconstructed using both QIR level 2 and a DL algorithm (ClariCT.AI, ClariPi Inc.), resulting in 192 datasets. Objective image quality was assessed by measuring CT value stability, image noise, and contrast-to-noise ratio (CNR) across consistent regions of interest (ROIs) in the liver parenchyma. Two radiologists independently evaluated subjective image quality based on overall image clarity, sharpness, and contrast. Inter-rater agreement was determined using Spearman's correlation coefficient, and statistical analysis included mixed-effects modeling to assess objective and subjective image quality. Objective analysis showed that the DL denoising algorithm did not significantly alter CT values (p ≥ 0.9975). Noise levels were consistently lower in denoised datasets compared with the original reconstructions (p < 0.0001). No significant differences were observed between the 25% mAs denoised and the 100% mAs original datasets in terms of noise and CNR (p ≥ 0.7870). Subjective analysis revealed strong inter-rater agreement (r ≥ 0.78), with the 50% mAs denoised datasets rated superior to the 100% mAs original datasets (p < 0.0001) and no significant differences detected between the 25% mAs denoised and 100% mAs original datasets (p ≥ 0.9436).
The DL denoising algorithm maintains image quality in PCCT imaging while enabling up to a 75% reduction in radiation dose. This approach offers a promising method for reducing radiation exposure in clinical PCCT without compromising diagnostic quality.
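The objective measures above — image noise and CNR in liver ROIs — reduce to simple statistics over Hounsfield-unit (HU) values. A sketch (the ROI values and definitions below are illustrative conventions, not necessarily the study's exact formulas):

```python
import statistics

def roi_noise(hu_values):
    """Image noise estimated as the sample standard deviation of HU
    values in a homogeneous ROI (e.g., liver parenchyma)."""
    return statistics.stdev(hu_values)

def cnr(mean_roi, mean_ref, noise):
    """Contrast-to-noise ratio: absolute ROI-vs-reference contrast
    divided by image noise."""
    return abs(mean_roi - mean_ref) / noise

noise = roi_noise([58, 60, 62, 60, 58, 62])  # toy liver ROI, HU
print(cnr(60.0, 40.0, noise))  # ≈ 11.18
```

Because noise sits in the denominator, halving noise at constant contrast doubles CNR — which is how a denoised 25% mAs scan can match a 100% mAs original.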

CT-Based Deep Learning Predicts Prognosis in Esophageal Squamous Cell Cancer Patients Receiving Immunotherapy Combined with Chemotherapy.

Huang X, Huang Y, Li P, Xu K

pubmed · Jun 1 2025
Immunotherapy combined with chemotherapy has improved outcomes for some esophageal squamous cell carcinoma (ESCC) patients, but accurate pre-treatment risk stratification remains a critical gap. This study developed a deep learning (DL) model to predict survival outcomes in ESCC patients receiving immunotherapy combined with chemotherapy. Retrospective data from 482 patients across three institutions were split into training (N=322), internal test (N=79), and external test (N=81) sets. Unenhanced computed tomography (CT) scans were processed to analyze tumor and peritumoral regions. The model evaluated multiple input configurations: original tumor regions of interest (ROIs), ROI subregions, and ROIs expanded by 1 and 3 pixels. Performance was assessed using Harrell's C-index and receiver operating characteristic (ROC) curves. A multimodal model combined DL-derived risk scores with five key clinical and laboratory features. The Shapley Additive Explanations (SHAP) method elucidated the contribution of individual features to model predictions. The DL model with 1-pixel peritumoral expansion achieved the best accuracy, yielding a C-index of 0.75 for the internal test set and 0.60 for the external test set. The hazard ratio for high-risk patients was 1.82 (95% CI: 1.19-2.46; P=0.02) in the internal test set. The multimodal model achieved C-indices of 0.74 and 0.61 for the internal and external test sets, respectively. Kaplan-Meier analysis revealed significant survival differences between high- and low-risk groups (P<0.05). SHAP analysis identified tumor response, risk score, and age as critical contributors to predictions. This DL model demonstrates efficacy in stratifying ESCC patients by survival risk, particularly when integrating peritumoral imaging and clinical features.
The model could serve as a valuable pre-treatment tool to facilitate the implementation of personalized treatment strategies for ESCC patients undergoing immunotherapy and chemotherapy.
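Harrell's C-index used above measures pairwise concordance between predicted risk and observed survival. A minimal sketch that ignores censoring (the study's computation would handle censored follow-up, which restricts which pairs are comparable):

```python
def c_index(times, risks):
    """Harrell's C-index without censoring: the fraction of comparable
    pairs in which the patient with the shorter survival time has the
    higher predicted risk (ties in risk count as 0.5)."""
    concordant, pairs = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue  # tied times are not comparable here
            pairs += 1
            short, long_ = (i, j) if times[i] < times[j] else (j, i)
            if risks[short] > risks[long_]:
                concordant += 1.0
            elif risks[short] == risks[long_]:
                concordant += 0.5
    return concordant / pairs

# Risk perfectly anti-ordered with survival time -> C-index = 1.0:
print(c_index([1, 2, 3], [3, 2, 1]))
```

A C-index of 0.5 is chance-level ranking, so the drop from 0.75 internally to 0.60 externally above reflects a substantial loss of discrimination on outside data.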

Deep Learning-Enhanced Ultra-high-resolution CT Imaging for Superior Temporal Bone Visualization.

Brockstedt L, Grauhan NF, Kronfeld A, Mercado MAA, Döge J, Sanner A, Brockmann MA, Othman AE

pubmed · Jun 1 2025
This study assesses the image quality of temporal bone ultra-high-resolution (UHR) computed tomography (CT) scans in adults and children using hybrid iterative reconstruction (HIR) and a novel, vendor-specific deep learning-based reconstruction (DLR) algorithm called AiCE Inner Ear. In a retrospective, single-center study (February 1-July 30, 2023), UHR-CT scans of 57 temporal bones of 35 patients (5 children, 23 male) with at least one anatomically unremarkable temporal bone were included. Scans were acquired using an adult protocol (computed tomography dose index volume [CTDIvol] 25.6 mGy) or a pediatric protocol (15.3 mGy). Images were reconstructed using HIR at normal resolution (0.5-mm slice thickness, 512² matrix) and UHR (0.25-mm, 1024² and 2048² matrix), as well as with the vendor-specific DLR algorithm (advanced intelligent clear-IQ engine inner ear, AiCE Inner Ear) at UHR (0.25-mm, 1024² matrix). Three radiologists evaluated 18 anatomic structures using a 5-point Likert scale. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured automatically. In the adult protocol subgroup (n=30; median age: 51 [11-89]; 19 men) and the pediatric protocol subgroup (n=5; median age: 2 [1-3]; 4 men), UHR-CT with DLR significantly improved subjective image quality (p<0.024), reduced noise (p<0.001), and increased CNR and SNR (p<0.001). DLR also enhanced visualization of key structures, including the tendon of the stapedius muscle (p<0.001), tympanic membrane (p<0.009), and basal aspect of the osseous spiral lamina (p<0.018). Vendor-specific DLR-enhanced UHR-CT significantly improves temporal bone image quality and diagnostic performance.