Multi-DECT Image-based Interpretable Model Incorporating Habitat Radiomics and Vision Transformer Deep Learning for Preoperative Prediction of Muscle Invasion in Bladder Cancer.

Du C, Wei W, Hu M, He J, Shen J, Liu Y, Li J, Liu L

PubMed · Aug 30 2025
This research aims to evaluate the effectiveness of a multi-dual-energy CT (DECT) image-based interpretable model that integrates habitat radiomics with a 3D Vision Transformer (ViT) deep learning (DL) model for preoperatively predicting muscle invasion in bladder cancer (BCa). This retrospective study analyzed 200 BCa patients, who were divided into a training cohort (n=140) and a test cohort (n=60) in a 7:3 ratio. Univariate and multivariate analyses were performed on the DECT quantitative parameters to identify independent predictors, which were subsequently used to develop a DECT model. The K-means algorithm was employed to generate habitat sub-regions of BCa. A traditional radiomics (Rad) model, a habitat model, a ResNet18 model, a ViT model, and fusion models were constructed from the 40, 70, and 100 keV virtual monochromatic images (VMIs) in DECT. All models were evaluated using the area under the receiver operating characteristic curve (AUC), calibration curves, decision curve analysis (DCA), the net reclassification index (NRI), and the integrated discrimination improvement (IDI). The SHAP method was employed to interpret the optimal model and visualize its decision-making process. The Habitat-ViT model demonstrated superior performance compared to the other single models, achieving an AUC of 0.997 (95% CI 0.992, 1.000) in the training cohort and 0.892 (95% CI 0.814, 0.971) in the test cohort. The incorporation of DECT quantitative parameters did not improve performance. DCA and calibration curve assessments indicated that the Habitat-ViT model provided a favorable net benefit and demonstrated strong calibration. Furthermore, SHAP clarified the decision-making processes underlying the model's predicted outcomes. A multi-DECT image-based interpretable model that integrates habitat radiomics with ViT DL holds promise for predicting muscle invasion status in BCa, providing valuable insights for personalized treatment planning and prognostic assessment.
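
The habitat step described above is, at its core, unsupervised clustering of intratumoral voxels. A minimal sketch of that step, assuming co-registered 40/70/100 keV VMI volumes and a tumor mask are already loaded as NumPy arrays (all names hypothetical, not the authors' code):

```python
# Minimal sketch of K-means "habitat" sub-region generation inside a tumor mask.
# `vmi_40`, `vmi_70`, `vmi_100` are co-registered 3D arrays; `tumor_mask` is a
# boolean array of the same shape. All names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def habitat_subregions(vmi_40, vmi_70, vmi_100, tumor_mask, n_habitats=3, seed=0):
    # Per-voxel feature vector: intensity at each virtual monochromatic energy.
    voxels = np.stack([vmi_40[tumor_mask], vmi_70[tumor_mask], vmi_100[tumor_mask]], axis=1)
    # Standardize so no single energy level dominates the distance metric.
    voxels = (voxels - voxels.mean(axis=0)) / (voxels.std(axis=0) + 1e-8)
    labels = KMeans(n_clusters=n_habitats, random_state=seed, n_init=10).fit_predict(voxels)
    # Write habitat labels (1..n) back into a volume; 0 marks background.
    habitat_map = np.zeros(tumor_mask.shape, dtype=np.int16)
    habitat_map[tumor_mask] = labels + 1
    return habitat_map
```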

Diagnostic Performance of CT-Based Artificial Intelligence for Early Recurrence of Cholangiocarcinoma: A Systematic Review and Meta-Analysis.

Chen J, Xi J, Chen T, Yang L, Liu K, Ding X

PubMed · Aug 30 2025
Despite AI models demonstrating high predictive accuracy for early cholangiocarcinoma (CCA) recurrence, their clinical application faces challenges such as reproducibility, generalizability, hidden biases, and uncertain performance across diverse datasets and populations, raising concerns about their practical applicability. This meta-analysis aims to systematically assess the diagnostic performance of artificial intelligence (AI) models utilizing computed tomography (CT) imaging to predict early recurrence of CCA. A systematic search was conducted in PubMed, Embase, and Web of Science for studies published up to May 2025. Studies were selected based on the PIRTOS framework. Participants (P): patients diagnosed with CCA (including intrahepatic and extrahepatic locations). Index test (I): AI techniques applied to CT imaging for early recurrence prediction (defined as within 1 year). Reference standard (R): pathological diagnosis or imaging follow-up confirming recurrence. Target condition (T): early recurrence of CCA (positive group: recurrence; negative group: no recurrence). Outcomes (O): sensitivity, specificity, diagnostic odds ratio (DOR), and area under the receiver operating characteristic curve (AUC), assessed in both internal and external validation cohorts. Setting (S): retrospective or prospective studies using hospital datasets. Methodological quality was assessed using an optimized version of the revised QUADAS-2 tool. Heterogeneity was assessed using the I² statistic. Pooled sensitivity, specificity, DOR, and AUC were calculated using a bivariate random-effects model. Nine studies with 30 datasets involving 1,537 patients were included. In internal validation cohorts, CT-based AI models showed a pooled sensitivity of 0.87 (95% CI: 0.81-0.92), specificity of 0.85 (95% CI: 0.79-0.89), DOR of 37.71 (95% CI: 18.35-77.51), and AUC of 0.93 (95% CI: 0.90-0.94). In external validation cohorts, pooled sensitivity was 0.87 (95% CI: 0.81-0.91), specificity was 0.82 (95% CI: 0.77-0.86), DOR was 30.81 (95% CI: 18.79-50.52), and AUC was 0.85 (95% CI: 0.82-0.88). The AUC was significantly lower in external validation cohorts than in internal validation cohorts (P < .001). Our results show that CT-based AI models predict early CCA recurrence with high performance in internal validation sets and moderate performance in external validation sets. However, the high heterogeneity observed may affect the robustness of these results. Future research should focus on prospective studies and on establishing standardized gold standards to further validate the clinical applicability and generalizability of AI models.
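
For illustration only, the sketch below pools log diagnostic odds ratios with a univariate DerSimonian-Laird random-effects model, a simpler stand-in for the bivariate model used in the paper; the per-study 2x2 counts are made-up placeholders:

```python
# Illustrative sketch only: univariate DerSimonian-Laird pooling of log diagnostic
# odds ratios (the paper itself uses a bivariate random-effects model, which this
# does not reproduce). The study counts below are hypothetical placeholders.
import numpy as np

# Per-study 2x2 counts: (true positives, false negatives, false positives, true negatives)
studies = [(40, 6, 8, 50), (28, 5, 7, 41), (55, 9, 12, 60)]  # hypothetical data

log_dor, var = [], []
for tp, fn, fp, tn in studies:
    tp, fn, fp, tn = (x + 0.5 for x in (tp, fn, fp, tn))     # continuity correction
    log_dor.append(np.log((tp * tn) / (fp * fn)))
    var.append(1 / tp + 1 / fn + 1 / fp + 1 / tn)

log_dor, var = np.array(log_dor), np.array(var)
w = 1 / var                                                   # inverse-variance weights
q = np.sum(w * (log_dor - np.sum(w * log_dor) / w.sum()) ** 2)  # Cochran's Q
tau2 = max(0.0, (q - (len(studies) - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))
w_re = 1 / (var + tau2)                                       # random-effects weights
pooled = np.sum(w_re * log_dor) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"Pooled DOR: {np.exp(pooled):.1f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.1f}-{np.exp(pooled + 1.96 * se):.1f})")
```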

Interpretable Auto Window setting for deep-learning-based CT analysis.

Zhang Y, Chen M, Zhang Z

PubMed · Aug 30 2025
From its early days of popularization to the present, window setting has been an indispensable part of the Computed Tomography (CT) analysis process. Although research has investigated the capabilities of CT multi-window fusion in enhancing neural networks, there remains a paucity of domain-invariant, intuitively interpretable methodologies for Auto Window Setting. In this work, we propose a plug-and-play module derived from the Tanh activation function. This module enables the deployment of medical imaging neural network backbones without requiring manual CT window configuration. Its domain-invariant design facilitates observation of the preference decisions rendered by the adaptive mechanism from a clinically intuitive perspective. We confirm the effectiveness of the proposed method on multiple open-source datasets, allowing for direct training without manual window setting and yielding improvements of 54%∼127% in Dice, 14%∼32% in Recall, and 94%∼200% in Precision on hard segmentation targets. Experimental results conducted in the NVIDIA NGC environment demonstrate that the module facilitates efficient deployment of AI-powered medical imaging tasks. The proposed method enables automatic determination of CT window settings for specific downstream tasks in the development and deployment of mainstream medical imaging neural networks, demonstrating the potential to reduce associated deployment costs.
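
The paper's exact module is not reproduced here; the sketch below is one plausible reading of a Tanh-derived, plug-and-play auto-window layer, with learnable window center and width parameters as an assumption:

```python
# One plausible reading of a Tanh-derived auto-window module (not the paper's exact
# formulation): learnable window center/width parameters map raw Hounsfield units
# into a bounded range before the backbone.
import torch
import torch.nn as nn

class TanhAutoWindow(nn.Module):
    def __init__(self, n_windows=3, init_center=40.0, init_width=400.0):
        super().__init__()
        # One learnable (center, width) pair per output channel.
        self.center = nn.Parameter(torch.full((n_windows,), init_center))
        self.width = nn.Parameter(torch.full((n_windows,), init_width))

    def forward(self, hu):                                # hu: (B, 1, H, W) Hounsfield units
        c = self.center.view(1, -1, 1, 1)
        w = self.width.view(1, -1, 1, 1).abs() + 1e-3     # keep width positive
        # Soft, differentiable windowing: values far from the center saturate.
        return torch.tanh((hu - c) / w)                   # (B, n_windows, H, W)

# Usage sketch: prepend to any 2D backbone expecting n_windows input channels, e.g.
# model = nn.Sequential(TanhAutoWindow(n_windows=3), some_backbone_expecting_3_channels)
```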

Automated quantification of lung pathology on micro-CT in diverse disease models using deep learning.

Belmans F, Seldeslachts L, Vanhoffelen E, Tielemans B, Vos W, Maes F, Vande Velde G

PubMed · Aug 30 2025
Micro-CT significantly enhances the efficiency, predictive power, and translatability of animal studies to human clinical trials for respiratory diseases. However, the analysis of large micro-CT datasets remains a bottleneck. We developed a generic deep learning (DL)-based lung segmentation model using longitudinal micro-CT images from studies of Down syndrome, viral and fungal infections, and exacerbation, with variable lung pathology and degrees of disease burden. 2D models were trained with cross-validation on axial, coronal, and sagittal slices. Predictions from these single-orientation models were combined to create a 2.5D model using majority voting or probability averaging. The generalisability of these models to other studies (COVID-19, lung inflammation and fibrosis), scanner configurations, and rodent species (rats, hamsters, degus) was tested, including on a publicly available database. On the internal validation data, the highest mean Dice Similarity Coefficient (DSC) was found for the 2.5D probability averaging model (0.953 ± 0.023), which further improved the output of the 2D models by removing erroneous voxels outside the lung region. The models demonstrated good generalisability, with average DSC values ranging from 0.89 to 0.94 across different lung pathologies and scanner configurations. The biomarkers extracted from manual and automated segmentations were in close agreement, showing that our proposed solution effectively monitors longitudinal lung pathology development and response to treatment in real-world preclinical studies. Our DL-based pipeline for lung pathology quantification offers efficient analysis of large micro-CT datasets, is widely applicable across rodent disease models and acquisition protocols, and enables real-time insights into therapy efficacy. This research was supported by the Service Public de Wallonie (AEROVID grant to FB, WV) and The Flemish Research Foundation (FWO, doctoral mandate 1SF2224N to EV and 1186121N/1186123N to LS, infrastructure grant I006524N to GVV).
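
The 2.5D fusion step lends itself to a compact illustration. The sketch below combines per-orientation probability volumes by probability averaging or majority voting, assuming the three predictions are already resampled to a common grid (array names hypothetical):

```python
# Minimal sketch of the 2.5D fusion idea: combine per-orientation probability
# volumes (already resampled to a common grid) by probability averaging or
# majority voting. Array names are hypothetical.
import numpy as np

def fuse_probability_average(prob_axial, prob_coronal, prob_sagittal, threshold=0.5):
    # Average the three foreground-probability volumes, then threshold.
    mean_prob = (prob_axial + prob_coronal + prob_sagittal) / 3.0
    return mean_prob > threshold

def fuse_majority_vote(prob_axial, prob_coronal, prob_sagittal, threshold=0.5):
    # Binarize each orientation first, then keep voxels where at least 2 of 3 agree.
    votes = sum((p > threshold).astype(np.uint8)
                for p in (prob_axial, prob_coronal, prob_sagittal))
    return votes >= 2
```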

Sex-Specific Prognostic Value of Automated Epicardial Adipose Tissue Quantification on Serial Lung Cancer Screening Chest CT.

Brendel JM, Mayrhofer T, Hadzic I, Norton E, Langenbach IL, Langenbach MC, Jung M, Raghu VK, Nikolaou K, Douglas PS, Lu MT, Aerts HJWL, Foldyna B

PubMed · Aug 29 2025
Epicardial adipose tissue (EAT) is a metabolically active fat depot associated with coronary atherosclerosis and cardiovascular (CV) risk. While EAT is a known prognostic marker in lung cancer screening, its sex-specific prognostic value remains unclear. This study investigated sex differences in the prognostic utility of serial EAT measurements on low-dose chest CTs. We analyzed baseline and two-year changes in EAT volume and density using a validated automated deep-learning algorithm in 24,008 heavy-smoking participants from the National Lung Screening Trial (NLST). Sex-stratified multivariable Cox models, adjusted for CV risk factors, BMI, and coronary artery calcium (CAC), assessed associations between EAT and all-cause and CV mortality (median follow-up 12.3 years [IQR: 11.9-12.8]; 4,668 [19.4%] all-cause deaths; 1,083 [4.5%] CV deaths). Women (n = 9,841; 41%) were younger, with fewer CV risk factors, lower BMI, fewer pack-years, and lower CAC than men (all P < 0.001). Baseline EAT was associated with similar all-cause and CV mortality risk in both sexes (max. aHR women: 1.70; 95%-CI: 1.13-2.55; men: 1.83; 95%-CI: 1.40-2.40; P-interaction = 0.986). However, two-year EAT changes predicted CV death only in women (aHR: 1.82; 95%-CI: 1.37-2.49; P < 0.001) and showed a stronger association with all-cause mortality in women (aHR: 1.52; 95%-CI: 1.31-1.77) than in men (aHR: 1.26; 95%-CI: 1.13-1.40; P-interaction = 0.041). In this large lung cancer screening cohort, serial EAT changes independently predicted CV mortality in women and were more strongly associated with all-cause mortality in women than in men. These findings support routine EAT quantification on chest CT for improved, sex-specific cardiovascular risk stratification.
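
A sex-stratified Cox model of the kind described can be sketched with the lifelines package; the synthetic data frame and column names below are hypothetical stand-ins, not the NLST analysis itself:

```python
# Sketch of a sex-stratified Cox proportional-hazards model with lifelines.
# The synthetic data frame stands in for participant-level NLST variables;
# column names are assumptions, not the study's actual fields.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "sex": rng.choice(["F", "M"], n),
    "time_years": rng.uniform(0.5, 12.5, n),         # follow-up time
    "cv_death": rng.integers(0, 2, n),               # event indicator
    "eat_volume_change": rng.normal(0, 1, n),        # standardized 2-year EAT change
    "age": rng.normal(62, 5, n),
    "bmi": rng.normal(28, 4, n),
    "cac_score": rng.exponential(100, n),
})
covariates = ["eat_volume_change", "age", "bmi", "cac_score"]

for sex, group in df.groupby("sex"):                 # separate fits for women and men
    cph = CoxPHFitter()
    cph.fit(group[["time_years", "cv_death"] + covariates],
            duration_col="time_years", event_col="cv_death")
    print(f"--- {sex} ---")
    print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```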

Clinical Consequences of Deep Learning Image Reconstruction at CT.

Lubner MG, Pickhardt PJ, Toia GV, Szczykutowicz TP

PubMed · Aug 29 2025
Deep learning reconstruction (DLR) offers a variety of advantages over the current standard iterative reconstruction techniques, including decreased image noise without changes in noise texture and less susceptibility to spatial resolution limitations at low dose. These advances may allow for more aggressive dose reduction in CT imaging while maintaining image quality and diagnostic accuracy. However, the performance of DLR is affected by the type of framework and training data used. In addition, patient size and the clinical task being performed may affect the amount of dose reduction that can reasonably be employed. Multiple DLRs are currently FDA approved, with a growing body of literature evaluating their performance; however, continued work is warranted to evaluate a variety of clinical scenarios to fully explore the evolving potential of DLR. Depending on the type and strength of DLR applied, blurring and occasionally other artifacts may be introduced. DLRs also show promise in artifact reduction, particularly metal artifact reduction. This commentary focuses primarily on current DLR data for abdominal applications, current challenges, and future areas of potential exploration.

Distinct 3-Dimensional Morphologies of Arthritic Knee Anatomy Exist: CT-Based Phenotyping Offers Outlier Detection in Total Knee Arthroplasty.

Woo JJ, Hasan SS, Zhang YB, Nawabi DH, Calendine CL, Wassef AJ, Chen AF, Krebs VE, Ramkumar PN

PubMed · Aug 29 2025
There is no foundational classification that 3-dimensionally characterizes arthritic anatomy to preoperatively plan and postoperatively evaluate total knee arthroplasty (TKA). With the advent of computed tomography (CT) as a preoperative planning tool, the purpose of this study was to morphologically classify pre-TKA anatomy across coronal, axial, and sagittal planes to identify outlier phenotypes and establish a foundation for future philosophical, technical, and technological strategies. A cross-sectional analysis was conducted using 1,352 pre-TKA lower-extremity CT scans collected from a database at a single multicenter referral center. A validated deep learning and computer vision program acquired 27 lower-extremity measurements for each CT scan. An unsupervised spectral clustering algorithm morphometrically classified the cohort. The optimal number of clusters was determined through elbow-plot and eigen-gap analyses. Visualization was conducted through t-distributed stochastic neighbor embedding (t-SNE), and each cluster was characterized. To assess the influence of severe deformity, the analysis was repeated after removing the affected parameters and reassessing cluster separation. Spectral clustering revealed 4 distinct pre-TKA anatomic morphologies (18.5% Type 1, 39.6% Type 2, 7.5% Type 3, 34.5% Type 4). Types 1 and 3 embodied clear outliers. Key parameters distinguishing the 4 morphologies were hip rotation, medial posterior tibial slope, hip-knee-ankle angle, tibiofemoral angle, medial proximal tibial angle, and lateral distal femoral angle. After removing the variables impacted by severe deformity, the secondary analysis again demonstrated 4 distinct clusters with the same distinguishing variables. CT-based phenotyping established a 3D classification of arthritic knee anatomy into 4 foundational morphologies, of which Types 1 and 3 represent outliers present in 26% of knees undergoing TKA. Unlike prior classifications emphasizing native coronal plane anatomy, 3D phenotyping of knees undergoing TKA enables recognition of outlier cases and a foundation for longitudinal evaluation in a morphologically diverse and growing surgical population. Longitudinal studies that control for implant selection, alignment technique, and applied technology are required to evaluate the impact of this classification in enabling rapid recovery and mitigating dissatisfaction after TKA. Prognostic Level II. See Instructions for Authors for a complete description of levels of evidence.
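
The clustering workflow (spectral clustering plus an eigen-gap check for the number of clusters) can be sketched with scikit-learn; the random matrix below is only a stand-in for the 27 measurements per scan:

```python
# Sketch of the clustering step: spectral clustering of standardized CT-derived
# measurements with a simple eigen-gap check for the number of clusters.
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import SpectralClustering
from sklearn.neighbors import kneighbors_graph
from sklearn.preprocessing import StandardScaler

X_raw = np.random.default_rng(0).normal(size=(300, 27))   # stand-in for 27 CT measurements
X = StandardScaler().fit_transform(X_raw)

# Eigen-gap heuristic on the normalized Laplacian of a k-NN affinity graph:
# a large gap after the (k-1)-th eigenvalue suggests k clusters.
A = kneighbors_graph(X, n_neighbors=10, include_self=False)
A = 0.5 * (A + A.T)                                        # symmetrize the graph
L = laplacian(A, normed=True)
eigvals = np.sort(np.linalg.eigvalsh(L.toarray()))[:10]
print("eigen-gaps:", np.diff(eigvals))

labels = SpectralClustering(n_clusters=4, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0).fit_predict(X)
```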

Age- and sex-related changes in proximal humeral volumetric BMD assessed via chest CT with a deep learning-based segmentation model.

Li S, Tang C, Zhang H, Ma C, Weng Y, Chen B, Xu S, Xu H, Giunchiglia F, Lu WW, Guo D, Qin Y

PubMed · Aug 29 2025
Accurate assessment of proximal humeral volumetric bone mineral density (vBMD) is essential for surgical planning in shoulder pathology. However, age-related changes in proximal humeral vBMD remain poorly characterized. This study developed a deep learning-based method to assess proximal humeral vBMD and identified sex-specific age-related changes. It also demonstrated that lumbar spine vBMD is not a valid substitute. This study aimed to develop a deep learning-based method for proximal humeral vBMD assessment and to investigate its age- and sex-related changes, as well as its correlation with lumbar spine vBMD. An nnU-Net-based deep learning pipeline was developed to automatically segment the proximal humerus on chest CT scans from 2,675 adults. Segmentation performance was assessed using the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), 95th-percentile Hausdorff Distance (95HD), and Average Symmetric Surface Distance (ASSD). Phantom-calibrated vBMD (total, trabecular, and BMAT-corrected trabecular) was quantified for each subject. Age-related distributions were modeled with generalized additive models for location, scale, and shape (GAMLSS) to generate sex-specific P3-P97 percentile curves. Lumbar spine vBMD was measured in 1,460 individuals for correlation analysis. Segmentation was highly accurate (DSC 98.42 ± 0.20%; IoU 96.89 ± 0.42%; 95HD 1.12 ± 0.37 mm; ASSD 0.94 ± 0.31 mm). In males, total, trabecular, and BMAT-corrected trabecular vBMD declined approximately linearly from early adulthood. In females, a pronounced inflection occurred at approximately 40-45 years: values were stable or slightly rising beforehand, then all percentiles dropped steeply and synchronously, indicating accelerated menopause-related loss. In females, vBMD declined earlier in the lumbar spine than in the proximal humerus. Correlations between proximal humeral and lumbar spine vBMD were low to moderate overall and weakened after age 50. We present a novel, automated method for quantifying proximal humeral vBMD from chest CT, revealing distinct, sex-specific aging patterns. Males' humeral vBMD declines linearly, while females experience an earlier, accelerated loss. Moreover, peak humeral vBMD in females occurs later than that of the lumbar spine, and spinal measurements cannot reliably substitute for humeral vBMD in clinical assessment.
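
For reference, the overlap metrics quoted above can be computed directly from binary masks; a minimal sketch for Dice and IoU follows (surface metrics such as 95HD and ASSD are omitted here):

```python
# Minimal sketch of the overlap metrics (Dice and IoU) for a predicted vs. manual
# binary segmentation mask; both inputs are NumPy arrays of the same shape.
import numpy as np

def dice_and_iou(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    dice = 2.0 * intersection / (pred.sum() + ref.sum() + 1e-8)
    iou = intersection / (union + 1e-8)
    return dice, iou
```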

Radiomics and deep learning methods for predicting the growth of subsolid nodules based on CT images.

Chen J, Yan W, Shi Y, Pan X, Yu R, Wang D, Zhang X, Wang L, Liu K

PubMed · Aug 29 2025
The growth of subsolid nodules (SSNs) is a strong predictor of lung adenocarcinoma. However, the heterogeneity in the biological behavior of SSNs poses significant challenges for clinical management. This study aimed to evaluate the clinical utility of deep learning and radiomics approaches in predicting SSN growth based on computed tomography (CT) images. A total of 353 patients with 387 SSNs were enrolled in this retrospective study. All cases were divided into growth (n = 195) and non-growth (n = 192) groups and were randomly assigned to the training (n = 247), validation (n = 62), and test (n = 78) sets in a ratio of 3:1:1. We obtained 1,454 radiomics features from each volumetric region of interest (VOI). The Pearson correlation coefficient and the least absolute shrinkage and selection operator (LASSO) were used to determine the radiomics signature. A ResNet18 architecture was used to construct the deep-learning model. The two models were combined via a ResNet-based fusion network to construct an ensemble model. The area under the curve (AUC) was plotted and decision curve analysis (DCA) was performed to determine the clinical performance of the three models. The combined model (AUC = 0.926, 95% CI: 0.869-0.977) outperformed the radiomics (AUC = 0.894, 95% CI: 0.808-0.957) and deep-learning models (AUC = 0.802, 95% CI: 0.695-0.899) in the test set. The DeLong test showed a statistically significant difference between the combined model and the deep-learning model (P = .012), and DCA supported the clinical value of the combined model. This study demonstrates that integrating radiomics with deep learning offers promising potential for the preoperative prediction of SSN growth.
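
The signature-selection step can be sketched as a correlation filter followed by an L1-penalized (LASSO-style) logistic model; the synthetic feature matrix, column names, and thresholds below are assumptions for illustration, not the study's pipeline:

```python
# Sketch of radiomics signature selection: a Pearson-correlation filter to drop
# redundant features, then a cross-validated L1 logistic regression as a LASSO
# stand-in for a binary endpoint. The synthetic matrix stands in for 1,454 features per VOI.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = pd.DataFrame(rng.normal(size=(387, 200)),
                        columns=[f"feat_{i}" for i in range(200)])   # stand-in features
labels = rng.integers(0, 2, size=387)                                # 1 = growth, 0 = non-growth

# Drop one feature from every highly correlated pair (|r| > 0.9).
corr = features.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
keep = [c for c in upper.columns if not (upper[c] > 0.9).any()]
X = StandardScaler().fit_transform(features[keep])

lasso = LogisticRegressionCV(Cs=10, penalty="l1", solver="liblinear", cv=5).fit(X, labels)
print(f"{np.count_nonzero(lasso.coef_)} features retained in the radiomics signature")
```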

Integrating Pathology and CT Imaging for Personalized Recurrence Risk Prediction in Renal Cancer

Daniël Boeke, Cedrik Blommestijn, Rebecca N. Wray, Kalina Chupetlovska, Shangqi Gao, Zeyu Gao, Regina G. H. Beets-Tan, Mireia Crispin-Ortuzar, James O. Jones, Wilson Silva, Ines P. Machado

arXiv preprint · Aug 29 2025
Recurrence risk estimation in clear cell renal cell carcinoma (ccRCC) is essential for guiding postoperative surveillance and treatment. The Leibovich score remains widely used for stratifying distant recurrence risk but offers limited patient-level resolution and excludes imaging information. This study evaluates multimodal recurrence prediction by integrating preoperative computed tomography (CT) and postoperative histopathology whole-slide images (WSIs). A modular deep learning framework with pretrained encoders and Cox-based survival modeling was tested across unimodal, late fusion, and intermediate fusion setups. In a real-world ccRCC cohort, WSI-based models consistently outperformed CT-only models, underscoring the prognostic strength of pathology. Intermediate fusion further improved performance, with the best model (TITAN-CONCH with ResNet-18) approaching the adjusted Leibovich score. Random tie-breaking narrowed the gap between the clinical baseline and learned models, suggesting discretization may overstate individualized performance. Using simple embedding concatenation, radiology added value primarily through fusion. These findings demonstrate the feasibility of foundation model-based multimodal integration for personalized ccRCC risk prediction. Future work should explore more expressive fusion strategies, larger multimodal datasets, and general-purpose CT encoders to better match pathology modeling capacity.
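
The intermediate-fusion idea, concatenating pretrained CT and WSI embeddings and training a risk head with a Cox partial-likelihood loss, can be sketched as follows; embedding dimensions and tensor names are assumptions rather than the authors' implementation:

```python
# Sketch of intermediate fusion for survival modeling: concatenate frozen CT and
# WSI embeddings, map them to a scalar log-risk, and train with a Cox (Breslow)
# negative partial log-likelihood. Dimensions and names are hypothetical.
import torch
import torch.nn as nn

class FusionRiskHead(nn.Module):
    def __init__(self, ct_dim=512, wsi_dim=768, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ct_dim + wsi_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, ct_emb, wsi_emb):
        # Simple embedding concatenation, then a small MLP producing log-risk per patient.
        return self.net(torch.cat([ct_emb, wsi_emb], dim=1)).squeeze(1)

def cox_partial_likelihood_loss(log_risk, time, event):
    # Negative Breslow partial log-likelihood (no special tie handling).
    order = torch.argsort(time, descending=True)       # risk sets via cumulative logsumexp
    log_risk, event = log_risk[order], event[order]
    log_cum = torch.logcumsumexp(log_risk, dim=0)
    return -((log_risk - log_cum) * event).sum() / event.sum().clamp(min=1)
```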