
Distinct 3-Dimensional Morphologies of Arthritic Knee Anatomy Exist: CT-Based Phenotyping Offers Outlier Detection in Total Knee Arthroplasty.

Woo JJ, Hasan SS, Zhang YB, Nawabi DH, Calendine CL, Wassef AJ, Chen AF, Krebs VE, Ramkumar PN

PubMed · Aug 29, 2025
There is no foundational classification that 3-dimensionally characterizes arthritic anatomy to preoperatively plan and postoperatively evaluate total knee arthroplasty (TKA). With the advent of computed tomography (CT) as a preoperative planning tool, the purpose of this study was to morphologically classify pre-TKA anatomy across the coronal, axial, and sagittal planes to identify outlier phenotypes and establish a foundation for future philosophical, technical, and technological strategies. A cross-sectional analysis was conducted using 1,352 pre-TKA lower-extremity CT scans collected from a database at a single multicenter referral center. A validated deep learning and computer vision program acquired 27 lower-extremity measurements for each CT scan. An unsupervised spectral clustering algorithm morphometrically classified the cohort. The optimal number of clusters was determined through elbow-plot and eigen-gap analyses. Visualization was conducted through t-distributed stochastic neighbor embedding (t-SNE), and each cluster was characterized. To assess the influence of severe deformity, the analysis was repeated after removing the impacted parameters, and cluster separation was reassessed. Spectral clustering revealed 4 distinct pre-TKA anatomic morphologies (18.5% Type 1, 39.6% Type 2, 7.5% Type 3, 34.5% Type 4). Types 1 and 3 represented clear outliers. Key parameters distinguishing the 4 morphologies were hip rotation, medial posterior tibial slope, hip-knee-ankle angle, tibiofemoral angle, medial proximal tibial angle, and lateral distal femoral angle. After removing the variables impacted by severe deformity, the secondary analysis again demonstrated 4 distinct clusters with the same distinguishing variables. CT-based phenotyping established a 3D classification of arthritic knee anatomy into 4 foundational morphologies, of which Types 1 and 3 represent outliers present in 26% of knees undergoing TKA. Unlike prior classifications emphasizing native coronal-plane anatomy, 3D phenotyping of knees undergoing TKA enables recognition of outlier cases and provides a foundation for longitudinal evaluation in a morphologically diverse and growing surgical population. Longitudinal studies that control for implant selection, alignment technique, and applied technology are required to evaluate the impact of this classification in enabling rapid recovery and mitigating dissatisfaction after TKA. Prognostic Level II. See Instructions for Authors for a complete description of levels of evidence.
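
The clustering pipeline described here (spectral clustering with an eigen-gap heuristic to choose the cluster count) can be sketched roughly as below; the synthetic 27-measurement table, RBF affinity, and candidate-k range are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: eigen-gap cluster-count selection + spectral clustering,
# on random stand-in data for the 27 CT-derived measurements per scan.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel
from scipy.sparse.csgraph import laplacian

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 27))             # placeholder morphometric features
X = StandardScaler().fit_transform(X)

# Eigen-gap heuristic: a large gap in the graph-Laplacian spectrum suggests k.
A = rbf_kernel(X, gamma=1.0 / X.shape[1])  # affinity matrix
L = laplacian(A, normed=True)
eigvals = np.sort(np.linalg.eigvalsh(L))[:10]   # smallest 10 eigenvalues
gaps = np.diff(eigvals)
k = max(int(np.argmax(gaps)) + 1, 2)            # largest gap after index i -> k = i+1

labels = SpectralClustering(n_clusters=k, affinity="rbf",
                            random_state=0).fit_predict(X)
print(f"chosen k={k}, cluster sizes={np.bincount(labels)}")
```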

Multi-regional Multiparametric Deep Learning Radiomics for Diagnosis of Clinically Significant Prostate Cancer.

Liu X, Liu R, He H, Yan Y, Zhang L, Zhang Q

PubMed · Aug 29, 2025
Non-invasive and precise identification of clinically significant prostate cancer (csPCa) is essential for the management of prostatic diseases. Our study introduces a novel and interpretable diagnostic method for csPCa, leveraging multi-regional, multiparametric deep learning radiomics based on magnetic resonance imaging (MRI). The prostate regions, including the peripheral zone (PZ) and transition zone (TZ), are automatically segmented using a deep learning framework that combines convolutional neural networks and transformers to generate region-specific masks. Radiomics features are then extracted and selected from multiparametric MRI at the PZ, TZ, and their combined area to develop a multi-regional multiparametric radiomics diagnostic model. Feature contributions are quantified to enhance the model's interpretability and assess the importance of different imaging parameters across various regions. The multi-regional model substantially outperforms single-region models, achieving an optimal area under the curve (AUC) of 0.903 on the internal test set, and an AUC of 0.881 on the external test set. Comparison with other methods demonstrates that our proposed approach exhibits superior performance. Features from diffusion-weighted imaging and apparent diffusion coefficient play a crucial role in csPCa diagnosis, with contribution degrees of 53.28% and 39.52%, respectively. We introduce an interpretable, multi-regional, multiparametric diagnostic model for csPCa using deep learning radiomics. By integrating features from various zones, our model improves diagnostic accuracy and provides clear insights into the key imaging parameters, offering strong potential for clinical applications in csPCa management.
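
A minimal sketch of the multi-regional idea, fusing region-wise feature tables (PZ, TZ) before classification; the feature counts, L1-penalized logistic classifier, and random data are placeholders, not the paper's pipeline.

```python
# Toy multi-regional fusion: concatenate per-zone radiomics tables, select
# features via an L1 penalty, and read off a crude per-region contribution.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 400
pz_feats = rng.normal(size=(n, 50))   # peripheral-zone (PZ) radiomics features
tz_feats = rng.normal(size=(n, 50))   # transition-zone (TZ) radiomics features
y = rng.integers(0, 2, size=n)        # csPCa label (placeholder)

X = np.hstack([pz_feats, tz_feats])   # multi-regional feature fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# The L1 penalty performs embedded feature selection across both regions.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X_tr, y_tr)
print(f"test AUC: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")

# Absolute coefficients give a rough per-region contribution readout.
contrib = np.abs(clf.coef_).ravel()
pz_share = contrib[:50].sum() / (contrib.sum() + 1e-12)
print(f"PZ share of total |coefficient| mass: {pz_share:.1%}")
```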

Incomplete Multi-modal Disentanglement Learning with Application to Alzheimer's Disease Diagnosis.

Han K, Hu D, Zhao F, Liu T, Yang F, Li G

PubMed · Aug 29, 2025
Multi-modal neuroimaging data, including magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (PET), have greatly advanced the computer-aided diagnosis of Alzheimer's disease (AD) by providing shared and complementary information. However, the problem of incomplete multi-modal data remains inevitable and challenging. Conventional strategies that exclude subjects with missing data or synthesize missing scans either result in substantial sample reduction or introduce unwanted noise. To address this issue, we propose an Incomplete Multi-modal Disentanglement Learning method (IMDL) for AD diagnosis without missing scan synthesis, a novel model that employs a tiny Transformer to fuse incomplete multi-modal features extracted by modality-wise variational autoencoders adaptively. Specifically, we first design a cross-modality contrastive learning module to encourage modality-wise variational autoencoders to disentangle shared and complementary representations of each modality. Then, to alleviate the potential information gap between the representations obtained from complete and incomplete multi-modal neuroimages, we leverage the technique of adversarial learning to harmonize these representations with two discriminators. Furthermore, we develop a local attention rectification module comprising local attention alignment and multi-instance attention rectification to enhance the localization of atrophic areas associated with AD. This module aligns inter-modality and intra-modality attention within the Transformer, thus making attention weights more explainable. Extensive experiments conducted on ADNI and AIBL datasets demonstrated the superior performance of the proposed IMDL in AD diagnosis, and a further validation on the HABS-HD dataset highlighted its effectiveness for dementia diagnosis using different multi-modal neuroimaging data (i.e., T1-weighted MRI and diffusion tensor imaging).
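
A hedged sketch of the core disentanglement ingredients named above: modality-wise VAEs with a shared/specific latent split and an alignment term on the shared part. Architecture sizes, the cosine-similarity alignment term, and the random inputs are illustrative assumptions rather than the authors' IMDL model (which also uses a Transformer fusion module and adversarial harmonization).

```python
# Sketch: per-modality VAEs whose latents are split into shared + specific
# parts; the shared parts of paired MRI/PET samples are pulled together.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityVAE(nn.Module):
    def __init__(self, in_dim=128, shared=16, specific=16):
        super().__init__()
        z = shared + specific
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu, self.logvar = nn.Linear(64, z), nn.Linear(64, z)
        self.dec = nn.Sequential(nn.Linear(z, 64), nn.ReLU(), nn.Linear(64, in_dim))
        self.shared = shared

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar, z[:, :self.shared]       # shared part first

def elbo(x, rec, mu, lv):
    kl = -0.5 * torch.mean(1 + lv - mu.pow(2) - lv.exp())
    return F.mse_loss(rec, x) + kl

mri_vae, pet_vae = ModalityVAE(), ModalityVAE()
x_mri, x_pet = torch.randn(32, 128), torch.randn(32, 128)  # placeholder features

rec_m, mu_m, lv_m, zs_m = mri_vae(x_mri)
rec_p, mu_p, lv_p, zs_p = pet_vae(x_pet)

# Pull matched subjects' shared latents together; a full contrastive loss
# would also push apart mismatched pairs.
align = 1 - F.cosine_similarity(zs_m, zs_p, dim=1).mean()
loss = elbo(x_mri, rec_m, mu_m, lv_m) + elbo(x_pet, rec_p, mu_p, lv_p) + align
loss.backward()
print(f"total loss: {loss.item():.3f}")
```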

Proteogenomic Biomarker Profiling for Predicting Radiolabeled Immunotherapy Response in Resistant Prostate Cancer.

Yan B, Gao Y, Zou Y, Zhao L, Li Z

PubMed · Aug 29, 2025
Treatment resistance prevents patients receiving preoperative chemoradiotherapy or targeted radiolabeled immunotherapy from achieving good outcomes, and it remains a major challenge in prostate cancer (PCa). A novel integrative framework combining a machine learning workflow with proteogenomic profiling was used to identify predictive ultrasound biomarkers and classify patient response to radiolabeled immunotherapy in treatment-resistant, high-risk PCa patients. A deep stacked autoencoder (DSAE) model, combined with Extreme Gradient Boosting, was designed for feature refinement and classification. Multiomics data were collected from The Cancer Genome Atlas and an independent radiotherapy-treated cohort. In addition to genetic mutations (whole-exome sequencing), these data contained proteomic (mass spectrometry) and transcriptomic (RNA sequencing) data. The DSAE architecture reduces the dimensionality of the data while maintaining biological variability across omics layers. Resistance phenotypes showed a notable relationship with proteogenomic profiles, including DNA repair pathways (Breast Cancer gene 2 [BRCA2], ataxia-telangiectasia mutated [ATM]), androgen receptor (AR) signaling regulators, and metabolic enzymes (ATP citrate lyase [ACLY], isocitrate dehydrogenase 1 [IDH1]). A specific panel of ultrasound biomarkers was confirmed preclinically using patient-derived xenografts. To support clinical translation, real-time phenotypic features from ultrasound imaging (e.g., perfusion, stiffness) were also considered, providing complementary insights into the tumor microenvironment and treatment responsiveness. This approach provides an integrated platform that offers a clinically actionable foundation for the development of radiolabeled immunotherapy drugs before surgical operations.
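
One way to read the DSAE-plus-XGBoost design is sketched below: an autoencoder compresses stacked multiomics features to a low-dimensional bottleneck, and the bottleneck features feed a gradient-boosted classifier. Dimensions, hyperparameters, and the random data are assumptions, not the study's configuration.

```python
# Sketch: stacked autoencoder for feature refinement, XGBoost for response
# classification, on placeholder multiomics features.
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 300)).astype(np.float32)  # stacked multiomics features
y = rng.integers(0, 2, size=500)                    # responder vs. resistant label

# Deep stacked autoencoder: 300 -> 128 -> 32 bottleneck -> 128 -> 300.
ae = nn.Sequential(
    nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 32), nn.ReLU(),
    nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 300),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
xt = torch.from_numpy(X)
for _ in range(200):                    # brief full-batch reconstruction training
    opt.zero_grad()
    nn.functional.mse_loss(ae(xt), xt).backward()
    opt.step()

encoder = ae[:4]                        # keep layers up to the 32-d bottleneck
with torch.no_grad():
    Z = encoder(xt).numpy()             # refined low-dimensional features

Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.25, random_state=2)
clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
clf.fit(Z_tr, y_tr)
print(f"AUC: {roc_auc_score(y_te, clf.predict_proba(Z_te)[:, 1]):.3f}")
```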

A hybrid computer vision model to predict lung cancer in diverse populations

Zakkar, A., Perwaiz, N., Harikrishnan, V., Zhong, W., Narra, V., Krule, A., Yousef, F., Kim, D., Burrage-Burton, M., Lawal, A. A., Gadi, V., Korpics, M. C., Kim, S. J., Chen, Z., Khan, A. A., Molina, Y., Dai, Y., Marai, E., Meidani, H., Nguyen, R., Salahudeen, A. A.

medRxiv preprint · Aug 29, 2025
PURPOSE Disparities in lung cancer incidence exist in Black populations, and screening criteria underserve Black populations because risk is disparately elevated among the screening-eligible. Prediction models that integrate clinical and imaging-based features to individualize lung cancer risk are a potential means to mitigate these disparities. PATIENTS AND METHODS This multicenter (NLST) and catchment-population-based (UIH, urban and suburban Cook County) study included participants at risk of lung cancer with available lung CT imaging and follow-up between 2015 and 2024. 53,452 participants in NLST and 11,654 in UIH were included based on age and tobacco-use risk factors for lung cancer. The cohorts were used to train and test deep and machine learning models using clinical features alone or combined with CT image features (hybrid computer vision). RESULTS An optimized 7-clinical-feature model achieved ROC-AUC values of 0.64-0.67 in the NLST cohort and 0.60-0.65 in the UIH cohort across multiple years. Incorporating imaging features to form a hybrid computer vision model significantly improved ROC-AUC values to 0.78-0.91 in NLST, but performance deteriorated in UIH, with ROC-AUC values of 0.68-0.80, attributable to Black participants, for whom ROC-AUC values ranged from 0.63-0.72 across multiple years. Retraining the hybrid computer vision model by incorporating Black and other participants from the UIH cohort improved performance, with ROC-AUC values of 0.70-0.87 in a held-out UIH test set. CONCLUSION Hybrid computer vision predicted risk with improved accuracy compared to clinical risk models alone. However, potential biases in image training data reduced model generalizability in Black participants. Performance was improved upon retraining with a subset of the UIH cohort, suggesting that inclusive training and validation datasets can minimize racial disparities. Future studies incorporating vision models trained on representative datasets may demonstrate improved health equity upon clinical use.
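
The clinical-versus-hybrid comparison can be illustrated with a toy sketch: the same classifier is fit on clinical features alone and on clinical features concatenated with an image embedding. The 7 clinical variables, the 64-dimensional embedding, and the logistic model are placeholders, not the study's architecture.

```python
# Toy comparison of a clinical-only model vs. a "hybrid computer vision"
# model that appends a (here random) CT image embedding to clinical features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 1000
clinical = rng.normal(size=(n, 7))     # e.g., age, pack-years, etc. (assumed)
img_embed = rng.normal(size=(n, 64))   # placeholder CNN embedding of the CT
y = rng.integers(0, 2, size=n)         # lung cancer within follow-up (placeholder)

for name, X in [("clinical only", clinical),
                ("hybrid", np.hstack([clinical, img_embed]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC={auc:.3f}")
```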

Fusion model integrating multi-sequence MRI radiomics and habitat imaging for predicting pathological complete response in breast cancer treated with neoadjuvant therapy.

Xu S, Ying Y, Hu Q, Li X, Li Y, Xiong H, Chen Y, Ye Q, Li X, Liu Y, Ai T, Du Y

PubMed · Aug 29, 2025
This study aimed to develop a predictive model integrating multi-sequence MRI radiomics, deep learning features, and habitat imaging to forecast pathological complete response (pCR) in breast cancer patients undergoing neoadjuvant therapy (NAT). A retrospective analysis included 203 breast cancer patients treated with NAT from May 2018 to January 2023. Patients were divided into training (n = 162) and test (n = 41) sets. Radiomics features were extracted from intratumoral and peritumoral regions in multi-sequence MRI (T2WI, DWI, and DCE-MRI) datasets. Habitat imaging was employed to analyze tumor subregions, characterizing heterogeneity within the tumor. We constructed and validated machine learning models, including a fusion model integrating all features, using Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves, decision curve analysis (DCA), and confusion matrices. Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) analyses were performed for model interpretability. The fusion model achieved superior predictive performance compared to single-region models, with an AUC of 0.913 (95% CI: 0.770-1.000) in the test set. PR curve analysis showed an improved precision-recall balance, while DCA indicated higher clinical benefit. Confusion matrix analysis confirmed the model's classification accuracy. SHAP revealed DCE_LLL_DependenceUniformity as the most critical feature for predicting pCR and PC72 for non-pCR. LIME provided patient-specific insights into feature contributions. Integrating multi-dimensional MRI features with habitat imaging enhances pCR prediction in breast cancer. The fusion model offers a robust, non-invasive tool for guiding individualized treatment strategies while providing transparent interpretability through SHAP and LIME analyses.
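
A minimal sketch of the SHAP interpretation step described above: rank the features of a fitted fusion-style classifier by mean absolute SHAP value. The tree model, generic feature names, and random data are illustrative stand-ins for the paper's fused feature table.

```python
# Sketch: global SHAP feature ranking for a fitted binary classifier.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
feature_names = [f"feat_{i}" for i in range(30)]  # placeholders for radiomics names
X = rng.normal(size=(200, 30))
y = rng.integers(0, 2, size=200)                  # pCR vs. non-pCR (placeholder)

model = GradientBoostingClassifier(random_state=4).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)            # per-sample feature attributions

mean_abs = np.abs(shap_values).mean(axis=0)       # global importance ranking
for i in np.argsort(mean_abs)[::-1][:5]:
    print(f"{feature_names[i]}: mean |SHAP| = {mean_abs[i]:.4f}")
```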

Artificial intelligence as an independent reader of risk-dominant lung nodules: influence of CT reconstruction parameters.

Mao Y, Heuvelmans MA, van Tuinen M, Yu D, Yi J, Oudkerk M, Ye Z, de Bock GH, Dorrius MD

PubMed · Aug 29, 2025
To assess the impact of reconstruction parameters on AI's performance in detecting and classifying risk-dominant nodules in a baseline low-dose CT (LDCT) screening among a Chinese general population. Baseline LDCT scans from 300 consecutive participants in the Netherlands and China Big-3 (NELCIN-B3) trial were included. AI analyzed each scan reconstructed with four settings: 1 mm/0.7 mm thickness/interval with medium-soft and hard kernels (D45f/1 mm, B80f/1 mm) and 2 mm/1 mm with soft and medium-soft kernels (B30f/2 mm, D45f/2 mm). Consensus readings by two radiologists served as the reference standard. At the scan level, inter-reader agreement between AI and the reference standard, sensitivity, and specificity in determining the presence of a risk-dominant nodule were evaluated. For reference-standard risk-dominant nodules, the nodule detection rate and agreement in nodule type classification between AI and the reference standard were assessed. AI-D45f/1 mm demonstrated a significantly higher sensitivity than AI-B80f/1 mm in determining the presence of a risk-dominant nodule per scan (77.5% vs. 31.5%, p < 0.0001). For reference-standard risk-dominant nodules (111/300, 37.0%), kernel variations (AI-D45f/1 mm vs. AI-B80f/1 mm) did not significantly affect AI's nodule detection rate (87.4% vs. 82.0%, p = 0.26) but substantially influenced the agreement in nodule type classification between AI and the reference standard (87.7% [50/57] vs. 17.7% [11/62], p < 0.0001). The change in thickness/interval (AI-D45f/1 mm vs. AI-D45f/2 mm) had no substantial influence on any aspect of AI's performance (p > 0.05). Variations in reconstruction kernels significantly affected AI's performance in risk-dominant nodule type classification, but not nodule detection. Ensuring consistency with radiologist-preferred kernels significantly improved agreement in nodule type classification and may help integrate AI more smoothly into clinical workflows. Question Patient management in lung cancer screening depends on the risk-dominant nodule, yet no prior studies have assessed the impact of reconstruction parameters on AI performance for these nodules. Findings The difference between reconstruction kernels (AI-D45f/1 mm vs. AI-B80f/1 mm, or AI-B30f/2 mm vs. AI-D45f/2 mm) significantly affected AI's performance in risk-dominant nodule type classification, but not nodule detection. Clinical relevance Using a kernel for AI consistent with the radiologist's choice is likely to improve the overall performance of AI-based CAD systems as an independent reader and support greater clinical acceptance and integration of AI tools into routine practice.
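
The scan-level evaluation (sensitivity, specificity, and AI-versus-reference agreement compared across reconstruction settings) might look like the sketch below; the simulated error rates and labels are assumptions for illustration only, not the study's data.

```python
# Sketch: per-reconstruction sensitivity, specificity, and Cohen's kappa of
# AI scan-level calls against a radiologist reference standard.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(5)
reference = rng.integers(0, 2, size=300)   # risk-dominant nodule present? (reference)

for kernel in ["D45f/1mm", "B80f/1mm"]:
    # Simulated AI reads; a real run would load per-reconstruction AI output.
    flip = rng.random(300) < (0.15 if kernel == "D45f/1mm" else 0.35)
    ai = np.where(flip, 1 - reference, reference)

    tn, fp, fn, tp = confusion_matrix(reference, ai).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    kappa = cohen_kappa_score(reference, ai)
    print(f"{kernel}: sensitivity={sens:.2f} specificity={spec:.2f} kappa={kappa:.2f}")
```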

Multimodal feature distinguishing and deep learning approach to detect lung disease from MRI images.

Alanazi TM

PubMed · Aug 29, 2025
Precise and early detection and diagnosis of lung disease reduce the risk to patients' lives and the further spread of infection. Computer-based image processing techniques use magnetic resonance imaging (MRI) as input for detection, segmentation, and related processes to improve processing efficacy. This article introduces a Multimodal Feature Distinguishing Method (MFDM) for augmenting lung disease detection precision. The method distinguishes the extractable features of an MRI lung input using a homogeneity measure. Based on the possible differentiations for heterogeneity feature detection, training is performed using a transformer network. This network performs differentiation verification and training classification independently and integrates them to identify heterogeneous features. The integrated classifications are used to detect the infected region based on feature precision. If differentiation fails, the transformer reinitiates its process from the last known homogeneity feature between successive segments. The distinguishing multimodal features between successive segments are therefore validated across different differentiation levels, augmenting accuracy. The introduced system improves sensitivity by 8.78% and precision by 8.81%, and reduces differentiation time by 9.75% when analyzing various lung features. These results indicate that the MFDM model can be utilized in medical applications to improve the disease recognition rate.
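
The abstract does not specify its homogeneity measure; one common texture-based choice for separating homogeneous from heterogeneous image regions is GLCM homogeneity, sketched here on synthetic regions as a stand-in for MRI segments. This is an assumed measure for illustration, not the paper's definition.

```python
# Sketch: GLCM homogeneity on a uniform vs. a noisy region; the homogeneous
# region scores near 1.0, the heterogeneous one much lower.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(6)
smooth = np.full((64, 64), 128, dtype=np.uint8)              # homogeneous segment
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # heterogeneous segment

for name, region in [("smooth", smooth), ("noisy", noisy)]:
    glcm = graycomatrix(region, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    h = graycoprops(glcm, "homogeneity")[0, 0]
    print(f"{name} region homogeneity: {h:.3f}")
```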

Synthetic data generation method improves risk prediction model for early tumor recurrence after surgery in patients with pancreatic cancer.

Jeong H, Lee JM, Kim HS, Chae H, Yoon SJ, Shin SH, Han IW, Heo JS, Min JH, Hyun SH, Kim H

PubMed · Aug 29, 2025
Pancreatic cancer is aggressive with high recurrence rates, necessitating accurate prediction models for effective treatment planning, particularly for neoadjuvant chemotherapy or upfront surgery. This study explores the use of variational autoencoder (VAE)-generated synthetic data to predict early tumor recurrence (within six months) in pancreatic cancer patients who underwent upfront surgery. Preoperative data from 158 patients treated between January 2021 and December 2022 were analyzed, and machine learning models, including Logistic Regression, Random Forest (RF), Gradient Boosting Machine (GBM), and Deep Neural Networks (DNN), were trained on both the original and synthetic datasets. The VAE-generated dataset (n = 94) closely matched the original data (p > 0.05) and enhanced model performance, improving accuracy (GBM: 0.81 to 0.87; RF: 0.84 to 0.87) and sensitivity (GBM: 0.73 to 0.91; RF: 0.82 to 0.91). PET/CT-derived metabolic parameters were the strongest predictors, accounting for 54.7% of the model's predictive power, with the maximum standardized uptake value (SUVmax) showing the highest importance (0.182, 95% CI: 0.165-0.199). This study demonstrates that synthetic data can significantly enhance predictive models for pancreatic cancer recurrence, especially in data-limited scenarios, offering a promising strategy for oncology prediction models.
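
A hedged sketch of VAE-based synthetic tabular data generation as described: train on the real feature table, then decode draws from the latent prior to create synthetic patients. Feature count, architecture, and training budget are illustrative; only the cohort sizes (158 real, 94 synthetic) come from the abstract.

```python
# Sketch: tabular VAE trained on a small cohort, then sampled for synthetic rows.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_feat = 20                               # stand-in for preoperative variables
real = torch.randn(158, n_feat)           # placeholder for the real cohort table

class TabVAE(nn.Module):
    def __init__(self, d, z=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, 32), nn.ReLU())
        self.mu, self.logvar = nn.Linear(32, z), nn.Linear(32, z)
        self.dec = nn.Sequential(nn.Linear(z, 32), nn.ReLU(), nn.Linear(32, d))

    def forward(self, x):
        h = self.enc(x)
        mu, lv = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * lv)  # reparameterize
        return self.dec(z), mu, lv

vae = TabVAE(n_feat)
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for _ in range(500):                      # brief full-batch ELBO training
    opt.zero_grad()
    rec, mu, lv = vae(real)
    kl = -0.5 * torch.mean(1 + lv - mu.pow(2) - lv.exp())
    (F.mse_loss(rec, real) + kl).backward()
    opt.step()

# Sample synthetic patients by decoding draws from the latent prior.
with torch.no_grad():
    synthetic = vae.dec(torch.randn(94, 8))   # n = 94 as in the study
print(synthetic.shape)                        # torch.Size([94, 20])
```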

Deep Learning Radiomics Model Based on Computed Tomography Image for Predicting the Classification of Osteoporotic Vertebral Fractures: Algorithm Development and Validation.

Liu J, Zhang L, Yuan Y, Tang J, Liu Y, Xia L, Zhang J

PubMed · Aug 29, 2025
Osteoporotic vertebral fractures (OVFs) are common in older adults and often lead to disability if not properly diagnosed and classified. With the increased use of computed tomography (CT) imaging and the development of radiomics and deep learning technologies, there is potential to improve the classification accuracy of OVFs. This study aims to evaluate the efficacy of a deep learning radiomics model, derived from CT imaging, in accurately classifying OVFs. The study analyzed 981 patients (aged 50-95 years; 687 women, 294 men), involving 1098 vertebrae, from 3 medical centers who underwent both CT and magnetic resonance imaging examinations. The Assessment System of Thoracolumbar Osteoporotic Fractures (ASTLOF) classified OVFs into Classes 0, 1, and 2. The data were categorized into 4 cohorts: training (n=750), internal validation (n=187), external validation (n=110), and prospective validation (n=51). Deep transfer learning used the ResNet-50 architecture, pretrained on RadImageNet and ImageNet, to extract imaging features. Deep transfer learning-based features were combined with radiomics features and refined using Least Absolute Shrinkage and Selection Operator (LASSO) regression. The performance of 8 machine learning classifiers for OVF classification was assessed using receiver operating characteristic metrics and the "One-vs-Rest" approach. Performance comparisons between the RadImageNet- and ImageNet-based models were performed using the DeLong test. Shapley Additive Explanations (SHAP) analysis was used to interpret feature importance and the predictive rationale of the optimal fusion model. Feature selection and fusion yielded 33 and 54 fused features for the RadImageNet- and ImageNet-based models, respectively, following pretraining on the training set. The best-performing machine learning algorithms for these 2 deep learning radiomics models were the multilayer perceptron and the Light Gradient Boosting Machine (LightGBM). The macro-average area under the curve (AUC) values for the fused models based on RadImageNet and ImageNet were 0.934 and 0.996, respectively, with the DeLong test showing no statistically significant difference (P=.234). The RadImageNet-based model significantly surpassed the ImageNet-based model across the internal, external, and prospective validation sets, with macro-average AUCs of 0.837 versus 0.648, 0.773 versus 0.633, and 0.852 versus 0.648, respectively (P<.05). Using the binary "One-vs-Rest" approach, the RadImageNet-based fused model achieved superior predictive performance for Class 2 (AUC=0.907, 95% CI 0.805-0.999), with Classes 0 and 1 following (AUC/accuracy=0.829/0.803 and 0.794/0.768, respectively). SHAP analysis provided a visualization of feature importance in the RadImageNet-based fused model, highlighting the 3 most influential features: cluster shade, mean, and large area low gray level emphasis, and their respective impacts on predictions. The RadImageNet-based fused model using CT imaging data exhibited superior predictive performance compared to the ImageNet-based model, demonstrating significant utility in OVF classification and aiding clinical decision-making for treatment planning. Among the 3 classes, the model performed best in identifying Class 2, followed by Class 0 and Class 1.
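
The deep-transfer-learning feature step can be sketched as below, with an ImageNet-pretrained ResNet-50 standing in (RadImageNet weights would need to be loaded separately), deep features fused with a radiomics table, and LASSO pruning the fused set. Image patches, the radiomics table, and the regression target are placeholders.

```python
# Sketch: pretrained ResNet-50 feature extraction + fusion + LASSO selection.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import LassoCV

# ImageNet weights as a stand-in; RadImageNet weights are distributed separately.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
resnet.fc = torch.nn.Identity()           # expose the 2048-d pooled features
resnet.eval()

imgs = torch.randn(16, 3, 224, 224)       # stand-in for vertebral CT patches
with torch.no_grad():
    deep_feats = resnet(imgs).numpy()     # shape (16, 2048)

radiomics = np.random.default_rng(7).normal(size=(16, 100))  # placeholder table
fused = np.hstack([deep_feats, radiomics])                   # feature fusion
y = np.random.default_rng(8).normal(size=16)                 # placeholder target

lasso = LassoCV(cv=4).fit(fused, y)       # LASSO keeps a sparse fused subset
kept = np.flatnonzero(lasso.coef_)
print(f"features retained: {kept.size} of {fused.shape[1]}")
```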