Preoperative prediction of posthepatectomy liver failure after surgery for hepatocellular carcinoma on CT scan by machine learning and radiomics analyses.

Famularo S, Maino C, Milana F, Ardito F, Rompianesi G, Ciulli C, Conci S, Gallotti A, La Barba G, Romano M, De Angelis M, Patauner S, Penzo C, De Rose AM, Marescaux J, Diana M, Ippolito D, Frena A, Boccia L, Zanus G, Ercolani G, Maestri M, Grazi GL, Ruzzenente A, Romano F, Troisi RI, Giuliante F, Donadon M, Torzilli G

PubMed · Jul 1, 2025
No instruments are available to preoperatively predict the risk of posthepatectomy liver failure (PHLF) in HCC patients. The aim was to predict the occurrence of PHLF preoperatively from radiomics and clinical data through machine-learning algorithms. Clinical data and three-phase CT scans were retrospectively collected from 13 Italian centres between 2008 and 2022. Radiomics features were extracted from the non-tumoral liver area. Data were split into training (70%) and test (30%) sets. Oversampling (ADASYN) was applied to the training set. Random forest (RF), extreme gradient boosting (XGB), and support vector machine (SVM) models were fitted to predict PHLF. Final metrics were evaluated on the test set. The best models were included in an averaging ensemble model (AEM). Five hundred consecutive preoperative CT scans were collected together with the corresponding clinical data. Of these patients, 17 (3.4%) experienced PHLF. Two hundred sixteen radiomics features per patient were extracted. PCA selected 19 dimensions explaining >75% of the variance. Associated clinical variables were size, macrovascular invasion, cirrhosis, major resection, and MELD score. Data were split into a training cohort (70%, n = 351) and a test cohort (30%, n = 149). The RF model obtained an AUC of 89.1% (specificity = 70.1%, sensitivity = 100%, accuracy = 71.1%, PPV = 10.4%, NPV = 100%). The XGB model showed an AUC of 89.4% (specificity = 100%, sensitivity = 20.0%, accuracy = 97.3%, PPV = 20%, NPV = 97.3%). The AEM combined the XGB and RF models, obtaining an AUC of 90.1% (specificity = 89.5%, sensitivity = 80.0%, accuracy = 89.2%, PPV = 21.0%, NPV = 99.2%). The AEM obtained the best results in terms of discrimination and true-positive identification, which could help to better identify patients fit or unfit for liver resection.
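
Read as code, the pipeline above reduces to a few standard steps. Below is a minimal sketch (synthetic data; hyperparameters and feature counts are illustrative assumptions, not the authors' exact configuration) of ADASYN oversampling on the training set, RF and XGB base models, and a probability-averaging ensemble:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import ADASYN
from xgboost import XGBClassifier

# Toy stand-in for the radiomics + clinical feature matrix (PHLF is rare, ~3%).
X, y = make_classification(n_samples=500, n_features=24, weights=[0.97],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Oversample the minority class in the training set only.
X_res, y_res = ADASYN(random_state=0).fit_resample(X_tr, y_tr)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_res, y_res)
xgb = XGBClassifier(n_estimators=300, eval_metric="logloss").fit(X_res, y_res)

# Averaging ensemble model (AEM): mean of the two predicted probabilities.
p_avg = (rf.predict_proba(X_te)[:, 1] + xgb.predict_proba(X_te)[:, 1]) / 2
print("AEM AUC:", roc_auc_score(y_te, p_avg))
```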

Does alignment alone predict mechanical complications after adult spinal deformity surgery? A machine learning comparison of alignment, bone quality, and soft tissue.

Sundrani S, Doss DJ, Johnson GW, Jain H, Zakieh O, Wegner AM, Lugo-Pico JG, Abtahi AM, Stephens BF, Zuckerman SL

PubMed · Jul 1, 2025
Mechanical complications are a vexing occurrence after adult spinal deformity (ASD) surgery. While achieving ideal spinal alignment in ASD surgery is critical, alignment alone may not fully explain all mechanical complications. The authors sought to determine which combination of inputs produced the most sensitive and specific machine learning model to predict mechanical complications using postoperative alignment, bone quality, and soft tissue data. A retrospective cohort study was performed in patients undergoing ASD surgery from 2009 to 2021. Inclusion criteria were a fusion ≥ 5 levels, sagittal/coronal deformity, and at least 2 years of follow-up. The primary exposure variables were 1) alignment, evaluated in both the sagittal and coronal planes using the L1-pelvic angle ± 3°, L4-S1 lordosis, sagittal vertical axis, pelvic tilt, and coronal vertical axis; 2) bone quality, evaluated by the T-score from a dual-energy x-ray absorptiometry scan; and 3) soft tissue, evaluated by the paraspinal muscle-to-vertebral body ratio and fatty infiltration. The primary outcome was mechanical complications. Seven machine learning models, covering all combinations of the three domains (alignment, bone quality, and soft tissue) alongside demographic data, were trained, and the positive predictive value (PPV) was calculated for each model (see the sketch below). Of 231 patients (24% male) undergoing ASD surgery with a mean age of 64 ± 17 years, 147 (64%) developed at least one mechanical complication. The model with alignment alone performed poorly, with a PPV of 0.85. However, the model with alignment, bone quality, and soft tissue achieved a higher PPV of 0.90, with a sensitivity of 0.67 and a specificity of 0.84. Moreover, the model with alignment alone failed to predict 15 of every 100 complications, whereas the model with all three domains failed to predict only 10 of 100. These results support the notion that not every mechanical failure is explained by alignment alone. The authors found that a combination of alignment, bone quality, and soft tissue provided the most accurate prediction of mechanical complications after ASD surgery. While achieving optimal alignment is essential, additional data including bone and soft tissue are necessary to minimize mechanical complications.
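
For readers who want the shape of this comparison, here is a hedged sketch that trains one classifier per combination of feature domains and reports PPV. The column groupings, the logistic-regression learner, and the synthetic data are illustrative assumptions, not the authors' exact setup:

```python
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score  # precision == PPV

X, y = make_classification(n_samples=231, n_features=9, random_state=0)
domains = {"alignment": [0, 1, 2, 3, 4],   # e.g., LPA, L4-S1 lordosis, SVA, PT, CVA
           "bone": [5],                    # e.g., DXA T-score
           "soft_tissue": [6, 7, 8]}       # e.g., muscle ratio, fatty infiltration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Seven models: every non-empty combination of the three domains.
for r in range(1, 4):
    for combo in combinations(domains, r):
        cols = sum((domains[d] for d in combo), [])
        clf = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
        ppv = precision_score(y_te, clf.predict(X_te[:, cols]))
        print(f"{'+'.join(combo):35s} PPV = {ppv:.2f}")
```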

Muscle-driven prognostication in gastric cancer: a multicenter deep learning framework integrating iliopsoas and erector spinae radiomics for 5-year survival prediction.

Hong Y, Zhang P, Teng Z, Cheng K, Zhang Z, Cheng Y, Cao G, Chen B

PubMed · Jul 1, 2025
This study developed a 5-year survival prediction model for gastric cancer patients by combining radiomics and deep learning, focusing on CT-based 2D and 3D features of the iliopsoas and erector spinae muscles. Retrospective data from 705 patients across two centers were analyzed, with clinical variables assessed via Cox regression and radiomic features extracted using deep learning. The 2D model outperformed the 3D approach, so features across five dimensions were fused, with the fusion optimized via logistic regression. Results showed no significant association between clinical baseline characteristics and survival, but the 2D model demonstrated strong prognostic performance (AUC ~ 0.8), with attention heatmaps emphasizing spinal muscle regions. The 3D model underperformed due to the inclusion of irrelevant data. The final integrated model achieved stable predictive accuracy, confirming the link between muscle mass and survival. This approach advances precision medicine by enabling personalized prognosis and exploring the feasibility of 3D imaging, offering insights for gastric cancer research.
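
The fusion step described above can be illustrated with a small late-fusion sketch: per-branch risk scores (stand-ins for the study's five feature dimensions) are combined by a logistic-regression meta-model. All data here are synthetic placeholders, not the study's features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 705)  # toy 5-year survival labels

# Five branch scores per patient, each weakly correlated with the label.
scores = np.column_stack([y + rng.normal(0, s, 705)
                          for s in (1.5, 1.8, 2.0, 2.2, 2.5)])

# Logistic regression learns the fusion weights on a training split.
fusion = LogisticRegression(max_iter=1000).fit(scores[:500], y[:500])
print("fused AUC:", roc_auc_score(y[500:],
                                  fusion.predict_proba(scores[500:])[:, 1]))
```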

2.5D deep learning radiomics and clinical data for predicting occult lymph node metastasis in lung adenocarcinoma.

Huang X, Huang X, Wang K, Bai H, Lu X, Jin G

PubMed · Jul 1, 2025
Occult lymph node metastasis (OLNM) refers to lymph node involvement that remains undetectable by conventional imaging techniques, posing a significant challenge in the accurate staging of lung adenocarcinoma. This study investigates the potential of combining 2.5D deep learning radiomics with clinical data to predict OLNM in lung adenocarcinoma. Retrospective contrast-enhanced CT images were collected from 1,099 patients diagnosed with lung adenocarcinoma across two centers. Multivariable analysis was performed to identify independent clinical risk factors for constructing clinical signatures. Radiomics features were extracted from the enhanced CT images to develop radiomics signatures. A 2.5D deep learning approach was used to extract deep learning features from the images, which were then aggregated using multi-instance learning (MIL) to construct MIL signatures. Deep learning radiomics (DLRad) signatures were developed by integrating the deep learning features with radiomic features; these were subsequently combined with clinical features to form the combined signatures. The performance of the resulting signatures was evaluated using the area under the curve (AUC). The clinical model achieved AUCs of 0.903, 0.866, and 0.785 in the training, validation, and external test cohorts, respectively. In the same cohorts, the radiomics model yielded AUCs of 0.865, 0.892, and 0.796; the MIL model, 0.903, 0.900, and 0.852; and the DLRad model, 0.910, 0.908, and 0.875. Notably, the combined model consistently outperformed all other models, achieving AUCs of 0.940, 0.923, and 0.898 in the training, validation, and external test cohorts. The integration of 2.5D deep learning radiomics with clinical data therefore demonstrates strong capability for predicting OLNM in lung adenocarcinoma, potentially aiding clinicians in developing more personalized treatment strategies.
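
The MIL aggregation step can be sketched in a few lines of PyTorch. The attention-based pooling below is one common MIL aggregator (after Ilse et al., 2018), chosen for illustration; the paper does not specify this exact design, and the feature dimension and slice count are assumptions:

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Pool a bag of per-slice features into one bag-level prediction."""
    def __init__(self, in_dim=512, hid=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, hid), nn.Tanh(),
                                  nn.Linear(hid, 1))
        self.head = nn.Linear(in_dim, 1)

    def forward(self, bag):                        # bag: (n_slices, in_dim)
        w = torch.softmax(self.attn(bag), dim=0)   # per-slice attention weights
        z = (w * bag).sum(dim=0)                   # weighted bag embedding
        return torch.sigmoid(self.head(z))         # OLNM probability

bag = torch.randn(9, 512)  # e.g., 2.5D features from 9 slices around the nodule
print(AttentionMIL()(bag))
```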

Deep learning for gender estimation using hand radiographs: a comparative evaluation of CNN models.

Ulubaba HE, Atik İ, Çiftçi R, Eken Ö, Aldhahi MI

PubMed · Jul 1, 2025
Accurate gender estimation plays a crucial role in forensic identification, especially in mass disasters or cases involving fragmented or decomposed remains where traditional skeletal landmarks are unavailable. This study aimed to develop a deep learning-based model for gender classification using hand radiographs, offering a rapid and objective alternative to conventional methods. We analyzed 470 left-hand X-ray images from adults aged 18 to 65 years using four convolutional neural network (CNN) architectures: ResNet-18, ResNet-50, InceptionV3, and EfficientNet-B0. Following image preprocessing and data augmentation, models were trained and validated using standard classification metrics: accuracy, precision, recall, and F1 score. Data augmentation included random rotation, horizontal flipping, and brightness adjustments to enhance model generalization. Among the tested models, ResNet-50 achieved the highest classification accuracy (93.2%) with precision of 92.4%, recall of 93.3%, and F1 score of 92.5%. While other models demonstrated acceptable performance, ResNet-50 consistently outperformed them across all metrics. These findings suggest CNNs can reliably extract sexually dimorphic features from hand radiographs. Deep learning approaches, particularly ResNet-50, provide a robust, scalable, and efficient solution for gender prediction from hand X-ray images. This method may serve as a valuable tool in forensic scenarios where speed and reliability are critical. Future research should validate these findings across diverse populations and incorporate explainable AI techniques to enhance interpretability.
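
As a rough illustration of the training setup, here is a minimal fine-tuning sketch: ResNet-50 with the stated augmentations (rotation, horizontal flipping, brightness jitter) and a two-class head. The augmentation magnitudes, learning rate, and preprocessing are assumptions, not the paper's reported configuration:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

train_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # radiographs are single-channel
    transforms.Resize((224, 224)),
    transforms.RandomRotation(10),                # random rotation
    transforms.RandomHorizontalFlip(),            # horizontal flipping
    transforms.ColorJitter(brightness=0.2),       # brightness adjustment
    transforms.ToTensor(),
])

# ImageNet-pretrained backbone with a new binary classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)     # two classes: male / female
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

logits = model(torch.randn(4, 3, 224, 224))       # dummy batch -> (4, 2) logits
print(logits.shape)
```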

Computed tomography-based radiomics predicts prognostic and treatment-related levels of immune infiltration in the immune microenvironment of clear cell renal cell carcinoma.

Song S, Ge W, Qi X, Che X, Wang Q, Wu G

PubMed · Jul 1, 2025
The composition of the tumour microenvironment is very complex, and measuring the extent of immune cell infiltration can provide an important guide to clinically significant treatments for cancer, such as immune checkpoint inhibition and targeted therapy. We used multiple machine learning (ML) models to predict differences in immune infiltration in clear cell renal cell carcinoma (ccRCC), with computed tomography (CT) images serving as the model input. We also statistically analysed and compared the results of multiple classification models to identify a better non-invasive and convenient prediction method for ccRCC patients. The study included 539 ccRCC samples with clinicopathological and genetic information from The Cancer Genome Atlas (TCGA) database. The single-sample gene set enrichment analysis (ssGSEA) algorithm was used to obtain immune cell infiltration levels and cluster analysis results, and the Boruta algorithm was then used to reduce the dimensionality of the resulting positive/negative gene sets and derive immune infiltration level groupings. Multifactor Cox regression analysis was used for survival analysis, and the Tumor Immune Dysfunction and Exclusion (TIDE) and subgraph algorithms were used to detect differences in survival time and immunotherapy response between ccRCC patients with different levels of immune infiltration. Radiomics features were screened using LASSO analysis. Eight ML algorithms were compared for diagnostic analysis of the test set. Receiver operating characteristic (ROC) curves were used to evaluate model performance, and decision curve analysis (DCA) was used to evaluate the clinical value of the predictive model for personalized medicine. The high/low immune infiltration subtypes obtained with the Boruta algorithm differed significantly in the survival analysis of ccRCC patients. Immune infiltration level combined with clinical factors better predicted the survival of ccRCC patients, and ccRCC with high immune infiltration may benefit more from anti-PD-1 therapy. Among the eight machine learning models, ExtraTrees had the highest training and test set ROC AUCs (1.000 and 0.753, respectively); in the test set, LR and LightGBM had the highest sensitivity (0.615); LR, SVM, ExtraTrees, LightGBM, and MLP had higher specificities (0.789, 1.000, 0.842, 0.789, and 0.789, respectively); and LR, ExtraTrees, and LightGBM had the highest accuracies (0.719, 0.688, and 0.719, respectively). CT-based ML therefore achieved good results in predicting immune infiltration in ccRCC, with ExtraTrees being the optimal algorithm. A radiomics model based on renal CT images can noninvasively predict the immune infiltration level of ccRCC and, combined with clinical information, can be used to build nomograms predicting overall survival and responsiveness to ICI therapy, findings that may be useful for stratifying the prognosis of ccRCC patients and guiding clinicians in developing individualized treatment regimens.
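
The radiomics modelling step (LASSO screening followed by an ExtraTrees classifier evaluated by ROC AUC) can be sketched as follows; the synthetic feature matrix and hyperparameters are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LassoCV
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import roc_auc_score

# Stand-in for the CT radiomics matrix and high/low infiltration labels.
X, y = make_classification(n_samples=539, n_features=100, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# LASSO screening: keep features with nonzero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
keep = np.flatnonzero(lasso.coef_)

clf = ExtraTreesClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr[:, keep], y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te[:, keep])[:, 1]))
```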

Attention-driven hybrid deep learning and SVM model for early Alzheimer's diagnosis using neuroimaging fusion.

Paduvilan AK, Livingston GAL, Kuppuchamy SK, Dhanaraj RK, Subramanian M, Al-Rasheed A, Getahun M, Soufiene BO

PubMed · Jul 1, 2025
Alzheimer's Disease (AD) poses a significant global health challenge, necessitating early and accurate diagnosis to enable timely interventions. AD is a progressive neurodegenerative disorder that affects millions worldwide and is one of the leading causes of cognitive impairment in older adults. Early diagnosis is critical for enabling effective treatment strategies, slowing disease progression, and improving the quality of life for patients. Existing diagnostic methods often struggle with limited sensitivity, overfitting, and reduced reliability due to inadequate feature extraction, imbalanced datasets, and suboptimal model architectures. This study addresses these gaps by introducing an innovative methodology that combines SVM with deep learning (DL) to improve the classification performance for AD: deep learning models extract high-level imaging features, which are then passed to SVM classifiers in a late-fusion ensemble. This hybrid design leverages deep representations for pattern recognition and the SVM's robustness on small sample sets, providing a tool for early-stage identification of possible cases by precisely classifying the disease from neuroimaging data, thereby enhancing management and treatment options. The approach integrates advanced data pre-processing, dynamic feature optimization, and attention-driven learning mechanisms to enhance interpretability and robustness. The research leverages a dataset of MRI and PET imaging, integrating novel fusion techniques to extract key biomarkers indicative of cognitive decline. Unlike prior approaches, this method effectively mitigates the challenges of data sparsity and dimensionality reduction while improving generalization across diverse datasets. Comparative analysis highlights a 15% improvement in accuracy, a 12% reduction in false positives, and a 10% increase in F1-score against state-of-the-art models such as HNC and MFNNC. The proposed method significantly outperforms existing techniques across metrics such as accuracy, sensitivity, specificity, and computational efficiency, achieving an overall accuracy of 98.5%.
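
A minimal sketch of the hybrid design, under stated assumptions: a CNN backbone (here an untrained ResNet-18, standing in for whatever feature extractor the authors used) supplies deep features that an RBF-kernel SVM then classifies. The images and labels are random placeholders:

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()              # expose the 512-d penultimate features
backbone.eval()

with torch.no_grad():
    imgs = torch.randn(32, 3, 224, 224)  # stand-in fused MRI/PET slices
    feats = backbone(imgs).numpy()       # (32, 512) deep feature vectors

labels = torch.randint(0, 2, (32,)).numpy()  # toy AD / non-AD labels

# SVM classifies the deep features (the late-fusion stage).
svm = SVC(kernel="rbf", probability=True).fit(feats, labels)
print(svm.predict(feats[:4]))
```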

Synthetic Versus Classic Data Augmentation: Impacts on Breast Ultrasound Image Classification.

Medghalchi Y, Zakariaei N, Rahmim A, Hacihaliloglu I

PubMed · Jul 1, 2025
The effectiveness of deep neural networks (DNNs) for ultrasound image analysis depends on the availability and accuracy of the training data. However, large-scale data collection and annotation, particularly in medical fields, is often costly and time-consuming, especially when healthcare professionals are already burdened with their clinical responsibilities. Ensuring that a model remains robust across different imaging conditions, such as variations in ultrasound devices and manual transducer operation, is crucial in ultrasound image analysis. Data augmentation is a widely used solution, as it increases both the size and diversity of datasets, thereby enhancing the generalization performance of DNNs. With the advent of generative networks such as generative adversarial networks (GANs) and diffusion-based models, synthetic data generation has emerged as a promising augmentation technique. However, comprehensive studies comparing classic and generative augmentation methods are lacking, particularly in ultrasound-based breast cancer imaging, where variability in breast density, tumor morphology, and operator skill poses significant challenges. This study aims to compare the effectiveness of classic and generative network-based data augmentation techniques in improving the performance and robustness of breast ultrasound image classification models. Specifically, we seek to determine whether the computational intensity of generative networks is justified in data augmentation. This analysis will provide valuable insights into the role and benefits of each technique in enhancing the diagnostic accuracy of DNNs for breast cancer diagnosis. The code for this work will be available at: https://github.com/yasamin-med/SCDA.git.
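
For concreteness, the "classic" augmentation arm discussed above typically looks like the transform pipeline below; the generative arm would instead inject GAN- or diffusion-sampled images into the training set. Parameter values are illustrative assumptions, not the study's configuration:

```python
from torchvision import transforms

# Classic augmentation: geometric and photometric perturbations of real images.
classic_aug = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```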

Dual-Modality Virtual Biopsy System Integrating MRI and MG for Noninvasive Prediction of HER2 Status in Breast Cancer.

Wang Q, Zhang ZQ, Huang CC, Xue HW, Zhang H, Bo F, Guan WT, Zhou W, Bai GJ

PubMed · Jul 1, 2025
Accurate determination of human epidermal growth factor receptor 2 (HER2) expression is critical for guiding targeted therapy in breast cancer. This study aimed to develop and validate a deep learning (DL)-based decision-making visual biomarker system (DM-VBS) for predicting HER2 status using radiomics and DL features derived from magnetic resonance imaging (MRI) and mammography (MG). Radiomics features were extracted from MRI, and DL features were derived from MG. Four submodels were constructed: Model I (MRI-radiomics) and Model III (mammography-DL) for distinguishing HER2-zero/low from HER2-positive cases, and Model II (MRI-radiomics) and Model IV (mammography-DL) for differentiating HER2-zero from HER2-low/positive cases. These submodels were integrated into an XGBoost model for ternary classification of HER2 status. Radiologists assessed imaging features associated with HER2 expression, and model performance was validated using two independent datasets from The Cancer Imaging Archive. A total of 550 patients were divided into training, internal validation, and external validation cohorts. Models I and III achieved an area under the curve (AUC) of 0.800-0.850 for distinguishing HER2-zero/low from HER2-positive cases, while Models II and IV demonstrated AUC values of 0.793-0.847 for differentiating HER2-zero from HER2-low/positive cases. The DM-VBS achieved average accuracies of 85.42%, 80.4%, and 89.68% for HER2-zero, -low, and -positive patients in the validation cohorts, respectively. Imaging features such as lesion size, number of lesions, enhancement type, and microcalcifications differed significantly across HER2 statuses, except between the HER2-zero and -low groups. The DM-VBS can predict HER2 status and assist clinicians in making treatment decisions for breast cancer.
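
The integration step can be sketched as follows: the four submodels' output probabilities feed an XGBoost classifier for three-class HER2 status (zero/low/positive). The submodel outputs here are synthetic placeholders, and the hyperparameters are assumptions:

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
sub_probs = rng.random((550, 4))   # toy stand-ins for Models I-IV probabilities
her2 = rng.integers(0, 3, 550)     # 0 = HER2-zero, 1 = HER2-low, 2 = HER2-positive

# XGBoost detects the three classes and performs ternary classification.
clf = XGBClassifier(n_estimators=200, eval_metric="mlogloss")
clf.fit(sub_probs[:400], her2[:400])
print(clf.predict_proba(sub_probs[400:405]))  # per-class probabilities
```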

Integrated brain connectivity analysis with fMRI, DTI, and sMRI powered by interpretable graph neural networks.

Qu G, Zhou Z, Calhoun VD, Zhang A, Wang YP

PubMed · Jul 1, 2025
Multimodal neuroimaging data modeling has become a widely used approach but confronts considerable challenges due to data heterogeneity, which encompasses variability in data types, scales, and formats across modalities. This variability necessitates advanced computational methods to integrate and interpret diverse datasets within a cohesive analytical framework. In our research, we combine functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and structural MRI (sMRI) for joint analysis. This integration capitalizes on the unique strengths of each modality and their inherent interconnections, aiming for a comprehensive understanding of the brain's connectivity and anatomical characteristics. Utilizing the Glasser atlas for parcellation, we integrate imaging-derived features from multiple modalities - functional connectivity from fMRI, structural connectivity from DTI, and anatomical features from sMRI - within consistent regions. Our approach incorporates a masking strategy to differentially weight neural connections, thereby facilitating an amalgamation of multimodal imaging data. This technique enhances interpretability at the connectivity level, transcending traditional analyses centered on singular regional attributes. The model is applied to the Human Connectome Project's Development study to elucidate the associations between multimodal imaging and cognitive functions throughout youth. The analysis demonstrates improved prediction accuracy and uncovers crucial anatomical features and neural connections, deepening our understanding of brain structure and function. This study not only advances multimodal neuroimaging analytics by offering a novel method for the integrative analysis of diverse imaging modalities but also improves our understanding of the intricate relationships between the brain's structural and functional networks and cognitive development.
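
The masking strategy can be illustrated with a minimal PyTorch layer: a learnable mask differentially weights the connections of a fused connectivity matrix before a simple graph-convolution step. The 360-region dimension follows the Glasser atlas; everything else (feature dimension, activation, single-layer design) is an illustrative assumption, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class MaskedGCNLayer(nn.Module):
    def __init__(self, n_nodes=360, in_dim=3, out_dim=16):
        super().__init__()
        # Learnable mask: one weight per connection between regions.
        self.mask = nn.Parameter(torch.ones(n_nodes, n_nodes))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # adj: fused fMRI/DTI connectivity matrix, (n_nodes, n_nodes)
        # x:   per-region features, e.g., sMRI anatomical measures, (n_nodes, in_dim)
        a = torch.sigmoid(self.mask) * adj   # differentially weight connections
        return torch.relu(a @ self.lin(x))   # one propagation step

adj = torch.rand(360, 360)
x = torch.randn(360, 3)
print(MaskedGCNLayer()(adj, x).shape)        # -> torch.Size([360, 16])
```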