A Novel Visual Model for Predicting Prognosis of Resected Hepatoblastoma: A Multicenter Study.

He Y, An C, Dong K, Lyu Z, Qin S, Tan K, Hao X, Zhu C, Xiu W, Hu B, Xia N, Wang C, Dong Q

PubMed · Jul 1 2025
This study aimed to evaluate the application of a contrast-enhanced CT-based visual model in predicting postoperative prognosis in patients with hepatoblastoma (HB). We analyzed data from 224 patients across three centers (178 in the training cohort, 46 in the validation cohort). Visual features were extracted from contrast-enhanced CT images, and key features, along with clinicopathological data, were identified using LASSO Cox regression. Visual (DINOv2_score) and clinical (Clinical_score) models were developed, and a combined model integrating DINOv2_score and clinical risk factors was constructed. Nomograms were created for personalized risk assessment, with calibration curves and decision curve analysis (DCA) used to evaluate model performance. The DINOv2_score was recognized as a key prognostic indicator for HB. In both the training and validation cohorts, the combined model demonstrated superior performance in predicting disease-free survival (DFS) [C-index (95% CI): 0.886 (0.879-0.895) and 0.873 (0.837-0.909), respectively] and overall survival (OS) [C-index (95% CI): 0.887 (0.877-0.897) and 0.882 (0.858-0.906), respectively]. Calibration curves showed strong alignment between predicted and observed outcomes, while DCA demonstrated that the combined model provided greater clinical net benefit than the clinical or visual models alone across a range of threshold probabilities. The contrast-enhanced CT-based visual model serves as an effective tool for predicting postoperative prognosis in HB patients. The combined model, integrating the DINOv2_score and clinical risk factors, demonstrated superior performance in survival prediction, offering more precise guidance for personalized treatment strategies.
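The LASSO Cox feature-selection step described above can be illustrated with an L1-penalized Cox model; the minimal sketch below uses lifelines, and the file name, clinical feature columns, and penalty strength are hypothetical placeholders rather than the study's actual pipeline.

```python
# Sketch: L1-penalized (LASSO-like) Cox regression for feature selection,
# roughly analogous to the LASSO Cox step described in the abstract.
# File name, clinical columns, and penalty strength are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

# df is assumed to hold one row per patient: candidate features plus
# follow-up time (months) and the event indicator (1 = recurrence/death).
df = pd.read_csv("hb_training_cohort.csv")  # hypothetical file

features = ["DINOv2_score", "AFP_level", "PRETEXT_stage", "tumor_diameter"]
cph = CoxPHFitter(penalizer=0.5, l1_ratio=1.0)  # l1_ratio=1.0 -> pure LASSO penalty
cph.fit(df[features + ["time", "event"]], duration_col="time", event_col="event")

# Features whose penalized coefficients shrink to ~0 are dropped;
# the survivors would feed the combined model / nomogram.
selected = cph.params_[cph.params_.abs() > 1e-6].index.tolist()
print("retained features:", selected)
print("training C-index:", cph.concordance_index_)
```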

Exploring the Incremental Value of Aorta Enhancement Normalization Method in Evaluating Renal Cell Carcinoma Histological Subtypes: A Multi-center Large Cohort Study.

Huang Z, Wang L, Mei H, Liu J, Zeng H, Liu W, Yuan H, Wu K, Liu H

PubMed · Jul 1 2025
The classification of renal cell carcinoma (RCC) histological subtypes plays a crucial role in clinical diagnosis. However, traditional image normalization methods often struggle with discrepancies arising from differences in imaging parameters, scanning devices, and multi-center data, which can impact model robustness and generalizability. This study included 1628 patients with pathologically confirmed RCC who underwent nephrectomy across eight cohorts. These were divided into a training set, a validation set, external test dataset 1, and external test dataset 2. We proposed an "Aortic Enhancement Normalization" (AEN) method based on the lesion-to-aorta enhancement ratio and developed an automated lesion segmentation model along with a multi-scale CT feature extractor. Several machine learning algorithms, including Random Forest, LightGBM, CatBoost, and XGBoost, were used to build classification models and compare the performance of the AEN and traditional approaches for evaluating histological subtypes (clear cell renal cell carcinoma [ccRCC] vs. non-ccRCC). Additionally, we employed SHAP analysis to further enhance the transparency and interpretability of the model's decisions. The experimental results demonstrated that the AEN method outperformed the traditional normalization method across all four algorithms. Specifically, with the XGBoost model, the AEN method significantly improved performance, achieving AUROC values of 0.89, 0.81, and 0.80 in the internal validation set and the two external test datasets, respectively, highlighting its superior performance and strong generalizability. SHAP analysis revealed that multi-scale CT features played a critical role in the model's decision-making process. The proposed AEN method effectively reduces the impact of imaging parameter differences, significantly improving the robustness and generalizability of histological subtype (ccRCC vs. non-ccRCC) models. This approach provides new insights for multi-center data analysis and demonstrates promising clinical applicability.
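The core of the AEN idea, expressing lesion enhancement relative to the aorta so that absolute attenuation differences between scanners and protocols cancel out, can be sketched as below; the masks, feature matrix, and classifier settings are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of aorta-based enhancement normalization: express lesion enhancement
# relative to the aorta in the same contrast phase, so absolute HU differences
# between scanners/protocols largely cancel out. Masks and features are placeholders.
import numpy as np
from xgboost import XGBClassifier

def aorta_normalized_enhancement(ct_volume, lesion_mask, aorta_mask):
    """Return the lesion-to-aorta enhancement ratio for one contrast phase."""
    lesion_hu = ct_volume[lesion_mask > 0].mean()
    aorta_hu = ct_volume[aorta_mask > 0].mean()
    return lesion_hu / aorta_hu  # dimensionless, scanner-independent ratio

# Hypothetical feature matrix: one AEN ratio per phase plus other CT features.
X = np.random.rand(200, 16)          # placeholder features
y = np.random.randint(0, 2, 200)     # 1 = ccRCC, 0 = non-ccRCC (placeholder labels)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
clf.fit(X, y)
```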

Interpretable Machine Learning Radiomics Model Predicts 5-year Recurrence-Free Survival in Non-metastatic Clear Cell Renal Cell Carcinoma: A Multicenter and Retrospective Cohort Study.

Zhang J, Huang W, Li Y, Zhang X, Chen Y, Chen S, Ming Q, Jiang Q, Xv Y

PubMed · Jul 1 2025
To develop and validate a computed tomography (CT) radiomics-based interpretable machine learning (ML) model for predicting 5-year recurrence-free survival (RFS) in non-metastatic clear cell renal cell carcinoma (ccRCC). A total of 559 patients with non-metastatic ccRCC were retrospectively enrolled from eight independent institutes between March 2013 and January 2019, and were assigned to the primary set (n=271), external test set 1 (n=216), and external test set 2 (n=72). In total, 1316 radiomics features were extracted via Pyradiomics. The least absolute shrinkage and selection operator algorithm was used for feature selection and Rad-Score construction. Patients were stratified into low and high 5-year recurrence risk groups based on the Rad-Score, followed by Kaplan-Meier analyses. Five ML models integrating the Rad-Score and clinicopathological risk factors were compared. The models' performance was evaluated via discrimination, calibration, and decision curve analysis. The most robust ML model was interpreted using the SHapley Additive exPlanations (SHAP) method. Thirteen radiomic features were retained to produce the Rad-Score, which predicted 5-year RFS with areas under the receiver operating characteristic curve (AUCs) of 0.734-0.836. Kaplan-Meier analysis showed significant survival differences based on the Rad-Score (all log-rank p values <0.05). The random forest model outperformed the other models, obtaining AUCs of 0.826 [95% confidence interval (CI): 0.766-0.879] and 0.799 (95% CI: 0.670-0.899) in external test sets 1 and 2, respectively. The SHAP analysis suggested positive associations between contributing factors and 5-year RFS status in non-metastatic ccRCC. A CT radiomics-based interpretable ML model can effectively predict 5-year RFS in non-metastatic ccRCC patients, distinguishing between low and high 5-year recurrence risks.
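The Rad-Score construction, LASSO selection over Pyradiomics features followed by a weighted sum of the surviving features, can be approximated as follows; this sketch treats 5-year recurrence as a binary label and uses placeholder data, whereas the study may have used a survival-specific formulation.

```python
# Sketch: building a Rad-Score as a weighted sum of LASSO-selected radiomics
# features. The 5-year RFS status is treated here as a binary label, and the
# feature matrix is a placeholder standing in for Pyradiomics output.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

X = np.random.rand(271, 1316)          # 1316 radiomics features (placeholder)
y = np.random.randint(0, 2, 271)       # 1 = recurrence within 5 years (placeholder)

X_std = StandardScaler().fit_transform(X)
lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=20, cv=5)
lasso.fit(X_std, y)

coef = lasso.coef_.ravel()
selected = np.flatnonzero(coef)        # indices of the surviving features
rad_score = X_std[:, selected] @ coef[selected] + lasso.intercept_[0]
print(f"{selected.size} features retained; Rad-Score range "
      f"[{rad_score.min():.2f}, {rad_score.max():.2f}]")
```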

Photoacoustic-Integrated Multimodal Approach for Colorectal Cancer Diagnosis.

Biswas S, Chohan DP, Wankhede M, Rodrigues J, Bhat G, Mathew S, Mahato KK

PubMed · Jul 1 2025
Colorectal cancer remains a major global health challenge, emphasizing the need for advanced diagnostic tools that enable early and accurate detection. Photoacoustic (PA) spectroscopy, a hybrid technique combining optical absorption with acoustic resolution, is emerging as a powerful tool in cancer diagnostics. It detects biochemical changes in biomolecules within the tumor microenvironment, aiding early identification of malignancies. Integration with modalities such as ultrasound (US), photoacoustic microscopy (PAM), and nanoparticle-enhanced imaging enables detailed mapping of tissue structure, vascularity, and molecular markers. When combined with endoscopy and machine learning (ML) for data analysis, PA technology offers real-time, minimally invasive, and highly accurate detection of colorectal tumors. This approach supports tumor classification, therapy monitoring, and detection of features such as hypoxia and tumor-associated bacteria. Recent studies integrating machine learning with PA imaging have demonstrated high diagnostic accuracy, achieving area under the curve (AUC) values up to 0.96 and classification accuracies exceeding 89%, highlighting its potential for precise, noninvasive colorectal cancer detection. Continued advancements in nanoparticle design, molecular targeting, and ML analytics position PA as a key tool for personalized colorectal cancer management.

External Validation of an Artificial Intelligence Algorithm Using Biparametric MRI and Its Simulated Integration with Conventional PI-RADS for Prostate Cancer Detection.

Belue MJ, Mukhtar V, Ram R, Gokden N, Jose J, Massey JL, Biben E, Buddha S, Langford T, Shah S, Harmon SA, Turkbey B, Aydin AM

PubMed · Jul 1 2025
The Prostate Imaging Reporting and Data System (PI-RADS) shows considerable variability in inter-reader performance. Artificial intelligence (AI) algorithms have been suggested to provide performance comparable to PI-RADS for assessing prostate cancer (PCa) risk, albeit tested in highly selected cohorts. This study aimed to assess an AI algorithm for PCa detection in a clinical practice setting and simulate integration of the AI model with PI-RADS for assessment of indeterminate PI-RADS 3 lesions. This retrospective cohort study externally validated a biparametric MRI-based AI model for PCa detection in a consecutive cohort of patients who underwent prostate MRI followed by targeted and systematic prostate biopsy at a urology clinic between January 2022 and March 2024. Radiologist interpretations followed PI-RADS v2.1, and biopsies were conducted per PI-RADS scores. The previously developed AI model provided lesion segmentations and cancer probability maps, which were compared to biopsy results. Additionally, we conducted a simulation to adjust biopsy thresholds for index PI-RADS category 3 studies, where AI predictions within these studies upgraded them to PI-RADS category 4. Among 144 patients with a median age of 70 years and a PSA density of 0.17 ng/mL/cc, the AI model's sensitivity for detection of PCa (86.6%) and clinically significant PCa (csPCa, 88.4%) was comparable to that of radiologists (85.7%, p=0.84, and 89.5%, p=0.80, respectively). The simulation combining radiologist and AI evaluations improved clinically significant PCa sensitivity by 5.8% (p=0.025). The combination of AI, PI-RADS, and PSA density provided the best diagnostic performance for csPCa (area under the curve [AUC]=0.76). The AI algorithm demonstrated PCa detection rates comparable to PI-RADS. The combination of AI with radiologist interpretation improved sensitivity and could be instrumental in the assessment of low-risk and indeterminate PI-RADS lesions. The role of AI in PCa screening remains to be further elucidated.
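The simulated PI-RADS/AI integration (upgrading indeterminate PI-RADS 3 studies to an effective category 4 when the AI probability is high, then recomputing sensitivity) can be sketched as below; the column names, threshold, and CSV file are hypothetical.

```python
# Sketch of the simulated PI-RADS/AI integration: studies whose index lesion
# is PI-RADS 3 are upgraded to an effective category 4 when the AI cancer
# probability exceeds a threshold, and csPCa sensitivity is recomputed.
# Column names and the operating threshold are hypothetical.
import pandas as pd

df = pd.read_csv("biopsy_cohort.csv")   # hypothetical per-patient table
AI_THRESHOLD = 0.40                      # placeholder operating point

upgraded = df["pirads"].where(
    ~((df["pirads"] == 3) & (df["ai_probability"] >= AI_THRESHOLD)), 4
)

def cspca_sensitivity(score, cspca, biopsy_cutoff=4):
    """Sensitivity for clinically significant PCa at a given biopsy cutoff."""
    called_positive = score >= biopsy_cutoff
    return (called_positive & cspca).sum() / cspca.sum()

cspca = df["cspca_on_biopsy"] == 1       # hypothetical ground-truth column
print("radiologist-only sensitivity:", cspca_sensitivity(df["pirads"], cspca))
print("radiologist + AI sensitivity:", cspca_sensitivity(upgraded, cspca))
```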

Ultrasound-based machine learning model to predict the risk of endometrial cancer among postmenopausal women.

Li YX, Lu Y, Song ZM, Shen YT, Lu W, Ren M

PubMed · Jul 1 2025
Current ultrasound-based screening for endometrial cancer (EC) primarily relies on endometrial thickness (ET) and morphological evaluation, which suffer from low specificity and high interobserver variability. This study aimed to develop and validate an artificial intelligence (AI)-driven diagnostic model to improve diagnostic accuracy and reduce variability. A total of 1,861 consecutive postmenopausal women were enrolled from two centers between April 2021 and April 2024. A super-resolution (SR) technique was applied to enhance image quality before feature extraction. Radiomics features were extracted using Pyradiomics, and deep learning features were derived from a convolutional neural network (CNN). Three models were developed: (1) R model: radiomics-based machine learning (ML) algorithms; (2) CNN model: image-based CNN algorithms; (3) DLR model: a hybrid model combining radiomics and deep learning features with ML algorithms. Using endometrium-level regions of interest (ROIs), the DLR model achieved the best diagnostic performance, with an area under the receiver operating characteristic curve (AUROC) of 0.893 (95% CI: 0.847-0.932), sensitivity of 0.847 (95% CI: 0.692-0.944), and specificity of 0.810 (95% CI: 0.717-0.910) in the internal testing dataset. Consistent performance was observed in the external testing dataset (AUROC 0.871, sensitivity 0.792, specificity 0.829). The DLR model consistently outperformed both the R and CNN models. Moreover, endometrium-level ROIs yielded better results than uterine-corpus-level ROIs. This study demonstrates the feasibility and clinical value of AI-enhanced ultrasound analysis for EC detection. By integrating radiomics and deep learning features with SR-based image preprocessing, our model improves diagnostic specificity, reduces false positives, and mitigates operator-dependent variability. This non-invasive approach offers a more accurate and reliable tool for EC screening in postmenopausal women. Trial registration: not applicable.
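The hybrid DLR idea, concatenating handcrafted radiomics features with CNN-derived embeddings and training a conventional classifier on the combined vector, might look roughly like the following sketch; placeholder features and labels stand in for the real extraction pipeline (Pyradiomics, the CNN backbone, and SR preprocessing are assumed to have been run upstream).

```python
# Sketch of a hybrid "DLR" model: concatenate handcrafted radiomics features
# with a CNN embedding of the same ultrasound ROI and train a conventional
# classifier on top. All arrays below are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

radiomics_feats = np.random.rand(500, 107)   # placeholder radiomics vectors
cnn_feats = np.random.rand(500, 512)         # placeholder CNN embeddings
y = np.random.randint(0, 2, 500)             # 1 = endometrial cancer (placeholder)

X = np.concatenate([radiomics_feats, cnn_feats], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```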

Multi-parametric MRI Habitat Radiomics Based on Interpretable Machine Learning for Preoperative Assessment of Microsatellite Instability in Rectal Cancer.

Wang Y, Xie B, Wang K, Zou W, Liu A, Xue Z, Liu M, Ma Y

PubMed · Jul 1 2025
This study constructed an interpretable machine learning model based on multi-parameter MRI sub-region habitat radiomics and clinicopathological features, aiming to preoperatively evaluate the microsatellite instability (MSI) status of rectal cancer (RC) patients. This retrospective study recruited 291 rectal cancer patients with pathologically confirmed MSI status and randomly divided them into a training cohort and a testing cohort at a ratio of 8:2. First, the K-means method was used for cluster analysis of tumor voxels, and sub-region radiomics features and classical radiomics features were respectively extracted from multi-parameter MRI sequences. Then, the synthetic minority over-sampling technique (SMOTE) was used to balance the sample size, and finally, the features were screened. Prediction models were established using logistic regression based on clinicopathological variables, classical radiomics features, and MSI-related sub-region radiomics features, and the contribution of each feature to the model decision was quantified by the Shapley Additive Explanations (SHAP) algorithm. The area under the curve (AUC) of the sub-region radiomics model in the training and testing groups was 0.848 and 0.8, respectively, both better than that of the classical radiomics and clinical models. The combined model performed the best, with AUCs of 0.908 and 0.863 in the training and testing groups, respectively. We developed and validated a robust combined model that integrates clinical variables, classical radiomics features, and sub-region radiomics features to accurately determine the MSI status of RC patients. We visualized the prediction process using SHAP, enabling more effective personalized treatment plans and ultimately improving RC patient survival rates.
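A minimal sketch of the habitat and balancing steps (K-means clustering of multi-parametric voxel intensities into sub-regions, followed by SMOTE over-sampling before logistic regression) is shown below; the array shapes, number of habitats, and feature matrix are illustrative assumptions.

```python
# Sketch of the habitat step: cluster tumor voxels (stacked multi-parametric
# MRI intensities) with K-means to define sub-regions, then balance the
# patient-level training set with SMOTE before logistic regression.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE

# One tumor: N voxels x M MRI sequences (e.g. T2, DWI, ADC) inside the mask.
voxels = np.random.rand(5000, 3)                     # placeholder voxel intensities
habitat_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)
# Radiomics features would then be extracted per habitat (per cluster label).

# Patient-level modelling: balance MSI vs. microsatellite-stable cases, then fit.
X = np.random.rand(232, 25)              # placeholder selected features
y = np.random.randint(0, 2, 232)         # 1 = MSI (placeholder labels)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
model = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
```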

A Contrast-Enhanced Ultrasound Cine-Based Deep Learning Model for Predicting the Response of Advanced Hepatocellular Carcinoma to Hepatic Arterial Infusion Chemotherapy Combined With Systemic Therapies.

Han X, Peng C, Ruan SM, Li L, He M, Shi M, Huang B, Luo Y, Liu J, Wen H, Wang W, Zhou J, Lu M, Chen X, Zou R, Liu Z

PubMed · Jul 1 2025
Recently, a hepatic arterial infusion chemotherapy (HAIC)-associated combination therapeutic regimen, comprising HAIC and systemic therapies (molecular targeted therapy plus immunotherapy), referred to as HAIC combination therapy, has demonstrated promising anticancer effects. Identifying individuals who may potentially benefit from HAIC combination therapy could contribute to improved treatment decision-making for patients with advanced hepatocellular carcinoma (HCC). This dual-center study was a retrospective analysis of prospectively collected data from advanced HCC patients who underwent HAIC combination therapy and pretreatment contrast-enhanced ultrasound (CEUS) evaluations from March 2019 to March 2023. Two deep learning models, AE-3DNet and 3DNet, along with a time-intensity curve-based model, were developed for predicting therapeutic responses from pretreatment CEUS cine images. Diagnostic metrics, including the area under the receiver-operating-characteristic curve (AUC), were calculated to compare the performance of the models. Survival analysis was used to assess the relationship between predicted responses and prognostic outcomes. AE-3DNet was constructed on top of 3DNet, incorporating spatiotemporal attention modules to enhance its capacity for dynamic feature extraction. A total of 326 patients were included, 243 of whom formed the internal validation cohort, which was utilized for model development and fivefold cross-validation, while the rest formed the external validation cohort. Objective response (OR) and non-objective response (non-OR) were observed in 63% (206/326) and 37% (120/326) of the participants, respectively. Among the three efficacy prediction models assessed, AE-3DNet performed best, with AUC values of 0.84 and 0.85 in the internal and external validation cohorts, respectively. AE-3DNet's predicted response survival curves closely resembled actual clinical outcomes. The deep learning model AE-3DNet, developed from pretreatment CEUS cine images, performed satisfactorily in predicting the responses of advanced HCC to HAIC combination therapy and may serve as a promising tool for guiding combined therapy and individualized treatment strategies. Trial Registration: NCT02973685.
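As a rough structural analogue (not the published AE-3DNet), a 3D convolutional backbone over CEUS cine clips with a squeeze-and-excitation style attention gate could be sketched in PyTorch as follows; layer sizes and input dimensions are arbitrary placeholders.

```python
# Minimal PyTorch sketch: a 3D convolutional backbone over CEUS cine clips
# (time treated as the depth axis) with a squeeze-and-excitation style
# channel-attention gate. Illustrative only; not the published AE-3DNet.
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)            # squeeze over (T, H, W)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (B, C, T, H, W)
        w = self.fc(self.pool(x).flatten(1)).view(x.size(0), -1, 1, 1, 1)
        return x * w                                   # re-weight channels

class TinyCine3DNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16),
            nn.ReLU(inplace=True), nn.MaxPool3d(2),
            ChannelAttention3D(16),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32),
            nn.ReLU(inplace=True), nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, num_classes)         # OR vs. non-OR logits

    def forward(self, x):                              # x: (B, 1, frames, H, W)
        return self.head(self.features(x).flatten(1))

logits = TinyCine3DNet()(torch.randn(2, 1, 16, 64, 64))  # smoke test
```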

LUNETR: Language-Infused UNETR for precise pancreatic tumor segmentation in 3D medical image.

Shi Z, Zhang R, Wei X, Yu C, Xie H, Hu Z, Chen X, Zhang Y, Xie B, Luo Z, Peng W, Xie X, Li F, Long X, Li L, Hu L

PubMed · Jul 1 2025
The identification of early micro-lesions and adjacent blood vessels in CT scans plays a pivotal role in the clinical diagnosis of pancreatic cancer, considering its aggressive nature and high fatality rate. Despite the widespread application of deep learning methods for this task, several challenges persist: (1) the complex background environment in abdominal CT scans complicates the accurate localization of potential micro-tumors; (2) the subtle contrast between micro-lesions within pancreatic tissue and the surrounding tissues makes it challenging for models to capture these features accurately; and (3) tumors that invade adjacent blood vessels pose significant barriers to surgical procedures. To address these challenges, we propose LUNETR (Language-Infused UNETR), an advanced multimodal encoder model that combines textual and image information for precise medical image segmentation. The integration of an autoencoding language model with cross-attention enables our model to effectively leverage semantic associations between textual and image data, thereby facilitating precise localization of potential pancreatic micro-tumors. Additionally, we designed a Multi-scale Aggregation Attention (MSAA) module to comprehensively capture both spatial and channel characteristics of global multi-scale image data, enhancing the model's capacity to extract features from micro-lesions embedded within pancreatic tissue. Furthermore, to facilitate precise segmentation of pancreatic tumors and nearby blood vessels and to address the scarcity of multimodal medical datasets, we collaborated with Zhuzhou Central Hospital to construct a multimodal dataset comprising CT images and corresponding pathology reports from 135 pancreatic cancer patients. Our experimental results surpass current state-of-the-art models, with the incorporation of the semantic encoder improving the average Dice score for pancreatic tumor segmentation by 2.23%. On the Medical Segmentation Decathlon (MSD) liver and lung cancer datasets, our model achieved average Dice score improvements of 4.31% and 3.67%, respectively, demonstrating the efficacy of LUNETR.
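The language-infusion idea, letting flattened 3D image tokens attend to encoded report tokens via cross-attention, can be illustrated with the minimal PyTorch sketch below; the dimensions and module layout are assumptions and do not reproduce the published LUNETR architecture.

```python
# Minimal PyTorch sketch of cross-attention fusion between report-text tokens
# and flattened 3D image tokens, illustrating the language-infusion idea at a
# high level. Not the published LUNETR implementation.
import torch
import torch.nn as nn

class TextImageCrossAttention(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_tokens, text_tokens):
        # image_tokens: (B, N_img, dim) flattened 3D patch embeddings
        # text_tokens:  (B, N_txt, dim) encoded report tokens
        fused, _ = self.attn(query=image_tokens, key=text_tokens,
                             value=text_tokens)
        return self.norm(image_tokens + fused)   # residual language infusion

img = torch.randn(2, 4 * 4 * 4, 256)   # e.g. a 4x4x4 patch grid, 256-dim tokens
txt = torch.randn(2, 32, 256)          # e.g. 32 report tokens
out = TextImageCrossAttention()(img, txt)   # shape (2, 64, 256)
```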

Deep Learning for Detecting and Subtyping Renal Cell Carcinoma on Contrast-Enhanced CT Scans Using 2D Neural Network with Feature Consistency Techniques.

Gupta A, Dhanakshirur RR, Jain K, Garg S, Yadav N, Seth A, Das CJ

PubMed · Jul 1 2025
Objective: The aim of this study was to explore an innovative approach for developing a deep learning (DL) algorithm for renal cell carcinoma (RCC) detection and subtyping on computed tomography (CT), distinguishing clear cell RCC (ccRCC) from non-ccRCC, using a two-dimensional (2D) neural network architecture and feature consistency modules. Materials and Methods: This retrospective study included baseline CT scans from 196 histopathologically proven RCC patients: 143 ccRCCs and 53 non-ccRCCs. Manual tumor annotations were performed on axial slices of corticomedullary phase images, serving as ground truth. After image preprocessing, the dataset was divided into training, validation, and testing subsets. The study tested multiple 2D DL architectures, with FocalNet-DINO demonstrating the highest effectiveness in detecting and classifying RCC. The study further incorporated spatial and class consistency modules to enhance prediction accuracy. The models' performance was evaluated using free-response receiver operating characteristic curves, recall rates, specificity, accuracy, F1 scores, and area under the curve (AUC) scores. Results: The FocalNet-DINO architecture achieved the highest recall rate of 0.823 at 0.025 false positives per image (FPI) for RCC detection. The integration of spatial and class consistency modules into the architecture led to a 0.2% increase in recall rate at 0.025 FPI, along with improvements of 0.1% in both accuracy and AUC scores for RCC classification. These enhancements allowed detection of cancer in an additional 21 slices and reduced false positives in 126 slices. Conclusion: This study demonstrates high performance for RCC detection and classification using a DL algorithm leveraging 2D neural networks and spatial and class consistency modules, offering a novel, computationally simpler, and accurate DL approach to RCC characterization.
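The free-response operating points cited above, recall at a fixed number of false positives per image, can be computed from matched detections roughly as in the sketch below; the detection-to-lesion matching is assumed to be done upstream and all numbers are placeholders.

```python
# Sketch: one FROC operating point (recall at a fixed false-positive rate per
# image) from per-detection confidence scores. Matching of detections to
# ground-truth lesions is assumed to have been done upstream.
import numpy as np

def recall_at_fpi(scores, is_true_positive, n_images, n_lesions, target_fpi=0.025):
    """Recall at the score threshold yielding ~target_fpi false positives per image."""
    order = np.argsort(-np.asarray(scores))            # rank detections by confidence
    tp = np.asarray(is_true_positive, dtype=bool)[order]
    fp_per_image = np.cumsum(~tp) / n_images           # running false positives / image
    keep = fp_per_image <= target_fpi                  # detections admitted at this FPI
    return tp[keep].sum() / n_lesions

# Placeholder detections from a hypothetical 400-slice test set with 150 lesions.
rng = np.random.default_rng(0)
scores = rng.random(1000)
is_tp = rng.random(1000) > 0.6
print("recall @ 0.025 FP/image:", recall_at_fpi(scores, is_tp, 400, n_lesions=150))
```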