
Artificial intelligence in gastric cancer: a systematic review of machine learning and deep learning applications.

Alsallal M, Habeeb MS, Vaghela K, Malathi H, Vashisht A, Sahu PK, Singh D, Al-Hussainy AF, Aljanaby IA, Sameer HN, Athab ZH, Adil M, Yaseen A, Farhood B

PubMed · Sep 11, 2025
Gastric cancer (GC) remains a major global health concern, ranking as the fifth most prevalent malignancy and the fourth leading cause of cancer-related mortality worldwide. Although early detection can increase the 5-year survival rate of early gastric cancer (EGC) to over 90%, more than 80% of cases are diagnosed at advanced stages due to subtle clinical symptoms and diagnostic challenges. Artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has shown great promise in addressing these limitations. This systematic review aims to evaluate the performance, applications, and limitations of ML and DL models in GC management, with a focus on their use in detection, diagnosis, treatment planning, and prognosis prediction across diverse clinical imaging and data modalities. Following the PRISMA 2020 guidelines, a comprehensive literature search was conducted in MEDLINE, Web of Science, and Scopus for studies published between 2004 and May 2025. Eligible studies applied ML or DL algorithms for diagnostic or prognostic tasks in GC using data from endoscopy, computed tomography (CT), pathology, or multi-modal sources. Two reviewers independently performed study selection, data extraction, and risk of bias assessment. A total of 59 studies met the inclusion criteria. DL models, particularly convolutional neural networks (CNNs), demonstrated strong performance in EGC detection, with reported sensitivities up to 95.3% and area under the curve (AUC) values as high as 0.981, often exceeding the performance of expert endoscopists. CT-based radiomics and DL models achieved AUCs ranging from 0.825 to 0.972 for tumor staging and metastasis prediction. Pathology-based models reported accuracies up to 100% for EGC detection and AUCs up to 0.92 for predicting treatment response. Cross-modality approaches combining radiomics and pathomics achieved AUCs up to 0.951. Key challenges included algorithmic bias, limited dataset diversity, interpretability issues, and barriers to clinical integration. ML and DL models have demonstrated substantial potential to improve early detection, diagnostic accuracy, and individualized treatment in GC. To advance clinical adoption, future research should prioritize the development of large, diverse datasets, implement explainable AI frameworks, and conduct prospective clinical trials. These efforts will be essential for integrating AI into precision oncology and addressing the increasing global burden of gastric cancer.
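The review reports performance chiefly as sensitivity and AUC. As a hedged illustration (not drawn from any of the reviewed studies), the sketch below shows how these two metrics are typically computed from a classifier's scores; the labels, scores, and 0.5 threshold are all hypothetical.

```python
# Minimal sketch: sensitivity at a fixed threshold and AUC from scores.
# y_true and y_score are hypothetical model outputs on a test set.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = early gastric cancer
y_score = np.array([0.92, 0.10, 0.85, 0.45, 0.35, 0.05, 0.78, 0.55])

auc = roc_auc_score(y_true, y_score)

y_pred = (y_score >= 0.5).astype(int)         # assumed 0.5 decision threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)

print(f"AUC = {auc:.3f}, sensitivity = {sensitivity:.3f}")
```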

CT-Based Radiomics Models with External Validation for Prediction of Recurrence and Disease-Specific Mortality After Radical Surgery of Colorectal Liver Metastases.

Marzi S, Vidiri A, Ianiro A, Parrino C, Ruggiero S, Trobiani C, Teodoli L, Vallati G, Trillò G, Ciolina M, Sperati F, Scarinci A, Virdis M, Busset MDD, Stecca T, Massani M, Morana G, Grazi GL

PubMed · Sep 10, 2025
To build computed tomography (CT)-based radiomics models, with independent external validation, to predict recurrence and disease-specific mortality in patients with colorectal liver metastases (CRLM) who underwent liver resection. A total of 113 patients were included in this retrospective study: the internal training cohort comprised 66 patients, and the external validation cohort comprised 47. All patients underwent a CT study before surgery. Up to five visible metastases, the whole liver volume, and the surrounding disease-free liver parenchyma were separately delineated on the portal venous phase of CT. Both radiomic features and baseline clinical parameters were considered in model building, using different families of machine learning (ML) algorithms. The Support Vector Machine and Naive Bayes ML classifiers provided the best predictive performance. A relevant role of second-order and higher-order texture features emerged from the largest lesion and the residual liver parenchyma. The prediction models for recurrence showed good accuracy, ranging from 70% to 78% in the training set and from 66% to 70% in the validation set. Models for predicting disease-related mortality performed worse, with accuracies ranging from 67% to 73% and from 60% to 64% in the training and validation sets, respectively. CT-based radiomics, alone or in combination with baseline clinical data, allowed the prediction of recurrence and disease-specific mortality in patients with CRLM, with fair to good accuracy after validation in an external cohort. Further investigations with a larger patient population for training and validation are needed to corroborate our analyses.
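The abstract names Support Vector Machine and Naive Bayes as the best-performing classifiers. A minimal sketch of one plausible scikit-learn setup for such a radiomics feature table follows; the feature matrix, labels, and cross-validation scheme are illustrative assumptions, not the authors' protocol.

```python
# Hedged sketch: SVM and Naive Bayes classifiers on a radiomics table.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(66, 40))      # 66 training patients x 40 texture features (toy)
y = rng.integers(0, 2, size=66)    # 1 = recurrence (toy labels)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
nb = make_pipeline(StandardScaler(), GaussianNB())

for name, clf in [("SVM", svm), ("Naive Bayes", nb)]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: CV accuracy = {acc:.2f}")
```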

An Explainable Deep Learning Model for Focal Liver Lesion Diagnosis Using Multiparametric MRI.

Shen Z, Chen L, Wang L, Dong S, Wang F, Pan Y, Zhou J, Wang Y, Xu X, Chong H, Lin H, Li W, Li R, Ma H, Ma J, Yu Y, Du L, Wang X, Zhang S, Yan F

PubMed · Sep 10, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To assess the effectiveness of an explainable deep learning (DL) model, developed using multiparametric MRI (mpMRI) features, in improving diagnostic accuracy and efficiency of radiologists for classification of focal liver lesions (FLLs). Materials and Methods FLLs ≥ 1 cm in diameter at mpMRI were included in the study. nn-Unet and Liver Imaging Feature Transformer (LIFT) models were developed using retrospective data from one hospital (January 2018-August 2023). nnU-Net was used for lesion segmentation and LIFT for FLL classification. External testing was performed on data from three hospitals (January 2018-December 2023), with a prospective test set obtained from January 2024 to April 2024. Model performance was compared with radiologists and impact of model assistance on junior and senior radiologist performance was assessed. Evaluation metrics included the Dice similarity coefficient (DSC) and accuracy. Results A total of 2131 individuals with FLLs (mean age, 56 ± [SD] 12 years; 1476 female) were included in the training, internal test, external test, and prospective test sets. Average DSC values for liver and tumor segmentation across the three test sets were 0.98 and 0.96, respectively. Average accuracy for features and lesion classification across the three test sets were 93% and 97%, respectively. LIFT-assisted readings improved diagnostic accuracy (average 5.3% increase, <i>P</i> < .001), reduced reading time (average 34.5 seconds decrease, <i>P</i> < .001), and enhanced confidence (average 0.3-point increase, <i>P</i> < .001) of junior radiologists. Conclusion The proposed DL model accurately detected and classified FLLs, improving diagnostic accuracy and efficiency of junior radiologists. ©RSNA, 2025.

A multidimensional deep ensemble learning model predicts pathological response and outcomes in esophageal squamous cell carcinoma treated with neoadjuvant chemoradiotherapy from pretreatment CT imaging: A multicenter study.

Liu Y, Su Y, Peng J, Zhang W, Zhao F, Li Y, Song X, Ma Z, Zhang W, Ji J, Chen Y, Men Y, Ye F, Men K, Qin J, Liu W, Wang X, Bi N, Xue L, Yu W, Wang Q, Zhou M, Hui Z

PubMed · Sep 10, 2025
Neoadjuvant chemoradiotherapy (nCRT) followed by esophagectomy remains standard for locally advanced esophageal squamous cell carcinoma (ESCC). However, accurately predicting pathological complete response (pCR) and treatment outcomes remains challenging. This study aimed to develop and validate a multidimensional deep ensemble learning model (DELRN) using pretreatment CT imaging to predict pCR and stratify prognostic risk in ESCC patients undergoing nCRT. In this multicenter, retrospective cohort study, 485 ESCC patients were enrolled from four hospitals (May 2009-August 2023, December 2017-September 2021, May 2014-September 2019, and March 2013-July 2019). Patients were divided into a discovery cohort (n = 194), an internal cohort (n = 49), and three external validation cohorts (n = 242). A multidimensional deep ensemble learning model (DELRN) integrating radiomics and 3D convolutional neural networks was developed based on pretreatment CT images to predict pCR and clinical outcomes. The model's performance was evaluated by discrimination, calibration, and clinical utility. Kaplan-Meier analysis assessed overall survival (OS) and disease-free survival (DFS) at two follow-up centers. The DELRN model demonstrated robust predictive performance for pCR across the discovery, internal validation, and three external validation cohorts, with area under the curve (AUC) values of 0.943 (95% CI: 0.912-0.973), 0.796 (95% CI: 0.661-0.930), 0.767 (95% CI: 0.646-0.887), 0.829 (95% CI: 0.715-0.942), and 0.782 (95% CI: 0.664-0.900), respectively, surpassing single-domain radiomics or deep learning models. DELRN effectively stratified patients into high-risk and low-risk groups for OS (log-rank P = 0.018 and 0.0053) and DFS (log-rank P = 0.00042 and 0.035). Multivariate analysis confirmed DELRN as an independent prognostic factor for OS and DFS. The DELRN model demonstrated promising clinical potential as an effective, non-invasive tool for predicting nCRT response and treatment outcome in ESCC patients, enabling personalized treatment strategies and improving clinical decision-making, pending future prospective multicenter validation.
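DELRN is described as an ensemble of radiomics and 3D CNN branches; the exact fusion rule is not given in the abstract. A hedged sketch of one common ensembling pattern, weighted averaging of per-patient probabilities, is shown below; the weight and values are assumptions, not the authors' DELRN specification.

```python
# Hedged sketch: probability-level fusion of two prediction branches.
import numpy as np

def ensemble_pcr_probability(p_radiomics: np.ndarray,
                             p_cnn: np.ndarray,
                             w: float = 0.5) -> np.ndarray:
    """Weighted average of per-patient pCR probabilities from two branches."""
    return w * p_radiomics + (1.0 - w) * p_cnn

p_rad = np.array([0.81, 0.22, 0.64])   # radiomics classifier outputs (toy)
p_cnn = np.array([0.74, 0.30, 0.71])   # 3D CNN outputs on the same patients (toy)
print(ensemble_pcr_probability(p_rad, p_cnn))
```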

Live(r) Die: Predicting Survival in Colorectal Liver Metastasis

Muhammad Alberb, Helen Cheung, Anne Martel

arXiv preprint · Sep 10, 2025
Colorectal cancer frequently metastasizes to the liver, significantly reducing long-term survival. While surgical resection is the only potentially curative treatment for colorectal liver metastasis (CRLM), patient outcomes vary widely depending on tumor characteristics along with clinical and genomic factors. Current prognostic models, often based on limited clinical or molecular features, lack sufficient predictive power, especially in multifocal CRLM cases. We present a fully automated framework for surgical outcome prediction from pre- and post-contrast MRI acquired before surgery. Our framework consists of a segmentation pipeline and a radiomics pipeline. The segmentation pipeline learns to segment the liver, tumors, and spleen from partially annotated data by leveraging promptable foundation models to complete missing labels. We also propose SAMONAI, a novel zero-shot 3D prompt propagation algorithm that leverages the Segment Anything Model to segment 3D regions of interest from a single point prompt, significantly improving our segmentation pipeline's accuracy and efficiency. The predicted pre- and post-contrast segmentations are then fed into our radiomics pipeline, which extracts features from each tumor and predicts survival using SurvAMINN, a novel autoencoder-based multiple instance neural network for survival analysis. SurvAMINN jointly learns dimensionality reduction and hazard prediction from right-censored survival data, focusing on the most aggressive tumors. Extensive evaluation on an institutional dataset comprising 227 patients demonstrates that our framework surpasses existing clinical and genomic biomarkers, delivering a C-index improvement exceeding 10%. Our results demonstrate the potential of integrating automated segmentation algorithms and radiomics-based survival analysis to deliver accurate, annotation-efficient, and interpretable outcome prediction in CRLM.
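The framework's survival performance is reported as a C-index improvement. A minimal sketch of computing the concordance index with lifelines, using illustrative risk scores rather than SurvAMINN outputs:

```python
# Minimal sketch: concordance index (C-index) for right-censored survival data.
from lifelines.utils import concordance_index

times = [12.0, 30.5, 7.2, 48.0, 22.1]   # months to event or censoring (toy)
events = [1, 0, 1, 0, 1]                # 1 = death observed, 0 = censored
risk = [0.9, 0.2, 0.8, 0.1, 0.5]        # model risk scores (higher = worse)

# lifelines expects higher predicted values to mean *longer* survival,
# so pass the negated risk score.
print(f"C-index = {concordance_index(times, [-r for r in risk], events):.3f}")
```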

A Fusion Model of ResNet and Vision Transformer for Efficacy Prediction of HIFU Treatment of Uterine Fibroids.

Zhou Y, Xu H, Jiang W, Zhang J, Chen S, Yang S, Xiang H, Hu W, Qiao X

PubMed · Sep 10, 2025
High-intensity focused ultrasound (HIFU) is a non-invasive technique for treating uterine fibroids, and the accurate prediction of its therapeutic efficacy depends on precise quantification of intratumoral heterogeneity. However, existing methods still have limitations in characterizing intratumoral heterogeneity, which restricts the accuracy of efficacy prediction. To address this, this study proposes a deep learning model with a parallel architecture of ResNet and ViT (Res-ViT) to verify whether the synergistic characterization of local texture and global spatial features can improve the accuracy of HIFU efficacy prediction. This study enrolled patients with uterine fibroids who underwent HIFU treatment from Center A (training set: N = 272; internal validation set: N = 92) and Center B (external test set: N = 125). Preoperative T2-weighted magnetic resonance images were used to develop the Res-ViT model for predicting an immediate post-treatment non-perfused volume ratio (NPVR) ≥ 80%. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC) and compared against independent radiomics, ResNet-18, and ViT models. The Res-ViT model outperformed all standalone models across both internal (AUC = 0.895, 95% CI: 0.857-0.987) and external (AUC = 0.853, 95% CI: 0.776-0.921) test sets. SHAP analysis identified the ResNet branch as the predominant decision-making component (feature contribution: 55.4%). Gradient-weighted Class Activation Mapping (Grad-CAM) visualization showed that the key regions attended to by Res-ViT had higher spatial overlap with postoperative non-ablated fibroid tissue. The proposed Res-ViT model demonstrates that the fusion strategy of local and global features is an effective method for quantifying uterine fibroid heterogeneity, significantly enhancing the accuracy of HIFU efficacy prediction.
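The abstract describes a parallel ResNet/ViT architecture with feature fusion. A hedged PyTorch sketch of one such two-branch design follows; the backbone choices (ResNet-18, ViT-B/16), feature dimensions, and concatenation-plus-linear head are assumptions for illustration, not the authors' exact Res-ViT.

```python
# Hedged sketch: two-branch ResNet/ViT feature fusion for binary prediction.
import torch
import torch.nn as nn
from torchvision.models import resnet18, vit_b_16

class ResViTFusion(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.resnet = resnet18(weights=None)
        self.resnet.fc = nn.Identity()      # 512-dim local-texture features
        self.vit = vit_b_16(weights=None)
        self.vit.heads = nn.Identity()      # 768-dim global spatial features
        self.classifier = nn.Linear(512 + 768, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate both branches' features, then classify.
        f = torch.cat([self.resnet(x), self.vit(x)], dim=1)
        return self.classifier(f)

model = ResViTFusion()
logits = model(torch.randn(2, 3, 224, 224))  # ViT-B/16 expects 224x224 input
print(logits.shape)                           # torch.Size([2, 2])
```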

Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge results.

Riera-Marín M, O K S, Rodríguez-Comas J, May MS, Pan Z, Zhou X, Liang X, Erick FX, Prenner A, Hémon C, Boussot V, Dillenseger JL, Nunes JC, Qayyum A, Mazher M, Niederer SA, Kushibar K, Martín-Isla C, Radeva P, Lekadir K, Barfoot T, Garcia Peraza Herrera LC, Glocker B, Vercauteren T, Gago L, Englemann J, Kleiss JM, Aubanell A, Antolin A, García-López J, González Ballester MA, Galdrán A

PubMed · Sep 10, 2025
Deep learning (DL) has become the dominant approach for medical image segmentation, yet ensuring the reliability and clinical applicability of these models requires addressing key challenges such as annotation variability, calibration, and uncertainty estimation. To address these challenges, we created the Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge, which highlights the critical role of multiple annotators in establishing a more comprehensive ground truth and emphasizes that segmentation is inherently subjective, so leveraging inter-annotator variability is essential for robust model evaluation. Seven teams participated in the challenge, submitting a variety of DL models evaluated using metrics such as the Dice Similarity Coefficient (DSC), Expected Calibration Error (ECE), and Continuous Ranked Probability Score (CRPS). By incorporating consensus and dissensus ground truth, we assess how DL models handle uncertainty and whether their confidence estimates align with true segmentation performance. Our findings reinforce the importance of well-calibrated models, as better calibration is strongly correlated with the quality of the results. Furthermore, we demonstrate that segmentation models trained on diverse datasets and enriched with pre-trained knowledge exhibit greater robustness, particularly in cases deviating from standard anatomical structures. Notably, the best-performing models achieved high DSC and well-calibrated uncertainty estimates. This work underscores the need for multi-annotator ground truth, thorough calibration assessments, and uncertainty-aware evaluations to develop trustworthy and clinically reliable DL-based medical image segmentation models.
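Among the challenge metrics, Expected Calibration Error (ECE) is the least standard; a minimal sketch of the usual binned ECE computation follows (the confidences and correctness flags are illustrative, not challenge data):

```python
# Minimal sketch: Expected Calibration Error via confidence binning.
import numpy as np

def expected_calibration_error(conf, correct, n_bins: int = 10) -> float:
    """Bin predictions by confidence; ECE is the bin-weighted gap between
    mean confidence and empirical accuracy."""
    conf, correct = np.asarray(conf), np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece

conf = [0.95, 0.80, 0.65, 0.90, 0.55]   # per-case confidences (toy)
correct = [1, 1, 0, 1, 1]               # whether each prediction was right
print(f"ECE = {expected_calibration_error(conf, correct):.3f}")
```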

Development of an MRI-Based Comprehensive Model Fusing Clinical, Habitat Radiomics, and Deep Learning Models for Preoperative Identification of Tumor Deposits in Rectal Cancer.

Li X, Zhu Y, Wei Y, Chen Z, Wang Z, Li Y, Jin X, Chen Z, Zhan J, Chen X, Wang M

PubMed · Sep 9, 2025
Tumor deposits (TDs) are an important prognostic factor in rectal cancer. However, integrated models combining clinical, habitat radiomics, and deep learning (DL) features for preoperative TDs detection remain unexplored. This retrospective study investigated MRI-based fusion models for preoperative TDs identification and prognosis in rectal cancer. Surgically diagnosed rectal cancer patients (n = 635) were included: training (n = 259) and internal validation (n = 112) cohorts from center 1, and an external validation cohort (n = 264) from center 2. Imaging was performed at 1.5/3 T with T2-weighted imaging (T2WI) using a fast spin echo sequence. Four models (clinical, habitat radiomics, DL, and fusion) were developed for preoperative TDs diagnosis (184 TDs-positive cases). T2WI was segmented using nnU-Net, and habitat radiomics and DL features were extracted separately. Clinical parameters were analyzed independently. The fusion model integrated selected features from all three approaches through two-stage selection. Disease-free survival (DFS) analysis was used to assess the models' prognostic performance. Statistical analyses included the intraclass correlation coefficient (ICC), logistic regression, Mann-Whitney U tests, chi-squared tests, LASSO, area under the curve (AUC), decision curve analysis (DCA), calibration curves, and Kaplan-Meier analysis. The AUCs for the four models ranged from 0.778 to 0.930 in the training set. In the internal validation cohort, the AUCs of the clinical, habitat radiomics, DL, and fusion models were 0.785 (95% CI 0.767-0.803), 0.827 (95% CI 0.809-0.845), 0.828 (95% CI 0.815-0.841), and 0.862 (95% CI 0.828-0.896), respectively. In the external validation cohort, the corresponding AUCs were 0.711 (95% CI 0.599-0.644), 0.817 (95% CI 0.801-0.833), 0.759 (95% CI 0.743-0.773), and 0.820 (95% CI 0.770-0.860), respectively. TDs-positive patients predicted by the fusion model had significantly poorer DFS (median: 30.7 months) than TDs-negative patients (median follow-up period: 39.9 months). A fusion model may identify TDs in rectal cancer and could allow stratification of DFS risk. Evidence level: 3.
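The prognostic claim rests on Kaplan-Meier DFS analysis between model-predicted groups. A hedged sketch of that stratification step with lifelines, using synthetic data rather than the study cohort:

```python
# Hedged sketch: Kaplan-Meier curves and a log-rank test between
# model-predicted TDs-positive and TDs-negative groups (synthetic data).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
months = rng.exponential(35, 100)                # DFS time in months (toy)
relapse = rng.integers(0, 2, 100)                # 1 = recurrence observed
tds_pos = rng.integers(0, 2, 100).astype(bool)   # fusion-model prediction (toy)

kmf = KaplanMeierFitter()
for label, grp in [("TDs+", tds_pos), ("TDs-", ~tds_pos)]:
    kmf.fit(months[grp], relapse[grp], label=label)
    print(label, "median DFS:", kmf.median_survival_time_)

res = logrank_test(months[tds_pos], months[~tds_pos],
                   relapse[tds_pos], relapse[~tds_pos])
print(f"log-rank p = {res.p_value:.4f}")
```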

Artificial intelligence in medical imaging empowers precision neoadjuvant immunochemotherapy in esophageal squamous cell carcinoma.

Fu J, Huang X, Fang M, Feng X, Zhang XY, Xie X, Zheng Z, Dong D

PubMed · Sep 9, 2025
Neoadjuvant immunochemotherapy (nICT) has demonstrated significant potential in improving pathological response rates and survival outcomes for patients with locally advanced esophageal squamous cell carcinoma (ESCC). However, substantial interindividual variability in therapeutic outcomes highlights the urgent need for more precise predictive tools to guide clinical decision-making. Traditional biomarkers remain limited in both predictive performance and clinical feasibility. In recent years, the application of artificial intelligence (AI) in medical imaging has expanded rapidly. By incorporating voxel-level feature maps, the combination of radiomics and deep learning enables the extraction of rich textural, morphological, and microstructural features, while autonomously learning high-level abstract representations from clinical CT images, thereby revealing biological heterogeneity that is often imperceptible to conventional assessments. Leveraging these high-dimensional representations, AI models can provide more accurate predictions of nICT response. Future advancements in foundation models, multimodal integration, and dynamic temporal modeling are expected to further enhance the generalizability and clinical applicability of AI. AI-powered medical imaging is poised to support all stages of perioperative management in ESCC, playing a pivotal role in high-risk patient identification, dynamic monitoring of therapeutic response, and individualized treatment adjustment, thereby comprehensively advancing precision nICT.

PUUMA (Placental patch and whole-Uterus dual-branch U-Mamba-based Architecture): Functional MRI Prediction of Gestational Age at Birth and Preterm Risk

Diego Fajardo-Rojas, Levente Baljer, Jordina Aviles Verdera, Megan Hall, Daniel Cromb, Mary A. Rutherford, Lisa Story, Emma C. Robinson, Jana Hutter

arXiv preprint · Sep 8, 2025
Preterm birth is a major cause of mortality and lifelong morbidity in childhood. Its complex and multifactorial origins limit the effectiveness of current clinical predictors and impede optimal care. In this study, a dual-branch deep learning architecture (PUUMA) was developed to predict gestational age (GA) at birth using T2* fetal MRI data from 295 pregnancies, encompassing a heterogeneous and imbalanced population. The model integrates both global whole-uterus and local placental features. Its performance was benchmarked against linear regression on cervical length measurements obtained by experienced clinicians from anatomical MRI, as well as against other deep learning architectures. GA-at-birth predictions were assessed using mean absolute error; accuracy, sensitivity, and specificity were used to assess preterm classification. Both the fully automated MRI-based pipeline and the cervical length regression achieved comparable mean absolute errors (3 weeks) and good sensitivity (0.67) for detecting preterm birth, despite pronounced class imbalance in the dataset. These results provide a proof of concept for automated prediction of GA at birth from functional MRI and underscore the value of whole-uterus functional imaging in identifying at-risk pregnancies. Additionally, we demonstrate that manual, high-definition cervical length measurements derived from MRI, not currently routine in clinical practice, offer valuable predictive information. Future work will focus on expanding the cohort size and incorporating additional organ-specific imaging to improve generalisability and predictive performance.
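Evaluation here combines mean absolute error on predicted GA with preterm classification metrics. A minimal sketch follows, with illustrative values; the 37-week preterm cutoff is an assumption from standard clinical practice, not stated in the abstract.

```python
# Minimal sketch: MAE on predicted gestational age plus preterm
# classification (GA < 37 weeks) sensitivity and specificity.
import numpy as np

ga_true = np.array([39.1, 35.0, 40.2, 31.5, 38.0])   # weeks (toy)
ga_pred = np.array([38.0, 37.5, 39.0, 33.0, 38.5])   # model predictions (toy)

mae = np.abs(ga_true - ga_pred).mean()

preterm_true = ga_true < 37.0
preterm_pred = ga_pred < 37.0
tp = (preterm_true & preterm_pred).sum()
tn = (~preterm_true & ~preterm_pred).sum()
sens = tp / preterm_true.sum()
spec = tn / (~preterm_true).sum()
print(f"MAE = {mae:.2f} weeks, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```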
