Next-generation AI for visually occult pancreatic cancer detection in a low-prevalence setting with longitudinal stability and multi-institutional generalisability.
Authors
Affiliations (7)
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA.
- Radiology, University of Texas MD Anderson Cancer Center, Houston, Texas, USA.
- Radiology, University of Washington, Seattle, Washington, USA.
- Surgery, Mayo Clinic, Rochester, Minnesota, USA.
- Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota, USA.
- Gastroenterology, Hepatology and Nutrition, University of Texas MD Anderson Cancer Center, Houston, Texas, USA.
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA [email protected].
Abstract
Failure of conventional imaging to detect pancreatic ductal adenocarcinoma (PDA) at its visually occult pre-diagnostic stage is a primary barrier to improving its otherwise poor survival rate. We aimed to develop and validate the Radiomics-based Early Detection MODel (REDMOD), an AI framework that identifies subvisual radiomic signatures of pre-diagnostic PDA on standard-of-care CT.

REDMOD was trained on a multi-institutional cohort (n=969; 156 pre-diagnostic, 813 control) and tested on an independent set (n=493; 63 pre-diagnostic, 430 control), simulating a low-prevalence (~1:6) early detection paradigm. The fully automated framework couples AI-driven segmentation with a heterogeneous ensemble architecture trained on a 40-feature radiomic signature derived from Synthetic Minority Over-sampling Technique (SMOTE)-balanced data. A tunable, Youden index-optimised classification threshold enables performance calibration without retraining. Validation included direct comparison with radiologists, longitudinal test-retest analysis and external specificity validation across two independent cohorts (n=539 and n=80).

On the independent test set (n=493), REDMOD identified occult PDA (AUC 0.82; 73.0% sensitivity) at a median lead time of 475 days. This represented nearly twofold higher sensitivity than radiologists (38.9%; p<0.001), which grew to nearly threefold (68.0% vs 23.0%) at >24 months of lead time. REDMOD showed strong longitudinal stability (90-92% concordance) and generalisable specificity across multi-institutional (81.3%; n=539) and public (87.5%; n=80) datasets. Mechanistic analyses confirmed that the predictive power derived principally from multi-scale wavelet-filtered textural features (90% of the selected signature), which outperformed unfiltered features (AUC 0.82 vs 0.74; p=0.007) in capturing subvisual architectural disruptions.
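The Youden index-optimised threshold mentioned in the methods can be illustrated with a short, self-contained sketch. This is not the authors' code: the scores, labels and function name below are hypothetical, and the sketch only shows the standard technique of choosing the operating threshold that maximises J = sensitivity + specificity − 1.

```python
# Hypothetical sketch of Youden index threshold selection, assuming model
# scores in [0, 1] and binary labels (1 = pre-diagnostic PDA, 0 = control).

def youden_threshold(scores, labels):
    """Return (best_threshold, best_J), where J = sensitivity + specificity - 1."""
    positives = sum(labels)
    negatives = len(labels) - positives
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        # Classify score >= t as positive.
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= t)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < t)
        j = tp / positives + tn / negatives - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy scores: cases (label 1) tend to score higher than controls (label 0).
scores = [0.95, 0.80, 0.75, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10, 0.05]
labels = [1,    1,    1,    0,    1,    0,    0,    0,    0,    0]

t, j = youden_threshold(scores, labels)
print(f"Youden-optimal threshold: {t:.2f} (J = {j:.2f})")
# → Youden-optimal threshold: 0.55 (J = 0.83)
```

Because J weighs sensitivity and specificity equally, the chosen threshold can later be shifted along the same ROC curve, which is what allows a deployment to trade sensitivity for specificity without retraining the model.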
REDMOD is an automated, mechanistically grounded, longitudinally stable and externally validated AI framework that surpasses radiologists in detecting PDA at its visually occult pre-diagnostic stage. These attributes position it for prospective validation in high-risk cohorts, a necessary step towards shifting the paradigm from late-stage symptomatic diagnosis to proactive pre-clinical interception.
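The SMOTE balancing named in the methods can likewise be sketched in a few lines. This is an illustrative, assumption-laden toy (2-D "feature vectors", a hypothetical `smote` helper), not REDMOD's pipeline, which would operate on the 40-dimensional radiomic signature via a standard library implementation; the core idea is interpolating each minority sample towards one of its nearest minority-class neighbours.

```python
import random

def smote(minority, n_synthetic, k=2, seed=0):
    """Toy SMOTE: create n_synthetic points, each interpolated between a
    random minority sample and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        # k nearest neighbours of `base` within the minority class (excluding itself).
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        nn = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, nn)))
    return synthetic

# Hypothetical 2-D feature vectors standing in for the minority (cancer) class.
minority = [(1.0, 2.0), (1.2, 1.8), (0.9, 2.2), (1.1, 2.1)]
new_points = smote(minority, n_synthetic=4)
print(len(new_points), "synthetic minority samples generated")
```

Because each synthetic point lies on a segment between two real minority samples, oversampling enlarges the minority class without duplicating examples verbatim, which is what lets a classifier train on balanced data while the test set keeps its realistic ~1:6 prevalence.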