Prostate Cancer Detection on Micro-Ultrasound Raw Data Using a Deep Learning Neural Network.
Authors
Affiliations (7)
- Department of Radiology, UCSD, La Jolla, CA, USA. Electronic address: [email protected].
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA.
- Exact Imaging, Toronto, Ontario, Canada.
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA.
- Department of Urology, UCSD, La Jolla, CA, USA.
- Department of Radiology, UCSD, La Jolla, CA, USA.
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA.
Abstract
Micro-ultrasound (micro-US) is a novel, clinically available high-resolution imaging technology for guiding prostate biopsies; however, image interpretation during live biopsies remains challenging. The aim of this study was to develop a convolutional neural network (CNN) that classifies prostate tissue as benign versus clinically significant prostate cancer (csPCa) from power spectra (PS) derived from raw micro-US data, with the eventual goal of a tool that automates interpretation during image-guided biopsies.

Retrospective micro-US data were obtained from 491 men (mean age 62 y, SD 8) undergoing prostate biopsy across 5 sites between 2013 and 2016; the associated raw data and prostate-specific antigen (PSA) values were obtained for each targeted biopsy location and used to compute spatially mapped PS. The dataset was split at the patient level into a train/validation set (80%) and a set-aside test set (20%). Each patient contributed up to 12 image frames at distinct prostate locations (6530 single image frames in total), each with a corresponding biopsy. No prostate tissue segmentation was performed. A custom CNN named PSNet was developed to classify benign versus csPCa tissue in non-segmented regions of micro-US data, and its performance was compared with that of traditional CNNs trained on the associated conventional B-mode images. Biopsy histopathology served as the clinical reference standard.

The area under the receiver operating characteristic curve (ROC-AUC) was used to evaluate all models; sensitivity, specificity, precision, and the F1 score were also computed, with 95% confidence intervals shown in parentheses. At the frame level, PSNet without PSA achieved an ROC-AUC of 0.82 (0.77, 0.85), a sensitivity of 0.73 (0.66, 0.80), and a specificity of 0.74 (0.71, 0.77) for classifying benign versus csPCa. With PSA included, the ROC-AUC increased to 0.85 (0.83, 0.88), with a sensitivity of 0.72 (0.65, 0.79) and a specificity of 0.82 (0.80, 0.84).
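As an illustration only, a power spectrum of the kind described above can be derived from a raw radio-frequency (RF) scan line with a standard spectral estimator. The abstract does not specify the estimator, sampling rate, or window length; the values below (Welch's method, 70 MHz, 256-sample segments) are assumptions, and the RF line is simulated noise standing in for real micro-US data.

```python
# Hypothetical sketch: deriving a power spectrum (PS) from one raw RF
# micro-US scan line. All parameters are assumptions, not the paper's.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 70e6  # assumed micro-US RF sampling rate in Hz (not stated in the abstract)
rf_line = rng.standard_normal(4096)  # placeholder for one raw RF scan line

# Welch's method averages windowed periodograms for a smoothed PS estimate.
freqs, ps = welch(rf_line, fs=fs, nperseg=256)

# Normalize to the peak and convert to decibels, a common PS representation.
ps_db = 10 * np.log10(ps / ps.max())
```

Spectra like `ps_db`, computed at each spatially mapped biopsy location, would form the input representation a network such as PSNet could classify.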
At the patient level, obtained by aggregating frame-level predictions, the models without and with PSA achieved ROC-AUCs of 0.85 (0.77, 0.92) and 0.91 (0.85, 0.97), sensitivities of 0.74 (0.70, 0.79) and 0.70 (0.65, 0.75), and specificities of 0.88 (0.76, 0.84) and 0.99 (0.98, 1.00), respectively. In this pilot development study, we suggest that deep learning can capture unique tissue acoustic properties in raw micro-US data to help identify prostate cancer without segmentation of the prostate gland, and that the diagnostic value of these tissue properties can be augmented by PSA measurements to increase specificity. This approach may be further leveraged to guide targeted prostate biopsy.
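The patient-level evaluation above pools frame-level predictions per patient and reports ROC-AUC with 95% confidence intervals. The abstract does not state the aggregation rule or the interval method; the sketch below assumes a mean over frames and a percentile bootstrap, with simulated probabilities in place of real model outputs.

```python
# Hypothetical sketch of patient-level aggregation and ROC-AUC with a
# bootstrap 95% CI. The mean-over-frames rule and bootstrap are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_patients, frames_per_patient = 100, 12
patient_labels = rng.integers(0, 2, n_patients)  # 1 = csPCa on biopsy

# Simulated frame-level scores, shifted upward for csPCa patients (toy data).
frame_scores = rng.uniform(0, 1, (n_patients, frames_per_patient))
frame_scores[patient_labels == 1] += 0.3

# Aggregate the up-to-12 frames per patient into one patient-level score.
patient_scores = frame_scores.mean(axis=1)
auc = roc_auc_score(patient_labels, patient_scores)

# Percentile bootstrap over patients for a 95% confidence interval.
boot = [roc_auc_score(patient_labels[idx], patient_scores[idx])
        for idx in (rng.integers(0, n_patients, n_patients)
                    for _ in range(500))]
lo, hi = np.percentile(boot, [2.5, 97.5])
```

Resampling at the patient level (not the frame level) keeps the bootstrap consistent with the patient-level split described in the study design.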