Intraoperative Ultrasound-Based Displacement Mapping Through Deep Learning.
Authors
Affiliations (5)
- Morsani College of Medicine, University of South Florida, Tampa, Florida, USA.
- Johns Hopkins Whiting School of Engineering, Baltimore, Maryland, USA.
- Department of Neurological Surgery, Miller School of Medicine, University of Miami, Miami, Florida, USA.
- Division of Neurological Surgery, Saint Louis University School of Medicine, St. Louis, Missouri, USA.
- Department of Neurosurgery, Loma Linda University Medical Center, Loma Linda, California, USA.
Abstract
Brain shift during neurological surgery for brain tumors can be caused by factors such as retraction, resection, and osmotic changes and can undermine the reliability of preoperative image-based navigation. Intraoperative ultrasound (iUS) provides a low-cost, real-time imaging alternative, but current correction strategies rely on intraoperative MRI, limiting generalizability and spatial granularity. We present a deep learning framework that predicts voxel-wise brain deformation directly from paired iUS sweeps, allowing for localized brain shift compensation without relying on preoperative MRI. Using the Brain Images of Tumor Evaluation data set of 13 patients with pre-resection and post-resection 3-dimensional iUS and landmark annotations, we trained two 3-dimensional neural network architectures and their ensemble. Performance was measured using standard regression metrics at anatomic landmarks with leave-one-patient-out cross-validation. The baseline model achieved the lowest average root median squared error [median: 1.45 (IQR: 0.39)], while the enhanced model had the best directional accuracy [median: 69.33° (IQR: 44.45°)]. The ensemble balanced both metrics. Gradient-weighted Class Activation Mapping visualization helped identify regions more likely to deform in pre-resection scans. Landmark-wise error analysis showed consistency, with most patients below 2-mm median absolute error, but one patient with atypical anatomy had higher error, suggesting challenges in generalizing large or nonuniform shifts with limited data. Whereas most previous studies have focused on MRI-to-iUS or MRI-to-MRI deformation modeling, our study demonstrates the feasibility of estimating spatially resolved brain shift directly from iUS-to-iUS scans using deep learning. This approach provides dense, real-time deformation fields for better intraoperative adaptability.
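The evaluation protocol described above (landmark-wise Euclidean and directional error under leave-one-patient-out cross-validation) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the metric names, the `train_fn`/`predict_fn` interface, and the per-patient dictionary layout are all assumptions introduced for illustration.

```python
import numpy as np

def landmark_errors(pred_disp, true_disp):
    """Per-landmark error metrics between predicted and ground-truth
    displacement vectors, each of shape (n_landmarks, 3), in mm.
    (Hypothetical metric set chosen to mirror the abstract's reporting.)"""
    diff = pred_disp - true_disp
    abs_err = np.linalg.norm(diff, axis=1)  # Euclidean error per landmark (mm)
    # Angular error between predicted and true displacement directions (degrees);
    # the small epsilon guards against zero-length vectors.
    cos = np.sum(pred_disp * true_disp, axis=1) / (
        np.linalg.norm(pred_disp, axis=1) * np.linalg.norm(true_disp, axis=1) + 1e-8
    )
    ang_err = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return {
        "median_abs_err_mm": float(np.median(abs_err)),
        "iqr_abs_err_mm": float(np.subtract(*np.percentile(abs_err, [75, 25]))),
        "median_angular_err_deg": float(np.median(ang_err)),
    }

def leave_one_patient_out(patients, train_fn, predict_fn):
    """LOPO CV: hold out each patient, train on the rest, and score the
    held-out patient's landmarks. `train_fn` and `predict_fn` are
    placeholders for whatever model is being evaluated."""
    scores = {}
    for held_out in patients:
        train = {pid: d for pid, d in patients.items() if pid != held_out}
        model = train_fn(train)
        pred = predict_fn(model, patients[held_out]["landmarks_pre"])
        scores[held_out] = landmark_errors(pred, patients[held_out]["disp_true"])
    return scores
```

With 13 patients, this loop trains 13 models, so the per-patient medians and IQRs reported in the abstract would each come from one held-out fold.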
Future work should expand the data set in both diversity and size and integrate multitask learning to distinguish deformation from parenchymal collapse.