Generative Artificial Intelligence to Automate Cerebral Perfusion Mapping in Acute Ischemic Stroke from Non-contrast Head Computed Tomography Images: Pilot Study.

Authors

Primiano NJ, Changa AR, Kohli S, Greenspan H, Cahan N, Kummer BR

Affiliations (8)

  • Department of Radiology, Mount Sinai West, One Gustave Levy Place, Box 1137, New York, NY, USA. [email protected].
  • Department of Radiology, Mount Sinai West, One Gustave Levy Place, Box 1137, New York, NY, USA.
  • Icahn School of Medicine at Mount Sinai, New York, NY, USA.
  • BioMedical Engineering and Imaging Institute (BMEII), Icahn School of Medicine at Mount Sinai, New York, NY, USA.
  • Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel.
  • Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
  • Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
  • Clinical Neuro-Informatics Center, Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA.

Abstract

Acute ischemic stroke (AIS) is a leading cause of death and long-term disability worldwide, and rapid reperfusion remains critical for salvaging brain tissue. Although CT perfusion (CTP) imaging provides essential hemodynamic information, its limitations, including extended processing times, additional radiation exposure, and variable software outputs, can delay treatment. In contrast, non-contrast head CT (NCHCT) is ubiquitously available in acute stroke settings. This study explores a generative artificial intelligence approach to predicting key perfusion parameters (relative cerebral blood flow [rCBF] and time-to-maximum [Tmax]) directly from NCHCT, potentially streamlining stroke imaging workflows and expanding access to critical perfusion data.

We retrospectively identified patients evaluated for AIS who underwent NCHCT, CT angiography, and CTP. Ground-truth perfusion maps (rCBF and Tmax) were extracted from VIZ.ai post-processed CTP studies. A modified pix2pix-turbo generative adversarial network (GAN) was developed to translate co-registered NCHCT images into corresponding perfusion maps. The network was trained on paired NCHCT-CTP data with an 80%:10%:10% training:validation:testing split. Performance on the test set was assessed with quantitative metrics including the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and Fréchet inception distance (FID).

Of 120 patients, studies from the 99 patients meeting our inclusion and exclusion criteria formed the primary cohort (mean age 73.3 ± 13.5 years; 46.5% female). Occlusions were located predominantly in the middle cerebral artery. GAN-generated Tmax maps achieved an SSIM of 0.827, a PSNR of 16.99, and an FID of 62.21, while the rCBF maps demonstrated comparable performance (SSIM 0.79, PSNR 16.38, FID 59.58). These results indicate that the model approximates ground-truth perfusion maps to a moderate degree and captures key cerebral hemodynamic features.
Our findings demonstrate the feasibility of generating functional perfusion maps directly from widely available NCHCT images using a modified GAN. This cross-modality approach may serve as a valuable adjunct in AIS evaluation, particularly in resource-limited settings or when traditional CTP provides limited diagnostic information. Future studies with larger, multicenter datasets and further model refinements are warranted to enhance clinical accuracy and utility.
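For readers unfamiliar with the image-similarity metrics reported above, the following is a minimal NumPy sketch, not the authors' code, of how SSIM and PSNR compare a generated perfusion map against its ground truth. The `global_ssim` function here uses a single global window for brevity; published implementations (and presumably the study's pipeline) compute SSIM over local sliding windows. FID is omitted because it requires a pretrained Inception network.

```python
import numpy as np

def psnr(ref, gen, data_range=255.0):
    """Peak signal-to-noise ratio (dB) between reference and generated maps."""
    mse = np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, gen, data_range=255.0):
    """Simplified SSIM computed over one global window.
    Standard implementations average SSIM over local sliding windows."""
    x = ref.astype(np.float64)
    y = gen.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# Toy example: a synthetic "ground-truth" map vs. a noisy "generated" map
rng = np.random.default_rng(0)
truth = rng.uniform(0, 255, size=(64, 64))
pred = np.clip(truth + rng.normal(0, 20, size=(64, 64)), 0, 255)
print(f"SSIM={global_ssim(truth, pred):.3f}  PSNR={psnr(truth, pred):.2f} dB")
```

Higher SSIM (closer to 1) and higher PSNR indicate closer agreement with the ground-truth map, which is how the Tmax (SSIM 0.827) and rCBF (SSIM 0.79) results above should be read.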

Topics

Journal Article
