Geographic prompting and content fidelity in generative Artificial Intelligence: A multi-model study of demographics and imaging equipment in AI-generated videos and images of Canadian medical radiation technologists.
Authors
Affiliations (4)
- Department of Diagnostic Imaging, Trillium Health Partners, Mississauga, ON, Canada; Medical Radiation Science, McMaster University, Hamilton, ON, Canada. Electronic address: [email protected].
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada.
- Department of Diagnostic Imaging, Trillium Health Partners, Mississauga, ON, Canada.
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada; Diagnostic Imaging, BC Cancer Vancouver, Vancouver, BC, Canada.
Abstract
As generative AI tools increasingly produce medical imagery and videos for education, marketing, and communication, concerns have arisen about the accuracy and equity of these representations. Existing research has identified demographic biases in AI-generated depictions of healthcare professionals, but little is known about how these tools portray Medical Radiation Technologists (MRTs), particularly in the Canadian context. This study evaluated 690 AI-generated outputs (600 images and 90 videos) created by eight leading text-to-image and text-to-video models using the prompt "Image [or video] of a Canadian Medical Radiation Technologist." Each output was assessed for demographic characteristics (gender, race/ethnicity, age, religious representation, and visible disabilities) and for the presence and accuracy of imaging equipment, and the results were compared with real-world demographic data on Canadian MRTs (n = 20,755). Significant demographic discrepancies were observed between AI-generated content and real-world data. AI depictions included a higher proportion of visible minorities (as defined by Statistics Canada; 39% vs. 20.8%, p < 0.001) and men (41.4% vs. 21.2%, p < 0.001), while underrepresenting women (58.5% vs. 78.8%, p < 0.001). Age representation skewed younger than actual workforce demographics (p < 0.001). Equipment representation was inconsistent: 66% of outputs depicted CT or MRI scanners, only 4.3% depicted X-ray equipment, and 26% included inaccurate or fictional equipment. Generative AI models frequently produce demographically and contextually inaccurate depictions of MRTs, misrepresenting both workforce diversity and clinical tools. These inconsistencies pose risks to educational accuracy, public perception, and equity in professional representation. Improved model training and prompt sensitivity are needed to ensure reliable and inclusive AI-generated medical content.
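The abstract reports proportion comparisons with p-values but does not name the statistical test used. Purely as an illustration, a two-proportion z-test (one plausible choice for comparing a sample proportion against workforce data) reproduces the p < 0.001 finding for visible-minority representation; the counts below are back-calculated from the reported percentages, so they are approximate rather than the study's actual tabulated data.

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    pval = math.erfc(abs(z) / math.sqrt(2))           # two-sided normal tail
    return z, pval

# Visible-minority representation: ~39% of 690 AI-generated outputs
# vs. 20.8% of 20,755 Canadian MRTs. Counts are back-calculated from
# the reported percentages (an approximation, not the study's raw data).
z, p = two_prop_ztest(round(0.39 * 690), 690, round(0.208 * 20755), 20755)
print(f"z = {z:.2f}, p = {p:.2e}")  # z ≈ 11.5, p far below 0.001
```

With these approximate counts the test yields z ≈ 11.5, consistent with the abstract's reported p < 0.001; the gender and age comparisons would follow the same pattern with their respective counts.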