RoentMod: a synthetic chest X-ray modification model to identify and correct image interpretation model shortcuts.
Authors
Affiliations (3)
- Cardiovascular Imaging Research Center, Massachusetts General Hospital & Harvard Medical School, Boston, MA, USA.
- Medical Imaging Centre, Semmelweis University, Budapest, Hungary.
- Cardiovascular Imaging Research Center, Massachusetts General Hospital & Harvard Medical School, Boston, MA, USA. [email protected].
Abstract
Chest radiographs (CXRs) are among the most common tests in medicine; automated interpretation could reduce radiologists' workload and expand access to care. Deep learning multi-task and foundation models have shown strong CXR interpretation performance but are vulnerable to shortcut learning, in which spurious correlations drive decision-making. We introduce RoentMod, a counterfactual image-editing framework that generates realistic CXRs containing user-specified synthetic pathology while preserving the original anatomical features. RoentMod combines an open-source medical image generator (RoentGen) with an image-to-image modification model and requires no retraining. In reader studies, 93% of RoentMod-produced images appeared realistic, 89-99% correctly incorporated the specified finding, and all preserved native anatomy to a degree comparable to real follow-up CXRs. Using RoentMod, we demonstrate that state-of-the-art multi-task and foundation models frequently exploit off-target pathology as shortcuts, limiting their specificity. Incorporating RoentMod-generated counterfactual images during training mitigated this vulnerability, improving model discrimination by 3-19% AUC across multiple pathologies in internal validation and by 1-11% for 5 of 6 tested pathologies in external testing. These findings establish RoentMod as a tool for probing and correcting shortcut learning in medical AI. By enabling controlled counterfactual interventions, RoentMod enhances the robustness and interpretability of CXR interpretation models and offers a practical strategy for improving medical imaging AI.