ReaderAdaptNet: Modeling Reader Variability in Breast Imaging with Reader-Specific Embeddings
Authors
Affiliations
- Women's Health & Xray, GE Healthcare France, 283 Rue de la Minière, Buc, 78530, France.
- LMPS, ENS Paris-Saclay, 4 Av. des Sciences, Gif-sur-Yvette, 91272, France.
- LIP6, Sorbonne Universite, 4 place Jussieu, Paris, Île-de-France, 75005, France.
Abstract
Objective. Inter-reader variability remains a major challenge in breast imaging interpretation, particularly for ordinal classification tasks such as breast density and background parenchymal enhancement (BPE) assessment. These visual assessments are subjective and prone to inconsistency, which limits the reliability of AI models trained on aggregated or noisy labels. This work aims to account for this variability rather than collapse it.
Approach. We propose ReaderAdaptNet, a reader-adaptive network that explicitly models inter-reader variability through reader-specific embeddings. The model is trained with a two-stage deep learning framework. The first stage jointly learns discriminative image features and low-dimensional embeddings representing individual annotation styles, enabling personalized classification. The second stage calibrates an embedding from a small set of labeled examples, enabling rapid adaptation to new readers or consensus definitions without retraining the full model. The method is designed for multi-reader datasets and is evaluated on two breast imaging classification tasks: breast density classification from full-field digital mammography and BPE classification from contrast-enhanced mammography.
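The conditioning mechanism described above can be sketched minimally: a shared classification head receives the image features concatenated with a per-reader embedding, so the same image can yield different predicted category distributions for different annotation styles. This is an illustrative, library-free sketch under assumed dimensions and random weights; the names (`predict`, `reader_a`, etc.) and the concatenation-plus-linear-head design are simplifying assumptions, not the paper's exact architecture.

```python
import math
import random

random.seed(0)
FEAT_DIM, EMB_DIM, N_CLASSES = 8, 4, 4  # e.g. 4 ordinal density categories (illustrative)

def rand_vec(n):
    return [random.gauss(0.0, 0.5) for _ in range(n)]

# Stage-1 backbone output for one image, stubbed with random values here.
image_features = rand_vec(FEAT_DIM)

# One learned embedding per reader, capturing that reader's annotation style.
reader_embeddings = {"reader_a": rand_vec(EMB_DIM), "reader_b": rand_vec(EMB_DIM)}

# Shared linear head over the concatenated [image features ; reader embedding].
W = [rand_vec(FEAT_DIM + EMB_DIM) for _ in range(N_CLASSES)]

def predict(features, reader_id):
    """Per-class probabilities for one image, conditioned on a reader's embedding."""
    x = features + reader_embeddings[reader_id]
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# The same image produces reader-dependent predictions.
probs_a = predict(image_features, "reader_a")
probs_b = predict(image_features, "reader_b")
```

Because only the small embedding differs between readers, the image backbone and classification head remain shared, which is what keeps the approach parameter-efficient.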
Main results. Results demonstrate that reader embeddings improve both individual and consensus-level performance, and that calibrated embeddings enable flexible, low-cost personalization. With 32-dimensional reader-specific embeddings, mean classification accuracy across readers increased from 76.4% to 84.4% for breast density and from 65.1% to 72.1% for BPE, compared to the proposed baseline method.
Significance. By disentangling stable image features from reader-specific decision tendencies, ReaderAdaptNet provides a parameter-efficient, interpretable route to <i>personalization</i> (alignment to a given reader) or <i>unification</i> (alignment to an institutional standard) under real-world variability.
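The second-stage calibration idea, adapting only a new reader's embedding while the backbone and head stay frozen, can be sketched as a few-shot convex fit. Everything below is an assumption-laden toy (random head weights, synthetic calibration labels, plain SGD on a hand-derived cross-entropy gradient), not the paper's training procedure.

```python
import math
import random

random.seed(1)
FEAT_DIM, EMB_DIM, N_CLASSES = 8, 4, 4  # illustrative dimensions

# Frozen shared head from stage 1, stubbed with random weights.
W = [[random.gauss(0.0, 0.5) for _ in range(FEAT_DIM + EMB_DIM)] for _ in range(N_CLASSES)]

def forward(features, emb):
    """Softmax over the frozen linear head applied to [features ; embedding]."""
    x = features + emb
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# A small set of (image features, label) pairs annotated by the new reader (synthetic).
calib_set = [([random.gauss(0.0, 1.0) for _ in range(FEAT_DIM)], random.randrange(N_CLASSES))
             for _ in range(16)]

def nll(emb):
    """Mean cross-entropy of the frozen model on the calibration set."""
    return sum(-math.log(forward(f, emb)[y]) for f, y in calib_set) / len(calib_set)

emb = [0.0] * EMB_DIM          # the new reader starts from a neutral embedding
loss_before = nll(emb)

lr = 0.1
for _ in range(50):            # few-shot SGD w.r.t. the embedding only
    for feats, y in calib_set:
        p = forward(feats, emb)
        # dL/d(emb_j) = sum_k (p_k - 1[k == y]) * W[k][FEAT_DIM + j]
        for j in range(EMB_DIM):
            g = sum((p[k] - (1.0 if k == y else 0.0)) * W[k][FEAT_DIM + j]
                    for k in range(N_CLASSES))
            emb[j] -= lr * g

loss_after = nll(emb)
```

Since the loss is convex in the embedding for a fixed head, this calibration is cheap and stable, which is what makes low-cost alignment to either an individual reader ("personalization") or a consensus standard ("unification") plausible without retraining the full model.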