🤖 AI Summary
This work addresses the limited modeling of individual differences in rating tasks by proposing a personalized rating framework grounded in natural-language value profiles. Methodologically, it compresses each rater's underlying values from in-context demonstrations into an interpretable, controllable natural-language representation, and introduces an information-theoretic evaluation framework showing that these value profiles explain rating variance more effectively than demographic variables, preserving over 70% of the predictive information in the demonstrations. The framework combines context-aware demonstration compression, clustering-based behavioral grouping, and a steerable language decoder for personalized rating prediction. Experiments show that value profiles improve rating interpretability and calibration, help explain instance-level annotation disagreement, and, when clustered, reveal sources of rating variance better than the most predictive demographic groupings.
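The ">70% information preservation" claim can be made concrete with a minimal sketch. Assuming (hypothetically; the paper's exact estimator is not given here) that predictive information is measured as the reduction in a decoder's held-out negative log-likelihood when conditioning on a rater representation versus no rater information, the preservation ratio compares the information added by value profiles against that added by raw demonstrations:

```python
def predictive_information(nll_baseline: float, nll_conditioned: float) -> float:
    """Information (in nats) a rater representation adds over an
    unconditioned baseline: the drop in mean held-out negative
    log-likelihood of ratings."""
    return max(0.0, nll_baseline - nll_conditioned)

# Hypothetical per-rating NLLs (nats) from a decoder model.
nll_none = 1.60     # no rater information
nll_profile = 1.25  # conditioned on a value profile
nll_demos = 1.10    # conditioned on raw demonstrations

info_profile = predictive_information(nll_none, nll_profile)  # 0.35
info_demos = predictive_information(nll_none, nll_demos)      # 0.50

# Fraction of the demonstrations' predictive information that the
# compressed value profile preserves.
preservation = info_profile / info_demos
print(f"preservation: {preservation:.0%}")  # prints "preservation: 70%"
```

The numeric NLLs above are illustrative placeholders, chosen only so the ratio lands at the paper's reported threshold.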
📝 Abstract
Modelling human variation in rating tasks is crucial for enabling AI systems for personalization, pluralistic model alignment, and computational social science. We propose representing individuals using value profiles -- natural language descriptions of underlying values compressed from in-context demonstrations -- along with a steerable decoder model to estimate ratings conditioned on a value profile or other rater information. To measure the predictive information in rater representations, we introduce an information-theoretic methodology. We find that demonstrations contain the most information, followed by value profiles and then demographics. However, value profiles offer advantages in terms of scrutability, interpretability, and steerability due to their compressed natural language format. Value profiles effectively compress the useful information from demonstrations (>70% information preservation). Furthermore, clustering value profiles to identify similarly behaving individuals better explains rater variation than the most predictive demographic groupings. Going beyond test set performance, we show that the decoder models interpretably change ratings according to semantic profile differences, are well-calibrated, and can help explain instance-level disagreement by simulating an annotator population. These results demonstrate that value profiles offer novel, predictive ways to describe individual variation beyond demographics or group information.
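The abstract's last point, explaining instance-level disagreement by simulating an annotator population, can be sketched under simple assumptions: if the decoder yields a rating distribution per value profile for a given item, pooling those distributions over an assumed population mixture quantifies expected disagreement (here as the entropy of the pooled distribution). All profiles, distributions, and mixture weights below are hypothetical.

```python
import numpy as np

# Hypothetical decoder outputs: each value profile induces a
# distribution over a 1-5 rating scale for a single item.
profile_dists = np.array([
    [0.70, 0.20, 0.05, 0.03, 0.02],  # profile A: rates the item benign
    [0.05, 0.10, 0.20, 0.40, 0.25],  # profile B: rates the item harmful
    [0.10, 0.30, 0.40, 0.15, 0.05],  # profile C: ambivalent
])

# Simulate an annotator population by mixing profiles (assumed weights),
# then measure disagreement as the entropy of the pooled distribution.
mixture = np.array([0.5, 0.3, 0.2])
pooled = mixture @ profile_dists
entropy = -np.sum(pooled * np.log(pooled))

print("pooled rating distribution:", np.round(pooled, 3))
print(f"disagreement (entropy, nats): {entropy:.2f}")
```

A low entropy would indicate the simulated population converges on one rating; a high entropy flags an item where disagreement is driven by differing value profiles rather than noise.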