AI Summary
This study investigates how recommendation and ranking strategies influence users' information health over time through framing effects. To this end, we introduce FrameRef, the first large-scale dataset of reframed statements spanning five distinct framing dimensions, and propose a dynamic simulation framework that integrates language model fine-tuning with Monte Carlo trajectory sampling to model the cumulative impact of prolonged exposure to framed content on framing-sensitive agents. Through framing-conditioned loss attenuation during fine-tuning, agent-based simulations, and human evaluations, we demonstrate that minor initial framing biases can be substantially amplified over extended interactions, leading to divergent trajectories in information health. Furthermore, we show that generated frames exert a measurable influence on human judgment.
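To make the framing-conditioned loss attenuation concrete, the sketch below shows one minimal way such a per-example weighting of the fine-tuning loss could look. It is an illustrative assumption, not the framework's actual implementation: the attenuation factors, the `FRAMING_WEIGHTS` table, and the `framing_conditioned_loss` helper are hypothetical, and the real persona-adapter training may weight the loss differently.

```python
import torch
import torch.nn.functional as F

# Hypothetical per-dimension attenuation factors: examples carrying a framing the
# persona should be sensitive to receive a down-weighted (attenuated) loss, biasing
# the fine-tuned model toward that framing while unframed data keeps full weight.
FRAMING_WEIGHTS = {
    "authoritative": 0.3,
    "consensus": 0.5,
    "emotional": 0.4,
    "prestige": 0.6,
    "sensationalist": 0.2,
    None: 1.0,  # unframed examples keep the full loss
}

def framing_conditioned_loss(logits, labels, framing_dims):
    """Cross-entropy fine-tuning loss with per-example framing attenuation.

    logits:       (batch, seq_len, vocab) model outputs
    labels:       (batch, seq_len) target token ids, -100 for ignored positions
    framing_dims: list of length batch with a framing dimension name or None
    """
    # Per-token cross entropy, unreduced so each example can be weighted separately.
    per_token = F.cross_entropy(
        logits.transpose(1, 2), labels, ignore_index=-100, reduction="none"
    )  # (batch, seq_len)
    mask = (labels != -100).float()
    per_example = (per_token * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)

    weights = torch.tensor(
        [FRAMING_WEIGHTS.get(dim, 1.0) for dim in framing_dims],
        device=per_example.device,
    )
    return (weights * per_example).mean()
```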
Abstract
Information ecosystems increasingly shape how people encounter and internalize adverse digital experiences, raising concerns about the long-term consequences for information health. In modern search and recommendation systems, ranking and personalization policies play a central role in shaping such exposure and its long-term effects on users. To study these effects in a controlled setting, we present FrameRef, a large-scale dataset of 1,073,740 systematically reframed claims across five framing dimensions (authoritative, consensus, emotional, prestige, and sensationalist), and propose a simulation-based framework for modeling the sequential information exposure and reinforcement dynamics characteristic of ranking and recommendation systems. Within this framework, we construct framing-sensitive agent personas by fine-tuning language models with framing-conditioned loss attenuation, inducing targeted biases while preserving overall task competence. Using Monte Carlo trajectory sampling, we show that small, systematic shifts in acceptance and confidence can compound over time, producing substantial divergence in cumulative information health trajectories. Human evaluation further confirms that FrameRef's generated framings measurably affect human judgment. Together, our dataset and framework provide a foundation for systematic information health research through simulation, complementing and informing responsible human-centered research. We release FrameRef, code, documentation, human evaluation data, and persona adapter models at https://github.com/infosenselab/frameref.
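As a rough illustration of the Monte Carlo trajectory sampling mentioned above, the sketch below repeatedly simulates an exposure sequence for a framing-sensitive agent and accumulates a scalar information-health score. The acceptance probabilities, health deltas, and trajectory length are hypothetical placeholders, not the framework's actual parameters; the point is only that small shifts in framed exposure compound into diverging expected trajectories.

```python
import random
from statistics import mean

# Hypothetical per-claim acceptance probabilities and health impacts for one persona.
ACCEPT_PROB = {"framed": 0.62, "unframed": 0.55}
HEALTH_DELTA = {"accept_framed": -1.0, "accept_unframed": 0.5, "reject": 0.0}

def sample_trajectory(num_steps, framed_share, rng):
    """Simulate one exposure sequence and return its cumulative health score."""
    health = 0.0
    for _ in range(num_steps):
        kind = "framed" if rng.random() < framed_share else "unframed"
        if rng.random() < ACCEPT_PROB[kind]:
            health += HEALTH_DELTA[f"accept_{kind}"]
        else:
            health += HEALTH_DELTA["reject"]
    return health

def monte_carlo(num_trajectories=1000, num_steps=200, framed_share=0.3, seed=0):
    """Estimate the expected cumulative information-health score by sampling."""
    rng = random.Random(seed)
    return mean(
        sample_trajectory(num_steps, framed_share, rng)
        for _ in range(num_trajectories)
    )

# A modest increase in framed exposure yields a visibly lower expected trajectory.
print(monte_carlo(framed_share=0.2), monte_carlo(framed_share=0.4))
```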