🤖 AI Summary
Preference modeling in recommender systems often induces "filter bubbles," yet existing diversity metrics fail to distinguish algorithmic preference modeling from genuine information confinement. To address this, we propose Bubble Escape Potential (BEP), a behavior-aware metric grounded in a contrastive simulation framework that disentangles preference modeling from bubble formation. BEP quantifies how easily users can escape their established preference distributions by simulating synthetic users with heterogeneous behavioral tendencies and comparing the induced exposure patterns. Our experiments across multiple recommendation models quantitatively validate a fundamental trade-off between predictive accuracy and bubble escape capability, and, counter-intuitively, show that mild random recommendations are ineffective in alleviating filter bubbles. Conventional diversity metrics substantially underestimate bubble risk, whereas BEP enables a more precise diagnosis of bubble severity, offering a principled foundation for evaluating information confinement in recommender systems.
📝 Abstract
Nowadays, recommender systems have become crucial to online platforms, shaping user exposure through accurate preference modeling. However, such an exposure strategy can also reinforce users' existing preferences, leading to the notorious phenomenon known as filter bubbles. Given its negative effects, such as group polarization, increasing attention has been paid to designing reasonable measures of filter bubbles. However, most existing evaluation metrics simply measure the diversity of user exposure, failing to distinguish between algorithmic preference modeling and actual information confinement. In view of this, we introduce Bubble Escape Potential (BEP), a behavior-aware measure that quantifies how easily users can escape from filter bubbles. Specifically, BEP leverages a contrastive simulation framework that assigns different behavioral tendencies (e.g., positive vs. negative) to synthetic users and compares the induced exposure patterns. This design decouples the effect of filter bubbles from that of preference modeling, allowing for a more precise diagnosis of bubble severity. We conduct extensive experiments across multiple recommendation models to examine the relationship between predictive accuracy and bubble escape potential across different user groups. To the best of our knowledge, our empirical results are the first to quantitatively validate the dilemma between preference modeling and filter bubbles. Moreover, we observe a counter-intuitive phenomenon: mild random recommendations are ineffective in alleviating filter bubbles, which offers a principled foundation for further work in this direction.
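The contrastive simulation idea described above can be sketched as follows. Note that this is an illustrative toy, not the paper's actual formulation: the feedback-loop recommender, the acceptance probabilities standing in for "positive" vs. "negative" behavioral tendencies, and the exposure-overlap score used as a BEP proxy are all assumptions made for demonstration.

```python
import random
from collections import Counter

def simulate_exposure(recommend, n_steps, accept_prob, seed=0):
    """Roll out one synthetic user with a fixed behavioral tendency.

    `accept_prob` is the probability the synthetic user accepts a
    recommended item; accepted items are fed back as interaction
    history, mimicking the recommender's feedback loop.
    """
    rng = random.Random(seed)
    history, exposed = [], Counter()
    for _ in range(n_steps):
        item = recommend(history, rng)
        exposed[item] += 1          # record what the user was shown
        if rng.random() < accept_prob:
            history.append(item)    # positive feedback shapes future exposure
    return exposed

def toy_recommender(history, rng, n_items=100):
    # Toy policy standing in for preference modeling: mostly
    # re-recommend items near the last accepted one, rarely explore.
    if history and rng.random() < 0.8:
        return (history[-1] + rng.randint(-2, 2)) % n_items
    return rng.randrange(n_items)

def bubble_escape_potential(recommend, n_steps=500):
    # Contrast a "positive" user (accepts most recommendations)
    # with a "negative" one (rejects most). The smaller the overlap
    # between what the two are shown, the stronger the bubble.
    pos = simulate_exposure(recommend, n_steps, accept_prob=0.9, seed=1)
    neg = simulate_exposure(recommend, n_steps, accept_prob=0.1, seed=2)
    shared = sum((pos & neg).values())  # min counts per item
    return shared / n_steps             # in [0, 1]; higher => easier escape
```

Because both synthetic users face the same recommender and differ only in their behavioral tendency, any divergence in their exposure distributions can be attributed to the feedback loop rather than to preference modeling alone, which is the decoupling the contrastive design is after.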