🤖 AI Summary
This work addresses the insufficient collaboration and cumulative bias arising from disjoint modeling of recommender and user agents in recommendation systems. We propose the first collaborative optimization framework explicitly designed for a dual-agent closed-loop feedback paradigm. Methodologically, we establish a bidirectional iterative feedback mechanism: the recommender agent generates recommendations and observes responses from the user agent, while the user agent dynamically refines its preference representation based on feedback; both agents co-evolve via an LLM-driven, interpretable interaction protocol. Our key contribution is the first formalization of the recommender–user dual-agent closed-loop feedback process, jointly optimizing recommendation quality and mitigating bias—without exacerbating popularity or positional biases. On three benchmark datasets, our approach achieves average improvements of 11.52% in recommendation accuracy over a recommender-only baseline and 21.12% over a user-only baseline, significantly enhancing fidelity in user behavior modeling.
📝 Abstract
Large language model-based agents are increasingly applied in the recommendation field due to their extensive knowledge and strong planning capabilities. While prior research has primarily focused on enhancing either the recommendation agent or the user agent individually, the collaborative interaction between the two has often been overlooked. Towards this research gap, we propose a novel framework that emphasizes the feedback loop process to facilitate the collaboration between the recommendation agent and the user agent. Specifically, the recommendation agent refines its understanding of user preferences by analyzing the feedback from the user agent on the item recommendation. Conversely, the user agent further identifies potential user interests based on the items and recommendation reasons provided by the recommendation agent. This iterative process enhances the ability of both agents to infer user behaviors, enabling more effective item recommendations and more accurate user simulations. Extensive experiments on three datasets demonstrate the effectiveness of the agentic feedback loop: the agentic feedback loop yields an average improvement of 11.52% over the single recommendation agent and 21.12% over the single user agent. Furthermore, the results show that the agentic feedback loop does not exacerbate popularity or position bias, which are typically amplified by the real-world feedback loop, highlighting its robustness. The source code is available at https://github.com/Lanyu0303/AFL.