🤖 AI Summary
To address challenges in personalization, trustworthiness, and security for conversational recommendation systems handling complex user requests, this paper proposes a multi-agent collaborative framework based on large language models (LLMs). The framework introduces a functionally specialized agent architecture—comprising intent analysis, candidate generation, ranking, re-ranking, explanation generation, and safety guarding agents—tightly integrated with dialogue state tracking and content safety filtering. Empirical evaluation on a real-world game recommendation scenario shows that the model matches or surpasses state-of-the-art performance on eight core metrics, significantly improving complex intent understanding, long-tail preference modeling, and robustness against adversarial interactions. Moreover, it accepts free-form natural-language input and delivers end-to-end recommendations that are trustworthy, interpretable, and secure.
📝 Abstract
In this paper, we propose a multi-agent collaboration framework called MATCHA for conversational recommendation systems, leveraging large language models (LLMs) to enhance personalization and user engagement. Users can request recommendations via free-form text and receive curated lists aligned with their interests, preferences, and constraints. Our system introduces specialized agents for intent analysis, candidate generation, ranking, re-ranking, explainability, and safeguards. These agents collaboratively improve recommendation accuracy, diversity, and safety. On eight metrics, our model achieves performance superior or comparable to the current state-of-the-art. Through comparisons with six baseline models, our approach addresses key challenges in conversational recommendation systems for game recommendations, including: (1) handling complex, user-specific requests, (2) enhancing personalization through multi-agent collaboration, (3) empirical evaluation and deployment, and (4) ensuring safe and trustworthy interactions.
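The abstract names a pipeline of specialized agents but does not describe their interfaces. A minimal sketch of how such a staged agent pipeline could be wired together is shown below; the agent names follow the abstract, while every function signature, data structure, and heuristic (keyword-based intent parsing, rating-based ranking, blocklist safeguarding) is an illustrative assumption, not the paper's implementation.

```python
# Hypothetical sketch of a MATCHA-style agent pipeline. Agent roles come
# from the abstract; all logic here is a stand-in assumption (the real
# system uses LLM-backed agents and dialogue state tracking).
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    """Accumulates what the system infers about the user across agents."""
    intent: dict = field(default_factory=dict)
    candidates: list = field(default_factory=list)

def intent_agent(utterance: str, state: DialogueState) -> DialogueState:
    # An LLM would parse free-form text; we fake it with keyword matching.
    state.intent = {"genre": "strategy"} if "strategy" in utterance else {}
    return state

def candidate_agent(state: DialogueState, catalog: list) -> DialogueState:
    state.candidates = [g for g in catalog
                        if g["genre"] == state.intent.get("genre")]
    return state

def ranking_agent(state: DialogueState) -> DialogueState:
    state.candidates.sort(key=lambda g: g["rating"], reverse=True)
    return state

def rerank_agent(state: DialogueState, k: int = 2) -> DialogueState:
    # Re-ranking could promote diversity or long-tail items; here we
    # simply truncate to the top-k as a placeholder.
    state.candidates = state.candidates[:k]
    return state

def safeguard_agent(state: DialogueState, blocklist: tuple) -> DialogueState:
    state.candidates = [g for g in state.candidates
                        if g["title"] not in blocklist]
    return state

def explain_agent(state: DialogueState) -> list:
    return [f"{g['title']}: matches your {g['genre']} preference"
            for g in state.candidates]

def recommend(utterance: str, catalog: list, blocklist: tuple = ()) -> list:
    """Run the agents in sequence over a shared dialogue state."""
    state = intent_agent(utterance, DialogueState())
    state = candidate_agent(state, catalog)
    state = ranking_agent(state)
    state = rerank_agent(state)
    state = safeguard_agent(state, blocklist)
    return explain_agent(state)
```

The design choice being illustrated is that each agent reads and mutates a shared dialogue state, so stages such as re-ranking or safety filtering can be swapped or extended without changing the orchestration.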