🤖 AI Summary
As large language models (LLMs) increasingly serve as autonomous agents, recommendation systems must evolve from static modeling to dynamic, multi-agent societies. Method: We propose the first unified formal framework for multi-agent recommender systems, structured as an agent-protocol-environment triad, supporting hierarchical memory, tool invocation, collaborative reasoning, and brand alignment; we further design multi-agent communication protocols and a shared environment to enable interactive planning and user simulation. Contribution/Results: We systematically identify and model five cross-cutting challenge families: protocol complexity, scalability, hallucination and error propagation, emergent misalignment (including covert collusion), and brand compliance. Through four end-to-end use cases, including party planning and furniture recommendation, we illustrate gains in personalization, trustworthiness, and contextual richness. This work establishes both theoretical foundations and practical pathways for agent-based recommendation.
📝 Abstract
Large language models (LLMs) are rapidly evolving from passive engines of text generation into agentic entities that can plan, remember, invoke external tools, and cooperate with one another. This perspective paper investigates how such LLM agents, and societies thereof, can transform the design space of recommender systems.
We introduce a unified formalism that (i) models an individual agent as a tuple comprising its language core, tool set, and hierarchical memory, and (ii) captures a multi-agent recommender as a triple of agents, shared environment, and communication protocol. Within this framework, we present four end-to-end use cases: interactive party planning, synthetic user simulation for offline evaluation, multi-modal furniture recommendation, and brand-aligned explanation generation. Each illustrates a distinct capability unlocked by agentic orchestration.
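To make the formalism concrete, the agent tuple (language core, tool set, hierarchical memory) and the recommender triple (agents, shared environment, communication protocol) can be sketched as plain data structures. This is an illustrative rendering only: all class and field names here are assumptions, not the paper's notation, and the round-robin protocol is a deliberately trivial stand-in for real multi-agent communication.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Agent:
    # Language core: maps a prompt to a response (here, any callable).
    llm: Callable[[str], str]
    # Tool set: named external tools the agent may invoke.
    tools: Dict[str, Callable[..., str]]
    # Hierarchical memory: illustrative tiers, names are assumptions.
    memory: Dict[str, List[str]] = field(
        default_factory=lambda: {"working": [], "episodic": [], "long_term": []}
    )

    def act(self, observation: str) -> str:
        # Record the observation in working memory, then query the core.
        self.memory["working"].append(observation)
        return self.llm(observation)

@dataclass
class MultiAgentRecommender:
    agents: List[Agent]                          # the agent society
    environment: Dict[str, object]               # shared state (catalog, user context, ...)
    protocol: Callable[[List[Agent], str], str]  # routes messages among agents

    def recommend(self, query: str) -> str:
        return self.protocol(self.agents, query)

def round_robin(agents: List[Agent], message: str) -> str:
    # Trivial protocol: each agent refines the message in turn.
    for agent in agents:
        message = agent.act(message)
    return message

# Tiny demo with a stub "language core" that tags what it has seen.
echo = Agent(llm=lambda prompt: prompt + " [seen]", tools={})
system = MultiAgentRecommender(agents=[echo], environment={}, protocol=round_robin)
print(system.recommend("suggest a chair"))  # -> suggest a chair [seen]
```

In this sketch the protocol is just a function over the agent list, which keeps the triad explicit: swapping `round_robin` for a debate- or auction-style routine changes the society's behavior without touching any individual agent.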
We then surface five cross-cutting challenge families: protocol complexity, scalability, hallucination and error propagation, emergent misalignment (including covert collusion), and brand compliance.
For each, we formalize the problem, review nascent mitigation strategies, and outline open research questions. The result is both a blueprint and an agenda: a blueprint that shows how memory-augmented, tool-using LLM agents can be composed into robust recommendation pipelines, and an agenda inviting the RecSys community to develop benchmarks, theoretical guarantees, and governance tools that keep pace with this new degree of autonomy. By unifying agentic abstractions with recommender objectives, the paper lays the groundwork for the next generation of personalized, trustworthy, and context-rich recommendation services.