🤖 AI Summary
Under cold-start conditions, user–POI interactions are extremely sparse, posing critical challenges for existing methods, particularly LLM-based approaches: supervised fine-tuning incurs prohibitively high supervision costs and generalizes poorly, while static prompts cannot adapt to diverse user contexts. To address these limitations, we propose Prompt-as-Policy, a knowledge graph-enhanced, reinforcement-guided prompting framework that models prompt construction as a learnable policy. Leveraging contextual bandits, it dynamically selects and composes relation paths from a knowledge graph into evidence cards, enabling a frozen large language model to perform adaptive reasoning without parameter updates. Crucially, the framework eliminates the need for supervised fine-tuning, moving beyond both the static-prompting and parameter-tuning paradigms. Extensive experiments on three real-world datasets demonstrate that Prompt-as-Policy improves Acc@1 by an average of 7.7% (relative) for inactive users under cold start, while maintaining competitive performance for active users.
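To make the "evidence card" idea concrete, here is a minimal sketch of how mined KG relation paths might be rendered into a compact textual rationale for one candidate POI. The function name, triple format, and card template are illustrative assumptions; the paper's exact template is not shown here.

```python
def format_evidence_card(candidate, paths, max_paths=3):
    """Render KG relation paths as a textual 'evidence card' for a candidate POI.
    `paths` is a list of (head, relation, tail) triples mined from the KG.
    Hypothetical format: the actual card layout is a policy/design choice.
    """
    lines = [f"Candidate POI: {candidate}"]
    for head, relation, tail in paths[:max_paths]:
        lines.append(f"- {head} --{relation}--> {tail}")
    return "\n".join(lines)

# Toy example with made-up entities and relations.
card = format_evidence_card(
    "Blue Bottle Coffee",
    [("user_42", "visited", "Ritual Coffee"),
     ("Ritual Coffee", "same_category_as", "Blue Bottle Coffee"),
     ("user_42", "active_in_area", "SoMa")],
)
```

A card like this would be concatenated into the prompt for each KG-discovered candidate, giving the frozen LLM explicit, structured rationales to reason over.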
📝 Abstract
Next point-of-interest (POI) recommendation is crucial for smart urban services such as tourism, dining, and transportation, yet most approaches struggle under cold-start conditions where user-POI interactions are sparse. Recent efforts leveraging large language models (LLMs) address this challenge through either supervised fine-tuning (SFT) or in-context learning (ICL). However, SFT demands costly annotations and fails to generalize to inactive users, while static prompts in ICL cannot adapt to diverse user contexts. To overcome these limitations, we propose Prompt-as-Policy over knowledge graphs, a reinforcement-guided prompting framework that learns to construct prompts dynamically through contextual bandit optimization. Our method treats prompt construction as a learnable policy that adaptively determines (i) which relational evidence to include, (ii) how many evidence items to include per candidate, and (iii) their organization and ordering within prompts. More specifically, we construct a knowledge graph (KG) to discover candidates and mine relational paths, which are transformed into evidence cards that summarize rationales for each candidate POI. The frozen LLM then acts as a reasoning engine, generating recommendations from the KG-discovered candidate set based on the policy-optimized prompts. Experiments on three real-world datasets demonstrate that Prompt-as-Policy consistently outperforms state-of-the-art baselines, achieving an average 7.7% relative improvement in Acc@1 for inactive users, while maintaining competitive performance on active users, without requiring model fine-tuning.
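The abstract frames prompt construction as a contextual bandit problem. A standard way to instantiate such a policy is LinUCB, sketched below: each "arm" stands for one prompt-construction choice (e.g., which evidence-card type to include), the context vector encodes the user state, and the reward could be a hit/miss signal from the frozen LLM's top recommendation. This is a generic LinUCB sketch under those assumptions, not the paper's actual implementation.

```python
import numpy as np

class LinUCBPromptPolicy:
    """Minimal LinUCB contextual bandit over prompt-construction choices.
    Each arm keeps per-arm ridge-regression statistics (A, b)."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha  # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def select(self, context):
        """Pick the arm with the highest upper confidence bound."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                      # per-arm coefficient estimate
            mean = theta @ context                 # predicted reward
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(mean + bonus)
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        """Reward could be 1 if the LLM's top recommendation was correct."""
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# Toy usage: two evidence-card choices, 3-dim user context.
policy = LinUCBPromptPolicy(n_arms=2, dim=3)
ctx = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    policy.update(0, ctx, 1.0)  # arm 0 consistently pays off for this context
    policy.update(1, ctx, 0.0)
```

After these updates, `policy.select(ctx)` favors arm 0: its estimated mean reward dominates, while the shrinking confidence bonus keeps early exploration alive for less-tried arms.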