Prompt Optimization with Logged Bandit Data

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Optimizing large language model (LLM) prompts for personalized sentence generation from implicit user feedback (e.g., clicks) is hampered by the high variance and bias of policy-gradient estimation over large prompt spaces. Method: a kernel-based off-policy gradient method that exploits semantic similarity among prompt embeddings within a logged contextual bandit framework, substantially reducing variance while suppressing bias, without requiring any online interaction. Contribution/Results: the approach enables efficient policy optimization directly over the prompt space from logged feedback alone. On a newly constructed benchmark for generating movie recommendation descriptions, it significantly outperforms existing baselines and remains robust even under large candidate prompt sets.

📝 Abstract
We study how to use naturally available user feedback, such as clicks, to optimize large language model (LLM) pipelines for generating personalized sentences using prompts. Naive approaches, which estimate the policy gradient in the prompt space, suffer either from variance caused by the large action space of prompts or bias caused by inaccurate reward predictions. To circumvent these challenges, we propose a novel kernel-based off-policy gradient method, which estimates the policy gradient by leveraging similarity among generated sentences, substantially reducing variance while suppressing the bias. Empirical results on our newly established suite of benchmarks demonstrate the effectiveness of the proposed approach in generating personalized descriptions for movie recommendations, particularly when the number of candidate prompts is large.
Problem

Research questions and friction points this paper is trying to address.

Optimize LLM pipelines using user feedback
Reduce variance and bias in prompt optimization
Generate personalized movie recommendation descriptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kernel-based off-policy gradient method
Leverages similarity among generated sentences
Reduces variance and suppresses bias
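The idea behind the innovation can be sketched in code. A vanilla inverse-propensity-scored (IPS) policy gradient assigns all reward credit to the single logged prompt, which is high-variance when the candidate set is large; smoothing that credit over semantically similar prompts with a kernel trades a small bias for much lower variance. The following is a minimal illustrative sketch, not the paper's implementation: the linear softmax policy, the Gaussian kernel, the uniform logging policy, and all variable names are assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical logged bandit data (shapes and names are illustrative) ---
n, n_prompts, d = 200, 20, 8
E = rng.normal(size=(n_prompts, d))              # candidate-prompt embeddings
X = rng.normal(size=(n, d))                      # user contexts
pi0 = np.full(n_prompts, 1.0 / n_prompts)        # assumed uniform logging policy
A = rng.integers(0, n_prompts, size=n)           # logged prompt indices
R = rng.binomial(1, 0.3, size=n).astype(float)   # logged clicks (rewards)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kernel_rows(E, tau=1.0):
    """Row-normalized Gaussian kernel over prompt embeddings."""
    d2 = ((E[:, None, :] - E[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * tau**2))
    return K / K.sum(axis=1, keepdims=True)

def kernel_offpolicy_gradient(theta, X, A, R, E, pi0, tau=1.0):
    """Kernel-smoothed off-policy gradient estimate (illustrative sketch).

    Instead of weighting only the logged prompt, each logged reward's
    credit is spread over similar prompts via the kernel row K[a_i],
    reducing the variance of the plain IPS gradient.
    """
    pi = softmax(X @ theta)          # (n, n_prompts) target-policy probs
    K = kernel_rows(E, tau)          # (n_prompts, n_prompts) similarities
    grad = np.zeros_like(theta)
    for i in range(len(A)):
        k = K[A[i]]                                  # smoothed "indicator"
        w = (k * pi[i]).sum() / (k * pi0).sum()      # smoothed IPS weight
        # Softmax score function, with the kernel row replacing the one-hot
        grad += R[i] * w * np.outer(X[i], k - pi[i])
    return grad / len(A)

# Plain gradient-ascent loop on the logged data
theta = np.zeros((d, n_prompts))
for _ in range(50):
    theta += 0.1 * kernel_offpolicy_gradient(theta, X, A, R, E, pi0)
```

With `tau -> 0` the kernel row collapses back to a one-hot and the estimator reduces to ordinary IPS; larger `tau` shares more credit across neighboring prompts, which is where the variance reduction comes from.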