Iterative Critique-Refine Framework for Enhancing LLM Personalization

πŸ“… 2025-10-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing retrieval-augmented generation (RAG) methods for personalized text generation (e.g., LaMP, PGraphRAG) struggle to align generated text with user-specific style, tone, and topical focus. To address this, we propose PerFine, a training-free, model-agnostic, iterative critique-refine framework guided by user profiles. PerFine employs a large language model (LLM) as both generator and critic, iteratively refining outputs via topic-aware feedback, Best-of-N sampling, and dynamic candidate elimination. Its core contribution is the tight integration of retrieval augmentation with inference-time refinement to achieve joint alignment of style, tone, and topic. Evaluated on Yelp, Goodreads, and Amazon datasets, PerFine achieves 7-13% absolute improvement in GEval over the PGraphRAG baseline, improves steadily over 3-5 refinement iterations, and scales further with larger critic models.
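Since the framework is training-free, the whole pipeline can be read as an inference-time loop. The sketch below shows one plausible wiring of Best-of-N sampling, profile-grounded critique, refinement, and dynamic candidate elimination; the `llm` callable, helper names, and prompt wording are assumptions for illustration, not the paper's released implementation.

```python
# Hypothetical sketch of a PerFine-style loop; `llm` is any prompt -> text callable.

def best_of_n_drafts(llm, profile, task, n=4):
    """Sample N candidate drafts conditioned on the retrieved user profile."""
    prompt = (
        f"User profile:\n{profile}\n\nTask:\n{task}\n\n"
        "Write the response in this user's style, tone, and topical focus."
    )
    return [llm(prompt) for _ in range(n)]

def critique(llm, profile, draft):
    """Profile-grounded, topic-aware feedback from the (possibly larger) critic LLM."""
    prompt = (
        f"User profile:\n{profile}\n\nDraft:\n{draft}\n\n"
        "Critique the draft's tone, vocabulary, sentence structure, and topical focus "
        "relative to this user's profile. Be specific and actionable."
    )
    return llm(prompt)

def refine(llm, profile, draft, feedback):
    """Revise the draft according to the critic's feedback."""
    prompt = (
        f"User profile:\n{profile}\n\nDraft:\n{draft}\n\nFeedback:\n{feedback}\n\n"
        "Rewrite the draft so it addresses the feedback while staying faithful to the task."
    )
    return llm(prompt)

def judge_score(llm, profile, draft):
    """Hypothetical 1-10 rating of profile fit, parsed from an LLM judgment."""
    reply = llm(
        f"User profile:\n{profile}\n\nDraft:\n{draft}\n\n"
        "Rate how well the draft fits this user's profile from 1 to 10. Reply with a number only."
    )
    try:
        return float(reply.strip().split()[0])
    except (ValueError, IndexError):
        return 0.0

def perfine(llm, profile, task, n=4, iterations=3):
    """Best-of-N sampling plus iterative critique-refine with dynamic candidate elimination."""
    candidates = best_of_n_drafts(llm, profile, task, n)
    for _ in range(iterations):
        revised = [refine(llm, profile, d, critique(llm, profile, d)) for d in candidates]
        # Dynamic elimination: rank the revised drafts and keep only the upper half.
        # (The paper's exact elimination criterion may differ from this judge score.)
        revised.sort(key=lambda d: judge_score(llm, profile, d), reverse=True)
        candidates = revised[: max(1, len(revised) // 2)]
    return candidates[0]
```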

πŸ“ Abstract
Personalized text generation requires models not only to produce coherent text but also to align with a target user's style, tone, and topical focus. Existing retrieval-augmented approaches such as LaMP and PGraphRAG enrich profiles with user and neighbor histories, but they stop at generation and often yield outputs that drift in tone, topic, or style. We present PerFine, a unified, training-free critique-refine framework that enhances personalization through iterative, profile-grounded feedback. In each iteration, an LLM generator produces a draft conditioned on the retrieved profile, and a critic LLM - also conditioned on the same profile - provides structured feedback on tone, vocabulary, sentence structure, and topicality. The generator then revises, while a novel knockout strategy retains the stronger draft across iterations. We further study additional inference-time strategies such as Best-of-N and Topic Extraction to balance quality and efficiency. Across Yelp, Goodreads, and Amazon datasets, PerFine consistently improves personalization over PGraphRAG, with GEval gains of +7-13%, steady improvements over 3-5 refinement iterations, and scalability with increasing critic size. These results highlight that post-hoc, profile-aware feedback offers a powerful paradigm for personalized LLM generation that is both training-free and model-agnostic.
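A minimal reading of the knockout strategy described above is a pairwise retention rule: after each refinement round, a judge prompt compares the newly revised draft against the best draft so far, and only the winner is carried forward. The sketch below illustrates that reading; the prompt wording and the `llm` callable are assumptions, not the paper's exact procedure.

```python
def knockout(llm, profile, incumbent, challenger):
    """Pairwise knockout: an LLM judge keeps whichever draft better matches the profile.

    Illustrative reading of the knockout strategy, not the paper's exact prompt.
    """
    verdict = llm(
        f"User profile:\n{profile}\n\n"
        f"Draft A:\n{incumbent}\n\nDraft B:\n{challenger}\n\n"
        "Which draft better matches this user's tone, style, and topical focus? Answer A or B."
    )
    return challenger if verdict.strip().upper().startswith("B") else incumbent

# Inside the refinement loop, only the winner survives into the next iteration:
#   best = knockout(llm, profile, best, refine(llm, profile, best, critique(llm, profile, best)))
```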
Problem

Research questions and friction points this paper is trying to address.

Improving alignment of personalized text generation with user profiles
Reducing tone and topic drift in retrieval-augmented language models
Implementing training-free iterative refinement for style consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative critique-refine framework enhances personalization
Profile-grounded, topic-aware feedback improves tone and topic alignment (see the sketch after this list)
Training-free knockout strategy retains stronger drafts
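The abstract also names Topic Extraction as an inference-time strategy. One hypothetical realization, sketched below, first distills recurring topics from the user's history and then feeds them into the critique prompt so the feedback stays topic-aware; the helper names and prompts are illustrative, not taken from the paper.

```python
def extract_topics(llm, profile, k=5):
    """Hypothetical topic-extraction step: summarize the user's recurring topics."""
    reply = llm(
        f"User history:\n{profile}\n\n"
        f"List the {k} topics this user writes about most often, one per line."
    )
    return [line.strip("- ").strip() for line in reply.splitlines() if line.strip()][:k]

def topic_aware_critique(llm, profile, draft):
    """Ground the critique in the extracted topics as well as the raw profile."""
    topics = extract_topics(llm, profile)
    return llm(
        f"User profile:\n{profile}\n\nKey topics: {', '.join(topics)}\n\nDraft:\n{draft}\n\n"
        "Point out where the draft drifts from these topics or from the user's usual tone and style."
    )
```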