Post Persona Alignment for Multi-Session Dialogue Generation

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-turn role-playing dialogue must simultaneously maintain long-term persona consistency and response diversity. This paper proposes a two-stage post-hoc persona alignment framework: it first generates a context-aware generic response, then uses that response as a query to retrieve relevant persona traits from a semantic persona memory bank and inject them into the final output. This paradigm decouples response generation from persona modeling, avoiding the rigidity and bias induced by conventional pre-generation persona injection. The method combines LLM inference, efficient semantic retrieval, and lightweight response refinement, enabling response-driven memory retrieval. Evaluated on multi-session LLM dialogue benchmarks, the approach achieves significant improvements over state-of-the-art methods in persona relevance (+12.3%), consistency (+9.7%), and diversity (+8.1%).

📝 Abstract
Multi-session persona-based dialogue generation presents challenges in maintaining long-term consistency and generating diverse, personalized responses. While large language models (LLMs) excel in single-session dialogues, they struggle to preserve persona fidelity and conversational coherence across extended interactions. Existing methods typically retrieve persona information before response generation, which can constrain diversity and result in generic outputs. We propose Post Persona Alignment (PPA), a novel two-stage framework that reverses this process. PPA first generates a general response based solely on dialogue context, then retrieves relevant persona memories using the response as a query, and finally refines the response to align with the speaker's persona. This post-hoc alignment strategy promotes naturalness and diversity while preserving consistency and personalization. Experiments on multi-session LLM-generated dialogue data demonstrate that PPA significantly outperforms prior approaches in consistency, diversity, and persona relevance, offering a more flexible and effective paradigm for long-term personalized dialogue generation.
Problem

Research questions and friction points this paper is trying to address.

Maintaining long-term consistency in multi-session persona dialogues
Preserving persona fidelity across extended conversational interactions
Balancing response diversity and personalization in dialogue generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage Post Persona Alignment (PPA) framework
Generates a draft response first, then retrieves persona memories using the response as the query
Refines the draft response to align it with the retrieved persona traits
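The three-stage pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the toy bag-of-words embedding stands in for a real sentence encoder, and `generate`/`refine` are hypothetical stubs standing in for LLM calls. Only the control flow (draft first, then response-driven retrieval, then refinement) mirrors PPA.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; a real system would use a sentence encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_persona(response, memory_bank, top_k=2):
    """Stage 2: use the draft response itself as the query into the persona memory bank."""
    q = embed(response)
    ranked = sorted(memory_bank, key=lambda trait: cosine(q, embed(trait)), reverse=True)
    return ranked[:top_k]

def post_persona_align(context, memory_bank, generate, refine):
    """PPA pipeline: persona-free draft -> response-driven retrieval -> persona refinement."""
    draft = generate(context)                       # stage 1: context-only generic response
    traits = retrieve_persona(draft, memory_bank)   # stage 2: retrieve traits relevant to the draft
    return refine(draft, traits)                    # stage 3: inject persona traits into the reply

# Stub "LLM" calls for illustration; real calls would prompt a language model.
memory = ["I love hiking in the mountains", "I work as a nurse", "My favorite food is sushi"]
draft_fn = lambda ctx: "Sounds fun, I enjoy outdoor trips and hiking too."
refine_fn = lambda draft, traits: draft + " (aligned with: " + "; ".join(traits) + ")"

out = post_persona_align("What do you do on weekends?", memory, draft_fn, refine_fn)
print(out)
```

Because retrieval runs after drafting, the memory query reflects what the model actually wants to say, rather than constraining generation up front with pre-selected persona facts.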