🤖 AI Summary
To address catastrophic forgetting in large language models (LLMs) during continual alignment with human preferences, this paper proposes LifeAlign, a lifelong alignment framework built on focalized preference optimization. LifeAlign integrates a short-to-long memory mechanism—denoising short-term preference representations, reducing them to their intrinsic dimensionality, and consolidating them into stable long-term memory—to enable efficient storage and retrieval of historical preference knowledge. During optimization, it focuses on task-critical preference features while constraining gradient updates to preserve previously acquired alignment capabilities. Evaluated on a multi-domain sequential alignment benchmark, LifeAlign substantially outperforms existing continual learning and alignment methods: it maintains high alignment quality while reducing average performance degradation on historical tasks by up to 42%. To the authors' knowledge, it is the first approach to jointly improve both stability (i.e., retention of prior knowledge) and adaptability (i.e., acquisition of new preferences) across task sequences.
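The summary above does not specify how gradient updates are constrained; one common way to realize such a constraint (used here purely as an illustrative stand-in, in the spirit of A-GEM-style projection, not necessarily the paper's method) is to project the new-task gradient so it never opposes a stored reference gradient from earlier preference tasks:

```python
import numpy as np

def constrained_update(g_new: np.ndarray, g_old: np.ndarray) -> np.ndarray:
    """Illustrative gradient constraint (A-GEM-style, an assumption here).

    If g_new points against g_old (negative dot product, i.e. the step
    would degrade previously learned preferences), subtract the
    conflicting component; otherwise return g_new unchanged.
    """
    dot = float(g_new @ g_old)
    if dot >= 0.0:
        return g_new
    return g_new - (dot / float(g_old @ g_old)) * g_old

# Toy check: a conflicting gradient is made orthogonal to the old one.
g_old = np.array([1.0, 0.0])
g_new = np.array([-1.0, 1.0])
g_proj = constrained_update(g_new, g_old)  # -> array([0., 1.])
```

After projection, the update direction can no longer increase the loss on the stored historical preferences (to first order), while non-conflicting updates pass through untouched.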
📝 Abstract
Alignment plays a crucial role in adapting Large Language Models (LLMs) to human preferences on specific tasks and domains. Traditional alignment methods suffer from catastrophic forgetting: models lose previously acquired knowledge when adapting to new preferences or domains. We introduce LifeAlign, a novel framework for lifelong alignment that enables LLMs to maintain consistent human-preference alignment across sequential learning tasks without forgetting previously learned knowledge. Our approach consists of two key innovations. First, we propose a focalized preference optimization strategy that aligns LLMs with new preferences while preventing the erosion of knowledge acquired from previous tasks. Second, we develop a short-to-long memory consolidation mechanism that merges denoised short-term preference representations into stable long-term memory via intrinsic dimensionality reduction, enabling efficient storage and retrieval of alignment patterns across diverse domains. We evaluate LifeAlign on multiple sequential alignment tasks spanning different domains and preference types. Experimental results demonstrate that our method outperforms existing lifelong learning approaches in both preference alignment quality and knowledge retention. The code and datasets will be released on GitHub.
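The abstract does not detail the intrinsic dimensionality reduction step; as a minimal sketch, assuming a PCA-style reduction (an illustrative choice, not confirmed by the paper), short-term preference representations could be consolidated into a compact long-term basis like this:

```python
import numpy as np

def consolidate(short_term: np.ndarray, k: int):
    """Compress short-term preference representations (n x d) into a
    k-dimensional long-term memory basis via PCA (illustrative only).

    Returns (mean, basis); stored or incoming representations are then
    retrieved as compact codes: (x - mean) @ basis.
    """
    mean = short_term.mean(axis=0)
    centered = short_term - mean
    # SVD of the centered matrix yields principal directions;
    # the top-k rows of vt span the consolidated memory subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k].T  # shape (d, k), orthonormal columns
    return mean, basis

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 16))          # 64 short-term representations, d=16
mean, basis = consolidate(X, k=4)
codes = (X - mean) @ basis             # long-term codes, shape (64, 4)
```

Storing only `mean`, `basis`, and the low-dimensional `codes` (rather than full representations) is what would make retrieval of historical alignment patterns cheap across many sequential tasks.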