🤖 AI Summary
To address the bottlenecks of fixed divergence selection and insufficient representational capacity in preference modeling for large language model (LLM) value alignment, this paper proposes a novel preference optimization paradigm featuring kernel enhancement, semantic awareness, and multi-divergence fusion. Our method introduces: (1) kernelized representation via integration of polynomial, RBF, Mahalanobis, and spectral kernels, trained with a hybrid loss; (2) six alternative divergences—JS, Hellinger, Rényi, Bhattacharyya, Wasserstein, and f-divergence; (3) a data-driven automatic kernel–divergence selection mechanism; and (4) a hierarchical kernel-mixing architecture supported by heavy-tailed self-regularization theory. Evaluated on 12 benchmarks, our approach achieves state-of-the-art performance across factual consistency, safety, reasoning, and instruction following, significantly enhancing LLM robustness and generalization.
📝 Abstract
The rapid rise of large language models (LLMs) has unlocked many applications but also underscores the challenge of aligning them with diverse values and preferences. Direct Preference Optimization (DPO) is central to alignment but constrained by fixed divergences and limited feature transformations. We propose DPO-Kernels, which integrates kernel methods to address these issues through four key contributions: (i) Kernelized Representations with polynomial, RBF, Mahalanobis, and spectral kernels for richer transformations, plus a hybrid loss combining embedding-based and probability-based objectives; (ii) Divergence Alternatives (Jensen-Shannon, Hellinger, Rényi, Bhattacharyya, Wasserstein, and f-divergences) for greater stability; (iii) Data-Driven Selection metrics that automatically choose the best kernel-divergence pair; and (iv) a Hierarchical Mixture of Kernels for both local precision and global modeling. Evaluations on 12 datasets demonstrate state-of-the-art performance in factuality, safety, reasoning, and instruction following. Grounded in Heavy-Tailed Self-Regularization, DPO-Kernels maintains robust generalization for LLMs, offering a comprehensive resource for further alignment research.
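To make the hybrid-loss idea concrete, here is a minimal, dependency-free sketch of what a kernelized preference loss could look like. This is an illustrative reconstruction, not the paper's exact formulation: the function names (`rbf_kernel`, `hybrid_dpo_loss`), the mixing weight `alpha`, and the choice of contrasting the prompt embedding against each response embedding are all assumptions made for clarity. It combines the standard probability-based DPO margin with an embedding-based margin computed under an RBF kernel, as contribution (i) describes.

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    # RBF (Gaussian) kernel similarity between two embedding vectors.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def hybrid_dpo_loss(logp_chosen, logp_rejected,
                    logp_ref_chosen, logp_ref_rejected,
                    emb_prompt, emb_chosen, emb_rejected,
                    beta=0.1, alpha=0.5):
    # Probability-based term: the standard DPO objective,
    # -log sigmoid(beta * (policy margin - reference margin)).
    margin = beta * ((logp_chosen - logp_ref_chosen)
                     - (logp_rejected - logp_ref_rejected))
    prob_term = -math.log(1.0 / (1.0 + math.exp(-margin)))

    # Embedding-based term (illustrative): reward the chosen response for
    # being more kernel-similar to the prompt than the rejected response.
    k_margin = (rbf_kernel(emb_prompt, emb_chosen)
                - rbf_kernel(emb_prompt, emb_rejected))
    emb_term = -math.log(1.0 / (1.0 + math.exp(-k_margin)))

    # alpha interpolates between the two objectives (hypothetical weighting).
    return alpha * prob_term + (1 - alpha) * emb_term
```

Swapping `rbf_kernel` for a polynomial, Mahalanobis, or spectral kernel, and the sigmoid-margin term for one of the alternative divergences, would correspond to the kernel-divergence pairs the data-driven selection mechanism chooses among.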