Visualising Policy-Reward Interplay to Inform Zeroth-Order Preference Optimisation of Large Language Models

πŸ“… 2025-03-05
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Zeroth-order (ZO) optimization methods for large language model (LLM) preference optimization suffer from slow convergence in high-dimensional parameter spaces and have been largely restricted to classification tasks. Method: This work extends ZO methods to generative preference modeling, introducing ZOPrO, a novel algorithm that visualizes the dynamic interplay between policy and reward models and uses those insights to design a goal-directed Simultaneous Perturbation Stochastic Approximation (SPSA) sampling strategy, improving gradient estimation quality and convergence efficiency. Contribution/Results: Evaluated on summarization, machine translation, and dialogue tasks, ZOPrO achieves higher reward scores than ZO baselines while matching the convergence speed of first-order methods. This is the first systematic application of zeroth-order preference optimization to LLMs, opening a path toward efficient, gradient-free alignment.

πŸ“ Abstract
Fine-tuning LLMs with first-order methods like back-propagation is computationally intensive. Zeroth-Order (ZO) optimisation, using function evaluations instead of gradients, reduces memory usage but suffers from slow convergence in high-dimensional models. As a result, ZO research in LLMs has mostly focused on classification, overlooking more complex generative tasks. In this paper, we introduce ZOPrO, a novel ZO algorithm designed for *Preference Optimisation* in LLMs. We begin by analysing the interplay between policy and reward models during traditional (first-order) Preference Optimisation, uncovering patterns in their relative updates. Guided by these insights, we adapt Simultaneous Perturbation Stochastic Approximation (SPSA) with a targeted sampling strategy to accelerate convergence. Through experiments on summarisation, machine translation, and conversational assistants, we demonstrate that our method consistently enhances reward signals while achieving convergence times comparable to first-order methods. While it falls short of some state-of-the-art methods, our work is the first to apply Zeroth-Order methods to Preference Optimisation in LLMs, going beyond classification tasks and paving the way for a largely unexplored research direction. Code and visualisations are available at https://github.com/alessioGalatolo/VisZOPrO
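For intuition, the SPSA estimator the paper builds on can be sketched as below. This is a minimal, generic illustration of vanilla SPSA (two function evaluations, one Rademacher perturbation), not the paper's ZOPrO algorithm: the targeted, goal-directed sampling strategy that ZOPrO adds on top of SPSA is not reproduced here, and all function and parameter names are illustrative.

```python
import numpy as np

def spsa_gradient(loss, theta, c=1e-3, rng=None):
    """Estimate the gradient of `loss` at `theta` with SPSA.

    Uses a single random Rademacher direction and two loss
    evaluations instead of back-propagation, so memory cost is
    independent of the parameter dimension.
    """
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    loss_plus = loss(theta + c * delta)
    loss_minus = loss(theta - c * delta)
    # Central finite difference along the sampled direction.
    return (loss_plus - loss_minus) / (2.0 * c) * delta

def spsa_step(loss, theta, lr=1e-2, c=1e-3, rng=None):
    """One zeroth-order update step (plain SPSA; ZOPrO additionally
    biases how `delta` is sampled, which is omitted here)."""
    return theta - lr * spsa_gradient(loss, theta, c, rng)
```

On a one-dimensional quadratic, the estimator is exact (the direction term cancels), which makes the mechanics easy to check by hand before moving to high-dimensional models where the estimate is noisy and convergence slows, the problem this paper targets.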
Problem

Research questions and friction points this paper is trying to address.

Optimising LLMs with Zeroth-Order methods for Preference Optimisation.
Reducing memory usage and improving convergence in high-dimensional models.
Extending ZO methods beyond classification to complex generative tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

ZOPrO algorithm for LLM preference optimisation
Targeted sampling strategy accelerates convergence
First ZO method beyond classification tasks
πŸ”Ž Similar Papers
No similar papers found.