MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions

📅 2024-10-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the token-level credit assignment difficulty in long-sequence RLHF, where delayed rewards produce unstable, slow-converging policy gradients, this paper introduces macro actions (sequences of tokens or higher-level language constructs) into the RLHF framework, substantially reducing the temporal distance between actions and rewards. The method extends PPO to operate over macro actions rather than individual tokens, yielding more stable policy gradient estimates without added computational overhead during training or inference. Experiments across model sizes and tasks show that MA-RLHF reaches parity with vanilla RLHF 1.7–2× faster in training time, with performance gains of up to 30% on text summarization and code generation, 18% on dialogue generation, and 8% on question answering. The core contribution is a macro-action-driven RLHF paradigm that enables more precise reward attribution and more efficient alignment.

📝 Abstract
Reinforcement learning from human feedback (RLHF) has demonstrated effectiveness in aligning large language models (LLMs) with human preferences. However, token-level RLHF suffers from the credit assignment problem over long sequences, where delayed rewards make it challenging for the model to discern which actions contributed to preferred outcomes. This hinders learning efficiency and slows convergence. In this paper, we propose MA-RLHF, a simple yet effective RLHF framework that incorporates macro actions -- sequences of tokens or higher-level language constructs -- into the learning process. By operating at a higher level of abstraction, our approach reduces the temporal distance between actions and rewards, facilitating faster and more accurate credit assignment. This results in more stable policy gradient estimates and enhances learning efficiency within each episode, all without increasing computational complexity during training or inference. We validate our approach through extensive experiments across various model sizes and tasks, including text summarization, dialogue generation, question answering, and program synthesis. Our method achieves substantial performance improvements over standard RLHF, with performance gains of up to 30% in text summarization and code generation, 18% in dialogue, and 8% in question answering tasks. Notably, our approach reaches parity with vanilla RLHF 1.7–2 times faster in terms of training time and continues to outperform it with further training. We make our code and data publicly available at https://github.com/ernie-research/MA-RLHF.
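The core mechanism described in the abstract can be sketched in a few lines: token-level log-probabilities are merged into macro-action log-probabilities (the log-prob of a macro action is the sum of its tokens' log-probs, since the macro-action probability is the product of the per-token conditionals), and advantage estimation then runs over the shorter macro-action sequence, shrinking the credit-assignment horizon. This is a minimal illustrative sketch, not the paper's implementation: the fixed n-gram grouping, the function names, and the use of GAE here are assumptions for illustration.

```python
def group_into_macro_actions(token_logprobs, n=3):
    """Merge consecutive token log-probs into macro-action log-probs.

    Summing log-probs corresponds to multiplying per-token
    probabilities: log P(macro) = sum_i log P(token_i | prefix).
    The fixed n-gram grouping (n=3) is an illustrative assumption;
    macro actions could equally be defined by other segmentations.
    """
    return [sum(token_logprobs[i:i + n])
            for i in range(0, len(token_logprobs), n)]


def macro_advantages(rewards, values, gamma=1.0, lam=0.95):
    """Generalized advantage estimation over macro-action timesteps.

    Running GAE over macro actions instead of tokens shortens the
    temporal distance between an action and the (delayed) reward.
    """
    advantages = [0.0] * len(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        next_value = values[t + 1] if t + 1 < len(values) else 0.0
        delta = rewards[t] + gamma * next_value - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages


# Example: 9 token log-probs collapse into 3 macro actions, so the
# advantage recursion spans 3 steps instead of 9.
token_lp = [-0.1, -0.2, -0.3, -0.4, -0.1, -0.2, -0.5, -0.1, -0.3]
macro_lp = group_into_macro_actions(token_lp, n=3)
adv = macro_advantages(rewards=[0.0, 0.0, 1.0],
                       values=[0.2, 0.3, 0.5])
print(len(macro_lp), len(adv))  # → 3 3
```

In a PPO update, the importance ratio would then be computed from these macro-action log-probs, which is what keeps the computational cost unchanged: the grouping is a bookkeeping step over quantities the token-level algorithm already produces.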
Problem

Research questions and friction points this paper is trying to address.

Addresses the credit assignment problem over long sequences
Improves learning efficiency and gradient stability with macro actions
Enhances performance across NLP tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates macro actions (sequences of tokens or higher-level language constructs) into RLHF
Reduces the temporal distance between actions and rewards
Enhances learning efficiency without added computational cost
Yekun Chai
Baidu
Natural language processing, machine learning
Haoran Sun
Baidu Inc.
Huang Fang
Baidu Inc.
Shuohuan Wang
Baidu
Natural language processing, deep learning
Yu Sun
Baidu Inc.
Hua Wu
Baidu Inc.