Post Hoc Extraction of Pareto Fronts for Continuous Control

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently constructing multi-objective Pareto fronts without retraining from scratch. The authors propose an offline multi-objective reinforcement learning method that, for the first time, leverages pre-trained single-objective expert policies, critics, and replay buffers to extract a post hoc Pareto front from existing policies. By weighting a behaviour cloning loss with a mixed advantage signal, the approach fuses information from multiple expert critics to learn a set of Pareto-optimal policies that balance competing objectives. Evaluated on five multi-objective MuJoCo environments, the method achieves Pareto front quality comparable to established baselines while using only 0.001% of their sample budget, drastically reducing sample complexity without sacrificing the simplicity inherent to off-policy reinforcement learning.

📝 Abstract
Agents in the real world must often balance multiple objectives, such as speed, stability, and energy efficiency in continuous control. To account for changing conditions and preferences, an agent must ideally learn a Pareto frontier of policies representing multiple optimal trade-offs. Recent advances in multi-policy multi-objective reinforcement learning (MORL) enable learning a Pareto front directly, but require full multi-objective consideration from the start of training. In practice, multi-objective preferences often arise after a policy has already been trained on a single specialised objective. Existing MORL methods cannot leverage these pre-trained 'specialists' to learn Pareto fronts and avoid incurring the sample costs of retraining. We introduce Mixed Advantage Pareto Extraction (MAPEX), an offline MORL method that constructs a frontier of policies by reusing pre-trained specialist policies, critics, and replay buffers. MAPEX combines evaluations from specialist critics into a mixed advantage signal, and weights a behaviour cloning loss with it to train new policies that balance multiple objectives. MAPEX's post hoc Pareto front extraction preserves the simplicity of single-objective off-policy RL, and avoids retrofitting these algorithms into complex MORL frameworks. We formally describe the MAPEX procedure and evaluate MAPEX on five multi-objective MuJoCo environments. Given the same starting policies, MAPEX produces comparable fronts at $0.001\%$ the sample cost of established baselines.
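The core mechanism described in the abstract, combining specialist critics' advantages into a mixed signal and using it to weight a behaviour cloning loss, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the linear preference mixing, the exponential weighting, and the MSE cloning loss are all assumptions for the sketch.

```python
import numpy as np

def mixed_advantage_bc_weights(advantages, preference, beta=1.0):
    """Turn per-objective advantages into per-sample weights for a BC loss.

    advantages: (batch, n_objectives) array of A_k(s, a) values, one column
                per specialist critic (hypothetical interface).
    preference: (n_objectives,) simplex weights selecting the trade-off.
    beta:       temperature; larger values favour high-advantage actions
                more sharply.
    """
    mixed = advantages @ preference        # scalar mixed advantage per sample
    w = np.exp(beta * mixed)               # exponential advantage weighting
    return w / w.mean()                    # normalise so weights average to 1

def weighted_bc_loss(policy_actions, buffer_actions, weights):
    # Weighted mean-squared behaviour cloning loss over replay-buffer actions:
    # samples the mixed critics consider advantageous are imitated more strongly.
    per_sample = np.sum((policy_actions - buffer_actions) ** 2, axis=-1)
    return np.mean(weights * per_sample)

# Toy usage: two objectives, a batch of four transitions, 3-D actions.
rng = np.random.default_rng(0)
adv = rng.normal(size=(4, 2))              # advantages from two specialist critics
pref = np.array([0.7, 0.3])                # favour objective 0
w = mixed_advantage_bc_weights(adv, pref)
loss = weighted_bc_loss(rng.normal(size=(4, 3)), rng.normal(size=(4, 3)), w)
```

Sweeping `preference` over the simplex and training one policy per preference is one plausible way such a scheme could trace out a front of trade-off policies, which matches the abstract's description of extracting a frontier from fixed specialists and their buffers.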
Problem

Research questions and friction points this paper is trying to address.

Pareto front
multi-objective reinforcement learning
post hoc extraction
continuous control
pre-trained policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pareto front extraction
multi-objective reinforcement learning
post hoc policy reuse
offline MORL
mixed advantage
Raghav Thakar
The Collaborative Robotics and Intelligent Systems (CoRIS) Institute, Oregon State University, Corvallis, Oregon, USA
Gaurav Dixit
The Collaborative Robotics and Intelligent Systems (CoRIS) Institute, Oregon State University, Corvallis, Oregon, USA
Kagan Tumer
Oregon State University
Multiagent Systems · Distributed Optimization