🤖 AI Summary
Dynamic decision-making under frequently encountered unknown tasks and high task diversity poses significant challenges for generalization and adaptability. Method: This paper proposes a policy committee learning framework that trains a diverse set of policies, guaranteeing that for any unseen task at deployment, at least one policy in the committee is near-optimal with high probability. It establishes the first learning-theoretic analysis with probabilistic guarantees, provides a provably approximate algorithm with sample complexity bounds for low-dimensional task spaces, and integrates task distribution modeling, gradient-driven policy ensembling, task embedding, and generalization analysis. Contribution/Results: Experiments on MuJoCo and Meta-World demonstrate that the approach significantly outperforms multi-task RL, meta-RL, and task-clustering baselines in training efficiency, cross-task generalization, and few-shot adaptation, offering a theoretically grounded and practically effective paradigm for high-diversity dynamic decision-making.
📝 Abstract
Many dynamic decision problems, such as robotic control, involve a series of tasks, many of which are unknown at training time. Typical approaches to these problems, such as multi-task and meta reinforcement learning, do not generalize well when the tasks are diverse. On the other hand, approaches that aim to tackle task diversity, such as using task embeddings as policy context or task clustering, typically lack performance guarantees and require a large number of training tasks. To address these challenges, we propose a novel approach for learning a policy committee that, with high probability, includes at least one near-optimal policy for tasks encountered during execution. While we show that this problem is in general inapproximable, we present two practical algorithmic solutions. The first yields provable approximation and task sample complexity guarantees when tasks are low-dimensional (the best achievable given inapproximability), whereas the second is a general and practical gradient-based approach. In addition, we provide a provable sample complexity bound for few-shot learning. Our experiments on MuJoCo and Meta-World show that the proposed approach outperforms state-of-the-art multi-task, meta-RL, and task-clustering baselines in training, generalization, and few-shot learning, often by a large margin.
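To make the deployment-time idea concrete, here is a minimal toy sketch (not the paper's actual algorithm): a committee of pre-trained policies covers the task space, and for a new task a few evaluation rollouts select the committee member that performs best. The one-dimensional "tasks" and "policies" below are hypothetical stand-ins for illustration only.

```python
# Toy illustration of policy-committee selection at deployment time.
# A "policy" here is just a preferred action value; a "task" rewards
# actions close to its (unknown at training time) target. These are
# hypothetical stand-ins, not the paper's MuJoCo/Meta-World setup.

def make_policy(target):
    """A trivial policy that always outputs its trained action value."""
    return lambda: target

def task_reward(task_target, action):
    """Reward is higher the closer the action is to the task's target."""
    return -abs(task_target - action)

# A small committee of diverse policies spread over the task space [0, 1],
# so that any task has at least one nearby (near-optimal) committee member.
committee = [make_policy(t) for t in (0.1, 0.5, 0.9)]

def select_policy(task_target, committee, n_rollouts=3):
    """Few-shot selection: average each member's reward over a handful of
    rollouts on the new task and keep the best-performing policy."""
    def avg_reward(policy):
        total = sum(task_reward(task_target, policy()) for _ in range(n_rollouts))
        return total / n_rollouts
    return max(committee, key=avg_reward)

best = select_policy(0.45, committee)
print(best())  # the member trained for 0.5 is nearest to task 0.45
```

The key design point mirrored here is that the committee, not any single policy, provides the coverage guarantee: generalization to an unseen task reduces to cheap selection among a few candidates rather than full retraining.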