Imitation Learning via Focused Satisficing

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional imitation learning assumes demonstrations are near-optimal; human behavior, however, often follows satisficing: meeting dynamically evolving personal aspiration levels rather than achieving global optimality. This work incorporates satisficing theory into imitation learning through a focused satisficing approach. Instead of explicitly modeling time-varying aspiration thresholds, the approach identifies high-quality demonstration segments via a margin-based, trajectory-level objective and prioritizes their imitation. The method integrates margin-guided deep reinforcement learning, trajectory-level satisficing modeling, and adaptive focusing on the highest-quality (portions of) demonstrations. In multi-task experiments, it significantly improves guaranteed acceptability, the rate at which the learned policy's behavior is guaranteed acceptable to the demonstrator, while maintaining competitive true return, and it outperforms state-of-the-art imitation learning baselines across the evaluated metrics.

📝 Abstract
Imitation learning often assumes that demonstrations are close to optimal according to some fixed, but unknown, cost function. However, according to satisficing theory, humans often choose acceptable behavior based on their personal (and potentially dynamic) levels of aspiration, rather than achieving (near-) optimality. For example, a lunar lander demonstration that successfully lands without crashing might be acceptable to a novice despite being slow or jerky. Using a margin-based objective to guide deep reinforcement learning, our focused satisficing approach to imitation learning seeks a policy that surpasses the demonstrator's aspiration levels -- defined over trajectories or portions of trajectories -- on unseen demonstrations without explicitly learning those aspirations. We show experimentally that this focuses the policy to imitate the highest quality (portions of) demonstrations better than existing imitation learning methods, providing much higher rates of guaranteed acceptability to the demonstrator, and competitive true returns on a range of environments.
Problem

Research questions and friction points this paper is trying to address.

Overcoming suboptimal human demonstrations in imitation learning
Learning policies that surpass the demonstrator's dynamic aspiration levels
Improving imitation quality without explicitly modeling aspirations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a margin-based objective to guide deep reinforcement learning
Surpasses the demonstrator's aspiration levels without learning them explicitly
Focuses imitation on the highest-quality (portions of) demonstrations
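The margin-based idea above can be sketched as a hinge loss over demonstration segments: the policy is penalized only on segments whose return it has not yet surpassed by a margin, so optimization naturally concentrates on the hardest, highest-quality portions. Everything here (the function name, the scalar-return abstraction, the fixed margin) is an illustrative assumption, not the paper's implementation:

```python
def focused_margin_loss(policy_returns, demo_returns, margin=1.0):
    """Hinge-style margin loss over demonstration segments (illustrative sketch).

    policy_returns: estimated policy return on each demonstration segment.
    demo_returns:   the demonstrator's return on the same segments.

    Segments the policy already surpasses by `margin` contribute zero loss;
    taking the max focuses the objective on the single worst shortfall,
    i.e. the hardest-to-surpass (highest-quality) segment.
    """
    per_segment = [
        max(0.0, d + margin - p)  # shortfall, clipped at zero (hinge)
        for p, d in zip(policy_returns, demo_returns)
    ]
    return max(per_segment)
```

For example, a policy that already beats both segments by the margin incurs zero loss, while any unmet segment dominates the objective until it too is surpassed.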
👥 Authors
Rushit N. Shah
Department of Computer Science, University of Illinois Chicago
Nikolaos Agadakos
Department of Computer Science, University of Illinois Chicago
Synthia Sasulski
Department of Computer Science, University of Illinois Chicago
Ali Farajzadeh
Department of Computer Science, University of Illinois Chicago
Sanjiban Choudhury
Assistant Professor, Cornell
Machine Learning · Reinforcement Learning · Imitation Learning
Brian Ziebart
Department of Computer Science, University of Illinois Chicago