Learning to Answer from Correct Demonstrations

📅 2025-10-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work studies offline imitation learning from a finite set of correct demonstrations for ambiguous tasks, where multiple valid outputs exist for each input. Unlike standard supervised fine-tuning, no explicit reward signal is available, and the demonstrations originate from an unknown optimal policy. The problem is formalized as offline imitation learning in a contextual bandit framework. The key assumption, argued to be weaker than the low-complexity policy assumptions of prior work, is that the reward function belongs to a low-cardinality function class. Under this assumption, the authors propose a novel algorithm that overcomes the sample-inefficiency bottleneck of maximum likelihood estimation (MLE), achieving sample complexity logarithmic in the cardinality of the reward class. Theoretically, they prove that MLE can fail in this setting, whereas their method attains superior sample efficiency and generalization. Crucially, it implicitly leverages a reward model to guide policy learning, mitigating the bias that arises when no explicit rewards are observed.
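For intuition on why a finite reward class admits logarithmic rates, a standard realizable-PAC union-bound argument is sketched below. This is generic learning-theory reasoning, not the paper's own proof, which is not reproduced on this page:

```latex
% Finite class \mathcal{R}, realizable setting: the true reward r^* \in \mathcal{R}.
% A candidate r is eliminated once it labels some demonstrated (correct) answer
% as incorrect. If r disagrees with r^* on demonstrated answers with probability
% at least \varepsilon, it survives n i.i.d. demonstrations with probability at
% most (1-\varepsilon)^n \le e^{-\varepsilon n}. A union bound over \mathcal{R}:
\Pr\bigl[\exists\, r \in \mathcal{R} \text{ that is } \varepsilon\text{-bad yet survives}\bigr]
  \;\le\; |\mathcal{R}|\, e^{-\varepsilon n} \;\le\; \delta
\quad\Longleftarrow\quad
  n \;\ge\; \frac{1}{\varepsilon}\log\frac{|\mathcal{R}|}{\delta}.
```

This shows why $\log|\mathcal{R}|$ is the natural scaling for a finite reward class; the paper's actual algorithm and analysis may differ in the details.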

πŸ“ Abstract
We study the problem of learning to generate an answer (or completion) to a question (or prompt), where there could be multiple correct answers, any one of which is acceptable at test time. Learning is based on demonstrations of some correct answer to each training question, as in Supervised Fine Tuning (SFT). We formalize the problem as offline imitation learning in contextual bandits, with demonstrations from some optimal policy, without explicitly observed rewards. Prior work assumes that the demonstrator belongs to a low-complexity policy class, which motivates maximum likelihood estimation (i.e., log-loss minimization). In contrast, we propose relying only on the reward model (specifying which answers are correct) being in a low-cardinality class, which we argue is a weaker assumption. We show that likelihood maximization methods can fail in this case, and instead devise an alternative novel approach that learns with sample complexity logarithmic in the cardinality of the reward class. Our work motivates looking beyond likelihood maximization when learning from correct demonstrations.
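The abstract does not spell out the proposed algorithm. A minimal version-space-style sketch, with all names hypothetical and intended only to illustrate how correct demonstrations alone can narrow down a finite reward class without any observed reward values:

```python
# Hypothetical sketch: version-space elimination over a finite reward class.
# NOT the paper's algorithm; an illustration of learning from correct
# demonstrations when only the reward class is assumed small.

def eliminate(reward_class, demonstrations):
    """Keep only reward functions that mark every demonstrated answer correct.

    reward_class: list of functions r(question, answer) -> bool
    demonstrations: list of (question, answer) pairs; each answer is correct.
    """
    return [r for r in reward_class
            if all(r(q, a) for q, a in demonstrations)]

def answer(version_space, question, candidates):
    """Return a candidate answer judged correct by every surviving reward."""
    for a in candidates:
        if all(r(question, a) for r in version_space):
            return a
    return None  # no candidate is accepted by the whole version space

# Toy example: questions and answers are integers. The true reward accepts
# any (nonzero) divisor of the question; a rival reward accepts even answers.
r_true = lambda q, a: q % a == 0
r_alt = lambda q, a: a % 2 == 0
demos = [(9, 3)]  # 3 divides 9 but is odd, so r_alt is eliminated
vs = eliminate([r_true, r_alt], demos)
print(answer(vs, 12, [5, 6]))  # 6 divides 12, 5 does not
```

Note that the demonstration never reveals a reward value; it is the correctness of the demonstrated answer that eliminates inconsistent rewards, in line with the abstract's point that the reward model, not the demonstrator policy, carries the low-complexity assumption.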
Problem

Research questions and friction points this paper is trying to address.

Learning to generate any one of multiple correct answers from a single demonstrated answer per question
Explaining why likelihood maximization can fail when only the reward class, not the demonstrator policy, is assumed to be simple
Developing a sample-efficient alternative to supervised fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning from correct demonstrations without observed rewards
Replacing the low-complexity policy assumption with a low-cardinality reward-class assumption
A novel algorithm with sample complexity logarithmic in the cardinality of the reward class