Inner Speech as Behavior Guides: Steerable Imitation of Diverse Behaviors for Human-AI Coordination

📅 2026-02-24
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing imitation learning approaches struggle to capture the diversity and non-Markovian nature of human behavior, and they lack flexible mechanisms for guiding behavior generation at inference time. This work proposes MIMIC, a framework that integrates human inner speech—a form of covert, self-directed language—into AI behavior generation. By treating inner speech as a controllable linguistic representation of behavioral intent, MIMIC combines vision-language models, conditional variational autoencoders, and diffusion-based behavioral cloning into an inner-speech-mediated imitation learning architecture. The method enables fine-grained behavioral control during inference without requiring additional demonstrations, significantly improving behavioral diversity, fidelity to demonstrations, and controllability of behavioral style in robotic manipulation and human-robot collaboration tasks.

📝 Abstract
Effective human-AI coordination requires artificial agents capable of exhibiting and responding to human-like behaviors while adapting to changing contexts. Imitation learning has emerged as one of the prominent approaches to build such agents by training them to mimic human-demonstrated behaviors. However, current methods struggle to capture the inherent diversity and non-Markovian nature of human behavior and lack the ability to steer behavior at inference time. Drawing inspiration from the theory of human cognitive processes, where inner speech guides action selection before execution, we propose MIMIC (Modeling Inner Motivations for Imitation and Control), a framework that uses language as an internal representation of behavioral intent. MIMIC employs the novel use of vision-language models as linguistic scaffolding to train a conditional variational autoencoder capable of generating inner speech from observations. A diffusion-based behavior cloning policy then selects actions conditioned on current observations and the generated inner speech. MIMIC enables fine-grained steering of behavior at inference time by conditioning the agent on behavior-specific speech. Experiments across robotic manipulation tasks and human-AI collaboration games demonstrate that MIMIC significantly enhances both behavior diversity and fidelity to human demonstrations while enabling nuanced behavioral steering without training on additional demonstrations. We open source our code and provide pre-trained MIMIC agents and qualitative demos at: https://mimic-research.github.io.
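The abstract describes a two-stage inference pipeline: a CVAE generates an inner-speech embedding from the current observation, and a diffusion-based behavior cloning policy then denoises an action conditioned on both the observation and that speech; steering replaces the generated speech with a behavior-specific one. The sketch below illustrates only this data flow under stated assumptions — all dimensions, weights, function names, and the toy "denoiser" are hypothetical placeholders, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper does not specify these here.
OBS_DIM, LATENT_DIM, SPEECH_DIM, ACT_DIM = 8, 4, 16, 2


def cvae_generate_inner_speech(obs, z=None):
    """Stand-in for the CVAE decoder: maps an observation plus a latent
    sample (capturing behavioral diversity) to an inner-speech embedding."""
    if z is None:
        z = rng.standard_normal(LATENT_DIM)  # sample the latent prior
    W_o = np.full((SPEECH_DIM, OBS_DIM), 0.1)     # placeholder weights
    W_z = np.full((SPEECH_DIM, LATENT_DIM), 0.1)  # placeholder weights
    return np.tanh(W_o @ obs + W_z @ z)


def diffusion_policy_act(obs, speech, n_denoise_steps=8):
    """Stand-in for the diffusion BC policy: iteratively refines a noisy
    action sample conditioned on observation + inner speech."""
    cond = np.concatenate([obs, speech])
    W_a = np.full((ACT_DIM, cond.size), 0.05)  # placeholder conditioning
    action = rng.standard_normal(ACT_DIM)      # start from pure noise
    for _ in range(n_denoise_steps):
        target = np.tanh(W_a @ cond)             # toy "denoised" estimate
        action = 0.5 * action + 0.5 * target     # step toward the estimate
    return action


obs = rng.standard_normal(OBS_DIM)

# Default rollout: condition on speech sampled from the CVAE.
action = diffusion_policy_act(obs, cvae_generate_inner_speech(obs))

# Inference-time steering: condition on behavior-specific speech instead,
# with no additional demonstrations or retraining.
steering_speech = np.full(SPEECH_DIM, 0.5)  # hypothetical speech embedding
steered_action = diffusion_policy_act(obs, steering_speech)
```

The key point the sketch mirrors is that steering touches only the conditioning input of the policy: swapping the speech embedding changes behavior without changing the policy's weights.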
Problem

Research questions and friction points this paper is trying to address.

human-AI coordination
imitation learning
behavior diversity
non-Markovian behavior
behavior steering
Innovation

Methods, ideas, or system contributions that make the work stand out.

inner speech
imitation learning
vision-language models
conditional variational autoencoder
diffusion policy