Active inference for action-unaware agents

📅 2025-08-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether action-unaware agents can match the performance of action-aware agents in navigation tasks. To overcome their lack of action priors, we propose a novel active inference–based approach that eschews efference copy and instead infers latent actions solely from observed sensory sequences via variational Bayesian inference. The agent performs planning by minimizing free energy using a generative model of its environment and internal dynamics. Crucially, policy optimization is guided by expected free energy, enabling goal-directed behavior without explicit action representations. Evaluated on two standard navigation benchmarks, the action-unaware agent achieves performance comparable to action-aware baselines. These results demonstrate that such agents retain high adaptability and robust decision-making under information constraints. To our knowledge, this is the first systematic demonstration that explicit action perception is not a necessary prerequisite for effective active inference–based navigation.

📝 Abstract
Active inference is a formal approach to study cognition based on the notion that adaptive agents can be seen as engaging in a process of approximate Bayesian inference, via the minimisation of variational and expected free energies. Minimising the former provides an account of perceptual processes and learning as evidence accumulation, while minimising the latter describes how agents select their actions over time. In this way, adaptive agents are able to maximise the likelihood of preferred observations or states, given a generative model of the environment. In the literature, however, different strategies have been proposed to describe how agents can plan their future actions. While they all share the notion that some kind of expected free energy offers an appropriate way to score policies, sequences of actions, in terms of their desirability, there are different ways to consider the contribution of past motor experience to the agent's future behaviour. In some approaches, agents are assumed to know their own actions, and use such knowledge to better plan for the future. In other approaches, agents are unaware of their actions, and must infer their motor behaviour from recent observations in order to plan for the future. This difference reflects a standard point of departure between two leading frameworks in motor control based on the presence, or not, of an efference copy signal representing knowledge about an agent's own actions. In this work we compare the performances of action-aware and action-unaware agents in two navigation tasks, showing how action-unaware agents can achieve performances comparable to action-aware ones despite operating at a severe disadvantage.
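The two mechanisms the abstract contrasts can be sketched in a toy discrete active-inference setup: an agent scores each action by its expected free energy (risk plus ambiguity), and an action-unaware agent additionally infers which action just occurred from the resulting observation rather than from an efference copy. This is an illustrative sketch only; the state space, matrices, and function names below are invented for the example and do not come from the paper.

```python
import numpy as np

# Toy generative model (illustrative values): 3 hidden states,
# 3 observations, 2 actions.
A = np.eye(3) * 0.9 + 0.05            # likelihood P(o|s), slightly noisy
A /= A.sum(axis=0)                    # columns normalised over observations
B = np.stack([
    np.roll(np.eye(3), 1, axis=0),    # action 0: shift state "right"
    np.roll(np.eye(3), -1, axis=0),   # action 1: shift state "left"
])                                    # transitions P(s'|s, a)
C = np.array([0.05, 0.05, 0.9])       # preferred observation distribution

def efe(q_s, action):
    """Expected free energy of a one-step policy: risk + ambiguity."""
    q_next = B[action] @ q_s          # predicted next-state belief
    q_o = A @ q_next                  # predicted observation distribution
    # Risk: KL divergence from predicted to preferred observations.
    risk = q_o @ (np.log(q_o + 1e-16) - np.log(C + 1e-16))
    # Ambiguity: expected entropy of the likelihood mapping.
    H_A = -(A * np.log(A + 1e-16)).sum(axis=0)
    ambiguity = q_next @ H_A
    return risk + ambiguity

def infer_action(q_prev, o_idx):
    """Action-unaware agent: posterior over which action was taken,
    inferred from the new observation (no efference copy)."""
    post = np.array([A[o_idx] @ (B[a] @ q_prev) for a in range(len(B))])
    return post / post.sum()

# Policy selection minimises expected free energy.
q_s = np.ones(3) / 3
best_action = int(np.argmin([efe(q_s, a) for a in range(2)]))
```

An action-aware agent would simply condition its state update on `best_action`; the action-unaware variant instead calls `infer_action` after each observation and marginalises its state belief over that posterior.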
Problem

Research questions and friction points this paper is trying to address.

Compares action-aware vs action-unaware agents' navigation performance
Explores how agents infer motor behavior without action knowledge
Tests if action-unaware agents can match action-aware performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses active inference for action-unaware agents
Minimizes variational and expected free energies
Compares action-aware and action-unaware agents
Filippo Torresan
Araya Inc., Tokyo, Japan; School of Engineering and Informatics, University of Sussex, Brighton, UK
Keisuke Suzuki
Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN), Hokkaido University
Artificial Life, Embodied Cognition, Sense of Reality, Computational Phenomenology
Ryota Kanai
Araya, Inc.
Consciousness, Neuroscience, Information, Artificial Intelligence
Manuel Baltieri
Araya Inc., Tokyo, Japan; School of Engineering and Informatics, University of Sussex, Brighton, UK