Distributional Active Inference

📅 2026-01-28
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the sample inefficiency of reinforcement learning in complex environments, which often stems from inadequate state perception. To this end, the paper introduces a formal abstraction framework that seamlessly integrates active inference into distributional reinforcement learning, enabling joint optimization of perception and decision-making. Notably, the proposed approach unifies model-based, model-free, and distributional reinforcement learning paradigms without requiring an explicit model of the environment's transition dynamics. Experimental results demonstrate that, without relying on environmental dynamics models, the method substantially improves sample efficiency while achieving near-optimal control performance.

๐Ÿ“ Abstract
Optimal control of complex environments with robotic systems faces two complementary and intertwined challenges: efficient organization of sensory state information and far-sighted action planning. Because the reinforcement learning framework addresses only the latter, it tends to deliver sample-inefficient solutions. Active inference is the state-of-the-art process theory that explains how biological brains handle this dual problem. However, its applications to artificial intelligence have thus far been limited to extensions of existing model-based approaches. We present a formal abstraction of reinforcement learning algorithms that spans model-based, distributional, and model-free approaches. This abstraction seamlessly integrates active inference into the distributional reinforcement learning framework, making its performance advantages accessible without transition dynamics modeling.
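To make the distributional reinforcement learning setting the abstract refers to concrete, the sketch below shows a standard tabular quantile-regression TD update on a toy two-state chain. This is a generic illustration of how a return *distribution* (rather than a scalar value) can be learned without a transition dynamics model; it is not the paper's own algorithm, and the MDP, quantile count, and learning rate are all illustrative choices.

```python
import numpy as np

# Minimal sketch of a distributional (quantile) TD update on a toy MDP.
# State 0 transitions to state 1 with reward 0; state 1 self-loops with
# reward 1, so its expected return is 1 / (1 - gamma) = 10.
rng = np.random.default_rng(0)
n_states, n_quantiles, gamma, lr = 2, 8, 0.9, 0.05
# Quantile midpoints tau_i = (2i + 1) / (2N).
taus = (2 * np.arange(n_quantiles) + 1) / (2 * n_quantiles)
# theta[s] holds N quantile estimates of the return distribution at state s.
theta = np.zeros((n_states, n_quantiles))

for _ in range(20000):
    s = rng.integers(n_states)
    s_next, r = (1, 0.0) if s == 0 else (1, 1.0)
    # Bootstrap a sample target from the next state's quantile estimates --
    # note: no model of the transition dynamics is ever built.
    target = r + gamma * theta[s_next, rng.integers(n_quantiles)]
    # Quantile-regression (pinball-loss) gradient step: each quantile moves
    # up with weight tau and down with weight (1 - tau).
    indicator = (target < theta[s]).astype(float)
    theta[s] += lr * (taus - indicator)

# The mean of the quantiles recovers the ordinary value estimate
# (close to 10 for state 1 and 9 for state 0 in this chain).
print(theta.mean(axis=1))
```

Because the returns here are deterministic, all quantiles collapse toward a single point; in stochastic environments the quantiles spread out and capture return uncertainty, which is the representational hook that active inference can exploit.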
Problem

Research questions and friction points this paper is trying to address.

active inference
reinforcement learning
distributional reinforcement learning
sample efficiency
sensory state organization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Active Inference
Distributional Reinforcement Learning
Sample Efficiency
Unified Abstraction
Model-Free Control
🔎 Similar Papers
No similar papers found.
Abdullah Akgul
Department of Mathematics and Computer Science, University of Southern Denmark
Gulcin Baykal
University of Southern Denmark
Representation Learning, Reinforcement Learning
Manuel Haussmann
Syddansk Universitet
Machine Learning, Bayesian Deep Learning, Probabilistic Modelling, Reinforcement Learning
Mustafa Mert Çelikok
Department of Mathematics and Computer Science, University of Southern Denmark
M. Kandemir
Department of Mathematics and Computer Science, University of Southern Denmark