🤖 AI Summary
This work addresses efficient learning of controllable latent-state dynamics and the encoder mapping in exogenous block Markov decision processes (Ex-BMDPs) from a single continuous trajectory, without environment resets. In the function-approximation setting, we propose STEEL, the first algorithm with theoretical guarantees for this problem. STEEL integrates spectral estimation and contrastive representation learning, leveraging Markov-chain mixing-time analysis and latent-space identifiability theory to jointly learn the controllable latent dynamics and the encoder. Its sample complexity depends only on the size of the controllable latent state space, the capacity of the encoder function class, and the mixing time of the exogenous noise. We provide rigorous theoretical guarantees establishing correctness and sample efficiency, and empirical evaluation on synthetic tasks demonstrates high-precision recovery of the controllable dynamics.
📝 Abstract
To train agents that can quickly adapt to new objectives or reward functions, efficient unsupervised representation learning in sequential decision-making environments is important. Frameworks such as the Exogenous Block Markov Decision Process (Ex-BMDP) have been proposed to formalize this representation-learning problem (Efroni et al., 2022b). In the Ex-BMDP framework, the agent's high-dimensional observations of the environment have two latent factors: a controllable factor, which evolves deterministically within a small state space according to the agent's actions, and an exogenous factor, which represents time-correlated noise and can be highly complex. The goal of the representation-learning problem is to learn an encoder that maps observations into the controllable latent space, as well as the dynamics of that space. Efroni et al. (2022b) showed that this is possible with a sample complexity that depends only on the size of the controllable latent space, not on the size of the noise factor. However, that prior work focused on the episodic setting, where the controllable latent state resets to a specific start state after a finite horizon. If the agent can instead interact with the environment only in a single continuous trajectory, prior work has not established sample-complexity bounds. We propose STEEL, the first provably sample-efficient algorithm for learning the controllable dynamics of an Ex-BMDP from a single trajectory, in the function-approximation setting. STEEL's sample complexity depends only on the sizes of the controllable latent space and the encoder function class, and (at worst linearly) on the mixing time of the exogenous noise factor. We prove that STEEL is correct and sample-efficient, and demonstrate it on two toy problems. Code is available at: https://github.com/midi-lab/steel.
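To make the Ex-BMDP structure described above concrete, here is a minimal toy instantiation in Python (a hypothetical example, not the paper's benchmark or the STEEL algorithm itself): a small controllable latent state with deterministic, action-driven dynamics, an exogenous factor evolving as a sticky Markov chain the agent cannot influence, and observations that combine both factors, from which an encoder should recover only the controllable part.

```python
import numpy as np

# Toy Ex-BMDP (hypothetical instantiation for illustration only).
# Controllable factor: 4 states on a cycle; actions move +1 or -1 (deterministic).
# Exogenous factor: a 3-state Markov chain, independent of the agent's actions.
N_CTRL, N_EXO = 4, 3
rng = np.random.default_rng(0)

# Time-correlated exogenous noise: stays put w.p. 0.8, otherwise moves uniformly.
P_exo = 0.8 * np.eye(N_EXO) + (0.2 / N_EXO) * np.ones((N_EXO, N_EXO))

def ctrl_step(s, a):
    # Deterministic controllable dynamics: action 0 -> +1, action 1 -> -1 on the cycle.
    return (s + (1 if a == 0 else -1)) % N_CTRL

def exo_step(e):
    # Exogenous transition: sampled from the chain's row for the current state.
    return int(rng.choice(N_EXO, p=P_exo[e]))

def observe(s, e):
    # "High-dimensional" observation: concatenated one-hot encodings of both factors.
    obs = np.zeros(N_CTRL + N_EXO)
    obs[s] = 1.0
    obs[N_CTRL + e] = 1.0
    return obs

# Roll out a single continuous trajectory (no resets), as in the single-trajectory setting.
s, e = 0, 0
traj = []
for t in range(10):
    a = int(rng.integers(2))
    traj.append((observe(s, e), a))
    s, e = ctrl_step(s, a), exo_step(e)

# The representation-learning goal: an encoder mapping observations to the
# controllable latent state while discarding the exogenous factor. Here the
# ground-truth encoder is trivial because the observation is block-structured.
encoder = lambda obs: int(np.argmax(obs[:N_CTRL]))
```

In a realistic Ex-BMDP the observation would entangle the two factors (e.g. rendered pixels), so the encoder must be learned from a function class; the sample-complexity result quoted above says this can be done with cost depending on the controllable state space, the encoder class, and the exogenous chain's mixing time, but not on the exogenous state space's size.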