🤖 AI Summary
Traditional path signatures compress a trajectory into a single global representation, discarding temporal structure and therefore struggling to support decision-making tasks that require step-by-step responses. This work proposes the Incremental Signature Contribution (ISC) method, which decomposes truncated path signatures into a temporally ordered sequence of incremental components in tensor-algebra space, explicitly preserving their internal temporal evolution. Building on this decomposition, the authors introduce the ISCT model, the first architecture to achieve compatibility between path signatures and Transformers, enabling sequence models to effectively process trajectory representations that are both algebraically expressive and sensitive to instantaneous changes. Evaluated on offline reinforcement learning benchmarks including HalfCheetah, Walker2d, Hopper, and Maze2d, ISCT demonstrates strong robustness under delayed rewards and low-quality datasets.
📝 Abstract
Path signatures embed trajectories into the tensor algebra and constitute a universal, non-parametric representation of paths; in their standard form, however, they collapse temporal structure into a single global object, which limits their suitability for decision-making problems that require step-wise reactivity. We propose the Incremental Signature Contribution (ISC) method, which decomposes truncated path signatures into a temporally ordered sequence of elements of the tensor-algebra space, corresponding to the incremental contributions induced by successive path increments. This decomposition preserves the algebraic structure and expressivity of signatures while making their internal temporal evolution explicit, enabling signature-based representations to be processed by sequential models. In contrast to full signatures, ISC is inherently sensitive to instantaneous trajectory updates, which is critical for control tasks that demand responsiveness and stability. Building on this representation, we introduce the ISC-Transformer (ISCT), an offline reinforcement learning model that integrates ISC into a standard Transformer architecture without further architectural modification. We evaluate ISCT on HalfCheetah, Walker2d, Hopper, and Maze2d, including settings with delayed rewards and degraded datasets. The results demonstrate that the ISC method provides a theoretically grounded and practically effective approach to path processing for temporally sensitive control tasks.
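To make the decomposition concrete, here is a minimal sketch of the underlying idea: a depth-2 truncated signature of a piecewise-linear path can be built step by step via Chen's identity, and the per-step change in tensor-algebra coordinates yields a temporally ordered sequence of incremental contributions. This is an illustration under assumed definitions; the paper's exact ISC construction, the truncation depth, and all function names here are hypothetical, not taken from the source.

```python
import numpy as np

def segment_signature(delta):
    # Depth-2 signature of one linear segment with increment delta:
    # levels (1, delta, delta (x) delta / 2).
    return (np.float64(1.0), delta, np.outer(delta, delta) / 2.0)

def chen_product(a, b):
    # Chen's identity (tensor-algebra product) truncated at depth 2.
    # Level 0 terms are both 1, so cross terms simplify.
    return (a[0] * b[0],
            a[1] + b[1],
            a[2] + np.outer(a[1], b[1]) + b[2])

def incremental_contributions(path):
    # path: array of shape (T, d). Returns the full depth-2 signature
    # and an ISC-style ordered sequence of per-step contributions,
    # defined here (as an assumption) as the coordinate-wise difference
    # between consecutive running signatures.
    d = path.shape[1]
    sig = (np.float64(1.0), np.zeros(d), np.zeros((d, d)))
    contribs = []
    for t in range(1, len(path)):
        step = segment_signature(path[t] - path[t - 1])
        new_sig = chen_product(sig, step)
        contribs.append(tuple(n - o for n, o in zip(new_sig, sig)))
        sig = new_sig
    return sig, contribs
```

By construction the ordered contributions sum back to the full truncated signature, so no algebraic information is lost, while each element is tied to a specific time step and can be fed to a sequence model such as a Transformer.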