🤖 AI Summary
To address the weak generalization, high computational overhead, and unstable performance of Contextual Markov Decision Processes (CMDPs) under high-dimensional or unstructured contexts, this paper proposes an information-theoretic framework for contextual semantic compression. Methodologically, it leverages large language models to extract low-dimensional, semantically rich context summaries that enrich the state representation while preserving decision-critical information. Theoretically, it introduces the concept of *approximate context sufficiency* and establishes a formal regret bound for CMDPs grounded in a latency-entropy trade-off. Empirically, the approach achieves significant improvements in reward, success rate, and sample efficiency across diverse tasks, including discrete and continuous control, visual navigation, and recommendation, while reducing inference latency and memory footprint. These results demonstrate its effectiveness, scalability, and interpretability in resource-constrained settings.
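As a rough illustration of the pipeline the summary describes, here is a minimal sketch of state augmentation via LLM context summarization. It assumes a generic text-in/text-out LLM callable and uses a toy embedding; `summarize_context`, `embed`, and `augment_state` are hypothetical names for illustration, not the paper's API.

```python
import numpy as np
from typing import Callable


def summarize_context(raw_context: str, llm: Callable[[str], str]) -> str:
    """Compress a long, unstructured context into a short semantic summary.

    `llm` is any text-in/text-out model call (hypothetical stand-in here).
    The length cap in the prompt is one plausible knob for trading summary
    informativeness against inference latency.
    """
    prompt = (
        "Summarize the decision-relevant facts in the following context "
        "in at most 30 words, omitting redundant detail:\n\n" + raw_context
    )
    return llm(prompt)


def embed(text: str, dim: int = 16) -> np.ndarray:
    """Toy embedding, deterministic within a process (hash() is salted per run).

    A real system would use a sentence encoder instead.
    """
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


def augment_state(state: np.ndarray, raw_context: str,
                  llm: Callable[[str], str]) -> np.ndarray:
    """Concatenate the environment state with the embedded context summary."""
    summary = summarize_context(raw_context, llm)
    return np.concatenate([state, embed(summary)])


if __name__ == "__main__":
    # Stubbed LLM so the sketch runs without any external service.
    fake_llm = lambda prompt: "user prefers short routes; rain expected"
    s = np.zeros(4)
    s_aug = augment_state(s, "Long weather report ... user history ...", fake_llm)
    print(s_aug.shape)  # (20,): 4-dim state + 16-dim summary embedding
```

The augmented vector `s_aug` would then feed a standard RL policy in place of the raw context, which is how the compression reduces memory and per-step inference cost.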
📝 Abstract
Contextual Markov Decision Processes (CMDPs) offer a framework for sequential decision-making under external signals, but existing methods often fail to generalize in high-dimensional or unstructured contexts, incurring excessive computation and unstable performance. We propose an information-theoretic summarization approach that uses large language models (LLMs) to compress contextual inputs into low-dimensional, semantically rich summaries. These summaries augment the state representation by preserving decision-critical cues while discarding redundancy. Building on the notion of approximate context sufficiency, we provide, to our knowledge, the first regret bounds and a latency-entropy trade-off characterization for CMDPs. Our analysis clarifies how the informativeness of a summary trades off against its computational cost. Experiments across discrete, continuous, visual, and recommendation benchmarks show that our method outperforms raw-context and non-context baselines, improving reward, success rate, and sample efficiency while reducing latency and memory usage. These findings demonstrate that LLM-based summarization offers a scalable and interpretable solution for efficient decision-making in context-rich, resource-constrained environments.
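The abstract does not spell out the formal definitions, so the block below sketches one common way a sufficiency condition and the resulting regret decomposition are formalized in the state-abstraction literature. The symbols \( \phi \), \( \varepsilon \), and both inequalities are illustrative assumptions, not the paper's actual statements.

```latex
% Illustrative only. Let c be the raw context, \phi(c) its LLM summary,
% and (s, a) a state-action pair. One natural \varepsilon-sufficiency
% condition bounds the value lost by conditioning on the summary:
\[
  \sup_{s,a}\;\bigl|\, Q^{*}(s, a \mid c) - Q^{*}(s, a \mid \phi(c)) \,\bigr|
  \;\le\; \varepsilon .
\]
% Under such a condition, regret over horizon T typically splits into a
% standard learning term plus a compression-bias term growing with \varepsilon:
\[
  \mathrm{Regret}(T) \;\lesssim\;
  \underbrace{\widetilde{O}\bigl(\sqrt{T}\bigr)}_{\text{learning}}
  \;+\;
  \underbrace{O(\varepsilon T)}_{\text{compression bias}} .
\]
```

Read this way, the latency-entropy trade-off amounts to choosing how aggressively to compress: shorter summaries lower per-step latency but plausibly increase \( \varepsilon \) and hence the bias term.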