A Translation of Probabilistic Event Calculus into Markov Decision Processes

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Probabilistic Event Calculus (PEC) offers strong interpretability and expressive power for temporal narrative reasoning but lacks goal-directed inference mechanisms. Method: We propose the first formal translation framework from PEC to Markov Decision Processes (MDPs), introducing the concept of "action-taking situations" to preserve PEC's flexible action semantics while gaining decision-theoretic expressiveness. The translation retains PEC's interpretability while endowing it with MDP-based policy optimization and sequential planning. Contribution/Results: The framework supports temporal projection, goal-driven decision-making, and the mapping of learned policies back into human-readable PEC representations. By unifying logical narrative modeling with probabilistic decision theory, the approach bridges the gap between interpretable narrative reasoning and goal-oriented planning.

📝 Abstract
Probabilistic Event Calculus (PEC) is a logical framework for reasoning about actions and their effects in uncertain environments, which enables the representation of probabilistic narratives and computation of temporal projections. The PEC formalism offers significant advantages in interpretability and expressiveness for narrative reasoning. However, it lacks mechanisms for goal-directed reasoning. This paper bridges this gap by developing a formal translation of PEC domains into Markov Decision Processes (MDPs), introducing the concept of "action-taking situations" to preserve PEC's flexible action semantics. The resulting PEC-MDP formalism enables the extensive collection of algorithms and theoretical tools developed for MDPs to be applied to PEC's interpretable narrative domains. We demonstrate how the translation supports both temporal reasoning tasks and objective-driven planning, with methods for mapping learned policies back into human-readable PEC representations, maintaining interpretability while extending PEC's capabilities.
Problem

Research questions and friction points this paper addresses.

PEC lacks mechanisms for goal-directed reasoning
Translating PEC domains into MDPs so that existing MDP algorithms can be applied
Supporting interpretable temporal reasoning alongside objective-driven planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Translates PEC domains into MDPs to enable goal-directed reasoning
Introduces action-taking situations to preserve PEC's flexible action semantics
Maps learned MDP policies back into human-readable PEC representations
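The pipeline the paper describes (translated domain → MDP solution → readable policy) can be illustrated with a minimal sketch. This is not the paper's implementation: the toy states, actions, transition probabilities, and rewards below are invented placeholders standing in for a translated PEC domain, solved here with standard value iteration and rendered back as plain-language action recommendations.

```python
# Toy stand-in for a PEC domain after translation to an MDP.
# All names and numbers are illustrative, not from the paper.
states = ["Hungry", "Fed"]
actions = ["eat", "wait"]

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    "Hungry": {"eat": [("Fed", 0.9), ("Hungry", 0.1)],
               "wait": [("Hungry", 1.0)]},
    "Fed":    {"eat": [("Fed", 1.0)],
               "wait": [("Fed", 0.8), ("Hungry", 0.2)]},
}
R = {"Hungry": {"eat": 0.0, "wait": -1.0},
     "Fed":    {"eat": -0.1, "wait": 1.0}}

def value_iteration(gamma=0.9, tol=1e-6):
    """Standard value iteration over the toy MDP."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            q = [R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                 for a in actions]
            new_v = max(q)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            return V

def greedy_policy(V, gamma=0.9):
    """Pick the highest-Q action per state, giving a readable
    'in state S, do A' summary in the spirit of the paper's
    back-translation of policies into PEC representations."""
    return {s: max(actions,
                   key=lambda a: R[s][a] + gamma *
                   sum(p * V[s2] for s2, p in P[s][a]))
            for s in states}

V = value_iteration()
policy = greedy_policy(V)
for s, a in policy.items():
    print(f"In state {s}, take action '{a}'")
```

The point of the sketch is the round trip: once a PEC-style narrative domain is expressed as states, transitions, and rewards, the full MDP toolbox (here, value iteration) applies, and the resulting policy can be read off state by state rather than remaining an opaque value table.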