🤖 AI Summary
Formalizing information transparency, i.e., quantifying how far agents' states and actions are observable by other agents, remains challenging in stochastic multi-agent systems. Method: This paper introduces, for the first time, observability operators into Probabilistic Strategy Logic, enabling quantitative specification of how observable temporal properties are to individual agents. It proposes a hybrid semantic framework that integrates probabilistic computation tree logic (PCTL) with strategy logic, jointly modeling observational relations and probabilistic transition systems to formally capture transparency under incomplete information. Contribution/Results: The authors design a decidable model-checking algorithm for the resulting logic, establish an exponential-time complexity bound, and demonstrate its applicability to automated verification of transparency constraints in safety- and privacy-critical scenarios, thereby providing a formal foundation for reasoning about transparency in probabilistic multi-agent systems.
📝 Abstract
There has been considerable work on reasoning about the strategic ability of agents under imperfect information. However, existing logics such as Probabilistic Strategy Logic are unable to express properties relating to information transparency. Information transparency concerns the extent to which agents' actions and behaviours are observable by other agents. Reasoning about information transparency is useful in many domains including security, privacy, and decision-making. In this paper, we present a formal framework for reasoning about information transparency properties in stochastic multi-agent systems. We extend Probabilistic Strategy Logic with new observability operators that capture the degree of observability of temporal properties by agents. We show that the model checking problem for the resulting logic is decidable.
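To make the idea of an observability operator concrete, a transparency property might be written as below. The notation is a hypothetical sketch for illustration only, not the paper's actual syntax; the operator, threshold, and atomic propositions are all assumptions.

```latex
% Hypothetical notation (not the paper's own syntax):
% "agent a can determine, with probability at least 0.9,
%  whether the system eventually reaches a state satisfying 'leak'"
O_a^{\geq 0.9}\, \mathbf{F}\, \mathit{leak}

% Dually, a privacy constraint could bound observability from above:
% "agent b observes whether 'secret' holds globally
%  with probability at most 0.1"
O_b^{\leq 0.1}\, \mathbf{G}\, \mathit{secret}
```

Read this way, a single logic can express both transparency requirements (lower bounds on observability) and privacy requirements (upper bounds), which is what distinguishes such operators from the purely strategic and probabilistic operators of existing logics.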