🤖 AI Summary
This paper investigates the modal-logical foundations of information change in multi-agent systems, focusing on modeling and quantifying the relationship between simulation (which captures loss of knowledge) and refinement (which captures gain of knowledge). Addressing an imbalance in prior work, which emphasizes refinement while largely neglecting simulation, the paper establishes a logical duality between the two operations and introduces a "mutual factual ignorance" modality that formally characterizes the agents' epistemic state before any factual information has been acquired. Methodologically, it develops an extended multimodal logic equipped with a modularly composable system of reduction axioms, enabling coordinated reasoning about the simulation, refinement, and mutual factual ignorance operators; all proposed systems are proven decidable. Key contributions include: (i) a unified bidirectional dynamic characterization of knowledge increase and decrease; (ii) a formal definition and logical axiomatization of initial ignorance; and (iii) the first decidable axiomatic system supporting joint modeling of simulation and refinement.
📝 Abstract
Simulation and refinement are variations of the bisimulation relation: the former keeps only the atoms and forth clauses, the latter only the atoms and back clauses. Quantifying over simulations and refinements captures the effects of information change in a multi-agent system. In the case of quantification over refinements, we are looking at all the ways the agents in a system can become more informed. Similarly, in the case of quantification over simulations, we are dealing with all the ways the agents can become less informed, or, in other words, could have been less informed, since we are at liberty in how we interpret time in dynamic epistemic logic. While quantification over refinements has been well explored in the literature, quantification over simulations has received considerably less attention. In this paper, we explore the relationship between refinements and simulations. To this end, we also employ the notion of mutual factual ignorance, which allows us to capture the state of a model before the agents have learnt any factual information. In particular, we consider the extensions of multi-modal logic with the simulation and refinement modalities, as well as modalities for mutual factual ignorance. We provide reduction-based axiomatizations for several of the resulting logics, which are built by extending one another in a modular fashion.
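The three clauses (atoms, forth, back) can be sketched as membership checks on a candidate relation between two finite Kripke models. This is an illustrative encoding, not the paper's formalism: it assumes a single agent, dictionary-based models, and one of the direction conventions found in the literature.

```python
# Illustrative encoding (assumption, not the paper's): a model is a pair of
# dicts, val: world -> set of true atoms, R: world -> set of accessible worlds.
# Z is a candidate relation, a set of (world-in-model-1, world-in-model-2) pairs.

def atoms_ok(Z, val1, val2):
    """Atoms clause: related worlds satisfy the same propositional atoms."""
    return all(val1[w] == val2[v] for (w, v) in Z)

def forth_ok(Z, R1, R2):
    """Forth clause: every step w -> s in model 1 is matched by a
    step v -> t in model 2 with (s, t) in Z."""
    return all(
        any((s, t) in Z for t in R2.get(v, set()))
        for (w, v) in Z
        for s in R1.get(w, set())
    )

def back_ok(Z, R1, R2):
    """Back clause: every step v -> t in model 2 is matched by a
    step w -> s in model 1 with (s, t) in Z."""
    return all(
        any((s, t) in Z for s in R1.get(w, set()))
        for (w, v) in Z
        for t in R2.get(v, set())
    )

# Atoms + forth gives a simulation; atoms + back gives a refinement;
# all three clauses together give a bisimulation.
val1 = {"a": {"p"}, "b": {"q"}}
R1 = {"a": {"b"}}
val2 = {"x": {"p"}, "y": {"q"}, "z": set()}
R2 = {"x": {"y", "z"}}
Z = {("a", "x"), ("b", "y")}
```

With these toy models, `Z` satisfies atoms and forth (the step `a -> b` is matched by `x -> y`) but not back (the step `x -> z` has no match), so `Z` is a simulation but not a bisimulation; model 2 has an extra transition that model 1 cannot mirror.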