🤖 AI Summary
This paper investigates the decidability of timed opacity for Timed Automata (TAs), i.e., whether a system can conceal designated secret states from an attacker who observes timestamped actions. Methodologically, it establishes decidability boundaries across multiple restricted subclasses—including one-clock, one-action, and observable event-recording TAs—by leveraging zone-graph construction, fixed-point computation, and language inclusion checking. Key contributions include: (i) the first fine-grained decidability map for timed opacity; (ii) the introduction of a novel "bounded-observation attacker" model, in which the adversary observes only the first $N$ occurrences or $N$ timestamps of observable events, together with a complete decidability characterization and an algorithmic solution for opacity under this constraint; and (iii) a proof that timed opacity is decidable for all the natural TA subclasses considered except one-action TAs and one-clock TAs with $\varepsilon$-transitions, accompanied by effective decision procedures for the newly identified decidable cases.
📝 Abstract
In 2009, Franck Cassez showed that the timed opacity problem, in which an attacker can observe some actions with their timestamps and attempts to deduce information, is undecidable for timed automata (TAs). Moreover, he showed that the undecidability holds even for subclasses such as event-recording automata. In this article, we consider the same definition of opacity for several other subclasses of TAs: with restrictions on the number of clocks or actions, on the nature of time, or for a new subclass called observable event-recording automata. We show that decidability of opacity can mostly be retrieved, except for one-action TAs and for one-clock TAs with $\varepsilon$-transitions, for which undecidability remains. We then exhibit a new decidable subclass in which the number of observations made by the attacker is limited.