# A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Electrical Engineering and Computer ...

A plausible remedy for this would, informally, be the following: for any path beginning at a given state, discount any branch off that path that is not a suffix of another path beginning at that same state. And indeed, this would work for this particular case. But there are more problems.

Consider two {l1, l2}-labelled execution systems, whose executions are as depicted in the following left and right frames respectively:

CHAPTER 3. EXECUTION SYSTEMS 59

Then s1 and s1′ are bisimilar among the two systems. But intuitively, there is a difference in branching potential between the two states: the two executions starting from s1 diverge right away at s1, with steps that carry identical labels, whereas those starting from s1′ diverge after the first step, at s2′, with steps that carry different labels. Of course, the explanation here is that there is no execution starting from s2′, and so, conceptually, the choice between the diverging steps is already made at s1′. But then, what is the point of having the two executions share the state s2′?

By now, the reader should begin to suspect what the source of our problems is: what we have called “state” in our systems does not really behave as such. In a type of system that is supposed to serve as a modelling device for processes of some kind, it is essential that “the future behavior depends only upon the current state, and not upon how that state was reached”. And this is not always the case here. What we need to do is constrain the structure of our systems so that it is.

Of course, this idea is not new. The quote above is from [34, p. 176], where Lamport required that the set of paths in a path structure be suffix closed, in the sense that for any path in the set, any suffix of that path is again a path in the set. It was later observed in [20] that this is not enough: one must also require that the set of paths be fusion closed, in the sense that for any prefix of a path in the set, and any suffix of another path in the set, if the former ends at the state at which the latter begins, then their fusion at that state is again a path in the set (see [52]). And apparently, it was Abrahamson, in [1], who first considered path structures that satisfied both requirements (see [16]).
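As a concrete aid, the two closure conditions are easy to state over finite sets of finite paths. The sketch below is ours, not from the text: a path is encoded as a nonempty tuple of states, ignoring labels, and a path structure as a Python set of such tuples.

```python
# A finite sketch of the two closure conditions (encoding ours: a path
# is a nonempty tuple of states, a path structure a set of such tuples).

def suffix_closed(paths):
    """Every (nonempty) suffix of a path is again a path."""
    return all(p[i:] in paths for p in paths for i in range(len(p)))

def fusion_closed(paths):
    """If a prefix of p ends at the state where a suffix of q begins,
    their fusion at that shared state is again a path."""
    return all(
        p[:i] + q[j + 1:] in paths
        for p in paths for q in paths
        for i in range(1, len(p) + 1)
        for j in range(len(q))
        if p[i - 1] == q[j]
    )

# A closed structure, and two structures violating one condition each:
assert suffix_closed({("s1", "s2"), ("s2",)})
assert fusion_closed({("s1", "s2"), ("s2",)})
assert not suffix_closed({("s1", "s2", "s3")})        # ("s2","s3") missing
assert not fusion_closed({("a", "c", "x"), ("b", "c", "y")})
```

In the last line, the prefix ("a", "c") ends where the suffix ("c", "y") begins, but their fusion ("a", "c", "y") is not in the set.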

We now adapt these requirements to our own setting.

We say that ⟨S, E⟩ is Abrahamson if and only if the following are true:

In an Abrahamson system, there is a clear notion of a “possible next step” relation, which induces the construction of an associated, or better, underlying labelled transition system.

From a mathematical standpoint, this construction makes sense for a non-Abrahamson system as well, and is most conveniently carried out on the coalgebra side of the theory.

Assume a class C.

We write η(C) for a class function from Pow Seq(L × C) to Pow(L × C) such that for every S ∈ Pow Seq(L × C), η(C)(S) = {⟨l, c⟩ ∈ L × C | ⟨l, c⟩ is the first element of some sequence in S}.

Our choice of notation here is not arbitrary. We think of η as an operator that assigns to every class C a class function from its image under Pow ◦ Seq ◦ (L × Id) to its image under Pow ◦ (L × Id). And what is interesting about this operator is that for every class function f : C1 → C2,

In the language of category theory, this makes η a natural transformation from Pow ◦ Seq ◦ (L × Id) to Pow ◦ (L × Id).
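For intuition, the naturality square can be checked on a finite fragment. The encoding below is our own: finite sets stand in for classes, tuples over ⟨label, element⟩ pairs for sequences, and η keeps the possible first steps.

```python
# A finite sketch of the naturality of eta (encoding ours): finite sets
# stand in for classes, tuples of (label, element) pairs for sequences.

def eta(S):
    """Keep only the first pair, if any, of each sequence in S."""
    return frozenset(seq[0] for seq in S if seq)

def lift_pow_seq(f, S):
    """Action of Pow . Seq . (L x Id) on a class function f."""
    return frozenset(tuple((l, f(c)) for (l, c) in seq) for seq in S)

def lift_pow(f, T):
    """Action of Pow . (L x Id) on a class function f."""
    return frozenset((l, f(c)) for (l, c) in T)

# The naturality square commutes on a small example:
f = {"a": 1, "b": 2}.get
S = frozenset({(("l1", "a"), ("l2", "b")), ()})
assert lift_pow(f, eta(S)) == eta(lift_pow_seq(f, S))
```

Taking the first steps and then renaming along f agrees with renaming first and then taking the first steps, which is exactly the commuting square of the definition.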

The concept of natural transformation is of great importance in the study and application of category theory. It is deﬁned using the same property as above, only generalized to a pair of arbitrary functors with a common domain category and a common codomain one. Pierce helps us visualize the essence of the concept by inviting us to “imagine ‘sliding’ the picture deﬁned by [the ﬁrst functor] onto the picture deﬁned by [the second one]” (see [47, p. 41]).

The reason why it is of interest to us here that η is a natural transformation is a theorem by Rutten, according to which every natural transformation ν from an endofunctor F1 on Class to an endofunctor F2 on Class induces a functor from F1 -Coalg to F2 -Coalg that assigns to every F1 -coalgebra ⟨C, γ⟩ the F2 -coalgebra ⟨C, ν(C) ◦ γ⟩, and to every homomorphism h from an F1 -coalgebra ⟨C1, γ1⟩ to an F1 -coalgebra ⟨C2, γ2⟩ that same class function h, which is now a homomorphism from the F2 -coalgebra ⟨C1, ν(C1) ◦ γ1⟩ to the F2 -coalgebra ⟨C2, ν(C2) ◦ γ2⟩ (see [56, thm. 15.1]). In other words, the induced functor preserves homomorphisms, and thus, by Theorem 2.2.10, bisimulations too.

In our case, the functor induced by η is a forgetful functor, which, informally, keeps only the first step, if any, from any execution starting from any state, and discards the rest.

The following is immediate from Rutten’s theorem, but a more direct proof would require only little extra work:

Proposition 3.3.1. If h is a homomorphism from ⟨C1, ε1⟩ to ⟨C2, ε2⟩, then h is a homomorphism from the L-labelled transition coalgebra ⟨C1, η(C1) ◦ ε1⟩ to the L-labelled transition coalgebra ⟨C2, η(C2) ◦ ε2⟩.

The following is now immediate from Theorem 2.2.10 and Proposition 3.3.1.

Proposition 3.3.2. If B is a bisimulation between ⟨C1, ε1⟩ and ⟨C2, ε2⟩, then B is a bisimulation between the L-labelled transition coalgebras ⟨C1, η(C1) ◦ ε1⟩ and ⟨C2, η(C2) ◦ ε2⟩.

Of course, we can translate all this back to the system side of the theory.

Assume a binary relation E : S ↔ Seq(L × S).

We write trans E for a binary relation between S and L × S such that for any s ∈ S and any ⟨l, s′⟩ ∈ L × S, s (trans E) ⟨l, s′⟩ if and only if ⟨l, s′⟩ is the first element of some sequence σ such that s E σ.

The following is trivial:

Proposition 3.3.3. trans E = rel(η(S) ◦ fun E).

The following is now immediate from Propositions 2.2.1(b), 2.2.15, 3.2.4, 3.3.2, and 3.3.3:

Proposition 3.3.4. If B is a bisimulation between ⟨S1, E1⟩ and ⟨S2, E2⟩, then B is a bisimulation between the L-labelled transition systems ⟨S1, trans E1⟩ and ⟨S2, trans E2⟩.
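To make the construction tangible, here is a small sketch in an encoding of our own: an execution system is a dict from states to sets of executions, an execution being a tuple of ⟨label, next state⟩ steps; trans keeps the possible first steps, and is_bisimulation checks the usual back-and-forth condition on the resulting labelled transition systems.

```python
# Encoding ours: E maps each state to its set of executions, each
# execution a tuple of (label, next_state) steps.

def trans(E):
    """Underlying labelled transition system: keep only first steps."""
    return {s: frozenset(seq[0] for seq in seqs if seq)
            for s, seqs in E.items()}

def is_bisimulation(B, T1, T2):
    """Back-and-forth condition between labelled transition systems."""
    for s1, s2 in B:
        if not all(any(m == l and (t1, t2) in B for (m, t2) in T2[s2])
                   for (l, t1) in T1[s1]):
            return False
        if not all(any(m == l and (t1, t2) in B for (m, t1) in T1[s1])
                   for (l, t2) in T2[s2]):
            return False
    return True

# Two isomorphic one-step systems and the evident relation between them:
E1 = {"s1": {(("l1", "s2"),)}, "s2": set()}
E2 = {"t1": {(("l1", "t2"),)}, "t2": set()}
B = {("s1", "t1"), ("s2", "t2")}
assert is_bisimulation(B, trans(E1), trans(E2))
```

This is only the easy direction of the proposition, of course: a relation that is a bisimulation between the execution systems passes the check on the constructed transition systems.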

First, suppose that ⟨S, E⟩ and ⟨S′, E′⟩ are two {l}-labelled execution systems, whose executions are as depicted in the following left and right frames respectively:

Then s1 and s1′ are bisimilar among the {l}-labelled transition systems ⟨S, trans E⟩ and ⟨S′, trans E′⟩, but not among ⟨S, E⟩ and ⟨S′, E′⟩.

The problem is easy to spot here. The two systems have one execution each. But whereas the execution of the first system has only one step, the execution of the second has two. And that second step, which is the cause for s1 and s1′ not being bisimilar among the two systems, is dropped during the labelled transition system construction.
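Concretely (encoding ours: an execution system is a dict from states to sets of executions, each execution a tuple of ⟨label, next state⟩ steps, primes rendered as s1p and so on), the construction sends both systems to the same one-step picture even though their executions differ in length:

```python
def trans(E):
    """Keep only the possible first steps of each state's executions."""
    return {s: frozenset(seq[0] for seq in seqs if seq)
            for s, seqs in E.items()}

# One execution each: a single step on the left, two steps on the right.
E  = {"s1": {(("l", "s2"),)}, "s2": set()}
Ep = {"s1p": {(("l", "s2p"), ("l", "s3p"))}, "s2p": set(), "s3p": set()}

# From s1 and s1p the constructed systems offer the same single l-step,
# while the second step of the right-hand execution has been dropped:
assert trans(E)["s1"] == frozenset({("l", "s2")})
assert trans(Ep)["s1p"] == frozenset({("l", "s2p")})
assert {len(q) for q in E["s1"]} != {len(q) for q in Ep["s1p"]}
```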

Now suppose that ⟨S, E⟩ and ⟨S′, E′⟩ are two {l1, l2, l3}-labelled execution systems, whose executions are as depicted in the following left and right frames respectively:

Then s1 and s1′ are bisimilar among the now {l1, l2, l3}-labelled transition systems ⟨S, trans E⟩ and ⟨S′, trans E′⟩, but not among ⟨S, E⟩ and ⟨S′, E′⟩.

Here the problem is of a different nature. Every step of every execution is accounted for in the constructed labelled transition systems. However, the two longer executions, starting from s1 and s1′ respectively, disagree on their second step, and that disagreement is masked by the agreement of executions starting from s2 and s2′ respectively.

These two examples were specially chosen to target the two defining clauses of the Abrahamson property. Specifically, and informally, the systems in the first example are not suffix closed, thus violating clause (i) of the property, whereas those in the second are not fusion closed, thus violating clause (ii). Overall, none of them is Abrahamson. And since our construction was based on the idea of a “possible next step” relation, which, in the case of a non-Abrahamson system, is a conceptually ambiguous notion, it is no surprise that non-bisimilar states turn bisimilar in the constructed systems.

With Abrahamson systems, things get much more interesting. In the rest of our examples, we shall focus on such systems. And as afforded by such systems, we shall communicate their structure more casually, simply drawing a diagram of the underlying labelled transition system and describing the set of paths in that diagram that correspond to their executions.

Consider then the {l1, l2}-labelled transition system, with l1 and l2 different, portrayed in the following diagram:

[Diagram: a single state s with two self-loops, labelled l1 and l2.]

One {l1, l2}-labelled execution system lying over this labelled transition system is the one whose executions correspond to all infinite paths in the diagram. Another is the one whose executions correspond to those infinite paths that go through each of the two loops infinitely often. And of course, s is not bisimilar with itself among the two.

We may think of each of the two {l1, l2}-labelled execution systems in this example as a specification of a scheduling policy between two processes, forever iterating over l1 and l2 respectively, on a single processing unit. Under the first policy, the scheduler is only required to guarantee progress of execution, simply picking at random one process at a time. Under the second, it is further required to be fair, taking care that there is no point in time after which a process is forever neglected. But whereas its behaviour in the first case is completely specified by the underlying {l1, l2}-labelled transition system, in the second case, it cannot be specified by any {l1, l2}-labelled transition system alone.

Besides demonstrating the failure of the converse of Proposition 3.3.4 for Abrahamson systems, this example attempts to display the increase in expressive power and branching complexity that moving from a labelled transition to a labelled execution system can bring.

But it does so inadequately. For one need not really move to a labelled execution system to specify the behaviour of the scheduler under that second policy. One can just augment the given labelled transition system with the set of all infinite sequences over {l1, l2} corresponding to a fair interleaving of the two processes. And in fact, the concept of bisimulation between labelled transition systems can be generalized to account for this kind of augmentation by simply adding a third clause to Definition 2.1.3 that tests for inclusion between the sets of “admissible” sequences of labels associated with each state. This gives rise to the lesser-known concept of fortification equivalence, one of the alternative approaches to the semantics of finite delay considered by Milner in [38], and a perfectly adequate approach to the specification of the two scheduling policies in our example. What we want is another example that will expose the shortcomings of this type of approach, and vindicate our present venture.

Consider then the {l1, l2}-labelled transition system, with l1 and l2 different, portrayed in the following diagram:

The first {l1, l2}-labelled execution system that we wish to consider here is the unique Abrahamson system whose executions starting from s1 correspond to all maximal paths in this diagram. The second is the one whose executions are all the executions of the first, except the single infinite execution stuttering around s1. And because of this exception, s1 is not bisimilar with itself among the two systems.

This beautiful example is from [4], where it was used to attack precisely the type of approach discussed above. Here, it is perhaps convenient to think of the two systems as modelling the behaviour of two distinct processes, both initialized at s1. The first process will either loop around s1 forever, or iterate through it for a finite, indeterminate number of times before progressing to s2. From there on, a single indeterminate choice will decide its fate. The second process, on the other hand, is not allowed to loop around s1 forever. It must eventually advance to s2, from where on it behaves just like the first one. What sets the behaviour of the two processes apart is, of course, the infinite stuttering around s1, permitted for the first process, but not the second. However, this is something that cannot be determined by the sequences of actions that the two processes perform in the course of their executions, for the trace of that infinite stuttering is matched by that of every infinite execution that eventually loops around s2. And yet the two processes ought to be distinguished. For during that infinite stuttering, the first process may always choose to branch off to a state from which it can perform l2, whereas, in every execution having that trace, the second must eventually reach a state from which it cannot ever do so.

With respect to the failure of the converse of Proposition 3.3.4, both this and the previous example point at the same problem: the existence of an infinite path in the diagram that does not correspond to any execution of a system, but whose every finite prefix is a prefix of another path that does.

This too is something that has already come up in the investigation of path structures in temporal logic.