# Oracle Semantics

Aquinas Hobor

A Dissertation Presented to the Faculty of Princeton University in Candidacy for the Degree of Doctor of Philosophy

The problem with unrestricted implication is that it is not always necessary in the sense given in section 7.3.3, even when it is applied to necessary predicates.

Since implication is inherently contravariant in its antecedent, it interacts badly with the stratiﬁed construction.

## CHAPTER 7. A MODAL SUBSTRUCTURAL LOGIC

Of course, for certain P and Q, the unrestricted implication P ⇒ Q is necessary. In general, as long as P avoids mentioning the stratified construction and Q is necessary, P ⇒ Q will be necessary. Accordingly, we sometimes use unrestricted implication in the proof when these conditions are met. The assertion ¬P is defined in terms of unrestricted implication and is likewise dangerous to use.

One can apply the necessity operator to an implication to make it necessary (recall that for all P, the result of applying the operator is necessary), but this can be quite problematic. If the implication P ⇒ Q is not necessary to begin with, then the meaning of the necessitated implication is extremely difficult to understand; in many cases it becomes false, but depending on P and Q it might exhibit other behavior.

When guarded by this operator, the predicates P and Q will not be able to expose wild terms in the underlying stratification.

Translated into informal language, the unrestricted implication P ⇒ Q means, “If P holds on world (ρ, φ, m), then Q holds on world (ρ, φ, m).” The trouble is that P ⇒ Q says nothing about the relationship between P and Q as the resource map φ becomes more approximate.

In contrast, the subset implication means, “For this and any more approximate resource map, whenever P holds, Q will hold.” Often this is exactly what is needed to express a desired property, and it plays nicely with the model.
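The contrast between the two informal readings above can be sketched semantically. The satisfaction notation here is assumed for illustration (writing φ′ ⊑ φ for “φ′ is the same as, or more approximate than, φ”) and is not the dissertation's own definition:

```latex
(\rho,\phi,m) \models P \Rightarrow Q
  \quad\text{iff}\quad
  \big[(\rho,\phi,m) \models P\big] \rightarrow \big[(\rho,\phi,m) \models Q\big]
\\[1ex]
(\rho,\phi,m) \models P \subseteq Q
  \quad\text{iff}\quad
  \forall \phi' \sqsubseteq \phi.\;
  \big[(\rho,\phi',m) \models P\big] \rightarrow \big[(\rho,\phi',m) \models Q\big]
```

The quantification over all more approximate φ′ in the second form is exactly what makes subset implication robust under approximation, where the first form is not.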

We take advantage of this insight when we invert the YES pseudoconstructor in section 7.3.10.

7.3.8 Extensionality, validity, precision, and tightness

In section 4.7.1 we presented the notions of closure to local variables, validity, precision, and tightness, and informally explained the related operators close, validly, precisely, and tightly. We are at last ready to provide their models, which we do in ﬁgure 7.11.

The extensionally P operator is similar in some ways to the ! P operator in the sense that they both quantify over memories and thereby restrict the ability of P to depend on memory. However, while the ! P operator does not allow P to use any part of memory, extensionally P does allow P to use certain parts of memory.


The problem is that there is no restriction on φ to ensure that memory location 3 can be accessed. This is in contrast to our definition of maps-to in figure 7.7, where we are very careful to place restrictions on the resource map φ. Predicates that are well-behaved in this sense are called extensional. The extensionally P operator forces predicates to be extensional; if P is already extensional, then extensionally P is equivalent to P.

In other words, a predicate is valid when it ignores memory outside its resource map and is well-behaved under the approximation operation.

In section 4.7.1 we introduced the idea of a precise assertion. Recall that an assertion P is precise if, for any predicate Q, the separating conjunction P ∗ Q can divide resources in only a single way. The precisely P operator expresses this idea and forces a predicate to be precise in a way that is well-behaved under the approximation operation; if P is already precise, then precisely P is equivalent to P.

A sharp-eyed reader may notice the use of the dangerous implication operator in the definition of precisely, but since it is guarded by the surrounding operators, it is safe, and precisely is necessary.

A predicate is tight if it is valid, closed, and precise, and we define the tightly P operator as validly close precisely P. For all P, tightly P is tight. If P is tight, then tightly P is equivalent to P.

If P is not tight then the meaning of tightly P depends on the exact nature of P, but is likely to be equal to false (which is very tight).

## 7.3. REASONING ABOUT THE MODEL WITH A MODAL SUBSTRUCTURAL LOGIC

7.3.9 Higher-order recursion

Natural invariants frequently have a recursive structure; for example, the invariant required for a linked list where each lock guards another linked list (except for the last lock, which guards nil). We also used the recursion operator µ to define the example program’s resource invariant in section 4.9.1, where it helped provide a natural way to satisfy the precondition of the unlock CSL rule. In our work we extend the normal recursion operator µ to a higher-order version µHO, which will be used to define our Hoare tuple in section 10.1.3.

The indexed models of Appel et al. [AM01] and Ahmed et al. [AAV02, AAV03, Ahm04] and the modal model of Appel et al. [AMRV07] all supported recursion. We employ a hybrid approach: we follow Appel et al. [AMRV07] by defining contractiveness using the approximately operator, but our definitions of the recursion operator and our proofs of the fold-unfold rules are closer to those in Appel et al. [AM01]. Our definitions are given in figure 7.12.

Our semantic recursion operator is more powerful than the one defined in the indexed and modal models because it is higher-order. A standard first-order recursion operator µ has the type

µ : (predicate → predicate) → predicate.

In contrast, our semantic higher-order recursion operator µHO is parameterized by an extra type parameter α:

µHO : Λα. ((α → predicate) → (α → predicate)) → α → predicate.

This more powerful recursion operator is used to define the Hoare tuple in chapter 10. Because the higher-order recursion operator µHO is strictly more powerful than the first-order recursion operator µ, we can define µ in terms of µHO. Still, in many situations, such as defining the resource invariant in section 4.9.1, µ is powerful enough, and since it is simpler to use we include it as well.
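As a rough intuition for how a step-indexed µHO can behave, here is a small executable sketch in Python. Everything in it (the encoding of predicates as functions of an approximation level and a world, the names `mu_ho` and `mu`) is an illustrative assumption, not the dissertation's Coq definition; it relies on contractiveness so that the truth value at level n is settled after n + 1 unfoldings from the everywhere-false family.

```python
# Sketch only: predicates are modeled as functions (level, world) -> bool.
# These names and this encoding are assumptions for illustration.

def mu_ho(f):
    """Recursive predicate family a -> predicate defined by functional f.

    f maps a family (a -> predicate) to a family; when f is contractive,
    the value at approximation level n is fixed after n + 1 unfoldings,
    so we unfold n + 1 times starting from the everywhere-false family.
    """
    def rec(a):
        def pred(n, w):
            fam = lambda _a: (lambda _n, _w: False)  # bottom family
            for _ in range(n + 1):
                fam = f(fam)                         # one unfolding
            return fam(a)(n, w)
        return pred
    return rec

def mu(g):
    """First-order mu, obtained from mu_ho by taking alpha = unit (None)."""
    h = lambda fam: (lambda _a: g(lambda n, w: fam(None)(n, w)))
    return mu_ho(h)(None)
```

Taking α to be the unit type (here `None`) recovers the first-order µ, mirroring the claim that µ is definable in terms of µHO.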

The rest of the proof does not see the definition of µHO, which is a generalization of the first-order one found in Appel et al. [AM01]. Instead, the rest of the proof uses the fold-unfold rule: for contractive f, µHO f a = f (µHO f) a.

Contractiveness is a requirement for deﬁning recursive equations in many contexts; here we build on the elegant deﬁnitions by Appel et al. [AMRV07]. A function f is ﬁrst-order contractive if for all predicates P and Q that are approximately equivalent, f P is approximately equivalent to f Q. Higher-order contractiveness is a generalization of this idea, where we universally quantify over the additional parameter a : α. As indicated by the second deﬁnition of contractive in ﬁgure 7.12, it is also possible to deﬁne ﬁrst-order contractiveness in terms of higher-order contractiveness.
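In assumed notation (with ≅ standing for the approximate equivalence induced by the approximately operator), the two notions of contractiveness just described read roughly:

```latex
\mathrm{contractive}(f) \;\triangleq\;
  \forall P,Q.\; P \cong Q \;\rightarrow\; f\,P \cong f\,Q
\\[1ex]
\mathrm{contractive}_{\mathrm{HO}}(f) \;\triangleq\;
  \forall F,G.\; \big(\forall a\!:\!\alpha.\; F\,a \cong G\,a\big)
  \;\rightarrow\; \big(\forall a\!:\!\alpha.\; f\,F\,a \cong f\,G\,a\big)
```

Instantiating α with the unit type collapses the higher-order definition to the first-order one, matching the remark about the second definition of contractive in figure 7.12.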

The proof of the higher-order fold-unfold rule roughly follows the corresponding first-order proof in Appel et al. [AM01].

One of the advantages of giving this characterization is that it demonstrates the power of expressing ideas in the modal logic, since it is clearly more compact than the previous one. It is still not quite ideal, however, since it still requires exposing the private level operator.

The definition calls the YES pseudoconstructor, passing the φ in the world it is given as its first argument.

YES, in turn, then calls stratify to produce invariants stratiﬁed to level φ.

In other words, it calls stratify on the elements of the list P to produce a list of stratified invariants of level φ.

The way we express inversion in the logic is to determine the necessary and sufficient conditions for concluding that two applications of the YES pseudoconstructor are equal; roughly, this requires that the underlying predicate lists P and P′ be approximately equal.


7.4 Conclusions

We have now provided a sound definition for resource and rmap using a stratification technique. The stratified model is hidden behind a cleaner dependent model, which is in turn hidden behind a modal substructural logic. By reasoning about the underlying models using this logic we can express properties cleanly and reason about them more simply than if we manipulated the models directly.

This logic forms the model for the assertions of CSL, which can then utilize the logic’s full expressive power. The model for the Hoare tuple itself will be deferred until chapter 10, since it depends ﬁrst on the concurrent operational semantics presented in chapter 8 and oracle semantics presented in chapter 9.

Chapter 8: Concurrent Operational Semantics

In chapter 3 we introduced Concurrent C minor, and in chapter 5 we gave it a formal erased concurrent operational semantics. The purpose of this chapter is to give it an unerased concurrent operational semantics.

The concurrent operational semantics does additional bookkeeping and is therefore easy to use for metatheoretical reasoning about concurrent features such as locking a lock. It is not, however, easy to use for metatheoretical reasoning about the sequential features of Concurrent C minor. In chapter 9, we define an oracular semantics for Concurrent C minor that allows for straightforward metatheoretical reasoning about the sequential features of Concurrent C minor.

(Normally we leave off the word unerased and just say the concurrent operational semantics.)

In section 8.1 we outline the parts that make up the concurrent operational semantics. In section 8.2 we define the sequential submachine, which executes sequential code. In section 8.3 we define the state of a concurrent computation, and in section 8.4 we give a set of consistency properties for a concurrent machine state. In section 8.5 we define the concurrent step relation. Finally, in section 8.6, we argue that our concurrent operational semantics is a reasonable abstraction of a real machine by showing that it is a conservative approximation of the erased concurrent operational semantics defined in section 5.2.

The concurrent operational semantics has several distinct parts. The ﬁrst, called the “sequential submachine”, executes all of the statements that do not depend on other threads, such as call, store, and loop.

We isolate all the properties of the sequential step relation on which we rely using the interface discussed in section 6.3.1. The sequential submachine is thus “resource map aware”, meaning that it gets stuck if it attempts to access memory without the correct permission. In this way we support half of our modularity principle by hiding the complexities of sequential control- and data-ﬂow from concurrent metatheoretical reasoning; as explained in chapter 9, we hide the complexities of concurrent computation from sequential metatheoretical reasoning using an oracle semantics.

Portions of this chapter have been published before as [HAZ08a] and [HAZ08b].

We combine the local states of many sequential submachines to get threads and then add a schedule, lock pool, alloc pool, function pool, and memory to get a concurrent machine state. Our concurrent state has a complex set of consistency requirements that ensure that it is well-formed.

We deﬁne a concurrent step relation that transforms concurrent states into concurrent states in the context of a program Ψ. When a thread wishes to execute a sequential instruction, the concurrent step relation uses the sequential submachine. Fully-concurrent instructions such as lock are handled by the concurrent step relation directly. Our concurrent step relation has a number of unusual features, including determinism, coroutine interleaving, and nonconstructive semantics. We conclude by arguing why these features are reasonable abstractions for reasoning about real machines.
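To make the determinism and coroutine-style interleaving concrete, here is a toy sketch. The representation (a schedule as an explicit list of thread ids, threads as lists of opaque instructions, the name `concurrent_step`) is assumed for illustration and is far simpler than the machine defined in this chapter.

```python
# Toy sketch: a deterministic, schedule-driven concurrent step.
# The next thread to run is read off an explicit schedule, so the machine
# interleaves like a coroutine system rather than preemptively.

def concurrent_step(schedule, threads):
    """Run one instruction of the thread named by the head of the schedule.

    Returns (remaining schedule, updated thread pool), or None when the
    machine cannot step (empty schedule, or scheduled thread has no work).
    """
    if not schedule:
        return None
    tid, rest = schedule[0], schedule[1:]
    if not threads.get(tid):
        return None                         # stuck: nothing to run
    threads = {k: list(v) for k, v in threads.items()}
    threads[tid].pop(0)                     # execute one instruction
    return rest, threads
```

Because the schedule fully determines which thread steps next, the relation sketched here is a function of its input, echoing the determinism of the real step relation.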

8.2 Sequential submachine

The sequential submachine is the part of the concurrent operational semantics that knows how to execute sequential instructions. The two pseudoconcurrent statements, make lock and free lock, do not interact with the other threads and so are executed by the sequential submachine as well. At the fully concurrent statements (lock, unlock, and fork) the sequential submachine gets stuck.

We build it by extending the core semantics of C minor with an extension that knows how to execute the pseudoconcurrent instructions.

## CHAPTER 8. CONCURRENT OPERATIONAL SEMANTICS

Figure 8.1: Simplified subset of the sequential step relation

When we defined the erased machine in chapter 5 we provided the erased semantics for three sequential C minor statements in figure 5.1. For comparison, we give the unerased semantics for those core sequential instructions in figure 8.1. The semantics of these statements have been significantly simplified in this presentation due to the removal of nonlocal exits, stack-allocated variables, and function return values.

The unerased sequential semantics are very similar to the erased sequential semantics. We have added a resource map φ to the world w. The sequence rule does not use it, but both the call rule and the assignment rule do. All the call rule does to φ is age it once; this is required for the semantic model of the Hoare tuple for reasons explained in section 10.2.2. The assignment rule uses φ to check that the location v1 is writable, that is, that the memory location v1 is fully owned by the resource map φ. It is easy to show that the unerased sequential semantics is a conservative approximation of the erased sequential semantics.

Proof. Each case in the unerased sequential relation in figure 8.1 has a corresponding case in the erased sequential relation in figure 5.1 with a subset of the premises.
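The "resource map aware" discipline of the assignment rule can be illustrated with a toy step function. The names (`store_step`, `FULL`) and the encoding of φ as a dictionary of ownership shares are assumptions for illustration, not the dissertation's definitions.

```python
# Toy sketch: an unerased store step that consults a resource map phi and
# gets stuck (returns None) unless the location is fully owned.

FULL = 1.0  # full ownership share; the concrete share model is assumed

def store_step(phi, mem, addr, value):
    """One assignment step of a resource-map-aware submachine."""
    if phi.get(addr) != FULL:
        return None           # stuck: insufficient permission
    mem = dict(mem)
    mem[addr] = value         # the erased machine would always do this
    return mem
```

The erased machine corresponds to dropping the permission check, which is why every unerased step is also an erased step: the unerased rule simply carries the extra premise.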