Oracle Semantics. Aquinas Hobor. A Dissertation Presented to the Faculty of Princeton University in Candidacy for the Degree of Doctor of Philosophy.
Resources flow from the current thread into the lock (for unlock) or into the child thread (for fork). The precondition of the fork rule requires a function e with precondition P and postcondition Q. Just as for unlock, we apply an operator to force the precondition to have the desired property, in this case only precision. (In the Coq development, precision is guaranteed with a side condition; the presentation here is equivalent and more consistent with the other rules.)
Since the precondition P is a map from the arguments e to an assertion, the precondition of the fork rule requires that the assertion precisely P(e) hold. Just as in the case of unlock, if P(e) is precise, then precisely P(e) = P(e); otherwise it is equal to false, meaning that the fork statement in question cannot be proven safe using CSL. After the fork statement, the parent thread loses the assertion precisely P(e); the child thread then starts with it.
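Schematically, the rule described above can be rendered as follows. This is an illustrative transcription of the prose, not the thesis's exact rule: F stands for an arbitrary frame, and the notation for the function specification is assumed.

```latex
% Illustrative sketch of the fork rule as described in the prose:
% e is a function with precondition P and postcondition Q; the parent
% gives up (precisely P)(e) at the fork, and the child starts with it.
\frac{\Gamma \vdash e : \{P\}\{Q\}}
     {\Gamma \vdash \{(\textsf{precisely}\; P)(\vec{e}) \,*\, F\}\;\;
        \texttt{fork}\; e(\vec{e})\;\;
        \{F\}}
```

The use of precisely (rather than a bare P) is what guarantees that the resources transferred to the child are uniquely determined, mirroring the treatment of lock invariants in the unlock rule.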
4.9 Applying CSL to the example
We demonstrate the usefulness of our CSL by applying it to critical parts of the example program from section 3.2.
4.9.1 The resource invariant R(l)
The key resource invariant is given in figure 4.2. It is divided into two parts: S(l, P) and R(l). In effect, S(l, P) is specialized to each lock,
while the R(l) definition is boilerplate. The convention used when defining S(l, P) is that l is the address of the lock and P is the lock's invariant, which will include S(l, P). Because we are using the recursion operator µ to form resource invariants, S(l, P) must be contractive in P; as long as P is used only inside a lock or hold assertion, this requirement is easily satisfied. The formal definition of contractiveness is technical and is deferred until section 7.3.9.
In line 1, S(l, P) implies that the locations l + 1 and l + 2 are fully owned and point to the same value. In line 2, S(l, P) also implies that the location l + 3 is owned and is being used as a signal to communicate that the child thread is done processing. While the child is working, l + 3 is set to 1 to indicate that it is continuing to compute. When the child is done, it sets l + 3 to 0 and gives up the location l, which is a lock with invariant P (fulfilling the convention for the boilerplate), and which is partially owned with the left half of the full share. This allows the child to return all of its resources to its parent cleanly, and thus exit holding
no resources.
S(l, P) is contractive in P since P is only used inside a lock invariant. S(l, P) is precise since the separating conjunction of two precise predicates is precise and since exactly one half of the disjunction on line 2 is true at any time. Moreover, S(l, P) is valid since it is built from the operators presented in sections 4.1 and 4.7. By inspection, S(l, P) is closed with respect to local variables. Therefore S(l, P) is tight.
The boilerplate deﬁnition for R(l) is given on line 3. It uses the recursive operator µ to tie the knot and ensures that the lock invariant implies that the lock is held, as required for the unlock rule. Since S(l, P ) is contractive, µ is well-deﬁned. Since S(l, P ) is tight, so is R(l).
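Putting the two definitions together, figure 4.2 can be read schematically as follows. The notation is illustrative (note the use of ≈ rather than a definition), and the hold conjunct required by the unlock rule is elided from this sketch.

```latex
% Line 1: l+1 and l+2 are fully owned and point to the same value x.
% Line 2: either the child is still working (l+3 |-> 1), or it has
%         finished (l+3 |-> 0) and returned the left half-share of
%         the lock at l, whose invariant is P.
S(l, P) \;\approx\; \exists x.\; (l{+}1 \mapsto x) \,*\, (l{+}2 \mapsto x) \,*\,
  \bigl( (l{+}3 \mapsto 1) \;\vee\;
         (l{+}3 \mapsto 0) \,*\, \textsf{lock}_{\text{left half}}(l, P) \bigr)

% Line 3 (boilerplate): tie the knot with the recursion operator.
R(l) \;\approx\; \mu P.\; S(l, P)
```

The disjunction on line 2 is why S(l, P) is precise: exactly one disjunct holds at any time, depending on the value stored at l + 3.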
4.9.2 Verification of selected instructions
Here we show how to apply the rules of CSL to verify parts of the example program. For cross-referencing purposes, we use the same numbering here as in section 3.2. For clarity we omit the use of Γ (the map from functions to specifications) in the presentation of our rules.
i. The pre- and postconditions of main are emp. Since emp is the identity element, a precondition of emp indicates that main can be called at any time. The postcondition of emp indicates that when it exits, main will no longer own any resources. In traditional separation logic this would mean that all of the resources allocated by main had been freed. However, in CSL it is possible for resources to be allocated and then given away in a lock;
this means that a postcondition of emp is not strong enough to guarantee that all resources have been freed on program termination. There are several approaches to guaranteeing this stronger property, such as requiring certain programming disciplines, or, alternatively, forcing lock invariants to have certain properties, an approach utilized by Gotsman et al. in [GBC+07].
ii. Lines 2 and 3 of the example program require only standard separation logic. It is important that L be initialized to 0 for (iii).
iii. The make lock in main turns location L into a lock with invariant R(L). The postcondition of line 3 asserts that L is a lock with invariant R(L).
(For full details on the rules for borrowing from Γ, see [AB07].)
Like a maps-to assertion, the “is a lock with invariant R” assertion is valid and precise.
which allows us to use the fork rule as follows:
Therefore, the spawned child thread is given the left half of lock L, and the parent retains the right half. We then return the speciﬁcation of f to Γ.
v. Although the assignment on line 6 was worrisome in the informal presentation, in the formalism it is quite straightforward. A part of the frame from (iv) that comes from (iii) is
vi. On line 8, the test *(L+3) == 0 is done to break out of the loop.
Given that the precondition of line 8 is the loop invariant, if the test succeeds, then the postcondition of the loop will hold by fold/unfold. If the test fails, then the state will not change, and the loop invariant will hold after line 8. If R(L) holds before line 9, then it will hold afterwards. Therefore the precondition to the unlock statement on line 10 is
and by adding the frame back in we can prove the loop invariant.
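The wait loop just verified can be sketched in C. This is a single-threaded simulation, not the thesis's Concurrent C minor code: the names cell, child_finishes, and parent_wait are hypothetical, and the lock is modeled as a plain flag rather than a real lock primitive.

```c
#include <assert.h>

/* Illustrative model of the parent's wait loop (lines 7-11 of the
 * example program). Memory layout follows S(l,P): L+1 and L+2 hold
 * equal values, and L+3 is the child's "still working" flag
 * (1 = working, 0 = done). The lock itself is modeled as a flag;
 * a real implementation would use lock/unlock primitives.
 */

enum { UNLOCKED = 0, LOCKED = 1 };

typedef struct {
    int lock;   /* location L     */
    int a;      /* location L + 1 */
    int b;      /* location L + 2 */
    int done;   /* location L + 3 */
} cell;

/* Stand-in for the child finishing: restore the invariant and signal. */
static void child_finishes(cell *L) {
    L->a = L->b;            /* re-establish  L+1 == L+2  */
    L->done = 0;            /* signal completion         */
    L->lock = UNLOCKED;
}

/* Parent's loop: acquire the lock, test *(L+3); exit holding the lock. */
static int parent_wait(cell *L, int max_spins) {
    for (int i = 0; i < max_spins; i++) {
        L->lock = LOCKED;       /* "lock L"   */
        if (L->done == 0)
            return 1;           /* break out, still holding the lock */
        L->lock = UNLOCKED;     /* "unlock L" */
    }
    return 0;                   /* bounded here only so the sketch terminates */
}
```

When parent_wait returns 1, the parent holds the lock and the postcondition of the loop (full ownership plus L+1 == L+2, as used in (vii) and (viii) below) holds.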
vii. The postcondition of the loop given in (vi) implies that main has reacquired the full ownership of lock L. Accordingly, after
abstracting away the frame we can apply the free lock rule:
viii. The postcondition of the loop given in (vi) also implies that L + 1 and L + 2 point to the same value, so the test on line 15 will always succeed and the program will not get stuck on line 16.
ix. The cleanup on lines 17–18 follows from the standard rules of separation logic, and implies the postcondition emp of main.
x. The precondition of f was discussed above in (iv), and the meaning of emp as a postcondition was discussed in (i).
xi. The precondition and loop invariant of the loop starting on line
On line 24, the loop invariant R(L) is temporarily broken; L + 1 does not point to the same value as L + 2. However, this is permitted since the lock invariant does not have to hold while the lock is held. On line 25, the invariant is restored, leading to a postcondition of
This gives us the postcondition emp, which satisﬁes the postcondition given in line 21, so we can safely return on line 29.
Notice that the child thread gave up all its resources before returning. It did so by unlocking the lock L. However, one of the resources it gave up by unlocking L was the visibility of L itself!
In general we want every thread to give up all resources before exiting, and in general it must do so by unlocking a lock. We accomplish this by taking advantage of the recursion operator as demonstrated above.
Now the main thread can safely dispose the lock L, as explained in (vii).
If the test on line 26 does not succeed, the state is not modified on line 30, giving a precondition of
which implies the loop invariant.
In [Man08], Mansky verifies this program with our CSL in Coq; in addition to filling in various details omitted above, he gives a sense of the process of verifying a program in Coq. Furthermore, taking a lesson from Knuth, he actually implemented the algorithm in C (with POSIX threads) and tested it.
We have presented a new concurrent separation logic. The logic includes many powerful assertions, including impredicative quantiﬁcation, recursive invariants, fractional permissions, and ﬁrst-class locks and function pointers.
We have given Hoare rules for using these assertions to reason about Concurrent C minor programs, including rules for lock, unlock, and fork. We have developed new operators such as tightly to express those rules cleanly. Finally we have demonstrated how to use these rules and operators to verify the example program from section 3.2.
In the remainder of this thesis we will present a soundness proof of our concurrent separation logic with respect to Concurrent C minor.
Chapter 5
Erased Concurrent Operational Semantics
In chapter 3 we introduced Concurrent C minor, and in chapter 4 we gave it an axiomatic semantics, Concurrent Separation Logic. This chapter gives Concurrent C minor a formal concurrent operational semantics, which we call an erased semantics. In chapters 8, 9, and 10, we will prove the soundness of CSL with respect to the erased semantics.
The erased concurrent operational semantics justifies the claim that Concurrent C minor has a reasonable model of conventional concurrency. However, it is not easy to prove properties about the erased semantics directly.
It is easier to reason about an unerased semantics, which tracks additional bookkeeping information (see chapter 8). (Portions of this chapter have been published before as [HAZ08a] and [HAZ08b].) In section 8.6 we prove an erasure theorem relating the two. Our soundness proof shows that the axiomatic
semantics holds on the unerased concurrent operational semantics. The erasure theorem then guarantees that the properties hold on our erased semantics.
One problem with the erased semantics given here is that it does not compose well with the compiler correctness proofs. If we erase too soon, at the source language, then the compiler will not be able to rely on the bookkeeping features of the unerased semantics in its proofs; as discussed further in section 8.6, the correct time to erase is at the very end of compilation, on the machine code.
In section 5.1 we give a vastly simpliﬁed semantics for sequential C minor because the complexities of the sequential language are not the focus of this thesis; in the Coq development we utilize the full C minor of Appel and Blazy [AB07]. In section 5.2, we give the erased concurrent step relation to show that we have a reasonable model for concurrency.
Our concurrent semantics has an unusual interleaving model; in section 5.3 we discuss why the interleaving model is reasonable.
5.1 Erased sequential step relation
Here we present a highly simplified version of sequential C minor; the Coq development uses the full sequential C minor of Appel and Blazy, which is given in figure 2.3.
The syntax of simpliﬁed C minor is given by the following grammar:
A value v is a natural number n. There are eight diﬀerent kinds of statements: three sequential statements, given as simpliﬁed examples of the kinds of statements in C minor, and ﬁve concurrent statements.
The store statement [v1]:=v2 updates the memory at location v1 to have value v2. The call statement call f v starts a subroutine function call. The lock statement waits until the lock v is unlocked, and then locks the lock. The unlock statement unlocks the lock v. The fork statement starts a new thread with a function call to f.
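Assembling the statement forms just described, the grammar can be sketched as follows. This is a reconstruction from the surrounding prose (three sequential forms and five concurrent forms); the thesis's exact notation may differ.

```latex
% Values and statements of simplified C minor (schematic).
v ::= n
s ::= [v_1] := v_2                 % store (sequential)
   \mid \texttt{call}\; f\; \vec{v}  % function call (sequential)
   \mid s_1 ; s_2                  % sequence (sequential)
   \mid \texttt{make lock}\; v
   \mid \texttt{free lock}\; v
   \mid \texttt{lock}\; v
   \mid \texttt{unlock}\; v
   \mid \texttt{fork}\; f\; \vec{v}
```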
A sequential computation state, also called an erased world w, is a pair of locals ρ (a map from identiﬁers to values) and memory m; in our simpliﬁed syntax, ρ is unused, but it is used in the full C minor and so is included here.
A control contains program syntax and has the following grammar:
The erased sequential step relation has five cases. The first case, er-sstep-update, handles the store statement by updating the memory at location v1 to have value v2 and advancing to the next statement κ. The second case, er-sstep-call, performs a function call by replacing the call statement with the body of the function Ψ(f) and applying the arguments v. The third case, er-sstep-seq, takes apart a sequence statement. The fourth case, er-sstep-makelock, and the fifth case, er-sstep-freelock, do nothing other than advance to the next statement κ.
For the fully concurrent instructions lock, unlock, and fork, the erased sequential step relation gets stuck.
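The case analysis above can be sketched as a toy stepper in C. The names (er_sstep, world, the stmt_kind constructors) are hypothetical stand-ins for the thesis's relation; control-stack manipulation and the function environment Ψ are elided, so er-sstep-call is reduced to a plain advance.

```c
#include <assert.h>

/* Toy model of the erased sequential step relation: sequential
 * statements step; lock, unlock, and fork are deliberately stuck. */

enum stmt_kind { S_STORE, S_CALL, S_SEQ, S_MAKELOCK, S_FREELOCK,
                 S_LOCK, S_UNLOCK, S_FORK };

#define MEM_SIZE 16

/* Erased world: just memory here (the locals rho are elided). */
typedef struct { int mem[MEM_SIZE]; } world;

enum step_result { STEPPED, STUCK };

static enum step_result er_sstep(enum stmt_kind s, int v1, int v2, world *w) {
    switch (s) {
    case S_STORE:            /* er-sstep-update: [v1] := v2 */
        w->mem[v1] = v2;
        return STEPPED;
    case S_CALL:             /* er-sstep-call: body substitution elided */
    case S_SEQ:              /* er-sstep-seq: take the sequence apart */
    case S_MAKELOCK:         /* er-sstep-makelock: just advance */
    case S_FREELOCK:         /* er-sstep-freelock: just advance */
        return STEPPED;
    case S_LOCK:             /* the fully concurrent instructions: */
    case S_UNLOCK:           /* the sequential relation gets stuck, */
    case S_FORK:             /* deferring them to the concurrent machine */
    default:
        return STUCK;
    }
}
```

Getting stuck on lock, unlock, and fork is the point: those instructions are handled by the concurrent step relation of section 5.2, which interleaves sequential steps of the individual threads.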
5.2 Erased concurrent step relation
A schedule ℧ is a finite list of natural numbers that act as thread IDs.
The number at the head of the schedule tells the concurrent operational semantics which thread to execute next. When the schedule runs out, the concurrent machine safely halts computation.
With a ﬁxed schedule, our semantics is fully deterministic, which simpliﬁes the proofs, particularly proofs about sequential features that assume determinacy. Of course, concurrent systems are not usually considered to be deterministic. We ensure that the proved properties hold regardless of the way threads interleave by universally quantifying over all schedules.
We use a ﬁnite schedule because it allows us to do induction easily.
We do not restrict the length of a schedule, so when we quantify over all schedules we include schedules of arbitrary length. Therefore our proved properties will be true for any finite amount of time.
Krun κ means that the thread is in a runnable state, with κ as the next control to execute. Klock v κ means that the thread is waiting on a lock at address v; after acquiring the lock it will continue with κ.
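The schedule-driven driver and the Krun/Klock states can be sketched together in C. Everything here is an illustrative simplification under assumed names (thread, run, locks as a plain array): the head of the schedule picks the thread, a thread blocked on a held lock consumes its schedule slot without stepping, and the machine halts safely when the schedule is exhausted.

```c
#include <assert.h>

/* Thread states mirroring the text: Krun = runnable, Klock = waiting. */
enum tstate { KRUN, KLOCK };

typedef struct {
    enum tstate state;
    int lock_addr;      /* meaningful only when state == KLOCK */
    int steps_taken;    /* proxy for "executed one step of kappa" */
} thread;

/* Run threads according to the schedule; locks[] maps address -> held? */
static void run(const int *sched, int sched_len, thread *ts, int *locks) {
    for (int i = 0; i < sched_len; i++) {
        thread *t = &ts[sched[i]];        /* head of the schedule */
        if (t->state == KLOCK) {
            if (locks[t->lock_addr])      /* lock still held: this slot */
                continue;                 /* is consumed without a step */
            locks[t->lock_addr] = 1;      /* acquire and resume with kappa */
            t->state = KRUN;
        }
        t->steps_taken++;                 /* one sequential step */
    }
    /* Schedule exhausted: the machine halts safely. */
}
```

With a fixed schedule this driver is fully deterministic, which is the property the text exploits; quantifying over all finite schedules then covers every interleaving.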