\documentstyle[11pt,tla]{article}
\def\ensuremath#1{\relax\ifmmode #1\else $#1$\fi}
\renewcommand{\o}{\circ}
\newcommand{\?}{\_\!\_\,}
\renewcommand{\O}[1]{\ensuremath{\overline{#1}}}
%\newcommand{\TLA}[1]{TLA$^{+}\hspace*{-.1em}$}
\newcommand{\TLA}[1]{TLA$^{+}$}
\newcommand{\M}[1]{\ensuremath{[\![#1]\!]}}
\newcommand{\dd}{\ensuremath{\mathop{\ldotp\ldotp}}}
\newcommand{\act}[1]{\ensuremath{{\cal #1}}}
\newcommand{\minimal}{\ensuremath{\phi}}
\newcommand{\Y}{{\bf y}}
\newcommand{\Z}{{\bf z}}
\newcommand{\AUX}{{\bf a}}
\newcommand{\G}[1]{\mbox{\sf #1}}
\newcommand{\subseq}[3]{\ensuremath{#1_{#2}^{#3}}}
\newcommand{\proofrule}[2]{\setlength{\arrayrulewidth}{.6pt}%
{\mathdef{\begin{array}[t]{@{}c@{}}%
\begin{array}[t]{@{}l@{}} #1\raisebox{-.1em}{\strut}\end{array}\\
\hline \raisebox{.1em}{\strut}#2\end{array}}}}
\newcommand{\mathdef}[1]{\relax\ifmmode #1\else $#1$\fi}
\newcommand{\implies}{\Rightarrow}
\newcommand{\provable}[1]{\mathdef{\vdash #1}}
\newcommand{\sqact}[2]{\mathdef{[\act{#1}]_{#2}}}
\newcommand{\anact}[2]{\mathdef{\langle\act{#1}\rangle_{#2}}}
\newcommand{\unchanged}[1]{\mathdef{{\it Unchanged}~#1}}
\newcommand{\enabled}[1]{\mathdef{{\it Enabled}\: #1}}
\newcommand{\tempact}[1]{\mathdef{\theta(\makecal#1)}}
\newcommand{\templor}{\mbox{\boldmath$\lor$}}
\newcommand{\rvbl}[1]{{\sf #1}}
\newcommand{\PROPVARIABLE}{\mbox{\sc propositional variable}}
\newcommand{\oo}{\infty}
%\newcommand{\?}{\_\!\_}
\newtheorem{theorem}{Theorem}
\title{Formal But Lively Buffers in \TLA+}
\author{Peter B. Ladkin \\
Universit\"at Bielefeld, Technische Fakult\"at \\
Postfach 10 01 31, D-33501 Bielefeld \\
{\tt ladkin@techfak.uni-bielefeld.de}}
\date{Version of January 6, 1996}
\begin{document}
\maketitle
\begin{abstract}
We perform some rigorous
verifications in TLA, using simple examples which nevertheless
illustrate TLA techniques, in particular liveness proofs. Since the
method of invariants for safety proofs is well understood, our examples
need only the trivial invariant, which is simply omitted.
We specify in TLA a buffer implemented as an array, a double buffer
implemented as two arrays in series, and an abstract buffer which uses
a sequence. We prove, formally and rigorously, that the two
`implementations' implement the abstract buffer. The non-trivial
part is the proof of liveness.
\end{abstract}
\section{The Problem}
\label{sec:intro}
When she was told Calvin Coolidge had died, Dorothy Parker asked, ``How
could they tell?''. If your system dies, how
could {\it you} tell? The chances are slim if you don't specify
that it shall live.
Nonetheless, many specification methods, including most
process algebraic approaches, omit a liveness requirement.
Liveness can be hard to prove, which is perhaps one reason
why it so often goes unstated. And engineers writing specifications
don't always understand its
importance. We illustrate how to perform rigorous verifications in TLA,
emphasising the proofs of liveness.
We specify some buffers in TLA$^+$. Our examples are not mathematically
very interesting; that is not the point. We chose them small,
the better to illustrate the proof techniques.
TLA$^+$ is a specification language based on Leslie Lamport's TLA, the
Temporal Logic of Actions \cite{Lam94tla}. TLA$^+$ contains
structuring and naming conventions (such as modules, with {\tt import}
and {\tt include}), and some shorthand for necessary mathematics
(e.g., a notation for values of functions). TLA$^+$ is to TLA rather
as \LaTeX \hspace{2pt} is to \TeX .
We define an abstract bounded buffer of length $N$,
which holds a sequence of
elements {\it push}ed onto it, and from which elements may be
{\it pop}ped off. The sequence may contain at most $N$
elements at any one time.
We don't require anything to be pushed on the
buffer, but if there is something there, we want it eventually to be
popped off (that's liveness).
And if it's full, then it must be
{\it pop}ped before anything more may be {\it push}ed on.
We then define an implementation of a buffer as an array
of fixed length, into whose last place elements may be pushed,
off whose first place they may be popped, and which may shuttle
them forwards to get them ready for popping. Again, we don't
require anything to be pushed. But if there are elements in the
buffer, and the shuttle doesn't happen sometime when it can, they
can't be popped, because they never reach the front. So, to
show that the concrete buffer implements the abstract buffer requires
that the shuttle action be lively. A further wrinkle is that the shuttle
action is `invisible' to the abstract buffer. It can't `see' a
shuttle because shuttling alters only the absolute positions
of elements in the array.
The abstract buffer can only `see' relative position.
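The push/pop/shuttle mechanics just described can be sketched operationally (hypothetical Python; the names and the {\tt EMPTY} marker are illustrative, not part of the paper's TLA$^+$ specifications, which define these as relations between states rather than as imperative code):

```python
# Hypothetical sketch of the array buffer's three actions; the paper's
# actual specification defines them as TLA actions, not as code.
EMPTY = None  # marker for an unoccupied array cell

def push(buf, elem):
    """Place elem in the last cell, if that cell is free."""
    if buf[-1] is EMPTY:
        buf[-1] = elem

def pop(buf):
    """Remove and return the element in the first cell, if occupied."""
    if buf[0] is not EMPTY:
        elem, buf[0] = buf[0], EMPTY
        return elem
    return EMPTY

def shuttle(buf):
    """Move elements forward into adjacent free cells."""
    for i in range(len(buf) - 1):
        if buf[i] is EMPTY and buf[i + 1] is not EMPTY:
            buf[i], buf[i + 1] = buf[i + 1], EMPTY
```

Unless shuttle steps are actually taken, an element pushed into the last cell never reaches the first cell, which is exactly why the implementation must assert the shuttle action's liveness.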
The Fairchild F3341 chip implemented a `bubble-through' FIFO buffer
in exactly this way. It was 64 bits long by 4 bits wide. Two were
used in parallel to give an 8-bit-wide transfer path. It was used
by original equipment manufacturers to buffer disk transfers,
notably Digital Equipment Corporation in
their RP03 drives.
We also specify the concrete buffer as two smaller concrete buffers
of half the size rigged in series. We want to prove that this is
also a buffer. We provide a completely formal, syntactic proof
that the buffers-in-series implement a larger concrete buffer, and
then that the concrete buffer implements the abstract buffer
specification. Since implementation is just logical implication in
TLA, and implication is transitive, this shows that the double-buffer
is indeed an implementation of a buffer. The specifications were
written using the {\tt tla.sty} style file of Lamport.
The hard part of the proof is the proof of liveness. Our proof
is rigorously syntactically formal. That is to say, every deduction in
the proof is a syntactically rigorous application of one of the TLA
inference rules (with the exception of some deductions from
the theory of sequences and arithmetic, and the Bridge Rule,
a derivable TLA rule defined in the proof on Page
\pageref{page:bridge-rule}
which is needed to tie together the TLA
rules WF2 and Lattice in the liveness proof).
This level of rigor is achieved by automated
theorem provers, but rarely by `hand proofs',
which often employ
looser standards. Rigorous proofs are harder, as can be seen by
comparing our proof sketch in Section \ref{sec:informal-proof}
with the formal TLA proof in Section \ref{sec:proof}. The proof
was constructed using the {\tt pf.sty} style file of Lamport.
Both \cite{Mil89} and \cite{Hoa85} treat the example of buffers, and
composing larger buffers from smaller, as we do. Buffers are
specified in \cite[pp.~72--73]{Mil89} as constructed from single-place
buffers and proved correct with respect to their properties. Two
buffers (or other processes with a single input and a single output
channel) are constructed in \cite[Section 4.4, Pipes,
pp.~151--161]{Hoa85}. What are the differences between those approaches
and ours? Both of these approaches use process algebra in order to
check mathematically that the buffer specifications/constructions are
correct. However, firstly, the reasoning used for the mathematical
arguments is the usual informal mathematical reasoning, and, secondly,
liveness properties of the constructions cannot be stated. Any
liveness properties in these process algebras follow only from general
liveness properties of the specification style and cannot be tailored
to individual problems.
When a part of the reasoning is not formal, the possibility remains
that errors may be made. The purpose of purely formal reasoning is
to minimise the possibility of errors in a field of computer science
in which, by the very nature of the subject, errors are both rife and notorious.
When the entire reasoning needed for the proofs in CCS and CSP is formalised,
a part of it must be in formal logic. One would be in the position
of having part of the reasoning in a formal logic, and part in the
axiomatic theory of the process algebra.
Liveness properties are not explicit in process algebra formalisations
such as these. Thus, what liveness properties there are follow from
general principles of the process algebra, and cannot be tailored to
the wishes of the specifier, as ours are. A specifier who requires
specific liveness properties may then be constrained to use a
specification and verification method in which they may be stated
explicitly, such as TLA. Liveness properties may of course be stated
in the temporal logic version of CCS, the Modal Mu-Calculus
\cite{Sti89}.
Writing proofs is hard -- rather like programming. However, the technology
of programming is ahead of the technology of proving. This paper aims to
contribute to the technology of proving.
\section{Conclusions}
\label{sec:conclusions}
In a case study, it's appropriate to put the
conclusions at the front. The general lessons learned from this
exercise are:
\begin{itemize}
\item Writing literate proofs is as important, for similar reasons,
as writing literate programs, and requires non-trivial skills;
\item Writing proofs is aided by derived proof rules such as
Theorem \ref{the:sufficient};
\item Writing syntactically correct proofs (ultimately, the only
appropriate sort of proofs) by hand can be aided by good organisation,
such as that suggested by Lamport, but resembles trying to write programs
without use of a compiler.
\item A well-designed proof checker would save work and time, playing a
role rather like a compiler when developing programs.
But proof-formatting aids are as helpful as a proof checker.
Examples would include a structured
editor, and tools allowing fragments of a proof to be viewed
according to the proof structure.
\item The hierarchical structure recommended by Lamport for the most
part works well, but there are occasions on which proofs of substeps
may be used also in a different part of the proof, and there are
as yet limited means within the hierarchical proof support tools to
do so. Rigid hierarchical structure often entails duplication
of proof fragments, which is unnecessary and undesirable for
`literacy'.
\end{itemize}
Issues specific to TLA are:
\begin{itemize}
\item A certain amount of pure temporal logic (TL) manipulation is
needed for the liveness proofs;
\item A library of derived TL rules is needed for effective use of TLA.
Deriving the TLA temporal logic rules needed even for such simple proofs as this
is not easy, unless one happens to be good at temporal logic.
The most significant derived rule, needed for Weak Fairness proofs,
is the Bridge Rule
\[ \proofrule{[]X /\ Y => []Y \\
[]X => [](Z => <>Y)}
{[]X /\ <>[]Z => <>[]Y}
\]
The Bridge Rule is derived in the formal proof on
Page~\pageref{page:bridge-rule} from the STL rules.
It is needed to
tie together the conclusion of the Lattice Rule and the fourth
hypothesis of WF2 in the liveness proof.
It is also used in the proof that the Lazy Caching algorithm implements
the Complete Cache algorithm, in \cite{Lad-LC-impl-CC:96}. For background
to the Lazy Caching algorithm and its proof of correctness, see
\cite{LadLamRoeOli94}.
In order to state derived rules, one needs to use
{\it propositional variables}, logical symbols that may be
substituted by propositions.
These are not TLA variables.
However, it seems appropriate to declare them when used.
We thus introduce an extra-TLA meta-category $\PROPVARIABLE$
to declare these variables.
\item A restriction must be added to the application of the Lattice
Rule in order that it shall be sound.
\item An inference rule is not literally a statement of TLA.
TLA proof steps are, however, supposed to be true TLA statements.
Establishing the validity of a derived inference rule is a
TLA meta-operation, and is not strictly TLA syntax.
For convenience in writing proofs which need some temporal
logic manipulation, it is desirable to allow the statement of
inference rules as legitimate proof steps, even though TLA strictly
defined does not allow it.
The proof of such a rule, i.e., the substeps, consists in
establishing its validity as a derived rule. Normally, that will be via
a derivation. I've used such an operation to introduce
and prove the Bridge Rule, as well as in other places.
It obviates the need to claim a theorem outside of the
proof proper, and then refer to it inside the proof.
Other useful derived rules, which are used in the proof of the
Bridge Rule and have frequent application in any temporal logic
calculations, are STL7, STL2$^{<>}$, STL4$^{<>}$ and STL6$^{<>}$.
See the formal proof of the Bridge Rule for statements of these rules.
\item Propositional logic does not follow from the TLA proof rules
as given in \cite{Lam94tla}.\footnote{%
This may be seen as follows.
Rules STL1 and STL4 are the only rules with non-trivial hypotheses.
(The other rules with trivial hypotheses are usually called {\it axioms}
in a Hilbert-style formulation of logic.) Both these rules increase
the number of occurrences of the operator $\Box$ in the consequent
over the number in the antecedent, as do all the
STL rules (this follows trivially for the others). However, any
propositional rule (or axiom) has the same number of occurrences
of $\Box$ in hypothesis and consequent, namely, none. Thus no
application of STL1 and STL4 can yield, for example, Modus Ponens,
contrary to what Lamport says in \cite[Section 5.6]{Lam94tla},
that STL1 `incorporates all the rules of ordinary logic,
such as {\it modus ponens}'.
A similar extension to the
argument shows that Modus Ponens does not follow from
the Lattice Rule in combination with STL1--STL6.
}
%
This is easy to fix and should be fixed
by a user, either by including Modus Ponens and
a set of Hilbert-style axioms for propositional logic,
or by including a set of Gentzen-style `introduction'
and `elimination' rules.
Predicate logic is included in TLA, as shown in Rules E1 and F1
of \cite[Figure 9]{Lam94tla}.
\end{itemize}
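The propositional repair suggested in the last item above can be made concrete. As an instance of the first option (any equivalent axiom set would do; the paper does not prescribe one), here is a standard Hilbert-style axiomatisation, with $P$, $Q$, $R$ propositional variables and Modus Ponens as the sole rule:

```latex
% One standard Hilbert-style axiom set for propositional logic,
% together with Modus Ponens; any equivalent set would do.
\[
\begin{array}{ll}
\mbox{MP:} & \mbox{from } P \mbox{ and } P \Rightarrow Q \mbox{, infer } Q \\[.3em]
\mbox{A1:} & P \Rightarrow (Q \Rightarrow P) \\
\mbox{A2:} & (P \Rightarrow (Q \Rightarrow R)) \Rightarrow
             ((P \Rightarrow Q) \Rightarrow (P \Rightarrow R)) \\
\mbox{A3:} & (\neg P \Rightarrow \neg Q) \Rightarrow (Q \Rightarrow P)
\end{array}
\]
```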
\section{Why Do This?}
Why is this logic of practical use? Here are a few questions asked
about an earlier version of this paper, and some responses. Many of
these points have been made before, but it seems they are not yet
clearly understood within the community. Similar issues are discussed
in \cite{Lam89-cacm}.
\medskip
{\it Is this work of any practical importance? You're talking
about proving the implementation, but still the implementation is just
a formal specification. How is it related to a program?}
\medskip
\noindent
The formal specification of the concrete buffer is a
simplified specification of the design of a chip. (We have
abstracted the four-bit-wide data path into a single data element.
In a verification of this design, this abstraction would have to be
decomposed into its component parts, but one would not expect the
proof to differ significantly -- just to gain a few new theorems.)
A formal specification may be a
precise description of an algorithm or a program or an architecture or
whatever. You can't tell from just looking at the symbols what a
specification describes. It could be at a very low level of detail,
describing the results of executing each individual step
of an imperative program,
or it could be thirty lines describing a property of a system
implemented by many megalines of imperative code. A precise
description of any implementation can always be considered to be a
specification -- a better one or a worse one. Even Fortran --
you might not be able easily to write down the exact
semantics of a Fortran program,
because it depends on the inner workings of the compiler
(often a better approach is to describe the compiled code instead,
using the semantics of the assembler \cite{Yu-thesis}).
But if things are so tricky that you don't know what's going on or
can't say so succinctly, how on earth can you expect that your code
does what you think it does? In safety-critical applications, for
example, it's crucial to know what the code does and how, and while
proofs are merely hard, truly exhaustive testing is all but impossible.
To go about proving code correct is a task which can be structured
into levels (e.g., \cite{Par72a-decompose,Par74-buzzword,SIFT78}).
One separates the business of proving the architecture correct (i.e.,
that the machine implements the semantics of its machine code) from
the business of proving the assembler correct (i.e., that the
semantics of the assembler instructions are correct with respect to
the machine code and its semantics), from
the business of proving the compiler correct (i.e., that the sequence
of instructions implementing each high-level statement implements its
formal semantics) from the business of proving that the program is
correct (one assumes that the instructions implement their formal
semantics, and proves that the sequence of instructions implements the
algorithm). You may ask `where do the formal specs stop?' At the low
end, at
some level of granularity of the architecture, and at the high end, at
the requirements specification. At which low level they terminate
depends on the physics of the chip. But these questions (what does
a proof apply to, how can you decompose proofs, what features can you
show correct and how?) concern the metamathematics of a proof, which
is not the topic of this paper. This paper is about the technical
details of a particular proof, just one of many that would be involved
in verification of code on a particular machine. In particular, proofs
of liveness are important, because liveness is important, and further
because engineers seem unused to specifying, or indeed thinking about,
liveness.
\medskip
{\it What is important about liveness?}
\medskip
\noindent
Liveness is important for systems with many concurrently-running
processes, because, first, any system property may be regarded as a
combination of safety + liveness \cite{AlpSch87} and, second, liveness
is not guaranteed in a concurrent system. (Most specification methods
understand and promote proofs mainly of {\it safety} properties.) For serial
algorithms or code, one often assumes that at any stage there is
nothing to prevent the next program action from taking place (PC users
may smile at this point). It is generally more pressing to show that
the program calculations yield the desired results. However, in
concurrent systems, there are other processes in the environment whose
actions may interfere with your own. These interferences may be
encouraged (communication) or deplored (competition for other system
resources). The trick is to design algorithms that succeed no matter
what other processes do. And the property that {\it they succeed in
doing something no matter what else happens} is assured by a proof of
liveness.
Algorithms in TLA are described more carefully and more generally than
in an imperative program. They are construed as a series of actions,
defined by assertions concerning the pre- and post-values of program
variables. For example, in Pascal one may write {\tt x := 1} to
generate an action which sets the value of {\tt x} equal to {\tt 1}.
In TLA, one may define an action {\tt unity $==$ x$'$ = 1} which is a
statement of logic expressing the postcondition that the value of x
after the action, denoted {\tt x$'$}, is equal to {\tt 1}. In a given
state, many actions may be enabled, and similarly many actions may
take place at once (precisely when the preceding and succeeding states
satisfy the definitions of more than one action). In an imperative
program, usually one action is defined to follow another, and which
one may be known at compile time (sequential composition) or only at
run-time (many conditionals). TLA can specify such algorithms, but
also more general ones that exist in a concurrent environment, in
which the ordering of statements from different processes in an
execution is much less predictable, and therefore in which ensuring
liveness becomes important. Buffers, even though they may be
essentially sequential objects, operate mostly in an environment with
other processes around. It's appropriate that they be
specified accordingly, and that their liveness be proven.
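The view of actions as assertions about pre- and post-values can be sketched operationally (hypothetical Python, not TLA; a state is modelled here as a mapping from variable names to values):

```python
# Hypothetical sketch: a TLA-style action is a predicate on a pair of
# states (old, new); many such predicates may hold of one state change.
def unity(old, new):
    # the action  unity == x' = 1 : says nothing about the old state
    return new['x'] == 1

def increment(old, new):
    # the action  incr == x' = x + 1
    return new['x'] == old['x'] + 1

old = {'x': 0}
new = {'x': 1}
# a single state change may satisfy several action predicates at once
both = unity(old, new) and increment(old, new)
```

The step from `{'x': 0}` to `{'x': 1}` satisfies both predicates, illustrating that a state change may "be" many actions at once.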
\medskip
{\it Where is the particular example you chose relevant, and are there
any `real' implementations around that directly relate to your
specification of an implementation?}
\medskip
\noindent
Every buffer implemented as an array in this way is a real
implementation. A buffer implemented by a linked
list or a ring buffer is different from this, and this proof shows nothing
about such buffers. But one should be able to prove
that they have the properties of a buffer also.
I prove that the properties of a buffer are true of an array
with the operations I describe. That shows that a particular
algorithm implements a buffer, if by buffer you mean something
that satisfies the specification {\it AbstractBuffer}.
The practical application is simply that if you implement
a buffer by an array in this way, and want to know for sure that you have made
no mathematical mistakes in your algorithm design,
this proof shows that. And similarly if you join two
such buffer implementations in series.
But the subject matter of this paper is not really buffers.
It's proofs and how to do them, in detail.
\medskip
{\it What's the point of doing such a detailed and intricate proof?}
\medskip
\noindent
To distinguish between correct and incorrect proofs. Every correct
proof admits such detail, and no incorrect proof does. If the detail
works out, the proof is correct. Conversely, if the proof is
incorrect, the detail will not work. Hence the point of the intricacy
is to ensure that the proof is correct. A formal logic is defined
syntactically by inference rules. Proofs of algorithm properties do
not necessarily require elegant new theorems of the sort that
excite mathematicians -- after all, the mathematical insight went into
devising the algorithm, and if you don't already have some idea of
why the algorithm works, you're unlikely to be able to construct a formal
proof of it. Rather, a formal proof requires a careful checking of details,
and some organisation. If the details are not checked carefully,
there may be gaps or falsehoods. It's easy to
omit proof obligations which are only
discovered in the course of performing the formal proof. I know of no
scientist working in verification who has not had such an experience.
`Normal' mathematical standards do not seem to suffice -- Lamport
notes that many concurrent `algorithms' which were incorrect have been
published along with their `proofs' \cite{LadLamRoeOli94}.
There is also the problem of scale. Organising a
large proof to ensure that all the bits fit together requires
some kind of reliable decomposition. Syntactic proof methods
lend themselves to decompositions of this sort -- if all the bits
are locally correct, there is nothing more to do to make the proof
globally correct. There is less of a problem of `interpretation' or `gaps'
than with proofs that may not be completely formal and which rely on
an interpretive or implicit semantic component.
Finally, syntactic proofs are the only type
of proof universally recognised by logicians as unequivocally
demonstrating the relation of logical consequence between assumptions
and conclusions. Of course, the proof rules have to be justified
semantically, but as for a programming language, this is done
once and thoroughly, and then all syntactically correct proofs using
these rules are known to be valid according to the intended meaning
of the formulae.
\medskip
{\it How can I tell the difference between an `important' or
`creative' proof step, and one that is just routine?}
\medskip
\noindent
That's not so easy. There are some simple heuristics, such as that
a step which has few substeps when expanded fully in logic is likely
to be easier to perform than one which has many. However, it's not so
obvious how to distinguish a routine proof with many steps from a
`creative' proof with many steps. One can attempt to locate the
`creativity' in a proof by finding metalogical theorems such as Theorem
\ref{the:sufficient}, which factor out logically routine steps
and focus on the `proof obligations', the logically variable steps.
The obligations are necessary and sufficient conditions for a
proof of a certain form to exist. This kind of theorem seems to
be the closest we can come to defining `creativity' in a proof
at the moment.
\section{A Short Introduction to TLA}
TLA is an assertional method designed for the specification and
verification of concurrent algorithms. It is based on the by-now
standard view of a system execution as an infinite progressive
sequence of states, each state being defined by the values
that the variables have in that state. As the system
executes, the states change. A specification (or a program)
constrains the states to change in certain specific ways. For example,
executing an assignment $x \leftarrow 1$ causes the state to change so that in
the new state the value of the variable $x$ is $1$. The state may also
change in other ways. For example, suppose microcode variables are
included in the state. Many of these variables will also change
during the assignment, as will the register in which this assignment
is carried out. But the specification may not need to comment on
these changes -- they can be `hidden'.
An {\it action} in TLA is semantically a relation between states.
Let ${\cal A}$ be an action. A pair of
states $\langle a, b \rangle :in: {\cal A}$ just in case,
intuitively, state $b$ is what you get by executing the
action in state $a$.
We specify action ${\cal A}$ by giving a syntactic definition of the
relation. For example, an action such as $x \leftarrow 1$ can be defined by the
logical relation {\it x-in-the-new-state} $= 1$. Notice that according
to this definition, any
action which ensures that $x$ in the new state is equal to $1$ counts as an
$x \leftarrow 1$ action. That's appropriate and useful in concurrency -- a
state change may incorporate many
actions defined in a description
(may {\it satisfy} many action predicates).
What's
important is that the relation between the new state and the old is
precisely defined. If certain other variables may not change during
the execution of an action, we have explicitly to state that in
the action definition.
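The last point is easy to see operationally (hypothetical Python sketch, not TLA; the names are illustrative): the relation {\it x-in-the-new-state} $= 1$ constrains only $x$, so a step that also changes $y$ still satisfies it unless the action explicitly says $y$ is unchanged.

```python
# Hypothetical sketch: the action predicate  x' = 1  constrains only x;
# any other variable is free to change unless we say otherwise.
def assign_x(old, new):
    return new['x'] == 1

def assign_x_only(old, new):
    # conjoin "y is unchanged", as a TLA action definition must do
    # explicitly if y is not to change
    return new['x'] == 1 and new['y'] == old['y']

# a step that sets x to 1 but also clobbers y
step = ({'x': 0, 'y': 0}, {'x': 1, 'y': 99})
```

The clobbering step satisfies `assign_x` but not `assign_x_only`.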
TLA refers to {\it x-in-the-new-state} syntactically
by use of a special operator. In
defining actions, one wants to talk about {\it x-in-the-old-state}
and {\it x-in-the-new-state}, and to specify {\it mathematical}
relations between these values. Values of variables in the old state
are referred to by just using the variable name, say $x$.
Values of the variable $x$ in the new state
are referred to as $x'$, as in the language Z \cite{Spi89}.
A logical definition of the action will be a mathematical
expression involving $x$ and $x'$.
The assignment action could be specified by the equation $x' = 1$.
Notice that `normal' mathematical equality is used in this
definition; in fact,
normal mathematical assertions and inferences can be made
without tears.
TLA uses
normal mathematics for talking about everything. It includes
Zermelo-Fraenkel set theory (just as in mathematics), so
Riemann integrals and suchlike can be defined and used in a TLA
specification just as you would use them in normal mathematics
\cite{Lam-hybrid}.
Variables such as $x$ above may change value from state to state,
and are called {\it flexible variables}.
In contrast, there are parameters of formulas (for example, the
$a$ in the action $push(a)$ which we use) which have a constant
but unspecified value, not dependent on state. They are called
{\it rigid variables}. Quantification over rigid and over flexible
variables is different. If $x$ is flexible, the quantifier $:EE: x$
means that {\it there is a value of $x$ in every state} such that [...].
In different states, the properties asserted of $x$ may or may not hold
(for example, the value of $x$ may be related to the sequence of
previous values of $x$). In contrast, quantification over rigid
variables is just normal first-order quantification. The difference
is not so important for the buffer example, since only rigid
variables are quantified over.
TLA is a temporal logic. How much temporal logic is actually used
in a TLA proof that an implementation meets a specification?
In the example in this paper, quite a lot. But the amount of
pure temporal logic used seems to vary much less than linearly
with the size of the specification.
Hence not much of a real example consists of temporal reasoning
-- maybe 2--4\%, as may be seen from the
lengthy proof in \cite{LadLamRoeOli95}. Most of the reasoning is
straight mathematics and propositional and predicate
logic. Theorem \ref{the:sufficient} shows that,
of the five proof obligations incurred for an
implementation proof, four are classical logic and just one is
temporal logic -- and that one is the liveness proof.
TLA$^+$ includes
the usual module constructs, as well as
useful shorthand notation for functions and finite sets,
and such goodies.
An uncommon logical feature of TLA is the $\CHOOSE$ operator,
Hilbert's $\epsilon$ operator.
I use it in the definition of the length of a sequence.
$\CHOOSE$ binds a variable and a predicate, and the result is
a term. $\CHOOSE x: P(x)$ denotes an object that
satisfies $P$ if there is one (though it is not specified which,
if there is more than one), and has an unspecified value
if there is no such object. It is extensional, in that it
selects the same object for equivalent predicates.
It's a primitive of \TLA+.
$\CHOOSE$ satisfies axioms such as the following:
\[ |- (:E: x : P(x)) => P(\CHOOSE x : P(x))
\]
provided that no instance of $\CHOOSE x : P(x)$
appears within the scope of a construct (a quantifier or another $\CHOOSE$)
that binds a free variable $y$ in $\CHOOSE x : P(x)$ (this condition may be
fulfilled by a suitable change of bound variable at the appropriate place);
and
\[ |- (:A: x : (A(x) :equiv: B(x))) =>
(\CHOOSE x : A(x)) = (\CHOOSE x : B(x))
\]
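For instance (a sketch only; the paper's sequence definitions may phrase it differently), the length of a finite sequence $s$, viewed as a function with domain $1 \dd n$, can be defined with $\CHOOSE$:

```latex
% Len(s) is the unique n such that the domain of s is 1..n;
% a sketch, not necessarily the paper's exact formulation.
\[ Len(s) == \CHOOSE n : (n :in: Nat) /\ ({\it domain}(s) = 1 \dd n)
\]
```

Since at most one $n$ satisfies the predicate, the arbitrariness of $\CHOOSE$ does no harm here.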
Proving implementations correct in TLA is done by writing both
the high-level specification and a description of the
implementation in TLA, and proving that the implementation
description logically implies the specification.
Implementation is implication. The proof is carried out by
using the assertional method \cite{Gri81} as adapted to concurrent
programs by Ashcroft \cite{Ash75}. Section
\ref{sec:implscheme} gives a general TLA proof scheme for showing
that one description implements another.
A proof using this schema may be many pages long; for
examples, see \cite{LadLamRoeOli95}.
A TLA specification has the form
$Init /\ [][{\cal N}]_{vars} /\ F_{vars}({\cal M})$.
$Init$ is a description of the initial state in
which the system starts. $[][{\cal N}]_{vars}$ asserts the
{\it safety} properties of the system. A safety property asserts that
nothing untoward happens during execution. The standard type of safety
property asserts that every action taken by the system in an execution
is a legitimate action defined by the specification. $[]$ is the
{\it always} operator, saying that the following
assertion is true in every state.
${\cal N}$ is the disjunction of all the action definitions of the
system, i.e. it asserts that every step is a step which satisfies one
or other of the action predicates of the specification.
$[{\cal N}]_{vars}$ is defined to mean ${\cal N} \/ vars' = vars$,
i.e. that either ${\cal N}$ holds or the variables of interest do not change
value during the step (in which case
the step is a so-called {\it stuttering} step).
(When considering liveness,
we shall also need the notation $<<{\cal N}>>_{vars}$ which is
defined to mean ${\cal N} /\ vars' :neq: vars$.)
The entire formula $[][{\cal N}]_{vars}$
thus says that every step is either a legitimate
action, or the values of the variables of interest don't change.
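As a toy illustration of this shape (it is not one of the buffer specifications of this paper), consider a counter that starts at $0$ and whose only legitimate action is to increment $x$. Its initial condition and safety part are:

```latex
% A toy specification: initial condition plus safety part; the
% liveness conjunct F is discussed in the following paragraphs.
\[ \begin{array}{l}
Init == (x = 0) \\
{\cal N} == (x' = x + 1) \\
Spec == Init /\ [][{\cal N}]_{x}
\end{array}
\]
```

A liveness conjunct of the form $F_{x}({\cal N})$ is what would rule out the execution that stutters for ever in the initial state.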
But one way for a system to satisfy the safety formula is for it never
to progress beyond the initial state, i.e. that $vars' = vars$ is true
at every state change, for ever. In order for the system to do any
useful work, it must actually take some legitimate step at some point.
The various ways of saying this are known as {\it liveness}
properties. The form of liveness property used in TLA specifications
is a conjunction of so-called {\it Weak Fairness} and {\it Strong
Fairness} requirements on actions, specified in the
formula above by the conjunct $F_{vars}({\cal M})$. In this paper, we
only consider weak fairness requirements, denoted $WF_{vars}({\cal
M})$. To assert weak fairness of an action ${\cal A}$ means to say
that if ${\cal A}$ is enabled continuously, it must eventually happen
in some state change. For an action to be {\it enabled} in a state $s$
simply means that there is some state $t$ such that the pair of states
$\langle s,t \rangle$ satisfies the action predicate (it doesn't say
that $t$ {\it must be} the next state in the execution, just that it
{\it could} be). More formally, $Enabled({\cal A})$ is the formula
obtained from ${\cal A}$ by replacing the primed variables by new
variables and existentially quantifying them all. For example,
$Enabled(unity) == :E: c : c = 1$. To perform a proof, it's not
necessary to know how weak fairness is expressed in temporal logic,
since there's a TLA proof rule geared specially to the proof of Weak
Fairness properties. But this proof rule ensures that liveness
properties of the specification must be proved from liveness
properties asserted of the implementation -- they don't come from
nowhere! Sufficient liveness must be included in the
implementation description to enable the liveness properties of the
specification to be proved. (Connoisseurs who wish to consider other
liveness properties than weak or strong fairness may write them out
directly in temporal logic and reason with them using the PTL
rules. TLA only includes specific inference rules for weak and strong
fairness.)
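The quantifier semantics of {\it Enabled} can likewise be illustrated by brute force over a small finite state space (a hedged sketch; the action and state space are invented for the example):

```python
# Hypothetical action over a small finite state space: decrement c while
# c > 0.  Enabled(A) holds at s iff some state t satisfies A(s, t).

STATES = [{"c": n} for n in range(5)]

def action_dec(s, t):
    """Decrement c; the unprimed guard c > 0 restricts when this can fire."""
    return s["c"] > 0 and t["c"] == s["c"] - 1

def enabled(action, s, states=STATES):
    """Existentially quantify over successor states, as in Enabled(A)."""
    return any(action(s, t) for t in states)

assert enabled(action_dec, {"c": 3})      # some t satisfies the action
assert not enabled(action_dec, {"c": 0})  # no t does: the action is disabled
```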
One final TLA operator that appears in the proof rules is the
`{\it leadsto}' operator, $\leadsto$. To say $P \leadsto Q$ is
to say that it is always the case that if $P$ then eventually $Q$,
expressed in temporal logic by $[](P => <>Q)$. We shall use the
$\leadsto$ operator and its definition in the proof.
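The definition of $\leadsto$ can be made concrete with a small Python check over a behaviour (real TLA behaviours are infinite; the finite list below is only an illustration of the definition, with invented states):

```python
# P ~> Q on a behaviour: whenever P holds at a point, Q must hold there
# or at some later point.  A behaviour is modelled as a list of states.

def leads_to(behaviour, P, Q):
    return all(any(Q(t) for t in behaviour[i:])
               for i, s in enumerate(behaviour) if P(s))

trace = [{"x": 0}, {"x": 1}, {"x": 2}, {"x": 0}]
assert leads_to(trace, lambda s: s["x"] == 1, lambda s: s["x"] == 0)
assert not leads_to(trace, lambda s: s["x"] == 2, lambda s: s["x"] == 3)
```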
This brief description of TLA leads to the
specification and the proof of implementation.
\section{Constructing the Specifications}
\label{sec:specs}
We use three specifications for this example. {\it AbstractBuffer}
(Figure \ref{fig:AbstractBuffer})
specifies the actions, safety and liveness properties which define
a bounded buffer of size $N$, as in
Figure \ref{fig:abstractbufferpicture}.
The buffer is represented by variable {\it buffer}, and has {\it push}
and {\it pop} actions respectively
to put elements in the buffer (provided that
there aren't $N$ elements there already), and to take elements out
of the buffer (provided there is something there).
The {\it buffer} itself is a sequence. A state of the buffer is
given by the values of variables, that is, by the value of
{\it buffer}. The final portion of the module contains the
assertion of properties of the buffer: that it starts off empty,
that every state change of the buffer is caused by a push or pop
action, and that whenever there is something in the buffer, a pop
must eventually happen (recall that we don't require that anything
is ever put in the buffer).
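A minimal executable sketch of these abstract-buffer actions (illustrative Python, not the TLA$^+$ module itself; a disabled action is modelled by returning {\tt None}):

```python
# Illustrative model of the abstract buffer: a sequence of at most N
# elements, with guarded push and pop actions.  N = 4 is a sample value.

N = 4

def push(buffer, a):
    """push(a): enabled only when the buffer holds fewer than N elements."""
    if len(buffer) >= N:
        return None          # action not enabled
    return buffer + [a]

def pop(buffer):
    """pop: enabled only when the buffer is non-empty; removes the head."""
    if not buffer:
        return None          # action not enabled
    return buffer[1:]

b = []
b = push(b, "b"); b = push(b, "x")
assert b == ["b", "x"]
assert pop(b) == ["x"]                            # the first element goes
assert push(["1", "2", "3", "4"], "e") is None    # full buffer: push disabled
```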
\begin{figure}
\setlength{\unitlength}{9pt}
\begin{center}
\begin{picture}(24,10)
\put(3,7){$buffer$}
\put(18,7){$buffer'$}
\put(1,5){$< b,x,y,d >$}
\put(10,5){\vector(4,0){5}}
\put(12,6){$pop$}
\put(17,5){$< x,y,d >$}
\put(1,0){$< b,x,y,d >$}
\put(10,0){\vector(4,0){5}}
\put(11,1){$push(a)$}
\put(17,0){$< b,x,y,d,a >$}
\end{picture}
\end{center}
\caption{An Abstract Buffer with Operations}
\label{fig:abstractbufferpicture}
\end{figure}
\begin{figure}
\setlength{\unitlength}{10pt}
\begin{center}
\begin{picture}(40,20)
\put(6,18){$Buffer$}
\put(27,18){$Buffer'$}
\put(0,15){\framebox(2,2){$\bot$}}
\put(2,15){\framebox(2,2){$b$}}
\put(4,15){\framebox(2,2){$x$}}
\put(6,15){\framebox(2,2){$y$}}
\put(8,15){\framebox(2,2){$\bot$}}
\put(10,15){\framebox(2,2){$d$}}
\put(12,15){\framebox(2,2){$\bot$}}
\put(15,16){\vector(4,0){5}}
\put(16,17){$push(a)$}
\put(21,15){\framebox(2,2){$\bot$}}
\put(23,15){\framebox(2,2){$b$}}
\put(25,15){\framebox(2,2){$x$}}
\put(27,15){\framebox(2,2){$y$}}
\put(29,15){\framebox(2,2){$\bot$}}
\put(31,15){\framebox(2,2){$d$}}
\put(33,15){\framebox(2,2){$a$}}
\put(0,10){\framebox(2,2){$\bot$}}
\put(2,10){\framebox(2,2){$b$}}
\put(4,10){\framebox(2,2){$x$}}
\put(6,10){\framebox(2,2){$y$}}
\put(8,10){\framebox(2,2){$\bot$}}
\put(10,10){\framebox(2,2){$d$}}
\put(12,10){\framebox(2,2){$a$}}
\put(15,11){\vector(4,0){5}}
\put(16,12){$move(6)$}
\put(21,10){\framebox(2,2){$\bot$}}
\put(23,10){\framebox(2,2){$b$}}
\put(25,10){\framebox(2,2){$x$}}
\put(27,10){\framebox(2,2){$y$}}
\put(29,10){\framebox(2,2){$d$}}
\put(31,10){\framebox(2,2){$\bot$}}
\put(33,10){\framebox(2,2){$a$}}
\put(0,5){\framebox(2,2){$\bot$}}
\put(2,5){\framebox(2,2){$b$}}
\put(4,5){\framebox(2,2){$x$}}
\put(6,5){\framebox(2,2){$y$}}
\put(8,5){\framebox(2,2){$d$}}
\put(10,5){\framebox(2,2){$\bot$}}
\put(12,5){\framebox(2,2){$a$}}
\put(15,6){\vector(4,0){5}}
\put(16,7){$move(2)$}
\put(21,5){\framebox(2,2){$b$}}
\put(23,5){\framebox(2,2){$\bot$}}
\put(25,5){\framebox(2,2){$x$}}
\put(27,5){\framebox(2,2){$y$}}
\put(29,5){\framebox(2,2){$d$}}
\put(31,5){\framebox(2,2){$\bot$}}
\put(33,5){\framebox(2,2){$a$}}
\put(0,0){\framebox(2,2){$b$}}
\put(2,0){\framebox(2,2){$\bot$}}
\put(4,0){\framebox(2,2){$x$}}
\put(6,0){\framebox(2,2){$y$}}
\put(8,0){\framebox(2,2){$d$}}
\put(10,0){\framebox(2,2){$\bot$}}
\put(12,0){\framebox(2,2){$a$}}
\put(15,1){\vector(4,0){5}}
\put(16,2){$pop$}
\put(21,0){\framebox(2,2){$\bot$}}
\put(23,0){\framebox(2,2){$\bot$}}
\put(25,0){\framebox(2,2){$x$}}
\put(27,0){\framebox(2,2){$y$}}
\put(29,0){\framebox(2,2){$d$}}
\put(31,0){\framebox(2,2){$\bot$}}
\put(33,0){\framebox(2,2){$a$}}
\end{picture}
\end{center}
\caption{A Concrete Buffer with Operations}
\label{fig:concretebufferpicture}
\end{figure}
The specification {\it ConcreteBuffer} (Figure \ref{fig:ConcreteBuffer})
implements this in a very
particular way. The {\it Buffer} is an array of fixed length $N$,
as in Figure \ref{fig:concretebufferpicture}, where $N = 7$.
Data values are inserted at the end of the array by {\it push}, provided
this terminal position is void, and are removed from the first
position of the
array by {\it pop}, provided this position is non-void.
Thus $push$, but not $pop$, is parametrised by the data value that
is pushed. Since this value does not change from state to state
(although the element is moved around, it doesn't spontaneously
change), it is a rigid variable so we may employ
a first-order existential quantifier. $:E: a : push(a)$ asserts
that some element has been pushed.
To move elements from the rear of the {\it Buffer} to the front, there
is an action {\it move} which moves an element into the
position in front, if this position is void.
$move$ is itself parametrised by the starting position of the
element moved. This parameter is another rigid variable.
Existential quantifiers over these variables are the only
quantifiers that appear in the specification.
The specification is a logical formula asserting that the array starts
out filled with $\bot$ elements (which are not $Data$, by
the assertion), that every state change of the Buffer
is caused by a push, a pop or a move, and finally that if something
can be popped, it eventually will be, and that if something can
be moved, it eventually will be (recall our discussion in
Section \ref{sec:intro} why this is necessary).
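The three concrete actions can be sketched in Python as guarded updates on a fixed array of $N$ slots, with {\tt None} standing in for $\bot$ (an illustration of the behaviour just described, not the module's text; positions are 1-based as in the specification):

```python
# Illustrative model of the concrete buffer: a fixed array of N slots,
# with None playing the role of the void value.  N = 7 as in the figure.

N = 7
VOID = None

def push(buf, a):
    """push(a): enabled when the last slot is void; writes a there."""
    if buf[N - 1] is not VOID:
        return None                      # not enabled
    return buf[:N - 1] + [a]

def pop(buf):
    """pop: enabled when the first slot is non-void; empties it."""
    if buf[0] is VOID:
        return None                      # not enabled
    return [VOID] + buf[1:]

def move(buf, k):
    """move(k), 2 <= k <= N: shifts buf[k] into the void slot buf[k-1]."""
    if buf[k - 1] is VOID or buf[k - 2] is not VOID:
        return None                      # not enabled
    out = list(buf)
    out[k - 2], out[k - 1] = out[k - 1], VOID
    return out

buf = [VOID, "b", "x", "y", VOID, "d", VOID]
assert push(buf, "a") == [VOID, "b", "x", "y", VOID, "d", "a"]
assert move(buf, 6) == [VOID, "b", "x", "y", "d", VOID, VOID]
assert move(buf, 2) == ["b", VOID, "x", "y", VOID, "d", VOID]
assert pop(buf) is None                  # first slot void: pop not enabled
```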
We need to show that
the concrete buffer defined as an array implements the abstract buffer.
A different way to
implement the concrete buffer would be as a linked list.
{\it AbstractBuffer} and {\it ConcreteBuffer}, as well as {\it Sequences},
import the {\it Naturals}
module, which is not given here. {\it Naturals} includes definitions of the
set $Nat$ of natural numbers along with the standard arithmetic operations.
It also defines such constant symbols as $1$, $2$.
Since there are many ways one could wish to write such a module, we don't
commit ourselves to one. The use we make of it is slight and any
reasonable definition will suffice.
Finally, we write a double-buffer specification {\it DoubleBuffer}
(Figure \ref{fig:DoubleBuffer}),
which implements a buffer of size $2N$ by arranging two concrete
buffers of size $N$ in series, as in
Figure \ref{fig:doublebufferpicture} in which $N = 3$.
This specification has similar actions to the concrete buffer,
with identical enabling conditions,
but one {\it move} action consists in shuttling an element from the
front of the rear buffer to the rear of the front buffer. This
is accomplished by an action which is simultaneously a {\it pop}
of the rear buffer and a {\it push} of the same popped element onto
the front buffer.
An important feature of the $DoubleBuffer$ specification is
that the {\it same} element that is popped from the rear buffer must
appear on the front buffer.
Consequently, $move(N+1)$ is defined as $B2.pop /\ B1.push(B2[1])$.
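The shuttle action can be sketched as an atomic update of both buffers (hypothetical Python mirroring the behaviour described above, with $B2$ the rear buffer and $B1$ the front one; {\tt None} again stands for $\bot$):

```python
# Illustrative model of the double buffer's shuttle: move(N+1) is
# simultaneously a pop of the rear buffer B2 and a push of the popped
# element onto the front buffer B1, performed as one atomic step.

N = 3
VOID = None

def move_shuttle(b1, b2):
    """move(N+1): enabled when B2[1] is non-void and B1[N] is void."""
    if b2[0] is VOID or b1[N - 1] is not VOID:
        return None                              # not enabled
    new_b1 = b1[:N - 1] + [b2[0]]                # B1.push(B2[1])
    new_b2 = [VOID] + b2[1:]                     # B2.pop
    return new_b1, new_b2

b1, b2 = [VOID, "b", VOID], ["d", "a", VOID]
assert move_shuttle(b1, b2) == ([VOID, "b", "d"], [VOID, "a", VOID])
```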
\begin{figure}
\setlength{\unitlength}{9pt}
\begin{center}
\begin{picture}(40,25)
\put(1,23){$Buffer1$}
\put(9,23){$Buffer2$}
\put(26,23){$Buffer1'$}
\put(34,23){$Buffer2'$}
\put(0,20){\framebox(2,2){$\bot$}}
\put(2,20){\framebox(2,2){$b$}}
\put(4,20){\framebox(2,2){$\bot$}}
\put(6,21){\line(2,0){2}}
\put(8,20){\framebox(2,2){$d$}}
\put(10,20){\framebox(2,2){$\bot$}}
\put(12,20){\framebox(2,2){$\bot$}}
\put(15,21){\vector(4,0){9}}
\put(17,22){$push(a)$}
\put(25,20){\framebox(2,2){$\bot$}}
\put(27,20){\framebox(2,2){$b$}}
\put(29,20){\framebox(2,2){$\bot$}}
\put(31,21){\line(2,0){2}}
\put(33,20){\framebox(2,2){$d$}}
\put(35,20){\framebox(2,2){$\bot$}}
\put(37,20){\framebox(2,2){$a$}}
\put(0,15){\framebox(2,2){$\bot$}}
\put(2,15){\framebox(2,2){$b$}}
\put(4,15){\framebox(2,2){$\bot$}}
\put(6,16){\line(2,0){2}}
\put(8,15){\framebox(2,2){$d$}}
\put(10,15){\framebox(2,2){$\bot$}}
\put(12,15){\framebox(2,2){$a$}}
\put(15,16){\vector(4,0){9}}
\put(17,17){$move(6)$}
\put(25,15){\framebox(2,2){$\bot$}}
\put(27,15){\framebox(2,2){$b$}}
\put(29,15){\framebox(2,2){$\bot$}}
\put(31,16){\line(2,0){2}}
\put(33,15){\framebox(2,2){$d$}}
\put(35,15){\framebox(2,2){$a$}}
\put(37,15){\framebox(2,2){$\bot$}}
\put(0,10){\framebox(2,2){$\bot$}}
\put(2,10){\framebox(2,2){$b$}}
\put(4,10){\framebox(2,2){$\bot$}}
\put(6,11){\line(2,0){2}}
\put(8,10){\framebox(2,2){$d$}}
\put(10,10){\framebox(2,2){$a$}}
\put(12,10){\framebox(2,2){$\bot$}}
\put(15,11){\vector(4,0){9}}
\put(17,12){$move(4)$}
\put(16,10){\small $B2.pop \; \wedge$}
\put(16,9){\small $B1.push(B2[1])$}
\put(25,10){\framebox(2,2){$\bot$}}
\put(27,10){\framebox(2,2){$b$}}
\put(29,10){\framebox(2,2){$d$}}
\put(31,11){\line(2,0){2}}
\put(33,10){\framebox(2,2){$\bot$}}
\put(35,10){\framebox(2,2){$a$}}
\put(37,10){\framebox(2,2){$\bot$}}
\put(0,5){\framebox(2,2){$\bot$}}
\put(2,5){\framebox(2,2){$b$}}
\put(4,5){\framebox(2,2){$d$}}
\put(6,6){\line(2,0){2}}
\put(8,5){\framebox(2,2){$\bot$}}
\put(10,5){\framebox(2,2){$a$}}
\put(12,5){\framebox(2,2){$\bot$}}
\put(15,6){\vector(4,0){9}}
\put(17,7){$move(2)$}
\put(25,5){\framebox(2,2){$b$}}
\put(27,5){\framebox(2,2){$\bot$}}
\put(29,5){\framebox(2,2){$d$}}
\put(31,6){\line(2,0){2}}
\put(33,5){\framebox(2,2){$\bot$}}
\put(35,5){\framebox(2,2){$a$}}
\put(37,5){\framebox(2,2){$\bot$}}
\put(0,0){\framebox(2,2){$b$}}
\put(2,0){\framebox(2,2){$\bot$}}
\put(4,0){\framebox(2,2){$d$}}
\put(6,1){\line(2,0){2}}
\put(8,0){\framebox(2,2){$\bot$}}
\put(10,0){\framebox(2,2){$a$}}
\put(12,0){\framebox(2,2){$\bot$}}
\put(15,1){\vector(4,0){9}}
\put(17,2){$pop$}
\put(25,0){\framebox(2,2){$\bot$}}
\put(27,0){\framebox(2,2){$\bot$}}
\put(29,0){\framebox(2,2){$d$}}
\put(31,1){\line(2,0){2}}
\put(33,0){\framebox(2,2){$\bot$}}
\put(35,0){\framebox(2,2){$a$}}
\put(37,0){\framebox(2,2){$\bot$}}
\end{picture}
\end{center}
\caption{A Double Buffer with Operations}
\label{fig:doublebufferpicture}
\end{figure}
The goal is to show that this double buffer implements an abstract
buffer of size $2N$. First, we prove that the double buffer actually
implements a single array buffer with the same total number of places.
We then prove that a concrete array buffer implements the abstract
buffer specification. These two assertions are contained in
Module $Theorems$ (Figure \ref{fig:Theorems}).
TLA is a system for writing and verifying
specifications and implementations of concurrent algorithms.
Is there any explicit concurrency in this specification? Just a little,
in module {\it DoubleBuffer}.
The specification {\it DoubleBuffer} specifies a serial
`composition' of two buffers, whose actions are not
generally constrained to synchronise. The only exception is
the {\it move(N+1)} action, which is defined to be
simultaneously a {\it pop} of {\it Buffer2} and a {\it push}
of the same element on {\it Buffer1}.
TLA specifications in general don't enforce concurrency like
this, but don't rule it out either, except by means of
whatever conditions are
explicitly written in the action definitions.
\section{Proving the Implementation}
\label{sec:informal-proof}
\begin{figure}
\noindent
{\bf Temporal Logic Rules}\\[.2\baselineskip]
\(
\begin{array}{@{}l@{\hspace{2.5em}}ll@{}}
{\rm STL}1.\
\proofrule{\mbox{$F$ a tautology}}{\Box F}
& {\rm STL}4.\ \proofrule{F\implies G}{\Box F \;\implies\;\Box G}
& \\
%
{\rm STL}2.\ \provable{\Box F \,\implies\, F}
& {\rm STL}5.\ \provable{\Box(F\land G)\;\equiv\;(\Box F)\land(\Box G)}
\\
%
{\rm STL}3.\ \provable{\Box\Box F \;\equiv\;\Box F}
& \multicolumn{2}{@{}l@{}}{
{\rm STL}6.\ \provable{(\Diamond\Box F) \,\land\, (\Diamond\Box G) \;\equiv\;
\Diamond\Box (F \land G)}}\\
&% {\rm STL}7.\ \provable{\Box \Diamond\Box F \,\equiv\, \Diamond\Box F}
\\
\multicolumn{2}{@{}l@{}}{{\rm LATTICE}.\
\proofrule{\mbox{\hspace{1em} $\succ$ a well-founded partial order on a
set \rvbl{S}}\\
\mbox{\hspace{4em} \rvbl{c} not free in $F$ or $G$}\\
F\land (\rvbl{c}\in\rvbl{S}) \;\implies\;
(H_{\rvbl{c}}\,\leadsto\,
(G\,\lor\,\exists \rvbl{d}\in\rvbl{S} :
(\rvbl{c}\succ\rvbl{d})\land H_{\rvbl{d}}))}
{F\;\implies\;((\exists\rvbl{c}\in\rvbl{S}: H_{\rvbl{c}})\leadsto G)}}
& \\
\end{array}
\)\\[\baselineskip]
%
{\bf Basic TLA Rules}\\[.2\baselineskip]
\( {\rm TLA}1. \
\proofrule{P\land(f'=f)\;\implies\;P'}%
{\Box P \;\equiv\; P \land \Box\sqact{{}P\implies P'}{f}}
\)%
\hspace{2.5em}%
\( {\rm TLA}2.\
\proofrule{P\land\sqact{A}{f}\;\implies\; Q\land\sqact{B}{g}}%
{\Box P\land\Box \sqact{A}{f}\;\implies\; \Box Q\land\Box\sqact{B}{g}}
\)\\[\baselineskip]
%
{\bf Invariance and Weak Fairness Rules}\\[.3\baselineskip]
\(\begin{array}{@{}l@{\hspace{1em}}l@{}}
{\rm INV}1.\
\proofrule{I \land\sqact{N}{f}\implies I'}%
{I \land\Box\sqact{N}{f}\implies \Box I} &
{\rm INV}2.\
\provable{\Box I \;\implies\;
(\Box\sqact{N}{f} \,\equiv\, \Box\sqact{N\land I\land I'}{f})}
\vspace{2em}
\\
\begin{array}{@{}l@{}}
{\rm WF}1.\\ \hspace*{1em}
\proofrule{P \land \sqact{N}{f} \;\implies\; (P' \lor Q')\\
P \land \anact{N\land \act{A}}{f} \;\implies\; Q'\\
P \;\implies\; \enabled{\anact{A}{f}}}%
{\Box\sqact{N}{f}\land\WF_{f}{(\act{A})} \;\implies\; (P\leadsto Q)}
\end{array}
&
\vspace{2em}
\begin{array}{@{}l@{}}
{\rm WF}2.\\\hspace*{1em}
\proofrule{\anact{N\land\act{B}}{f} \;\implies\;
\anact{{}\overline{\act{M}}}{\overline{g}} \\
P \land P' \land\anact{N\land\act{A}}{f}
\land \overline{\enabled{\anact{M}{g}}} %% added 11/93
\;\implies\; \act{B}\\
P \land \overline{\enabled{\anact{M}{g}}} \;\implies\;
\enabled{\anact{A}{f}}\raisebox{.2em}{\strut}\\
\Box\sqact{N\land\lnot\act{B}}{f} \land \WF_{f}{(\act{A})} \land \Box F
\\ \mbox{\s{2.5}}
\land \Diamond\Box\overline{\enabled{\anact{M}{g}}} %% added 11/93
\;\implies\; \Diamond\Box P}%
{\Box\sqact{N}{f} \land \WF_{f}{(\act{A})} \land \Box F \;\implies\;
\overline{\WF_{g}{(\act{M})}}\raisebox{.2em}{\strut}}
\end{array}
\\
\end{array}
\)%
\\
{\bf where} \ \
\begin{tabular}{@{}l@{\hspace{2em}}l@{}}
$F$, $G$, $H_{\rvbl{c}}$ are TLA formulas &
$P$, $Q$, $I$ are predicates \\
\act{A}, \act{B}, \act{N}, \act{M} are actions &
$f$, $g$ are state functions
\end{tabular}
\caption{Axioms and proof rules of TLA.}
\label{fig:TLA-rules}
\end{figure}
The proof that the system specified in {\it DoubleBuffer} implements the
system specified in {\it AbstractBuffer}, with the parameter $N$ of
{\it AbstractBuffer} instantiated as $2N$, is straightforward.
Implementation in TLA is implication, and we can use the
transitivity of implication.
We show that the double buffer implements a single concrete buffer,
and then we show that a concrete buffer implements an abstract buffer.
(In the Module $Theorems$, where this is stated, we retain the
parameter $N$ in this second assertion for convenience. Literally,
the parameter should be $2N$, but since the parameter is a
parameter of the proof, a proof for $N$ is isomorphic to a proof for
$2N$. We might as well save ink and say `$N$'.)
Implementations of specifications, that is, implications of TLA
specification formulas, are proved according to a template given in
Section \ref{sec:implscheme}, available also as \cite{LadLam94}. Call
the description of the implementation the {\it implementiens}, and the
description that is (asserted to be) implemented the {\it implementiendum}.
The safety properties are proved by step
simulation: a step of the implementiens is either an action step or a
stuttering step of the implementiendum. The liveness properties of
the implementiendum are proved from the liveness properties of the
implementiens. In order to show the implication, it may be necessary
to add history variables (state functions which record part of the
history of a computation), or so-called prophecy variables (variables
that predict a future action) to a specification. One may always add
such variables, and the specification obtained by adding history
variables is logically equivalent to the original specification when
the history variables are hidden by quantification.
Prophecy variables were defined in \cite{AbaLam91:ref}. A prophecy
variable $Descending$, a so-called {\em stuttering variable}, is used in
the proof that the concrete buffer implements the abstract buffer.
It's a state function defined in the scope of the proof. A stuttering
variable is a variable on a well-founded order (often, as in this
case, the natural numbers) which is decreased by implementation steps
which are stuttering steps of the specification. When this variable
reaches a minimum value, then some implementation step occurs which is
a non-stuttering step of the specification. Its value at the outset
predicts how many stuttering steps may occur before a non-stuttering
step is enabled -- in this case, how many $CB.move$s may occur before a
$CB.pop$ is enabled.
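The definition of $Descending$ itself belongs to the proof and is not reproduced here. Purely to illustrate the idea of a decreasing measure for stuttering steps, here is a simpler, hypothetical well-founded measure on concrete-buffer states: the sum of the (1-based) positions of the non-void slots. Every $move$ decreases it by exactly one, so only finitely many moves can occur before something else must happen:

```python
# A hypothetical decreasing measure (NOT the paper's Descending): the
# sum of the positions of the non-void slots of the concrete buffer.

VOID = None

def measure(buf):
    return sum(k for k, slot in enumerate(buf, start=1) if slot is not VOID)

def move(buf, k):
    """move(k): shift buf[k] into buf[k-1] (1-based; assumed enabled)."""
    out = list(buf)
    out[k - 2], out[k - 1] = out[k - 1], VOID
    return out

buf = [VOID, "b", VOID, "d"]
assert measure(buf) == 6                    # positions 2 and 4
assert measure(move(buf, 2)) == measure(buf) - 1   # each move decreases it
```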
Firstly, we show that {\it DoubleBuffer}
is a {\it ConcreteBuffer} with size $2N$ -- the first
implication in Module $Theorems$.
The step simulation is straightforward.
There are no stuttering steps. The {\it push} action is the
{\it push} of the rear buffer, the {\it pop} is the {\it pop}
of the forward buffer, and the {\it move} action is the {\it move}
action of whichever buffer holds the two slots involved -- in the case of
a move from the first slot of the rear buffer to the last slot of
the forward buffer, the {\it move} action is simultaneously a {\it pop}
of the rear and a {\it push} of the forward with the same element
(whatever that element may be).
In order to show that the {\it ConcreteBuffer} implements an
{\it AbstractBuffer}, we need to produce a state function in
{\it ConcreteBuffer} that satisfies the conditions of being the
{\it AbstractBuffer.buffer}. This association of a state function with each
existentially quantified variable of {\it AbstractBuffer} in
the theorem is called the {\it refinement mapping}.
It's easy to see what this mapping should be. In {\it AbstractBuffer},
the {\it buffer} is regarded as a sequence of whatever has been put in, up
to length $N$. In {\it ConcreteBuffer}, there are always $N$ slots
in {\it Buffer}, some of
which are filled and some not. If we ignore the slots that aren't
filled, the sequence of elements is the sequence needed for the
{\it AbstractBuffer.buffer}. In Module {\it Sequences}, a standard TLA$^+$
module used in other specifications \cite{LadLamRoeOli94}, the
function {\it SelectSeq} forms a sequence from an input sequence by
selecting those elements of the input sequence which satisfy a
particular predicate. (The alert reader may notice that $Sequences$ imports
a module $Naturals$, which defines certain properties of the natural
numbers. We don't bother to include the module as a figure.)
We choose the predicate saying that a value is
non-void, and define the state function consisting of the sequence of
non-void elements of the {\it ConcreteBuffer.Buffer}. This state function
implements {\it AbstractBuffer.buffer}, and is the sole
state-function/variable pair in the refinement mapping.
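The refinement mapping can be mimicked in Python with a filter playing the role of {\it SelectSeq} (an illustration of the mapping, not the module's definition; {\tt None} stands for the void value):

```python
# The abstract buffer is obtained from the concrete one by keeping only
# the non-void slots, in order -- the role of SelectSeq(Buffer, NonVoid).

VOID = None

def select_seq(seq, test):
    """Like the Sequences module's SelectSeq: keep the elements passing test."""
    return [e for e in seq if test(e)]

def non_void(e):
    return e is not VOID

concrete = [VOID, "b", "x", "y", VOID, "d", VOID]
assert select_seq(concrete, non_void) == ["b", "x", "y", "d"]

# A concrete push writes into the last (void) slot, and adds exactly one
# element to the end of the abstract sequence, as step simulation needs:
pushed = concrete[:-1] + ["a"]
assert select_seq(pushed, non_void) == ["b", "x", "y", "d", "a"]
```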
Step simulation is straightforward. The action
{\it ConcreteBuffer.pop}
simulates the action {\it AbstractBuffer.pop}, and similarly for
{\it ConcreteBuffer.push} and {\it AbstractBuffer.push}.
The only conditions that
are not trivial are those on {\it Len(buffer)}. However, it is easy to prove
that {\it ConcreteBuffer.push} adds a non-void element to
{\it ConcreteBuffer.Buffer}, without removing any non-void elements that are
there already, and thus increases the length of
the state function \mbox{\it SelectSeq(Buffer,NonVoid)} simulating
{\it AbstractBuffer.buffer} by $1$. Similarly, it is easy to see that
{\it ConcreteBuffer.pop} decreases the number of non-void elements of
{\it ConcreteBuffer.Buffer} by $1$, without adding any.
It is straightforward to show that the
action {\it ConcreteBuffer.move} is a stuttering step of {\it AbstractBuffer}.
This completes the demonstration of safety properties.
Liveness properties are more complex. The {\it AbstractBuffer}
satisfies the liveness property that if a {\it pop} may be done,
eventually it will be done. Since the {\it pop} of the {\it AbstractBuffer}
is simulated by the {\it pop} of the {\it ConcreteBuffer},
{\it ConcreteBuffer.pop} needs to satisfy the same liveness property.
However, the enabling conditions for the {\it AbstractBuffer.pop} and the
{\it ConcreteBuffer.pop} are not the same! {\it AbstractBuffer.pop} is
enabled when there is some non-void element in
{\it AbstractBuffer.buffer}. However,
{\it ConcreteBuffer.pop} has the {\it prima facie} stronger requirement that
there must be a non-void element in the {\it first place} of its {\it Buffer}.
But if there is not, and there is a non-void element somewhere in the
{\it Buffer}, some {\it move} is enabled. To see why, note that there
must be some first $k$ such that $Buffer[k]$ is non-void. Since
$Buffer[1]$ is void, $k$ is at least $2$, so $Buffer[k-1]$ is a
well-defined $Buffer$ place; it is void because $k$ is the first
non-void place, and hence $move(k)$ is enabled. Other moves may also be enabled, but
this first element is important since it's the one which will eventually
be popped off the front by a $ConcreteBuffer.pop$ operation, which
simulates the $AbstractBuffer.pop$ that we are trying to show will
occur.
Any enabled concrete $move$ remains enabled if a concrete $push$ occurs,
since only $Buffer[N]$ is affected by a $push$, only $move(N)$ has
any precondition concerning $Buffer[N]$, and $move(N)$ is
not enabled if $push$ is enabled.
Similar reasoning shows that when $move(k)$ occurs,
any enabled $move(j)$ for $j :neq: k$ remains enabled
(note that $move(k+1)$ could not have been enabled if $move(k)$
was enabled, but might become enabled afterwards if $Buffer[k+1]$ were
non-void).
Thus, for $k$ the first number such that $Buffer[k]$ is non-void,
$move(k)$ remains enabled through other $move$, or $push$,
operations. Weak fairness guarantees it will eventually be executed,
whence $Buffer[k-1]$ becomes the first non-void $Buffer$ place.
The same reasoning applies now to $Buffer[k-1]$ and $move(k-1)$.
Intuitively, the first non-void element will eventually make it to
$Buffer[1]$, at which point $ConcreteBuffer.pop$ becomes enabled.
Once $Buffer[1]$ is non-void, no action other than $pop$ can change it,
so $pop$ remains enabled and thus must eventually
be executed, according to its weak fairness requirement.
This step simulates $AbstractBuffer.pop$. Thus the weak-fairness
requirement on {\it AbstractBuffer.pop} is satisfied.
Notice that one requires not only the liveness of {\it ConcreteBuffer.pop}
to show that {\it AbstractBuffer.pop} is live, but also the liveness of
{\it ConcreteBuffer.move}. If {\it ConcreteBuffer.move} is not required to be
live, the {\it ConcreteBuffer} could sit there holding non-void elements,
with {\it ConcreteBuffer.push} and {\it ConcreteBuffer.pop} both disabled,
waiting for an enabled {\it move} that never happens.
The liveness of {\it AbstractBuffer.pop} could not be proven
in this case,
since {\it AbstractBuffer.pop} is enabled, but will not be executed.
Thus, the liveness of {\it ConcreteBuffer.move} is crucial for
the proof of liveness of {\it AbstractBuffer.pop}.
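The migration argument can be played out concretely: starting from a state in which $pop$ is disabled but some slot is non-void, repeatedly performing the $move$ of the first non-void element eventually makes $Buffer[1]$ non-void. A hedged Python simulation of this informal argument (illustrative model, not the module):

```python
# Simulate the liveness argument: the first non-void element migrates
# leftwards under move until ConcreteBuffer.pop becomes enabled.

VOID = None

def first_nonvoid(buf):
    """The first (1-based) non-void position, or None if all slots are void."""
    for k, slot in enumerate(buf, start=1):
        if slot is not VOID:
            return k
    return None

def move(buf, k):
    """move(k): shift buf[k] into the void slot buf[k-1] (1-based)."""
    out = list(buf)
    out[k - 2], out[k - 1] = out[k - 1], VOID
    return out

buf = [VOID, VOID, "x", VOID, "d", VOID, "a"]
steps = 0
while first_nonvoid(buf) != 1:       # pop is enabled iff Buffer[1] is non-void
    buf = move(buf, first_nonvoid(buf))
    steps += 1
assert buf[0] == "x" and steps == 2  # "x" reached the front in two moves
```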
We have informally indicated the refinement mapping, and the step
simulation, and how the step simulation and the liveness requirements
are proved. This suffices to construct a formal proof in TLA of the
implementation according to the template given in
Section \ref{sec:implscheme}.
There are two minor counters to our claim that the proof we give is
syntactically formal. One step of temporal logic is required which
we were unable to derive strictly from STL1--STL6 and the Lattice Rule, the
TLA temporal logic rules. Further, we were unable to modify our
use of WF2, the Lattice Rule and WF1 in order to avoid this
calculation in temporal logic.
Problems of this sort may easily be avoided by adjoining
rules of propositional logic, as we noted in
Section \ref{sec:conclusions}.
A simple derivation may be given using Modus Ponens and
tautologies, and we include one due to Lamport.
The second counter is that we require a small bit of simple arithmetic
in order to perform the calculations with the stuttering variable
$Descending$.
Since we have not committed ourselves to a particular $Naturals$ module,
we simply assume that its arithmetic is adequate to derive the trivial
results we need. The steps which need such arithmetic are marked in the
proof, and the reader can easily see by inspection that the
arithmetic is correct -- it involves merely noting such facts
as that
(a) $+$ is associative; (b) $(i - 2^{N-(k-1)} + 2^{N-k}) < i$ for
$k :in: 2..N$; and (c) $2^{N-N} = 1$ and $(i-1) < i$.
Assertion (a) should be included in any arithmetic
module, since it's a basic axiom for addition, and (b) and (c) require
only easy calculations with exponents and $<$. We assume the reader
is sufficiently assured of the truth of these two assertions not to
need a formal proof of either.
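For readers who want mechanical reassurance, facts (b) and (c) are easily checked for sample values (the choices of $N$, $i$ and $k$ below are hypothetical):

```python
# Sanity-check the arithmetic facts needed for the Descending
# calculations, for sample values of N, i and k.

N = 7
for i in range(1, 200):
    for k in range(2, N + 1):
        # (b): 2^(N-(k-1)) = 2 * 2^(N-k), so the net change is negative
        assert (i - 2 ** (N - (k - 1)) + 2 ** (N - k)) < i

# (c): the contribution of the last place, and that i - 1 < i
assert 2 ** (N - N) == 1
assert all((i - 1) < i for i in range(1, 200))
```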
\section{The Proof Style}
\label{sec:proof-style}
We use the hierarchical proof style advocated
by (and supported by the macros of) Lamport \cite{Lam93proof}.
Each step is justified by a
sequence of lower-level steps, such that it is a rigorous logical
consequence of the lower-level steps. The inference rules which may be
used to derive a step from a sequence of lower-level steps are
given in Figure \ref{fig:TLA-rules} (the use of propositional
logic, predicate logic, and some elementary propositional temporal logic,
is understood). Each lower-level step in
turn must be justified in the same manner. The proof stops
descending when propositional logic, or obvious theorems concerning
data structures such as the natural numbers, or properties of
sequences, or any known mathematical theorems, are reached. In
practice, one halts a descent in a proof when one is certain
how to complete the proof but too lazy to do so.
The point about the style is that a proof in TLA is logically
rigorous, and entirely syntactic. That means that each step in the
proof is derived from other, named, steps using only the permitted
syntactic rules of inference. Furthermore, no new principles need
to be used in order to prove a new example correct. TLA is a logic,
with general rules, and has remained stable for a number of years.
Every proof of correctness in TLA uses the same old TLA rules,
albeit in a new way.
The proof style supports this rigorous syntactic derivation.
The proof steps are numbered in the following way: step $\langle n\rangle.m$
has as substeps steps $\langle n+1\rangle.1$ through
$\langle n+1\rangle.k$, with the last
step being a $QED$ step. The proof of the $QED$ step is usually a
commentary indicating how the steps $\langle n+1\rangle.1$ through
$\langle n+1\rangle.k$ yield the proof of step $\langle n\rangle.m$.
Formulas are also written in a special form. Conjuncts and disjuncts
may be lengthy, and so rather than being written in a line, they are
written as tableaux in which each line is a separate con(dis)junct,
preceded by the con(dis)junction sign. Thus nested con(dis)juncts
are indented, and consequently much easier to read or to manipulate.
The notation is explained in \cite{Lam94formula}.
\section{The Proof Scheme for Implementation}
\label{sec:implscheme}
In order to introduce the proof style, and to give a taste of a TLA
proof, we present a general proof scheme to be used when showing
that one specification $(:E: y: Init_1 /\ [][N_1]_v /\ L_1)$
implements another $(:E: x: Init_2 /\ [][N_2]_w /\ L_2)$. This schema
illustrates the hierarchical proof style recommended for use with TLA
\cite{Lam93proof}, and briefly explained in Section
\ref{sec:proof-style} above.
\bigskip
{\bf To Prove:}
\[(:E: y: Init_1 /\ [][N_1]_v /\ L_1) => (:E: x: Init_2 /\ [][N_2]_w
/\ L_2)
\]
%
\begin{proof}\tla\small
%
\vs{1.2}
Most of the structure of an implementation proof is just logic,
that is, propositional logic and application of TLA Proof Rules,
as may be seen below. The important parts to show for each individual
problem instance are indicated by `********'. All of these are simply
mathematics and not temporal logic, with the important
exception of the liveness proof. Showing liveness can
involve some non-trivial temporal logic, and uses
rules WF1, Lattice, and WF2.
\vs{1.2}
First, the definition of the refinement mapping.
\vs{1.2}
%
\pflet{$\overline{Init_2} == Init_2 ___with___ x <- \overline{x}\\
\overline{w} == w ___with___ x <- \overline{x}\\
\overline{L_2} == L_2 ___with___ x <- \overline{x}\\
$}
%
%
\vs{1.2}
Now the proof that the initial condition of the implementation
implies the initial condition of the specification.
\vs{1.2}
%
\step{init-impl}{$Init_1 => \overline{Init_2}$}
\begin{proof}
\pf\ ******This must be shown for each problem instance.*******~\qed
\end{proof}%init-impl
%
%
\vs{1.2}
Next, the proof that the initial condition and safety property of
the implementation implies that there is an invariant.
\vs{1.2}
%
\step{inv-holds}{$ Init_1 /\ [][N_1]_v => []Inv$}
\begin{proof}
\step{ih.1}{$Init_1 => Inv$}
\begin{proof}
\pf\ ******This must be shown for each problem
instance.*******~\qed
\end{proof}%ih.1
%
\step{ih.2}{$Inv /\ [][N_1]_v => []Inv$}
\begin{proof}
\pf\
\step{ih.2.1}{$Inv /\ [N_1]_v => Inv'$}
\begin{proof}
\pf\ ******This must be shown for each problem
instance*******.~\qed
\end{proof}%ih.2.1
\qedstep%ih.2
\begin{proof}
\pf\ Immediate from \stepref{ih.2.1} by Rule INV1.~\qed
\end{proof}%ih.2.1
%
\end{proof}%ih.2
\qedstep%inv-holds
\begin{proof}
\pf\ Immediate from \stepref{ih.1} and \stepref{ih.2} by
propositional logic.~\qed
\end{proof}%qed.inv-holds
\end{proof}%inv.holds
%
\vs{1.2}
Using the invariant and the safety property of the implementation,
one derives the safety property of the specification. Some manipulation
using the invariant is needed in the substeps.
\vs{1.2}
%
\step{safe-holds}{$Init_1 /\ [][N_1]_v => [][\overline{N_2}]_{\overline{w}}$}
\begin{proof}
\pf\
\step{sh.2}{$ []Inv /\ [][N_1]_v => [][\overline{N_2}]_{\overline{w}}$}
\begin{proof}
\pf\
\step{sh.2.2}{$ []Inv /\ [][N_1]_v => [][N_1 /\ Inv /\ Inv']_v
$}
\begin{proof}
\pf\ Immediate from INV2 using STL5.~\qed
\end{proof}%sh.2.2
%
\step{sh.2.1}{$ [][N_1 /\ Inv /\ Inv']_v =>
[][\overline{N_2}]_{\overline{w}} $}
\begin{proof}
\pf\
\step{sh.2.1.1}{$ [N_1 /\ Inv /\ Inv']_v =>
[\overline{N_2}]_{\overline{w}} $}
\begin{proof}
\pf\ ******This must be shown*********.~\qed
\end{proof}%sh.2.1.1
\qedstep%sh.2.1
\begin{proof}
\pf\ Immediate from \stepref{sh.2.1.1} by STL4.~\qed
\end{proof}%qed.sh.2.1
%
\end{proof}%sh.2.1
%
%
\qedstep%sh.2
\begin{proof}
\pf\ Immediate from \stepref{sh.2.2}
and \stepref{sh.2.1} by propositional logic.~\qed
\end{proof}%qed.sh.2
\end{proof}%sh.2
\qedstep%safe-holds
\begin{proof}
\pf\ Immediate from \stepref{inv-holds}
and \stepref{sh.2} by propositional logic.~\qed
\end{proof}%qed.safe-holds
\end{proof}%safe-holds
%
%
\vs{1.2}
The proof that the liveness property of the specification is
implied by the implementation has no general form. The
TLA Proof Rules WF1, Lattice, and WF2 are used, and it is necessary
to instantiate the hypotheses of these rules with appropriate
formulae taken from the problem instance. This argument may
involve some non-trivial temporal logic, but is the only part
of a TLA implementation proof to do so.
\vs{1.2}
%
\step{live-holds}{$ []Inv /\ [][N_1]_v /\ L_1 => \overline{L_2}$}
\begin{proof}
\pf\ *******Uses Rules WF1, WF2 and the Lattice Rule.*********
\end{proof}%live-holds
%
\step{alltogether}{$Init_1 /\ [][N_1]_v /\ L_1 => \overline{Init_2}
/\ [][\overline{N_2}]_{\overline{w}}
/\ \overline{L_2}$}
\begin{proof}
\pf\ Immediate from \stepref{init-impl},
\stepref{safe-holds} and
\stepref{live-holds} by propositional logic.~\qed
\end{proof}%alltogether
%
\qedstep
\begin{proof}
\pf\ The theorem follows from \stepref{alltogether} by quantifying
over the free variables.~\qed
\end{proof}%qed.theorem
\end{proof}
\smallskip
One may observe directly from the proof scheme above that to
prove the theorem for any given problem instance, it
suffices to
show just the following five steps, only the last of which may involve
temporal logic:
\begin{proof}\tla\small
%
\step{init-impl}{$Init_1 => \overline{Init_2}$}
%
\step{ih.1}{$Init_1 => Inv$}
%
\step{ih.2.1}{$Inv /\ [N_1]_v => Inv'$}
%
\step{sh.2.1.1}{$ [N_1 /\ Inv /\ Inv']_v =>
[\overline{N_2}]_{\overline{w}} $}
%
\step{live-holds}{$ []Inv /\ [][N_1]_v /\ L_1 => \overline{L_2}$}
%
\end{proof}
\smallskip
The following theorem summarises this information.
\begin{theorem}\label{the:sufficient}
$|-_{TLA} (:E: y: Init_1 /\ [][N_1]_v /\ L_1) =>
(:E: x: Init_2 /\ [][N_2]_w /\ L_2)$ \\
$\mbox{\rm if there is a formula} \; Inv \;
\mbox{\rm and refinement mapping} \; \O{x} \; \mbox{\rm such that }$ \\
$|-_{Predicate Logic} (Init_1 => \overline{Init_2})
%
/\ (Init_1 => Inv)
%
/\ (Inv /\ [N_1]_v => Inv')$ \\
%
$/\ \; ([N_1 /\ Inv /\ Inv']_v => [\overline{N_2}]_{\overline{w}})$
{\rm and}
%
$|-_{TLA}([]Inv /\ [][N_1]_v /\ L_1 => \overline{L_2})$
\end{theorem}
Theorem \ref{the:sufficient} will be used to simplify syntactically
the proof of liveness of the abstract buffer from the liveness
assumptions of the concrete buffer.
\begin{figure}
\vspace*{-1.25\baselineskip}
\small
\begin{module}{AbstractBuffer}
\begin{decls}
\IMPORT Naturals, Sequences
\end{decls}
\midbar
\begin{decls}{parameters}
buffer : \VARIABLE \\
Data, N : \CONSTANT
\end{decls}
\midbar
\begin{decls}{predicates}
Init == buffer = <<>>
\end{decls}
\begin{decls}{actions}
%
push(a) == \begin{conj}
a :in: Data \\
Len(buffer) < N \\
buffer' = buffer :o: <<a>> \\
\end{conj}\\
%
pop == \begin{conj}
Len(buffer) > 0 \\
buffer' = Tail(buffer) \\
\end{conj}
%
\end{decls}
\begin{decls}{temporal}
Spec == \begin{conj}
Init \\
[][pop \/ :E: b : push(b)]_{buffer} \\
WF_{buffer}(pop)
\end{conj}
\end{decls}
\end{module}
\addvspace{-\baselineskip}
\caption[]{Module $AbstractBuffer$}
\label{fig:AbstractBuffer}
\end{figure}
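For readers who prefer an operational reading, the following Python sketch mimics the actions of module $AbstractBuffer$. The class and method names are ours, not part of the TLA$^{+}$ module; an action method returns {\tt False} when the action is not enabled, modelling a stuttering step.

```python
# A minimal operational model of module AbstractBuffer (names are ours).
class AbstractBuffer:
    def __init__(self, data, n):
        self.data = data      # the constant set Data
        self.n = n            # the capacity bound N
        self.buffer = []      # Init: buffer = << >>

    def push(self, a):
        # push(a): a in Data, Len(buffer) < N, buffer' = buffer o <<a>>
        if a in self.data and len(self.buffer) < self.n:
            self.buffer = self.buffer + [a]
            return True
        return False          # action not enabled: the state stutters

    def pop(self):
        # pop: Len(buffer) > 0, buffer' = Tail(buffer)
        if len(self.buffer) > 0:
            self.buffer = self.buffer[1:]
            return True
        return False
```

Weak fairness on $pop$ then corresponds, operationally, to a scheduler that eventually runs {\tt pop} whenever it remains enabled.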
\begin{figure}
\vspace*{-1.25\baselineskip}
\small
\begin{module}{ConcreteBuffer}
\begin{decls}{parameters}
Buffer : \VARIABLE \\
\bot, Data, N : \CONSTANT
\end{decls}
\midbar
\begin{decls}
\IMPORT Naturals, Sequences
\end{decls}
\midbar
\begin{decls}{assumption}
N \in Nat
\end{decls}
\begin{decls}{definition}
\bot == \CHOOSE x : x \not\in Data
\end{decls}
\begin{decls}{predicates}
Init == \begin{conj}
:A: n :in: 1 :dd: N: Buffer[n] = \bot \\
\end{conj}
\end{decls}
\begin{decls}{actions}
%
push(a) == \begin{conj}
a :in: Data \\
Buffer[N] = \bot \\
Buffer'[N] = a \\
:A: i :in: 1 :dd: (N-1) : \UNCHANGED Buffer[i]
\end{conj}\\
%
pop == \begin{conj}
Buffer[1] # \bot \\
Buffer'[1] = \bot \\
:A: i :in: 2 :dd: N : \UNCHANGED Buffer[i]
\end{conj}\\
%
move(k) == \begin{conj}
k :in: 2 :dd: N \\
Buffer[k] # \bot \\
Buffer[k-1] = \bot \\
Buffer'[k] = \bot \\
Buffer'[k-1] = Buffer[k] \\
:A: i :in: 1 :dd: (k-2) : \UNCHANGED Buffer[i]\\
:A: i :in: (k+1) :dd: N : \UNCHANGED Buffer[i]
\end{conj}\\
%
\end{decls}
\begin{decls}{temporal}
Spec == \begin{conj}
Init \\
[][pop \/ :E: b : push(b)
\/ :E: k: move(k)]_{Buffer} \\
WF_{Buffer}(pop) \\
WF_{Buffer}(:E: k : move(k))
\end{conj}
\end{decls}
\end{module}
\addvspace{-\baselineskip}
\caption[]{Module $ConcreteBuffer$}
\label{fig:ConcreteBuffer}
\end{figure}
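An executable reading of $ConcreteBuffer$ may likewise help: the buffer is a fixed array of $N$ slots, $\bot$ (modelled here by {\tt None}, an encoding of ours) marks an empty slot, $push$ fills slot $N$, $pop$ empties slot $1$, and $move(k)$ shifts an element one slot towards the front. This is a sketch, not part of the module.

```python
BOT = None  # our stand-in for the constant \bot (CHOOSE x : x not in Data)

class ConcreteBuffer:
    def __init__(self, data, n):
        self.data = data
        self.n = n
        self.buf = [BOT] * n          # Init: Buffer[i] = \bot for i in 1..N

    def push(self, a):
        # push(a): a in Data, Buffer[N] = \bot, Buffer'[N] = a
        if a in self.data and self.buf[self.n - 1] is BOT:
            self.buf[self.n - 1] = a
            return True
        return False

    def pop(self):
        # pop: Buffer[1] # \bot, Buffer'[1] = \bot
        if self.buf[0] is not BOT:
            self.buf[0] = BOT
            return True
        return False

    def move(self, k):
        # move(k): k in 2..N, Buffer[k] # \bot, Buffer[k-1] = \bot
        if 2 <= k <= self.n and self.buf[k - 1] is not BOT \
                and self.buf[k - 2] is BOT:
            self.buf[k - 2], self.buf[k - 1] = self.buf[k - 1], BOT
            return True
        return False
```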
\begin{figure}
\vspace*{-1.25\baselineskip}
\small
\begin{module}{DoubleBuffer}
\begin{decls}{parameters}
Buffer1, Buffer2 : \VARIABLE \\
Data, N : \CONSTANT
\end{decls}
\midbar
\IMPORT Naturals, Sequences\\
\INCLUDE ConcreteBuffer \AS B1 \WITH Buffer $<-$ Buffer1, N $<-$ N \\
\INCLUDE ConcreteBuffer \AS B2 \WITH Buffer $<-$ Buffer2, N $<-$ N \\
%
\midbar
\begin{decls}{predicates}
Init == \begin{conj}
:A: n :in: 1 :dd: N: Buffer1[n] = \bot \\
:A: n :in: 1 :dd: N: Buffer2[n] = \bot \\
\end{conj}
\end{decls}
\begin{decls}{actions}
%
push(a) == B2.push(a) /\ \UNCHANGED Buffer1\\
% \begin{conj}
% a :in: Data \\
% Buffer2[N] = \bot \\
% Buffer2'[N] = a \\
% :A: i :in: 1 :dd: (N-1) : \UNCHANGED Buffer2[i] \\
% \UNCHANGED Buffer1
% \end{conj}\\
%
pop == B1.pop /\ \UNCHANGED Buffer2 \\
% \begin{conj}
% Buffer1[1] # \bot \\
% Buffer1'[1] = \bot \\
% :A: i :in: 2 :dd: N : \UNCHANGED Buffer1[i] \\
% \UNCHANGED Buffer2
% \end{conj}\\
%
move(k) == \begin{conj}
k :in: 2 :dd: 2N \\
\begin{noj}
k :in: 2 :dd: N \\
__ => \\
__ \begin{conj}
B1.move(k)\\
:A: i :in: (2 :dd: N) :\: \{ k,k-1 \} :
\UNCHANGED Buffer1[i] \\
\UNCHANGED Buffer2 \\
\end{conj}
\end{noj}\\
\begin{noj}
k :in: N+2 :dd: 2N \\
__ => \\
__ \begin{conj}
B2.move(k-N)\\
:A: i :in: (2 :dd: N) :\: \{ k-N,k-N-1 \} :
\UNCHANGED Buffer2[i] \\
\UNCHANGED Buffer1 \\
\end{conj}
\end{noj} \\
\begin{noj}
k = N+1 \\
__ => \\
__ B2.pop /\ B1.push(Buffer2[1])
\end{noj}
\end{conj}\\
%
\end{decls}
\begin{decls}{temporal}
Spec == \begin{conj}
Init \\
[][pop \/ :E: b : push(b)
\/ :E: k: move(k)]_{Buffer1,Buffer2} \\
WF_{Buffer1,Buffer2}(pop) \\
WF_{Buffer1,Buffer2}(:E: k : move(k))
\end{conj}
\end{decls}
\end{module}
\addvspace{-\baselineskip}
\caption[]{Module $DoubleBuffer$}
\label{fig:DoubleBuffer}
\end{figure}
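Under the refinement mapping $\O{Buffer} == Buffer1 \o Buffer2$, the interesting case of step simulation is $DB.move(N+1)$, which is $B2.pop$ together with $B1.push(Buffer2[1])$; this is exactly a $CB(2N).move(N+1)$ step on the barred variable. A small Python check of the correspondence (function names and the {\tt None} encoding of $\bot$ are ours):

```python
BOT = None  # our stand-in for \bot

def db_move_n_plus_1(buffer1, buffer2):
    """DB.move(N+1): B2.pop together with B1.push(Buffer2[1]).
    Enabled when Buffer1[N] = \\bot and Buffer2[1] # \\bot."""
    n = len(buffer1)
    assert buffer1[n - 1] is BOT and buffer2[0] is not BOT
    b1, b2 = buffer1[:], buffer2[:]
    b1[n - 1] = b2[0]   # B1.push(Buffer2[1]) fills Buffer1[N]
    b2[0] = BOT         # B2.pop empties Buffer2[1]
    return b1, b2

def cb_move(buffer, k):
    """CB(2N).move(k): shift the element in slot k into empty slot k-1."""
    b = buffer[:]
    assert b[k - 1] is not BOT and b[k - 2] is BOT
    b[k - 2], b[k - 1] = b[k - 1], BOT
    return b
```

The test below confirms that concatenating the buffers after $DB.move(N+1)$ gives the same state as $CB(2N).move(N+1)$ applied to the concatenation.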
% \begin{module}{ConcreteBufferWithHistory}
%
%
% \begin{decls}{parameters}
% Buffer, NumberInBuffer : \VARIABLE \\
% Data, N : \CONSTANT
% \end{decls}
%
% \midbar
%
% \begin{decls}{predicates}
% Init == \begin{conj}
% :A: n :in: 1 :dd: N: Buffer[n] = \bot \\
% NumberInBuffer = 0
% \end{conj}
% \end{decls}
%
% \begin{decls}{actions}
%%
% push(a) == \begin{conj}
% a :in: Data \\
% Buffer[N] = \bot \\
% Buffer'[N] = a \\
% NumberInBuffer' = NumberInBuffer+1
% \end{conj}\\
%%
% pop(b) == \begin{conj}
% b :in: Data \\
% Buffer[1] = b \\
% Buffer'[1] = \bot \\
% NumberInBuffer' = NumberInBuffer-1
% \end{conj}\\
%%
% move(k) == \begin{conj}
% k :in: 2 :dd: N \\
% Buffer[k] # \bot \\
% Buffer[k-1] = \bot \\
% Buffer'[k] = \bot \\
% Buffer'[k-1] = Buffer[k] \\
% NumberInBuffer' = NumberInBuffer
% \end{conj}\\
%%
% \end{decls}
%
% \begin{decls}{temporal}
% Spec == \begin{conj}
% Init \\
% [][pop \/ :E: b : push(b)
% \/ :E: k: move(k)]_{Buffer,NumberInBuffer} \\
% WF[pop]_{Buffer,NumberInBuffer} \\
% WF[:E: k : move(k)]_{Buffer,NumberInBuffer}
% \end{conj}
% \end{decls}
%
% \end{module}
\begin{figure}
\vspace*{-1.25\baselineskip}
\small
\begin{module}{Sequences}
\IMPORT Naturals
\midbar
\begin{nodecls}
Head(s) \= \kill
%
m :dd: n \>== \{ i :in: Nat : (m :leq: i) /\ (i :leq: n)\}\\
%
Len(s) \>== \CHOOSE n : (n :in: Nat) /\ ((\DOMAIN s) = (1 :dd: n))\\
%
Head(s) \>== s[1]\\
%
Tail(s) \>== [i :in: 1 :dd: (Len(s)-1) |-> s[i+1]]\\
%
s :o: t \>== [i :in: 1 :dd: (Len(s)+Len(t)) |->
\IF{i :leq: Len(s)} \THEN s[i] \\
\ELSE t[i - Len(s)] ] \FI\\
%
Seq(S) \>== \UNION \{[(1 :dd: n) -> S] : n :in: Nat\} \\
%
SelectSeq(s, test(:?:)) ==
\LET F[\,t : Seq(\{ s[i] : i :in: (1 :dd: Len(s))\})\,] == \\
___\IF{t=<<>>}
\THEN <<>>\\
\ELSE \IF{test(Head(t))} \\
\THEN <<Head(t)>> :o: F[Tail(t)] \\
\ELSE F[Tail(t)] \FI \FI \\
\IN F[s] \NI \\
%
SubSeq(s, m, n) == [i :in: (1 :dd: (1+n-m)) |-> s[i+m-1]]
\end{nodecls}
\end{module}
\addvspace{-\baselineskip}
\caption[]{Module $Sequences$}
\label{fig:Sequences}
\end{figure}
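Most of the second refinement mapping rests on $SelectSeq$. A direct Python transliteration of the recursive definition (our function names; the module's 1-based indices become 0-based slices) may clarify it:

```python
def select_seq(s, test):
    """SelectSeq(s, test): the subsequence of s whose elements satisfy test.
    Follows the recursive TLA+ definition: <<>> for the empty sequence,
    otherwise keep or drop Head(s) and recurse on Tail(s)."""
    if not s:
        return []
    head, tail = s[0], s[1:]
    return ([head] if test(head) else []) + select_seq(tail, test)

def sub_seq(s, m, n):
    """SubSeq(s, m, n) = [i in 1..(1+n-m) |-> s[i+m-1]], with 1-based m, n."""
    return s[m - 1:n]
```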
\begin{figure}
\vspace*{-1.25\baselineskip}
\small
\begin{module}{Theorems}
\INCLUDE DoubleBuffer \AS DB(N) \\
\INCLUDE ConcreteBuffer \AS CB(N) \\
\INCLUDE AbstractBuffer \AS AB(N) \\
\midbar
\begin{decls}{theorems}
%
:E: Buffer1, Buffer2 : DB(N).Spec => :E: Buffer : CB(2N).Spec \\
%
:E: Buffer : CB(N).Spec => :E: buffer : AB(N).Spec
%
\end{decls}
\end{module}
\addvspace{-\baselineskip}
\caption[]{Module $Theorems$}
\label{fig:Theorems}
\end{figure}
\newpage
\section{The Proof}
\label{sec:proof}
We must assume that TLA contains either Modus Ponens plus a set of
Hilbert-style axioms,
or a set of Gentzen-style `introduction' and `elimination' rules,
for propositional logic,
as suggested in Section \ref{sec:conclusions}.
We take propositional and predicate logic as given --
it is the purpose neither of TLA nor of this paper to
explain to the reader how to use them in a proof.
\small
\begin{proof}
I use a conventional wording for indicating the completion of a proof
as follows:
\begin{itemize}
\item the word {\em follows} indicates that some propositional or
predicate logic manipulation is required;
\item {\em follows immediately} indicates that trivial propositional
or predicate logic is required, such as forward-chaining of conditionals,
rearrangement of hypotheses, or substitution and quantification of a variable;
\item {\em immediate} indicates that a direct textual substitution is required.
\end{itemize}
These `rules' are not necessarily applied rigorously.
\vs{1.2}
The first theorem proves that the $DoubleBuffer$ implements a $Buffer$
of twice the size. This is a straightforward proof following the
schema in Section \ref{sec:implscheme}. It is also rigorously
logically correct in TLA without recourse to metatheorems concerning
the logic. Each step of the
$DoubleBuffer$ is a step of the $Buffer$, so there are no stuttering
steps. Furthermore, liveness is specified for all the $move$ operations
of the $DoubleBuffer$, including the $move$ that is simultaneously
a $pop$ of the rear buffer and a $push$ on
the front buffer. This guarantees that elements will
be shuttled from the front of the rear buffer to the rear of the
front buffer, even though liveness is not specified for $push$ on
either individual buffer.
\vs{1.2}
%
\step{theorem-1}{$:E: Buffer1, Buffer2 : DB(N).Spec =>
:E: Buffer : CB(2N).Spec
$}
\begin{proof}
\vs{1.2}
\pflet{$
\O{Buffer} == Buffer1 :o: Buffer2
$}
%
\vs{1.2}
The first step is showing that the initial conditions of the
specification are satisfied by the initial conditions of the
implementation.
\vs{1.2}
%
\step{init-impl}{$DB.Init => \overline{CB(2N).Init}$}
\begin{proof}
\pf\ $DB.Init == \begin{conj}
:A: n :in: 1 :dd: N: Buffer1[n] = \bot \\
:A: n :in: 1 :dd: N: Buffer2[n] = \bot \\
\end{conj} \\
\O{CB(2N).Init} == :A: n :in: 1 :dd: 2N: \O{Buffer}[n] = \bot \\
$
The proof follows immediately from the definitions of
$Buffer1 :o: Buffer2$ and \O{Buffer}.~\qed
\end{proof}%init-impl
%
\vs{1.2}
The invariant is that every element in all of the buffer arrays
is either an element of the set $Data$ or is $\bot$. We now
prove the invariant. It's messy to write but trivial to prove.
I believe it's marginally easier to survey this proof when the
invariant is written explicitly, and the formulae are not so complicated
as to force explicit definition at this stage.
\vs{1.2}
%
\pflet{$ k : \CONSTANT $}
\step{inv}{$
DB.Init /\ [][:E: a : DB.push(a) \/ DB.pop
\/ :E: k : DB.move(k)]_{Buffer1,Buffer2} \\
___ => \\
[](k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot))
\end{conj}
$}
%
\begin{proof}
\pf\
\step{inv.1}{$ DB.Init => \\
__ (k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot) )
\end{conj}
$}
\begin{proof}
\pf\
\step{inv.1.1}{$ DB.Init => (k :in: 1 :dd: N =>
(Buffer1[k] = \bot /\ Buffer2[k] = \bot))
$}
\begin{proof}
\pf\ Follows immediately from the definition of $DB.Init$.~\qed
\end{proof}%inv.1.1
\qedstep%inv.1
\begin{proof}
\pf\ Follows immediately by propositional logic from
\stepref{inv.1.1}.~\qed
\end{proof}%qed.inv.1
%
\end{proof}%inv.1
%
\step{inv.2}{$ \begin{conj}
(k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot))
\end{conj} \\
[][:E: a : DB.push(a) \/ DB.pop
\/ :E: k : DB.move(k)]_{Buffer1,Buffer2}
\end{conj} \\
=> \\
__ [](k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot))
\end{conj}
$}
\begin{proof}
\pf\
\step{inv.2.x}{$\begin{conj}
(k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot))
\end{conj} \\
[:E: a : DB.push(a) \/ DB.pop
\/ :E: k : DB.move(k)]_{Buffer1,Buffer2}
\end{conj} \\
__ => \\
___ (k :in: 1 :dd: N =>
\begin{conj}
(Buffer1'[k] :in: Data \/ Buffer1'[k] = \bot) \\
(Buffer2'[k] :in: Data \/ Buffer2'[k] = \bot))
\end{conj}
$}
\begin{proof}
\pf\
%
\step{inv.2.x.1}{$\begin{conj}
(k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot))
\end{conj} \\
:E: a : DB.push(a)
\end{conj} \\
__=> \\
___ (k :in: 1 :dd: N =>
\begin{conj}
(Buffer1'[k] :in: Data \/ Buffer1'[k] = \bot) \\
(Buffer2'[k] :in: Data \/ Buffer2'[k] = \bot))
\end{conj}
$}
\begin{proof}
\pf\
%
%
\pflet{$ a : \CONSTANT $}
\step{inv.2.x.1.1}{$\begin{conj}
(k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot)) \\
\end{conj} \\
DB.push(a)
\end{conj} \\
__ => \\
___ (k :in: 1 :dd: N =>
\begin{conj}
(Buffer1'[k] :in: Data \/ Buffer1'[k] = \bot) \\
(Buffer2'[k] :in: Data \/ Buffer2'[k] = \bot))
\end{conj}
$}
\begin{proof}
\pf\
%
\step{inv.2.x.1.1.1}{$k :in: 1 :dd: (N-1)
/\ DB.push(a) => \\
___ Buffer1'[k] = Buffer1[k] /\ Buffer2'[k] = Buffer2[k]
$}
\begin{proof}
\pf\
Follows immediately from the definition of $DB.push(a)$.~\qed
\end{proof}%inv.2.x.1.1.1
%
\step{inv.2.x.1.1.2}{$ DB.push(a) =>
Buffer1'[N] = Buffer1[N]
$}
\begin{proof}
\pf\ Follows immediately
from the definition of $DB.push(a)$.~\qed
\end{proof}%inv.2.x.1.1.2
%
\step{inv.2.x.1.1.3}{$ DB.push(a) =>
Buffer2'[N] :in: Data
$}
\begin{proof}
\pf\ Follows immediately from the definition of $DB.push(a)$.~\qed
\end{proof}%inv.2.x.1.1.3
%
\qedstep%inv.2.x.1.1
\begin{proof}
\pf\ Follows by cases and elementary equational reasoning
from \stepref{inv.2.x.1.1.1}, \stepref{inv.2.x.1.1.2}
and \stepref{inv.2.x.1.1.3}.~\qed
\end{proof}%qed.inv.2.x.1.1
%
\end{proof}%inv.2.x.1.1
%
\qedstep%inv.2.x.1
\begin{proof}
\pf\ Follows immediately by existential quantification over $a$
from \stepref{inv.2.x.1.1}.~\qed
\end{proof}%qed.inv.2.x.1
%
\end{proof}%inv.2.x.1
%
\step{inv.2.3}{$\begin{conj}
(k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot))
\end{conj} \\
DB.pop
\end{conj} \\
__ => \\
___ (k :in: 1 :dd: N =>
\begin{conj}
(Buffer1'[k] :in: Data \/ Buffer1'[k] = \bot) \\
(Buffer2'[k] :in: Data \/ Buffer2'[k] = \bot))
\end{conj}
$}
\begin{proof}
\pf\ Follows from the definition of $DB.pop$
using similar reasoning to that used for $DB.push(a)$.~\qed
\end{proof}%inv.2.3
%
\step{inv.2.4}{$\begin{conj}
(k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot))
\end{conj} \\
:E: j : DB.move(j)
\end{conj} \\
__ => \\
___ (k :in: 1 :dd: N =>
\begin{conj}
(Buffer1'[k] :in: Data \/ Buffer1'[k] = \bot) \\
(Buffer2'[k] :in: Data \/ Buffer2'[k] = \bot))
\end{conj}
$}
\begin{proof}
%
\pflet{$ j : \CONSTANT $}
%
\step{inv.2.4.1}{$\begin{conj}
(k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot))
\end{conj} \\
DB.move(j)
\end{conj} \\
__ => \\
___ (k :in: 1 :dd: N =>
\begin{conj}
(Buffer1'[k] :in: Data \/ Buffer1'[k] = \bot) \\
(Buffer2'[k] :in: Data \/ Buffer2'[k] = \bot))
\end{conj}
$}
\begin{proof}
\pf\ Follows similarly to the case for $DB.push(a)$ by division into
cases $j = k$, $j = k-1$, $j \not\in \{ k, k-1 \}$
and using propositional logic on the definition of
$DB.move$.~\qed
\end{proof}%inv.2.4.1
\qedstep%inv.2.4
\begin{proof}
\pf\ Follows immediately from \stepref{inv.2.4.1} by existential
quantification over $j$ in $DB.move(j)$.~\qed
\end{proof}%qed.inv.2.4
%
\end{proof}%inv.2.4
%
\step{inv.2.5}{$\begin{conj}
(k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot))
\end{conj} \\
Buffer1' = Buffer1 \\
Buffer2' = Buffer2
\end{conj} \\
__ => \\
___ (k :in: 1 :dd: N =>
\begin{conj}
(Buffer1'[k] :in: Data \/ Buffer1'[k] = \bot) \\
(Buffer2'[k] :in: Data \/ Buffer2'[k] = \bot))
\end{conj}
$}
\begin{proof}
\pf\ Follows immediately by substitution.~\qed
\end{proof}%inv.2.5
%
\qedstep%inv.2.x
\begin{proof}
\pf\ Follows immediately by propositional logic from
\stepref{inv.2.x.1},
\stepref{inv.2.3}, \stepref{inv.2.4}
and \stepref{inv.2.5}.~\qed
\end{proof}%qed.inv.2.x
%
\end{proof}%inv.2.x
\qedstep%inv.2
\begin{proof}
\pf\ Immediate from \stepref{inv.2.x} by Rule INV1.~\qed
\end{proof}%qed.inv.2
\end{proof}%inv.2
%
\qedstep%inv
\begin{proof}
\pf\ Follows immediately from \stepref{inv.1}
and \stepref{inv.2} by propositional logic.~\qed
\end{proof}%qed.inv
%
\end{proof}%inv
\vs{1.2}
The safety property of the specification follows from the safety
property of the implementation using the invariant. In this proof, we
explicitly define $Inv$ to be the invariant, which makes the layout of
the proof more manageable. Safety follows by step
simulation. There are no stuttering steps in this case -- every step
of the double buffer is a corresponding step of the double-size single
buffer.
\vs{1.2}
%
\step{safety}{$ \begin{conj}
DB.Init \\
[][\begin{disj}
:E: a : DB.push(a) \\
DB.pop \\
:E: k : DB.move(k)]_{Buffer1,Buffer2}
\end{disj}
\end{conj} \\
__ => \\
__ [][\begin{disj}
\O{:E: a : CB(2N).push(a)} \\
\O{CB(2N).pop} \\
\O{:E: k : CB(2N).move(k)}]_{\O{Buffer}}
\end{disj}
$}
\begin{proof}
\pf\
\pflet{$
k : \CONSTANT \\
Inv == (k :in: 1 :dd: N =>
\begin{conj}
(Buffer1[k] :in: Data \/ Buffer1[k] = \bot) \\
(Buffer2[k] :in: Data \/ Buffer2[k] = \bot))
\end{conj} \\
$}
\step{safety.z}{$
\begin{conj}
[]Inv \\
[][\begin{disj}
:E: a : DB.push(a) \\
DB.pop \\
:E: k : DB.move(k)]_{Buffer1,Buffer2}
\end{disj}
\end{conj} \\
__ => \\
__ [] [\begin{disj}
\O{:E: a : CB(2N).push(a)} \\
\O{CB(2N).pop} \\
\O{:E: k : CB(2N).move(k)}]_{\O{Buffer}}
\end{disj}
$}
\begin{proof}
\pf\
%
\step{safety.y}{$
\begin{conj}
[]Inv \\
[][\begin{disj}
:E: a: DB.push(a) \\
DB.pop \\
:E: k: DB.move(k)]_{Buffer1,Buffer2}
\end{disj}
\end{conj} \\
__ => \\
__ [][\begin{conj}
\begin{disj}
:E: a: DB.push(a) \\
DB.pop \\
:E: k: DB.move(k)
\end{disj} \\
Inv \\
Inv']_{Buffer1,Buffer2}
\end{conj}
$}
\begin{proof}
\pf\ Immediate from INV2 using propositional logic.~\qed
\end{proof}%safety.y
%
\step{safety.x}{$
[][\begin{conj}
\begin{disj}
:E: a : DB.push(a) \\
DB.pop \\
:E: k : DB.move(k)
\end{disj} \\
Inv \\
Inv' ]_{Buffer1,Buffer2}
\end{conj} \\
__ => \\
__ [] [\begin{disj}
\O{:E: a : CB(2N).push(a)} \\
\O{CB(2N).pop} \\
\O{:E: k : CB(2N).move(k)}]_{\O{Buffer}}
\end{disj}
$}
\begin{proof}
\pf\
\step{safety.1}{$
[\begin{conj}
\begin{disj}
:E: a : DB.push(a) \\
DB.pop \\
:E: k : DB.move(k)
\end{disj} \\
Inv \\
Inv']_{Buffer1,Buffer2}
\end{conj} \\
__ => \\
__ [\begin{disj}
\O{:E: a : CB(2N).push(a)} \\
\O{CB(2N).pop} \\
\O{:E: k : CB(2N).move(k)}]_{\O{Buffer}}
\end{disj}
$}
\begin{proof}
\pf\
\step{safety.1.1}{$
:E: a : DB.push(a) /\ Inv /\ Inv' \\
__ => \\
__ \O{:E: a : CB(2N).push(a)} $}
\begin{proof}
\pf\
%
\pflet{$ a : \CONSTANT $}
%
\step{safety.1.1.1}{$
DB.push(a) \\
__ => \\
__ \O{CB(2N).push(a)}
$}
\begin{proof}
\pf\ Follows by the substitution $\O{Buffer}[2N] = Buffer2[N]$.~\qed
\end{proof}%safety.1.1.1
%
\qedstep%safety.1.1
\begin{proof}
\pf\ Follows immediately by existentially quantifying over $a$
and adjoining $Inv$ and $Inv'$ to the hypothesis.~\qed
\end{proof}%qed.safety.1.1
\end{proof}%safety.1.1
%
\step{safety.1.2}{$
DB.pop /\ Inv /\ Inv' \\
__ => \\
__ \O{CB(2N).pop}
$}
\begin{proof}
\pf\ Follows similarly to \stepref{safety.1.1}.~\qed
\end{proof}%safety.1.2
%
\step{safety.1.3}{$
:E: k : DB.move(k) /\ Inv /\ Inv' \\
__ => \\
__ \O{:E: k : CB(2N).move(k)}
$}
\begin{proof}
\pf\
%
\vs{1.2}
The variable $k$ is already declared. We want the free $k$
in $Inv$ to be instantiated to the variable in $move$ when the
quantifier $:E: k : $ is removed. The reader may convince herself
that this is legitimate predicate logic. We note it here to avoid
momentary confusion.
\vs{1.2}
%
\step{safety.1.3.x}{$
DB.move(k) /\ Inv /\ Inv' \\
__ => \\
__ \O{CB(2N).move(k)}
$}
\begin{proof}
\pf\
%
The proof is broken into components, corresponding to
the antecedents of the conjuncts of $DB.move$.
A full proof would contain a little propositional reasoning
but is otherwise similar to \stepref{safety.1.1}.
Some care is needed in checking $move(N+1)$,
where the invariant from \stepref{inv} is used.
%
\step{safety.1.3.x.0}{$
\begin{conj}
\begin{disj}
k :in: 2 :dd: N \\
k :in: (N+2) :dd: 2N
\end{disj} \\
DB.move(k) /\ Inv /\ Inv'
\end{conj}\\
__ => \\
__ \O{CB(2N).move(k)}
$}
\begin{proof}
\pf\ Follows similarly to \stepref{safety.1.1}.~\qed
\end{proof}%safety.1.3.x.0
%
\step{safety.1.3.1}{$
DB.move(N+1) /\ Inv /\ Inv' \\
__ => \\
__ \O{CB(2N).move(N+1)}
$}
\begin{proof}
\pf\
$Inv$ is needed here to ensure that the popped value is
actually in $Data$ so that the hypothesis and
therefore the result of the push follows.
%
\step{safety.1.3.1.1}{$
\begin{conj}
DB.B1.push(Buffer2[1]) \\
DB.B2.pop
\end{conj} \\
__ :equiv:
\begin{conj}
Buffer2[1] :in: Data \\
\O{Buffer}[N] = \bot \\
\O{Buffer}'[N] = Buffer2[1] \\
\O{Buffer}[N+1] = Buffer2[1] \\
\O{Buffer}'[N+1] = \bot \\
:A: i :in: 1 :dd: (N-1) : \UNCHANGED \O{Buffer}[i] \\
:A: i :in: (N+2) :dd: 2N : \UNCHANGED \O{Buffer}[i] \\
\end{conj}
$}
\begin{proof}
\pf\ Follows immediately from the definitions of $\O{Buffer}$,
$Buffer2$, $DB.B1.push$ and $DB.B2.pop$.~\qed
\end{proof}%safety.1.3.1.1
%
%
\qedstep%safety.1.3.1
\begin{proof}
\pf\ Follows immediately using
the right-hand-side of the equivalence
in \stepref{safety.1.3.1.1},
$Inv$ and $Inv'$ from the definition of
$DB.move(N+1)$ by substitution.~\qed
\end{proof}%qed.safety.1.3.1
\end{proof}%safety.1.3.1
%
\qedstep%safety.1.3.x
\begin{proof}
\pf\ Follows immediately from cases \stepref{safety.1.3.x.0} and
\stepref{safety.1.3.1} by propositional logic.~\qed
\end{proof}%qed.safety.1.3.x
%
\end{proof}%safety.1.3.x
%
\qedstep%safety.1.3
\begin{proof}
\pf\ Follows immediately
from \stepref{safety.1.3.x} by propositional
logic and by predicate logic quantification over the
parameter of $move$.~\qed
\end{proof}%qed.safety.1.3
%
\end{proof}%safety.1.3
%
\step{safety.1.4}{$
Buffer1' = Buffer1 /\ Buffer2' = Buffer2 \\
__ => \\
__ \O{Buffer}' = \O{Buffer}
$}
\begin{proof}
\pf\ Immediate from the definition of $\O{Buffer}$.~\qed
\end{proof}%safety.1.4
%
\qedstep%qed.safety.1
\begin{proof}
\pf\ Follows immediately from \stepref{safety.1.1},
\stepref{safety.1.2}, \stepref{safety.1.3}
and \stepref{safety.1.4}
by propositional logic.~\qed
\end{proof}%qed.safety.1
\end{proof}%safety.1
%
\qedstep%safety.x
\begin{proof}
\pf\ Immediate from \stepref{safety.1} by STL4.~\qed
\end{proof}%qed.safety.x
%
\end{proof}%safety.x
%
\qedstep%safety.z
\begin{proof}
\pf\ Follows immediately by propositional logic
from \stepref{safety.y} and
\stepref{safety.x}.~\qed
\end{proof}%qed.safety.z
%
\end{proof}%safety.z
%
\qedstep%safety
\begin{proof}
\pf\ Follows immediately by propositional logic
from \stepref{inv} and \stepref{safety.z}.~\qed
\end{proof}%qed.safety
\end{proof}%safety
%
\vs{1.2}
Liveness for this implementation is straightforward. Liveness
for the actions of $CB$ follows directly from the liveness of
the corresponding actions of $DB$. Thus, a simplified version of
proof rule WF2 may be used.
\vs{1.2}
%
\step{live}{$
\begin{conj}
DB.Init \\
[][\begin{disj}
:E: a : DB.push(a) \\
DB.pop \\
:E: k : DB.move(k)]_{Buffer1,Buffer2}
\end{disj} \\
L_1
\end{conj} \\
__ => \\
__ \overline{L_2}
$}
\begin{proof}
\pf\
%
\vs{1.2}
For direct step-simulation, the liveness proof is considerably simplified.
The next step is the simplified version of the TLA proof rule WF2 for
the case of direct step simulation.
\vs{1.2}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
\step{live-1}{$ []Inv /\ L_1 => \O{L_2}
$}
\begin{proof}
\pf\
%
\pflet{$ \act{B} : \PROPVARIABLE \\
\act{M} : \PROPVARIABLE \\
\act{A} : \PROPVARIABLE $}
%
\step{live-1.0}{$
\proofrule{\anact{\act{B}}{f} \;\implies\;
\anact{{}\overline{\act{M}}}{\overline{g}} \\
Inv \land \overline{\enabled{\anact{M}{g}}} \;\implies\;
\enabled{\anact{B}{f}}\raisebox{.2em}{\strut} }%
{ \WF_{f}{(\act{A})} \land \Box Inv \;\implies\;
\overline{\WF_{g}{(\act{M})}}\raisebox{.2em}{\strut}}
$}
\begin{proof}
\pf\ This is an instantiation of proof rule WF2 for direct step
simulation. The instantiation is: \\
$P == Inv$,\\
${\cal B} == {\cal A}$,\\
${\cal N} == \TRUE$,\\
$F == Inv$. \\
The second premise follows
by propositional logic since ${\cal B} == {\cal A}$,
the antecedent of the third premise is simplified, and
the consequent of the fourth premise becomes
$<>[]Inv$, thus the fourth premise follows from
the contrapositive $\provable{F \,\implies\, \Diamond F}$
of STL2 and may be omitted.~\qed
\end{proof}%live-1.0
%
\step{live-1.1}{$\begin{conj}
\WF_{Buffer1,Buffer2}(:E: a : DB.push(a)) \\
[]Inv
\end{conj}\\
__ => \\
__ \O{\WF_{Buffer}(:E: a : CB.push(a))}
$}
\begin{proof}
\pf\ This step is an instantiation of
the conclusion of \stepref{live-1.0}.
Thus the substeps are instantiations of
the premises of \stepref{live-1.0}.
%
\step{live-1.1.1}{$<<:E: a : DB.push(a)>>_{Buffer1,Buffer2} \\
__ => \\
__ <<\O{:E: a : CB.push(a)}>>_{\O{Buffer}}
$}
\begin{proof}
\pf\ This has been shown in the safety step \stepref{safety},
modulo trivial propositional logic.~\qed
\end{proof}%live-1.1.1
%
\step{live-1.1.2.x}{$\begin{conj}
Inv \\
\O{Enabled<<:E: a : CB.push(a)>>_{Buffer}}
\end{conj} \\
__ => \\
__ Enabled<<:E: a : DB.push(a)>>_{Buffer1,Buffer2}
$}
\begin{proof}
\pf\
%
\step{live-1.1.2}{$
\O{Enabled<<:E: a : CB.push(a)>>_{Buffer}} \\
__ => \\
__ Enabled<<:E: a : DB.push(a)>>_{Buffer1,Buffer2}
$}
\begin{proof}
\pf\
%
\step{live-1.1.2.1}{$
\begin{conj}
\begin{noj}
\O{Enabled<<:E: a : CB.push(a)>>_{Buffer}} \\
__ :equiv: \\
__ :E: c, a :
\begin{conj}
a :in: Data \\
\O{Buffer}[2N] = \bot \\
c[2N] = a \\
:A: i :in: 1 :dd: (2N-1):
c[i] = \O{Buffer}[i] \\
c # \O{Buffer}
\end{conj} \\
___ :equiv: \\
___ :E: a : \begin{conj}
a :in: Data \\
Buffer2[N] = \bot
\end{conj}
\end{noj} \\
\begin{noj}
Enabled<<:E: a : DB.push(a)>>_{Buffer1,Buffer2} \\
__ :equiv: \\
__ :E: c, d, a :
\begin{conj}
a :in: Data \\
Buffer2[N] = \bot \\
c[N] = a \\
:A: i :in: 1 :dd: (N-1):
c[i] = Buffer2[i] \\
:A: i :in: 1 :dd: N:
d[i] = Buffer1[i] \\
d # Buffer1 \/ c # Buffer2
\end{conj} \\
___ :equiv: \\
___ :E: a : \begin{conj}
a :in: Data \\
Buffer2[N] = \bot
\end{conj}
\end{noj}
\end{conj}
$}
\begin{proof}
\pf\ The first equivalences
are the definitions of the $Enabled$ formulas,
which are the definitions of the actions plus the
added condition defined by the $<<>>_f$ construct,
with the primed variables quantified out.
The second equivalences are the formulas with
the quantifiers passed down through the conjuncts
by predicate logic, and then eliminating truths
concerning the TLA primitives, namely the data
structures, which follow from elementary set theory.~\qed
\end{proof}%live-1.1.2.1
%
\qedstep%live-1.1.2
\begin{proof}
\pf\ Follows immediately from the second equivalences in
\stepref{live-1.1.2.1}.~\qed
\end{proof}%qed.live-1.1.2
%
\end{proof}%live-1.1.2
%
\qedstep%live-1.1.2.x
\begin{proof}
\pf\ Follows immediately by adjoining $Inv$ to the hypothesis.~\qed
\end{proof}%qed.live-1.1.2.x
%
\end{proof}%live-1.1.2.x
%
\qedstep%live-1.1
\begin{proof}
\pf\ Immediate from \stepref{live-1.1.1} and \stepref{live-1.1.2.x}
using the proof rule in \stepref{live-1.0}.~\qed
\end{proof}%qed.live-1.1
%
\end{proof}%live-1.1
%
\step{live-1.2}{$ \begin{conj}
\WF_{Buffer1,Buffer2}(DB.move(k)) \\
[]Inv \\
\end{conj} \\
__ => \\
__ \WF_{\O{Buffer}}(\O{CB.move(k)})
$}
\begin{proof}
\pf\ Follows similarly to \stepref{live-1.1}, but requires some
propositional logic with the definition of $DB.move$.~\qed
\end{proof}%live-1.2
%
\step{live-1.3}{$\begin{conj}
\WF_{Buffer1,Buffer2}(DB.pop) \\
[]Inv
\end{conj}\\
__ => \\
__ \WF_{\O{Buffer}}(\O{CB.pop})
$}
\begin{proof}
\pf\ Similar to \stepref{live-1.1}.~\qed
\end{proof}%live-1.3
%
\qedstep%live-1
\begin{proof}
\pf\ Follows immediately by quantification and propositional logic
from \stepref{live-1.1}, \stepref{live-1.2}
and \stepref{live-1.3}.~\qed
\end{proof}%qed.live-1
%
\end{proof}%live-1
%
\qedstep%live
\begin{proof}
\pf\ Follows immediately by propositional logic from \stepref{inv}
and \stepref{live-1}.~\qed
\end{proof}%qed.live
%
\end{proof}%live
%
\step{alltogether}{$Init_1 /\ [][N_1]_v /\ L_1 \\
__ => \\
__ \overline{Init_2}
/\ [][\overline{N_2}]_{\overline{w}}
/\ \overline{L_2}$}
\begin{proof}
\pf\ Follows immediately from \stepref{init-impl},
\stepref{safety} and
\stepref{live} by propositional logic.~\qed
\end{proof}%alltogether
%
\qedstep
\begin{proof}
\pf\ The theorem follows immediately from \stepref{alltogether} by quantifying
over the free variables.~\qed
\end{proof}%qed.theorem
\end{proof}%first-implementation
%
%%%%%%%%%%%%%%
\vs{1.2}
The second proof shows that a concrete $Buffer$
implements an abstract $buffer$.
The refinement mapping is not as trivial as it was in the first proof.
The abstract buffer is the selection of non-$\bot$ elements from
the concrete buffer.
The safety step is straightforward.
$CB.move$ is shown to be a stuttering step of $AbstractBuffer$.
The main part of the proof concerns liveness, because there are
stuttering steps -- progress through them is measured by a stuttering
variable, the state function $Descending$. That means that the Lattice
Rule has to be used to obtain the postcondition that holds after the
stuttering variable has counted down all the way.
\vs{1.2}
%
\step{theorem-2}{$:E: Buffer : CB(N).Spec => :E: buffer : AB(N).Spec
$}
\begin{proof}
\vs{1.2}
\pflet{$
NonVoid(k) == k # \bot \\
\O{buffer} == SelectSeq(Buffer,NonVoid) \\
FirstFull == Buffer[1] # \bot \\
NotEmpty == :E: i :in: 1 :dd: N : Buffer[i] # \bot \\
NotStuffed == :E: i :in: 1 :dd: (N-1) : Buffer[i] = \bot \\
Descending(Buffer) == \sum_{i=1}^N (Buffer[i] = \bot).2^{N-i} \\
MaxDescending == \sum_{i=1}^N 2^{N-i} \\
% Code(Buffer) == \sum_{i=1}^N (Buffer[i] = \bot).2^{i-1} \\
% EmptyPlaces(Buffer) == Size(\{ i | Buffer[i] = \bot \})\\
% Descending(Buffer) == Code(Buffer) + 2.EmptyPlaces(Buffer)\\
$}
%
\vs{1.2}
The refinement mapping is given above.
We use Theorem \ref{the:sufficient}
to reduce the proof, and we choose the invariant {\bf True}.
Two of the clauses of Theorem \ref{the:sufficient}, namely
$Init => Inv$ and $Inv /\ [N_1]_v => Inv'$ are then trivial.
We prove \stepref{theorem-2} by proving the other three clauses
of Theorem \ref{the:sufficient}.
\vs{1.2}
In the $Liveness$ part of the proof below, we make use of a
temporal-logical truth, that $[] :E: i :in: 0 :dd: MaxDescending :
Descending(Buffer) = i$. One option is to use this as an
invariant. We choose not to do so: we prefer the stronger
form; it allows us to omit two proof steps (as above); and it
allows the proof to maintain hierarchical structure without
duplication or cross-referencing of proof steps. We explain later in
more detail.
\vs{1.2}
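To fix intuitions about the stuttering measure, here is an executable rendering
of $Descending$. This is an informal illustration only, and no part of the
\TLA+ development: the Python list representation, the marker {\tt BOT}
(standing for $\bot$), and the function names are our own stand-ins.

```python
# Illustrative sketch only: the stuttering measure Descending,
# transliterated into Python.  A buffer is a list of length N whose
# empty slots hold BOT (our stand-in for \bot).
BOT = None
N = 4

def descending(buffer):
    # Descending(Buffer) == sum over i in 1..N of (Buffer[i] = \bot) * 2^(N-i)
    return sum((buffer[i - 1] is BOT) * 2 ** (N - i) for i in range(1, N + 1))

MAX_DESCENDING = sum(2 ** (N - i) for i in range(1, N + 1))  # 2^N - 1

def move(buffer, k):
    # CB.move(k): slide the element in slot k into the empty slot k-1
    assert buffer[k - 2] is BOT and buffer[k - 1] is not BOT
    new = list(buffer)
    new[k - 2], new[k - 1] = buffer[k - 1], BOT
    return new

b = [BOT, 'x', BOT, 'y']
assert 0 <= descending(b) <= MAX_DESCENDING
b2 = move(b, 2)
assert descending(b2) < descending(b)   # a move strictly decreases Descending
```

The strict decrease under a $move$ is what the Lattice Rule argument below exploits.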
Note that $FirstFull :equiv: NonVoid(Buffer[1])$ and
$NotEmpty :equiv: :E: i :in: 1 :dd: N : NonVoid(Buffer[i])$.
These may figure in proof justifications, but are not given
as individual proof steps.
\vs{1.2}
Firstly, the concrete initial state implies the abstract
initial state.
\vs{1.2}
%
\step{init-impl-2}{$CB.Init => \O{AB.Init}$}
\begin{proof}
\pf\
\step{init-impl.1}{$ SelectSeq([i :in: 1 :dd: N |-> \bot ], NonVoid)
= <<>> $}
\begin{proof}
\pf\ Immediate from the definitions.~\qed
\end{proof}%init-impl.1
\qedstep%init-impl-2
\begin{proof}
\pf\ Follows immediately from \stepref{init-impl.1} and
the definitions.~\qed
\end{proof}%qed.init-impl-2
\end{proof}%init-impl-2
%
\vs{1.2}
Secondly, the concrete safety property implies the abstract safety property.
A $CB.move$ operation is a stuttering step for the abstract buffer.
\vs{1.2}
%
\step{safety-2}{$
[\begin{disj}
:E: a : CB.push(a) \\
CB.pop \\
:E: k : CB.move(k)]_{Buffer}
\end{disj} \\
__ => \\
__ [\begin{disj}
\O{:E: a : AB.push(a)} \\
\O{AB.pop}]_{\O{buffer}}
\end{disj}
$}
\begin{proof}
\pf\
%
\vs{1.2}
First, a concrete $push$ is an abstract $push$.
\vs{1.2}
%
\step{safety.1.1}{$
:E: a : CB.push(a) \\
__ => \\
__ \O{:E: a : AB.push(a)}
$}
\begin{proof}
\pf\
%
\pflet{$ a : \CONSTANT $}
%
\step{safety.1.1.1}{$
CB.push(a) \\
__ => \\
__ \O{AB.push}(a)
$}
\begin{proof}
\pf\
\step{safety.1.1.1.x}{$
CB.push(a) => a :in: Data
$}
\begin{proof}
\pf\ Immediate from the definition of $CB.push$.~\qed
\end{proof}%safety.1.1.1.x
%
\step{safety.1.1.1.y}{$
CB.push(a) => Len(\O{buffer}) < N
$}
\begin{proof}
\pf\
%
\step{safety.1.1.1.y.1}{$
CB.push(a) => Buffer[N] = \bot
$}
\begin{proof}
\pf\ Immediate from the definition of $CB.push(a)$.~\qed
\end{proof}%safety.1.1.1.y.1
%
\step{safety.1.1.1.y.2}{$
Buffer[N] = \bot =>
Len(SelectSeq(Buffer, NonVoid)) < N
$}
\begin{proof}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\pf\ Follows from the definition of
$SelectSeq$ and $Len$, along with a
certain amount of data structure
manipulation, which is omitted.~\qed
\end{proof}%safety.1.1.1.y.2
%
\qedstep%safety.1.1.1.y
\begin{proof}
\pf\ Follows from \stepref{safety.1.1.1.y.1},
\stepref{safety.1.1.1.y.2} and the definition of
$\O{buffer}$ by propositional logic.~\qed
\end{proof}%qed.safety.1.1.1.y
%
\end{proof}%safety.1.1.1.y
%
\step{safety.1.1.1.z}{$
CB.push(a) => \O{buffer}' = \O{buffer} :o: <<a>>
$}
\begin{proof}
\pf\
%
\step{safety.1.1.1.z.1}{$
CB.push(a) \\
__ => \\
__ \begin{noj}
SelectSeq(Buffer, NonVoid) = \\
SelectSeq([i :in: 1 :dd: (N-1) |-> Buffer[i]], NonVoid)
\end{noj}
$}
\begin{proof}
\pf\
%
\step{safety.1.1.1.z.1.1}{$
CB.push(a) => Buffer[N] = \bot
$}
\begin{proof}
\pf\ Immediate from the definition of $CB.push(a)$.~\qed
\end{proof}%safety.1.1.1.z.1.1
%
\step{safety.1.1.1.z.1.2}{$
Buffer[N] = \bot \\
__ => \\
__ \begin{noj}
SelectSeq(Buffer, NonVoid) = \\
SelectSeq([i :in: 1 :dd: (N-1) |-> Buffer[i]], NonVoid)
\end{noj}
$}
\begin{proof}
%%%%%%%%%%%%%%%%%%%%%%%%%%%
\pf\ Follows immediately from the definition of
$SelectSeq$ using manipulations of the
data structure.~\qed
\end{proof}%safety.1.1.1.z.1.2
%
\qedstep%safety.1.1.1.z.1
\begin{proof}
\pf\ Follows immediately from \stepref{safety.1.1.1.z.1.1}
and \stepref{safety.1.1.1.z.1.2} by
propositional logic.~\qed
\end{proof}%qed.safety.1.1.1.z.1
%
\end{proof}%safety.1.1.1.z.1
%
\step{safety.1.1.1.z.2}{$
CB.push(a) \\
__ => \\
__ Buffer' =
[i :in: 1 :dd: (N-1) |-> Buffer[i]] :o: <<a>>
$}
\begin{proof}
\pf\ Follows from the definition of $CB.push$ and the sequence
operations.~\qed
\end{proof}%safety.1.1.1.z.2
%
\step{safety.1.1.1.z.3}{$
SelectSeq([i :in: 1 :dd: (N-1) |-> Buffer[i]] :o: <<a>>, NonVoid) = \\
SelectSeq([i :in: 1 :dd: (N-1) |-> Buffer[i]],NonVoid) :o: <<a>>
$}
\begin{proof}
\pf\ Follows from the definition of $SelectSeq$
and $NonVoid$.~\qed
\end{proof}%safety.1.1.1.z.3
%
\qedstep%safety.1.1.1.z
\begin{proof}
\pf\ Follows immediately from \stepref{safety.1.1.1.z.1},
\stepref{safety.1.1.1.z.2}, \stepref{safety.1.1.1.z.3}
by propositional logic, substitution,
and the definition of $\O{buffer}$.~\qed
\end{proof}%qed.safety.1.1.1.z
%
\end{proof}%safety.1.1.1.z
%
\qedstep%safety.1.1.1
\begin{proof}
\pf\ Follows immediately from \stepref{safety.1.1.1.x},
\stepref{safety.1.1.1.y} and
\stepref{safety.1.1.1.z}
using propositional logic.~\qed
\end{proof}%safety.1.1.1
%
\end{proof}%safety.1.1.1
%
\qedstep%safety.1.1
\begin{proof}
\pf\ Follows immediately by
existentially quantifying over $a$.~\qed
\end{proof}%qed.safety.1.1
\end{proof}%safety.1.1
%
\vs{1.2}
Second, a concrete $pop$ is an abstract $pop$.
\vs{1.2}
%
\step{safety.1.2}{$
CB.pop \\
__ => \\
__ \O{AB.pop}
$}
\begin{proof}
\pf\ Follows similarly to \stepref{safety.1.1}.~\qed
\end{proof}%safety.1.2
%
\vs{1.2}
Third, a concrete $move$ is an abstract stuttering step -- it doesn't
change any of the values of the abstract variables.
\vs{1.2}
%
\step{safety.1.3}{$
:E: k : CB.move(k) \\
__ => \\
__ (\O{buffer}' = \O{buffer})
$}
\begin{proof}
\pf\
%
\pflet{$ k : \CONSTANT $}
%
\step{safety.1.3.x}{$
CB.move(k) \\
__ => \\
__ (\O{buffer}' = \O{buffer})
$}
\begin{proof}
\pf\
%
\vs{1.2}
In the following, we want to write
$SelectSeq([i :in: 1 :dd: N |-> Buffer[i]], NonVoid)$
as the composition of three sequences, consisting respectively of the
first $k-2$ elements, the $k-1$'th and $k$'th, and the tail $N-k$
elements, and similarly with
$SelectSeq([i :in: 1 :dd: N |-> Buffer'[i]], NonVoid)$,
to show that they are equal, by showing that corresponding
components of the three-way decomposition are equal. Since sequences
always have as domain an initial segment of the natural numbers (namely
$1 :dd: Len(S)$ for a sequence $S$), some rewriting on the indices has
to be performed to make this turn out correctly.
\vs{1.2}
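The three-way decomposition just described can be checked on a concrete
example. The following sketch is an informal illustration only, and no part
of the formal proof: Python lists stand in for \TLA+ sequences, and the
marker {\tt BOT} is our stand-in for $\bot$.

```python
# Illustration: a move leaves SelectSeq(Buffer, NonVoid) unchanged, checked
# via the three-way split into slots 1..k-2, slots k-1 and k, slots k+1..N.
BOT = None
N = 5

def select_seq(seq):
    # SelectSeq(seq, NonVoid): keep the non-BOT elements, in order
    return [x for x in seq if x is not BOT]

def move(buffer, k):
    # CB.move(k): slot k-1 is empty, slot k is full; slide the element down
    assert buffer[k - 2] is BOT and buffer[k - 1] is not BOT
    new = list(buffer)
    new[k - 2], new[k - 1] = buffer[k - 1], BOT
    return new

b = ['a', BOT, 'b', BOT, 'c']
k = 3  # move 'b' from slot 3 into the empty slot 2
b2 = move(b, k)

for buf in (b, b2):
    head = select_seq(buf[:k - 2])   # slots 1..k-2
    mid = select_seq(buf[k - 2:k])   # slots k-1 and k
    tail = select_seq(buf[k:])       # slots k+1..N
    assert select_seq(buf) == head + mid + tail

assert select_seq(b2) == select_seq(b)  # the move is an abstract stuttering step
```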
%
\step{safety.1.3.1}{$
CB.move(k) \\
__ => \\
__ \begin{conj}
\begin{noj}
SelectSeq([i :in: 1 :dd: (k-2) |-> Buffer'[i]],NonVoid) = \\
SelectSeq([i :in: 1 :dd: (k-2) |-> Buffer[i]],NonVoid)
\end{noj}\\
\begin{noj}
SelectSeq([i :in: 1 :dd: (N-k) |-> Buffer'[i+k]],NonVoid) = \\
SelectSeq([i :in: 1 :dd: (N-k) |-> Buffer[i+k]],NonVoid)
\end{noj}\\
\end{conj}
$}
\begin{proof}
\pf\
%
\step{safety.1.3.1.1}{$
CB.move(k) \\
__ => \\
__ :A: i :in: 1 :dd: (k-2): Buffer'[i] = Buffer[i]
$}
\begin{proof}
\pf\ Follows immediately
from the definition of $CB.move(k)$.~\qed
\end{proof}%safety.1.3.1.1
%
\step{safety.1.3.1.2}{$
CB.move(k) \\
__ => \\
__ :A: i :in: (k+1) :dd: N: Buffer'[i] = Buffer[i]
$}
\begin{proof}
\pf\ Follows immediately
from the definition of $CB.move(k)$.
Note that if
$k = N$ then $(k+1) :dd: N = \emptyset$.~\qed
\end{proof}%safety.1.3.1.2
%
\qedstep%safety.1.3.1
\begin{proof}
\pf\ Follows by substitution of $Buffer'$ for
$Buffer$ from \stepref{safety.1.3.1.1}
and \stepref{safety.1.3.1.2}, by simple
transformation of indices, and by
data structure rules which say that
two terms representing finite functions
with equal values for all their arguments
are equal.~\qed
\end{proof}%qed.safety.1.3.1
%
\end{proof}%safety.1.3.1
%
\vs{1.2}
%
\step{safety.1.3.1x}{$
CB.move(k) \\
__ => \\
__ \begin{noj}
SelectSeq(<<Buffer'[k-1],Buffer'[k]>>,NonVoid) = \\
SelectSeq(<<Buffer[k-1],Buffer[k]>>,NonVoid) = \\
<<Buffer[k]>>
\end{noj}
$}
\begin{proof}
\pf\ Follows from the definition of $SelectSeq$
and $CB.move(k)$.~\qed
\end{proof}%safety.1.3.1x
%
%
\step{safety.1.3.2}{$
k :in: 2 :dd: N \\
__ => \\
__ \begin{conj}
\begin{noj}
SelectSeq(Buffer,NonVoid) = \\
__ \begin{noj}
SelectSeq([i :in: 1 :dd: (k-2) |-> Buffer[i]],NonVoid) :o: \\
SelectSeq(<<Buffer[k-1],Buffer[k]>>,NonVoid) :o: \\
SelectSeq([i :in: 1 :dd: N-k |-> Buffer[i+k]],NonVoid)
\end{noj}
\end{noj} \\
\begin{noj}
SelectSeq(Buffer',NonVoid) = \\
__ \begin{noj}
SelectSeq([i :in: 1 :dd: (k-2) |-> Buffer'[i]],NonVoid) :o: \\
SelectSeq(<<Buffer'[k-1],Buffer'[k]>>,NonVoid) :o: \\
SelectSeq([i :in: 1 :dd: N-k |-> Buffer'[i+k]],NonVoid)
\end{noj}
\end{noj}
\end{conj}
$}
\begin{proof}
%%%%%%%%%%%%%%%%%%%%%%%%
\pf\ Follows from the definitions of
$ :o: $ and $SelectSeq$.
Note that if $k = N$ then $(k+1) :dd: N = \emptyset$,
$SelectSeq(\emptyset, P) = <<>>$,
and $\alpha :o: <<>> = \alpha$ for any
sequence $\alpha$, all by definition.~\qed
\end{proof}%safety.1.3.2
%
\qedstep%safety.1.3.x
\begin{proof}
\pf\ Follows from \stepref{safety.1.3.1},
\stepref{safety.1.3.1x} and \stepref{safety.1.3.2}
and the definition of $\O{buffer}$
by propositional logic and simple equational
reasoning.~\qed
\end{proof}%qed.safety.1.3.x
%
\end{proof}%safety.1.3.x
%
\qedstep%safety.1.3
\begin{proof}
\pf\ Follows immediately from \stepref{safety.1.3.x},
by quantification over $k$.~\qed
\end{proof}%qed.safety.1.3
%
\end{proof}%safety.1.3
%
\step{safety.1.4}{$
Buffer' = Buffer \\
__ => \\
__ \O{buffer}' = \O{buffer}
$}
\begin{proof}
\pf\ Immediate from the definition of $\O{buffer}$.~\qed
\end{proof}%safety.1.4
%
\qedstep%qed.safety-2
\begin{proof}
\pf\ Follows immediately from \stepref{safety.1.1},
\stepref{safety.1.2}, \stepref{safety.1.3}
and \stepref{safety.1.4}
by propositional logic.~\qed
\end{proof}%qed.safety-2
\end{proof}%safety-2
%
%
\vs{1.2}
Thirdly, the concrete liveness property implies the abstract
liveness property. Liveness of the abstract {\it pop} operation needs
to be shown.
The abstract {\it pop} is enabled via the refinement mapping under
conditions under which the corresponding concrete operation is not
enabled (namely that there's {\it something} in the buffer somewhere,
even though it may not be in {\mit Buffer[1]}). However, under these
conditions, one of the concrete {\it move} operations is enabled,
as noted in Section \ref{sec:informal-proof}. The enabling
conditions for {\it move}s, along with weak fairness of each {\it move},
entail that {\it move} actions must occur. These actions lead eventually
to the satisfaction of the enabling conditions for concrete {\it pop},
(i.e., there's something in {\mit Buffer[1]}) which simulates
abstract {\it pop}. So weak fairness of concrete {\it pop} ensures
eventual execution and thus, along with the invariant, entails
weak fairness of abstract {\it pop}.
\vs{1.2}
The proof method is: one picks a state function which acts as a
stuttering variable, i.e. it decreases during a stuttering step, and
its minimal values correspond to the condition one wants to reach (in
this case, the enabling condition for $CB.pop$). One uses WF1 to show
that the state function satisfies these requirements. Then, the
Lattice Rule allows one to conclude that whatever value the state
function has, this leads eventually to the goal condition being
reached. Some temporal logic is used to convert this assertion into the
fourth premise of an application of WF2 (it is straightforward to show
the other three premises of WF2).
\vs{1.2}
The structure of the proof is: the high level
is an application of WF2; the next lower level consists of its four
premises. The fourth of these is the tricky one, justified by a
conclusion of the Lattice Rule plus some temporal logic. The
conclusion of the Lattice Rule is justified by the premise. The
premise is justified by an application of WF1. The three premises of
WF1 are shown directly.
\vs{1.2}
In this proof, the premises of TLA rules WF1, WF2 and Lattice are
given their exact syntactic form as needed for the rule, and
substepping is used to perform simple logical manipulations, rather than
combining them into one step. This makes the proof easier to
survey.
\vs{1.2}
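The counting-down behaviour of the stuttering variable can be illustrated
informally as follows. Again Python lists stand in for sequences and
{\tt BOT} for $\bot$; this simulation is no part of the formal proof, but it
shows the shape of the argument: while $FirstFull$ fails and $NotEmpty$
holds, some $move$ is enabled, each $move$ strictly decreases $Descending$,
so $FirstFull$ must eventually hold.

```python
# Toy simulation (illustration only) of the liveness argument.
BOT = None
N = 4

def descending(buffer):
    # Descending(Buffer) == sum over i in 1..N of (Buffer[i] = \bot) * 2^(N-i)
    return sum((buffer[i - 1] is BOT) * 2 ** (N - i) for i in range(1, N + 1))

buffer = [BOT, BOT, 'x', BOT]
steps = 0
# ~FirstFull /\ NotEmpty: slot 1 empty, but something is in the buffer
while buffer[0] is BOT and any(v is not BOT for v in buffer):
    # pick an enabled move: an empty slot k-1 directly below a full slot k
    k = next(k for k in range(2, N + 1)
             if buffer[k - 2] is BOT and buffer[k - 1] is not BOT)
    d = descending(buffer)
    buffer[k - 2], buffer[k - 1] = buffer[k - 1], BOT
    assert descending(buffer) < d   # the stuttering variable counts down
    steps += 1

assert buffer[0] is not BOT  # FirstFull reached, enabling CB.pop
```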
%
\step{lively-2}{$\begin{conj}
[][:E: a : CB.push(a) \/ CB.pop
\/ :E: k : CB.move(k)]_{Buffer} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k))
\end{conj} \\
__ => \\
__ \O{WF_{buffer}(AB.pop)}
$}
\begin{proof}
\pf\
%
\vs{1.2}
This step is the conclusion of an application of WF2, which is
justified by giving the four premises. The fourth premise is the
most difficult to prove. The others are straightforward.
\vs{1.2}
%
\step{lively-2.1}{$
<<(:E: a : CB.push(a) \/ CB.pop
\/ :E: k : CB.move(k)) /\ CB.pop>>_{Buffer} \\
__ => \\
__ <<\O{AB.pop}>>_{\O{buffer}}
$}
\begin{proof}
\pf\
%
\step{lively-2.1.0}{$
\begin{conj}
\begin{disj}
:E: a : CB.push(a) \\
CB.pop \\
:E: k : CB.move(k)
\end{disj} \\
CB.pop
\end{conj} \\
__ :equiv: \\
__ CB.pop
$}
\begin{proof}
\pf\ Follows immediately by propositional logic from the definitions
of $CB.push$, $CB.pop$ and $CB.move$.~\qed
\end{proof}%lively-2.1.0
%
\step{lively-2.1.1}{$
<<CB.pop>>_{Buffer} \\
__ => \\
__ <<\O{AB.pop}>>_{\O{buffer}}
$}
\begin{proof}
\pf\ Recall that the statement means
$CB.pop /\ Buffer' :neq: Buffer =>
\O{AB.pop} /\ \O{buffer}' :neq: \O{buffer}$.
Thus it follows, modulo a little propositional logic,
from the analysis needed for
step-simulation in \stepref{safety-2}
(which we didn't give explicitly, but noted it is similar
to the $push$ simulation).~\qed
\end{proof}%lively-2.1.1
%
\qedstep%lively-2.1
\begin{proof}
\pf\ Follows immediately
by propositional logic from \stepref{lively-2.1.0}
and \stepref{lively-2.1.1}.~\qed
\end{proof}%qed.lively-2.1
%
\end{proof}%lively-2.1
%
\vs{1.2}
%
\step{lively-2.2}{$
\begin{conj}
FirstFull \\
FirstFull' \\
<<(:E: a : CB.push(a) \/ CB.pop
\/ :E: k : CB.move(k)) /\ CB.pop>>_{Buffer} \\
\O{Enabled<<AB.pop>>_{buffer}}
\end{conj} \\
__ => \\
__ CB.pop
$}
\begin{proof}
\pf\
%
\step{lively-2.2.1}{$
<<(:E: a : CB.push(a) \/ CB.pop
\/ :E: k : CB.move(k)) /\ CB.pop>>_{Buffer} \\
__ => \\
__ CB.pop
$}
\begin{proof}
\pf\ Follows immediately by propositional logic.~\qed
\end{proof}%lively-2.2.1
%
\qedstep%lively-2.2
\begin{proof}
\pf\ Follows immediately
from \stepref{lively-2.2.1} by propositional logic.~\qed
\end{proof}%qed.lively-2.2
%
\end{proof}%lively-2.2
%
\vs{1.2}
%
\step{lively-2.3}{$
\begin{conj}
FirstFull \\
\O{Enabled<<AB.pop>>_{buffer}}
\end{conj} \\
__ => \\
__ Enabled<<CB.pop>>_{Buffer}
$}
\begin{proof}
\pf\
%
\step{lively-2.3.1}{$ FirstFull \\
__ => \\
__ Enabled<<CB.pop>>_{Buffer}
$}
\begin{proof}
\pf\
%
\step{lively-2.3.1.1}{$Enabled<<CB.pop>>_{Buffer} \\
__ :equiv: \\
__ :E: d, c : \begin{conj}
d # Buffer \\
Buffer[1] # \bot \\
c[1] = \bot \\
:A: i :in: 2 :dd: N : c[i] = Buffer[i]
\end{conj} \\
___ :equiv: \\
___ Buffer[1] # \bot
$}
\begin{proof}
\pf\ The first equivalence is the definition of $Enabled$.
The second follows by moving the quantifiers inside
the conjunct by a standard rule of predicate logic,
and by observing that the first, third and fourth
conjuncts follow by predicate logic from the primitives
of TLA.~\qed
\end{proof}%lively-2.3.1.1
%
\qedstep%lively-2.3.1
\begin{proof}
\pf\ Follows immediately
from the definition of $FirstFull$ and the
second equivalence of \stepref{lively-2.3.1.1}.~\qed
\end{proof}%qed.lively-2.3.1
%
\end{proof}%lively-2.3.1
%
\qedstep%lively-2.3
\begin{proof}
\pf\ Follows immediately by propositional logic from
\stepref{lively-2.3.1}.~\qed
\end{proof}%qed.lively-2.3
%
\end{proof}%lively-2.3
%
\vs{1.2}
%
\step{lively-2.4}{$\begin{conj}
[][\begin{conj}
\begin{disj}
:E: a : CB.push(a) \\
CB.pop \\
:E: k : CB.move(k)
\end{disj} \\
~CB.pop]_{Buffer}
\end{conj} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)) \\
<>[]\O{Enabled<<AB.pop>>_{buffer}}
\end{conj} \\
__ => \\
__ <>[]FirstFull
$}
\begin{proof}
\pf\
%
\vs{1.2}
The Lattice Rule is used to prove this fourth hypothesis of WF2.
First, some simplifications are needed.
\vs{1.2}
%
\step{lively-2.4.v}{$
\begin{conj}
\begin{disj}
:E: a : CB.push(a) \\
CB.pop \\
:E: k : CB.move(k)
\end{disj} \\
~CB.pop
\end{conj} \\
__ :equiv: \\
__ \begin{disj}
:E: a : CB.push(a) \\
:E: k : CB.move(k)
\end{disj}
$}
\begin{proof}
\pf\ Follows immediately by propositional logic.~\qed
\end{proof}%lively-2.4.v
%
\vs{1.2}
%
\step{lively-2.4.w}{$NotEmpty :equiv: \O{Enabled<<AB.pop>>_{buffer}}
$}
\begin{proof}
\pf\
%
\step{lively-2.4.w.1}{$\O{Enabled<<AB.pop>>_{buffer}} \\
__ :equiv: \\
__ :E: d , c : \begin{conj}
d # SelectSeq(Buffer,NonVoid) \\
Len(SelectSeq(Buffer,NonVoid)) > 0 \\
c = Tail(SelectSeq(Buffer,NonVoid))
\end{conj} \\
___ :equiv: \\
___ \begin{conj}
Len(SelectSeq(Buffer,NonVoid)) > 0 \\
:E: c : c = Tail(SelectSeq(Buffer,NonVoid))
\end{conj}
$}
\begin{proof}
\pf\ The first equivalence is the definition of $Enabled$.
The second follows by moving the quantifiers inside
the conjunct by standard rules of predicate logic,
and eliminating the first conjunct, which follows
by predicate logic from the TLA requirement
that any function symbol with any argument denotes,
and predicate logic.~\qed
\end{proof}%lively-2.4.w.1
%
\qedstep%lively-2.4.w
\begin{proof}
\pf\ Follows from the equivalence in \stepref{lively-2.4.w.1}
using definitions of $SelectSeq$, $Len$, and $NonVoid$,
by quantifier rules of predicate logic, and
by observing that the second conjunct
$:E: c : c = Tail(SelectSeq(Buffer,NonVoid))$ is a truth of predicate logic.~\qed
\end{proof}%qed.lively-2.4.w
%
\end{proof}%lively-2.4.w
%
\vs{1.2}
The next step is the simplified version of \stepref{lively-2.4},
to be proved using the Lattice Rule, along with some
manipulations in temporal logic.
\vs{1.2}
%
\step{lively-2.4.x}{$
\begin{conj}
[][:E: a : CB.push(a) \/ :E: k : CB.move(k)]_{Buffer} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)) \\
<>[]NotEmpty
\end{conj} \\
__ => \\
__ <>[]FirstFull
$}
\begin{proof}
\pf\
%
\vs{1.2}
\label{page:bridge-rule}
The substeps are a temporal logic proof rule, which I'll call the
{\it Bridge Rule}, and instances of its two hypotheses.
The Bridge Rule enables us to prove the fourth
hypothesis of WF2 using the Lattice Rule.
The first hypothesis of the instantiated Bridge Rule
is a (counterfactual) invariant statement,
that the condition interpreting $P$ in WF2 is invariant under the restricted
safety condition $[][{\cal N} /\ \neg {\cal B}]_f$.
The second hypothesis is
(a modified form of) the conclusion of the Lattice Rule.
The instantiated
conclusion of the Bridge rule is step \stepref{lively-2.4.x} above.
\vs{1.2}
We state the Bridge Rule using propositional variable $X$, which
will be interpreted so that $[]X$ is equivalent to
\[ \begin{conj}
[][:E: a : CB.push(a) \/ :E: k : CB.move(k)]_{Buffer}\\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)) \\
\end{conj} \\
\]
(Weak Fairness conditions are equivalent to a statement of the form
$[]Y$ by the complementary form of STL6, and conjunctions of statements
of the form $[]Y$ are equivalent to a single statement of that form by STL5.)
We will also use
propositional variables $NE$, which will be interpreted as $NotEmpty$,
and $FF$, which will be interpreted as $FirstFull$. However, the rule
is valid for any interpretation of the propositional variables $X$, $NE$
and $FF$.
\vs{1.2}
In the TL calculation below, we use the TL rules
\vs{1.2}
\begin{tabbing}
\hspace*{3mm} \= STL7 \hspace{3mm} \= $\provable{[]<>[]F :equiv: <>[]F}$ \\
\> STL2$^{<>}$ \> $\provable{F => <>F} $ \\
\> STL4$^{<>}$ \> $\proofrule{F => G}{<>F => <>G} $ \\
\> STL6$^{<>}$ \> $\provable{([]<>F) \/ ([]<>G) :equiv: []<>(F \/ G)} $
\end{tabbing}
\vs{1.2}
STL7 was included in an earlier version of TLA, but Lamport has omitted
it since Bryan Olivier noted that it follows from STL1--STL6. The rules
STL2$^{<>}$, STL4$^{<>}$ and STL6$^{<>}$ are the complements of
STL2, STL4 and STL6 for $<>$, and are easily derivable from those rules.
\vs{1.2}
Additionally, there are some propositional logic rules that we use:
namely, replacing a subformula by a provably logically equivalent
subformula (called {\it substituting equivalent subformulas} below);
and strengthening an antecedent of a conditional (called
{\it strengthening a hypothesis}).
\vs{1.2}
%
\pflet{$ X, Y, Z, FF, NE : \PROPVARIABLE $}
%
\step{lively-TL-rule}{$
\proofrule{[]X /\ FF => []FF \\
[]X => [](NE => <>FF)}
{[]X /\ <>[]NE => <>[]FF}
$}
\begin{proof}%lively-TL-rule
\pf\
%
\step{lively-TL-rule.1}{$
\proofrule{Z /\ FF => []FF}{[]Z => [](FF => []FF)}
$}
\begin{proof}%lively-TL-rule.1
\pf\ Follows immediately
by trivial propositional logic and then STL4.~\qed
\end{proof}%lively-TL-rule.1
%
\vs{1.2}
The following rule follows from STL2, STL4 and propositional logic.
TL rules such as this may not be that easy to spot or to derive.
Lamport suggests that any correct calculation in propositional
temporal logic may be used where necessary in a formal TLA proof,
which suggests a semantic short-cut using the completeness
theorem: one may conclude that the rule is valid by observing
that it's valid in all linearly ordered Kripke frames, that is
to say, models in which states form a linear sequence. Then one
may conclude by completeness that it's derivable, and say in the
proof step, as we did for an earlier temporal logic step, that it's
`derivable from PTL'.
The straightforward proof we give was noted by Lamport. No derivation
of this proof rule has been found using STL1--STL6 and the Lattice Rule
alone.
\vs{1.2}
%
\step{lively-TL-rule.2}{$
\proofrule{[]X => [](NE => <>FF) \\
[]X => [](FF => []FF)}{[]X => [](NE => []<>FF)}% \\
$}
\begin{proof}
\pf\
\step{lively-2.4.2.1}{$
\begin{conj}
[](NE => <>FF) \\
[](FF => []FF)
\end{conj} \\
__ => \\
__ [](NE => []<>FF)
$}
\begin{proof}
\pf\
\step{lively-2.4.2.1.1}{$
\begin{conj}
[](NE => <>FF) \\
[](FF => []FF)
\end{conj} \\
__ => \\
__ \begin{conj}
[](NE => <>FF) \\
[](FF => <>[]FF)
\end{conj}
$}
\begin{proof}
\pfsketch\ Uses STL4, STL2$^{<>}$
and propositional logic.~\qed
\end{proof}%lively-2.4.2.1.1
%
\step{lively-2.4.2.1.2}{$
\begin{conj}
[](NE => <>FF) \\
[](FF => <>[]FF)
\end{conj} \\
__ :equiv: \\
__ \begin{conj}
(NE ~> FF) \\
(FF ~> []FF)
\end{conj}
$}
\begin{proof}
\pf\ Follows immediately from the definition of
$~>$. ~\qed
\end{proof}%lively-2.4.2.1.2
%
\step{lively-2.4.2.1.3}{$
\begin{conj}
(NE ~> FF) \\
(FF ~> []FF)
\end{conj} \\
__ => \\
__ NE ~> []FF
$}
\begin{proof}
\pfsketch\ Follows by transitivity of $~>$.~\qed
\end{proof}%lively-2.4.2.1.3
%
\step{lively-2.4.2.1.4}{$
NE ~> []FF :equiv: [](NE => <>[]FF)
$}
\begin{proof}
\pf\ Follows immediately by the definition of $~>$.~\qed
\end{proof}%lively-2.4.2.1.4
%
\step{lively-2.4.2.1.5}{$
[](NE => <>[]FF) => [](NE => []<>[]FF)
$}
\begin{proof}
\pfsketch\ Follows by STL4 and STL7.~\qed
\end{proof}%lively-2.4.2.1.5
%
\step{lively-2.4.2.1.6}{$
[](NE => []<>[]FF) => [](NE => []<>FF)
$}
\begin{proof}
\pfsketch\ Follows by STL4 and STL2, using
substitution of equivalent subformulas.~\qed
\end{proof}%lively-2.4.2.1.6
%
\qedstep%lively-2.4.2.1
\begin{proof}
\pf\ Immediate from \stepref{lively-2.4.2.1.1},
\stepref{lively-2.4.2.1.2}, \stepref{lively-2.4.2.1.3},
\stepref{lively-2.4.2.1.4}, \stepref{lively-2.4.2.1.5},
\stepref{lively-2.4.2.1.6} by `forward chaining'.~\qed
\end{proof}%qed.lively-2.4.2.1.
\end{proof}%lively-2.4.2.1
%
\qedstep%lively-TL-rule.2
\begin{proof}
\pf\ The conclusion of the rule follows from the
premises using \stepref{lively-2.4.2.1},
Modus Ponens, and propositional logic.~\qed
\end{proof}%qed.lively-TL-rule.2
%
\end{proof}%lively-TL-rule.2
%
\step{lively-TL-rule.3}{$
\proofrule{[]X => [](NE => []Y)}
{[]X /\ []NE => []Y}
$}
\begin{proof}%lively-TL-rule.3
\pf\ This rule follows from PTL.~\qed
\end{proof}%lively-TL-rule.3
%
\step{lively-TL-rule.3.5}{$
\proofrule{[]X /\ []NE => []Y}
{[](X /\ NE) => []Y}
$}
\begin{proof}%lively-TL-rule.3.5
\pf\ Immediate from STL5 and substitution of equivalent
subformulas.~\qed
\end{proof}%lively-TL-rule.3.5
%
\step{lively-TL-rule.4}{$
\proofrule{[](X /\ NE) => []Y}
{<>[](X /\ NE) => <>[]Y}
$}
\begin{proof}%lively-TL-rule.4
\pf\ Immediate from STL4$^{<>}$.~\qed
\end{proof}%lively-TL-rule.4
%
\step{lively-TL-rule.5}{$
\proofrule{<>[](X /\ NE) => <>[]Y}
{<>[]X /\ <>[]NE => <>[]Y}
$}
\begin{proof}%lively-TL-rule.5
\pf\ STL6 and substituting equivalent subformulas.~\qed
\end{proof}%lively-TL-rule.5
%
\step{lively-TL-rule.6}{$
\proofrule{<>[]X /\ <>[]NE => <>[]Y}
{[]X /\ <>[]NE => <>[]Y}
$}
\begin{proof}%lively-TL-rule.6
\pf\ Strengthening a hypothesis, using STL2$^{<>}$.~\qed
\end{proof}%lively-TL-rule.6
%
\step{lively-TL-rule.7}{$
\proofrule{[]X /\ <>[]NE => <>[]<>FF}
{[]X /\ <>[]NE => <>[]FF}
$}
\begin{proof}%lively-TL-rule.7
\pf\ STL7, and substituting equivalent subformulas.~\qed
\end{proof}%lively-TL-rule.7
%
\qedstep%qed.lively-TL-rule
\begin{proof}%qed.lively-TL-rule
\pf\ The conclusion of rule \stepref{lively-TL-rule} follows
from its hypotheses by forward-chaining the rules
\stepref{lively-TL-rule.1}, \stepref{lively-TL-rule.2},
\stepref{lively-TL-rule.3}, \stepref{lively-TL-rule.3.5},
\stepref{lively-TL-rule.4}, \stepref{lively-TL-rule.5},
\stepref{lively-TL-rule.6}, \stepref{lively-TL-rule.7},
since inference is transitive, using the
substitution \\
$Y == <>FF$ where needed.~\qed
\end{proof}%qed.lively-TL-rule
%
\end{proof}%lively-TL-rule
%
\vs{1.2}
We now prove the instantiated first hypothesis of the Bridge rule.
\vs{1.2}
%
\step{lively-2.4.1.y-sup}{$
\begin{conj}
[][\begin{disj}
:E: a : CB.push(a) \\
:E: k : CB.move(k)]_{Buffer}
\end{disj} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)) \\
FirstFull
\end{conj} \\
__ => \\
__ []FirstFull
$}
\begin{proof}
\pf\
%
%
\step{lively-2.4.1.y}{$
\begin{conj}
[][\begin{disj}
:E: a : CB.push(a) \\
:E: k : CB.move(k)]_{Buffer}
\end{disj} \\
FirstFull
\end{conj} \\
__ => \\
__ []FirstFull
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.y.1}{$
\begin{conj}
[\begin{disj}
:E: a : CB.push(a) \\
:E: k : CB.move(k)]_{Buffer}
\end{disj} \\
FirstFull
\end{conj} \\
__ => \\
__ FirstFull'
$}
\begin{proof}
\pf\ Follows from the definitions of
$push$ and $move$.~\qed
\end{proof}%lively-2.4.1.y.1
%
\qedstep%lively-2.4.1.y
\begin{proof}
\pf\ Immediate by TLA Proof Rule INV1 from
\stepref{lively-2.4.1.y.1}, with the
antecedent conjuncts in the other order.~\qed
\end{proof}%qed.lively-2.4.1.y
%
\end{proof}%lively-2.4.1.y
%
\qedstep%lively-2.4.1.y-sup
\begin{proof}%lively-2.4.1.y-sup
\pf\ Immediate from \stepref{lively-2.4.1.y} by
adding a hypothesis.~\qed
\end{proof}%qed.lively-2.4.1.y-sup
%
\end{proof}%lively-2.4.1.y-sup
%
\vs{1.2}
We now prove the instantiated second hypothesis of the Bridge Rule.
\vs{1.2}
%
\step{lively-2.4.1.x}{$
\begin{conj}
[][:E: a : CB.push(a) \/ :E: k : CB.move(k)]_{Buffer} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)) \\
\end{conj} \\
__ => \\
__ [](NotEmpty => <>FirstFull)
$}
\begin{proof}
\pf\
%
\vs{1.2}
First we show a temporal logic equivalence, used to change the
consequent, then we prove the step with the altered consequent.
\vs{1.2}
%
\step{lively-2.4.1.x.2}{$
(NotEmpty /\ NotStuffed => <>FirstFull) \\
__ :equiv: \\
__ (NotEmpty => <>FirstFull)
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.x.2.1}{$
~NotStuffed => <>FirstFull
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.x.2.1.1}{$
~NotStuffed => FirstFull
$}
\begin{proof}
\pf\ Follows immediately from the definitions.~\qed
\end{proof}%lively-2.4.1.x.2.1.1
%
\step{lively-2.4.1.x.2.1.2}{$
FirstFull => <>FirstFull
$}
\begin{proof}
\pf\ Immediate from STL2.~\qed
\end{proof}%lively-2.4.1.x.2.1.2
%
\qedstep%lively-2.4.1.x.2.1
\begin{proof}
\pf\ Follows immediately from
\stepref{lively-2.4.1.x.2.1.1} and
\stepref{lively-2.4.1.x.2.1.2}
by transitivity of $=>$.~\qed
\end{proof}%qed.lively-2.4.1.x.2.1
%
\end{proof}%lively-2.4.1.x.2.1
%
\qedstep%lively-2.4.1.x.2
\begin{proof}
\pf\ Follows by propositional logic from
\stepref{lively-2.4.1.x.2.1}.~\qed
\end{proof}%qed.lively-2.4.1.x.2
%
\end{proof}%lively-2.4.1.x.2
%
\vs{1.2}
%
\step{lively-2.4.1.w}{$
\begin{conj}
[][:E: a : CB.push(a) \/ :E: k : CB.move(k)]_{Buffer} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)) \\
\end{conj} \\
__ => \\
__ [](NotEmpty /\ NotStuffed => <>FirstFull)
$}
\begin{proof}
\pf\
%
\vs{1.2}
Some axiomatic arithmetic is required in the next stage.
No $Naturals$ module
has been given, but we assume that such a module would contain enough
arithmetic to allow the simple reasoning that follows.
This reasoning, from a reasonable set of axioms, is not very hard.
Nor do we believe that developing it here would be particularly
enlightening for the purposes of this paper.
However, it is important to note that the proof
is correct only insofar as the arithmetic reasoning is correct.
Arithmetic reasoning will be used where noted also in further steps below.
\vs{1.2}
There is a question whether the following proof step should actually
be formulated rather as an invariant. It is true that it {\em is} an
invariant, but it is equally true that it is a temporal-logical
necessity provable in TLA. The key issue is that $Descending$ is
defined as a parametrised sum of products, in each of which one factor
is the Boolean value of a state predicate.
It follows directly from propositional logic that each such state predicate
has one Boolean value or the other, hence the value of $Descending$ is always
defined. This is a stronger statement than claiming this
assertion as an invariant of the implementation proof (which is
simply to claim that it follows from the $Init$ and $Safety$ parts
of $CB.Spec$).
\vs{1.2}
There are two strong reasons to include the stronger version of the
assertion:
% commented out since the itemize environment generates a latex error
%\begin{itemize}
%\item
(a): proof of a stronger version of a statement, when true, would
in principle enable one to prove consequences more easily, and thereby
simplify later parts of a proof;
%\item
(b): if this were part of the invariant, then the invariant would no
longer be {\bf True}, and it would be necessary to include proofs of the
obligations $Init => Inv$ and $Inv /\ [{\cal N}_1]_v => Inv'$ from
Theorem \ref{the:sufficient}. The second of these is very similar to,
but slightly weaker than, the claim needed for the hypothesis of the
Lattice Rule, below, that $[{\cal N}_1]_v$ implies that the value of
$Descending$ decreases. Thus, the proofs would be similar, and one
would either refer to this later step to provide a proof of the
former, or, forgoing such a cross-structure reference, duplicate the
proofs (with minor modifications) to maintain the hierarchical proof
structure. Either solution seems inelegant. The
current approach maintains the hierarchical structure while avoiding
duplication of proof steps.
%\end{itemize}
\vs{1.2}
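The bound asserted in the step below may be checked informally by exhausting
all patterns of empty slots. This is an illustration only (the Boolean-valued
patterns are our own representation, not part of the \TLA+ text):

```python
# Exhaustive sanity check (illustrative only): for every possible pattern
# of empty slots, Descending(Buffer) lies in 0 .. MaxDescending.
from itertools import product

N = 4
MAX_DESCENDING = sum(2 ** (N - i) for i in range(1, N + 1))  # equals 2^N - 1

for pattern in product([True, False], repeat=N):   # True means Buffer[i] = \bot
    d = sum(empty * 2 ** (N - i) for i, empty in enumerate(pattern, start=1))
    assert 0 <= d <= MAX_DESCENDING
```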
%
\step{lively-2.4.1.0y}{$
[]:E: i :in: 0 :dd: MaxDescending: Descending(Buffer) = i
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.0}{$
:E: i :in: 0 :dd: MaxDescending: Descending(Buffer) = i
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.0.1}{$
Descending(Buffer) :in: 0 :dd: MaxDescending
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.0.1.e}{$
\begin{conj}
\sum_{k=1}^N (Buffer[k] = \bot).2^{N-k} :leq:
\sum_{k=1}^N 2^{N-k} \\
0 :leq: \sum_{k=1}^N (Buffer[k] = \bot).2^{N-k}
\end{conj}
$}
\begin{proof}
\pf\
%
\pflet{$ k : \CONSTANT $}
%
\step{lively-2.4.1.0.1.1}{$
k :in: 1 :dd: N \\
__ => \\
__ \begin{conj}
(Buffer[k] = \bot).2^{N-k} :leq: 2^{N-k} \\
0 :leq: (Buffer[k] = \bot).2^{N-k}
\end{conj}
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.0.1.1.1}{$
k :in: 1 :dd: N \\
__ => \\
__ \begin{disj}
(Buffer[k] = \bot).2^{N-k} = 0 \\
(Buffer[k] = \bot).2^{N-k} = 2^{N-k}
\end{disj}
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.0.1.1.1.1}{$
k :in: 1 :dd: N \\
__ => \\
__ \begin{disj}
(Buffer[k] = \bot) = 0 \\
(Buffer[k] = \bot) = 1
\end{disj}
$}
\begin{proof}
\pf\ By definition of Boolean values.~\qed
\end{proof}%lively-2.4.1.0.1.1.1.1
%
\step{lively-2.4.1.0.1.1.1.2}{$
\begin{conj}
0.2^{N-k} = 0 \\
1.2^{N-k} = 2^{N-k}
\end{conj}
$}
\begin{proof}
\pf\ By standard axioms of arithmetic.~\qed
\end{proof}%lively-2.4.1.0.1.1.1.2
%
\qedstep%lively-2.4.1.0.1.1.1
\begin{proof}
\pf\ Follows by substitution and propositional logic from
\stepref{lively-2.4.1.0.1.1.1.1} and
\stepref{lively-2.4.1.0.1.1.1.2}.~\qed
\end{proof}%qed.lively-2.4.1.0.1.1.1
%
\end{proof}%lively-2.4.1.0.1.1.1
%
\step{lively-2.4.1.0.1.1.2}{$
\begin{conj}
0 :leq: 2^{N-k} \\
2^{N-k} :leq: 2^{N-k}
\end{conj}
$}
\begin{proof}
\pf\ By standard axioms of arithmetic.~\qed
\end{proof}%lively-2.4.1.0.1.1.2
%
\qedstep%lively-2.4.1.0.1.1
\begin{proof}
\pf\ Follows immediately from
\stepref{lively-2.4.1.0.1.1.1} and
\stepref{lively-2.4.1.0.1.1.2}
using substitution and propositional logic.~\qed
\end{proof}%qed.lively-2.4.1.0.1.1
%
\end{proof}%lively-2.4.1.0.1.1
%
\step{lively-2.4.1.0.1.2}{$
:A: k :in: 1 :dd: N : a(k) :leq: b(k) \\
__ => \\
__ \sum_{k=1}^N a(k) :leq: \sum_{k=1}^N b(k)
$}
\begin{proof}
\pf\ This must follow from properties of
$\sum$ defined in the $Naturals$ module.~\qed
\end{proof}%lively-2.4.1.0.1.2
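As an informal numerical spot-check of the $\sum$-monotonicity property just asserted (outside the proof itself; in the proof, the property would be supplied by the $Naturals$ module), a few concrete sequences:

```python
# Pointwise a(k) <= b(k) for k in 1..N implies sum a(k) <= sum b(k):
# spot-checked on a few concrete sequences.
cases = [
    ([0, 1, 2, 3], [1, 1, 2, 5]),
    ([0, 0, 0], [0, 0, 0]),
    ([2, 4], [3, 4]),
]
for a, b in cases:
    assert all(x <= y for x, y in zip(a, b))  # pointwise hypothesis
    assert sum(a) <= sum(b)                   # conclusion of the step
```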
%
\qedstep%lively-2.4.1.0.1.e
\begin{proof}
\pf\ Follows immediately by substitution from
\stepref{lively-2.4.1.0.1.1} and
\stepref{lively-2.4.1.0.1.2}.~\qed
\end{proof}%qed.lively-2.4.1.0.1.e
%
\end{proof}%lively-2.4.1.0.1.e
%
\qedstep%lively-2.4.1.0.1
\begin{proof}
\pf\ Follows immediately from \stepref{lively-2.4.1.0.1.e}
by definitions of `$:dd:$', $Descending$, and
$MaxDescending$.~\qed
\end{proof}%qed.lively-2.4.1.0.1
%
\end{proof}%lively-2.4.1.0.1
%
\qedstep%lively-2.4.1.0
\begin{proof}
\pf\ Follows immediately from \stepref{lively-2.4.1.0.1}
by existential generalisation over $Descending(Buffer)$
using the variable $i$.~\qed
\end{proof}%qed.lively-2.4.1.0
%
\end{proof}%lively-2.4.1.0
%
%
\qedstep%lively-2.4.1.0y
\begin{proof}
\pf\ Immediate from \stepref{lively-2.4.1.0}
by STL1.~\qed
\end{proof}%qed.lively-2.4.1.0y
%
\end{proof}%lively-2.4.1.0y
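As an informal cross-check of step \stepref{lively-2.4.1.0.1} (not part of the TLA proof), the bound $0 \leq Descending(Buffer) \leq MaxDescending$ can be verified exhaustively for a small $N$. The encoding is an illustrative assumption: an empty slot is represented by `None` (standing in for $\bot$), so that the Boolean $(Buffer[k] = \bot)$ becomes a 0/1 test.

```python
from itertools import product

BOT = None  # illustrative stand-in for the empty-slot value "bottom"

def descending(buffer):
    """Descending(Buffer): sum over slots k of (Buffer[k] = BOT) * 2^(N-k)."""
    N = len(buffer)
    return sum((1 if buffer[k - 1] is BOT else 0) * 2 ** (N - k)
               for k in range(1, N + 1))

N = 4
MAX_DESCENDING = sum(2 ** (N - k) for k in range(1, N + 1))  # = 2^N - 1

# Step lively-2.4.1.0.1: Descending(Buffer) lies in 0 .. MaxDescending
# for every configuration of empty/full slots.
for config in product([BOT, "item"], repeat=N):
    assert 0 <= descending(list(config)) <= MAX_DESCENDING
```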
%
\vs{1.2}
The next step is a conclusion of the Lattice Rule, with $P \leadsto Q$
written as $[](P => <>Q)$. Its justification is thus the premise of the
Lattice Rule.
\vs{1.2}
%
\step{lively-2.4.1.1}{$
\begin{conj}
[][:E: a : CB.push(a) \/ :E: k : CB.move(k)]_{Buffer} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)) \\
\end{conj} \\
__ => \\
__ [](\begin{noj}
\begin{conj}
:E: i :in: 0 :dd: MaxDescending: Descending(Buffer) = i \\
NotEmpty \\
NotStuffed
\end{conj}\\
__ => \\
__ <>FirstFull)
\end{noj}
$}
\begin{proof}
\pf\
%
\vs{1.2}
The substep is the premise of the Lattice Rule, with $P \leadsto Q$
written as $[](P => <>Q)$.
\vs{1.2}
%
\pflet{$ i : \CONSTANT $}
%
\step{lively-2.4.1.1.1}{$
\begin{conj}
[][\begin{disj}
:E: a : CB.push(a) \\
:E: k : CB.move(k)]_{Buffer}
\end{disj} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)) \\
i :in: 0 :dd: MaxDescending
\end{conj} \\
__ => \\
__ [](\begin{noj}
\begin{conj}
NotEmpty \\
NotStuffed \\
Descending(Buffer) = i
\end{conj} \\
__ => \\
__ \begin{noj}
<>(\begin{disj}
FirstFull \\
\begin{noj}
:E: j :in: 0 :dd: MaxDescending :
\begin{conj}
j < i \\
NotEmpty \\
NotStuffed \\
Descending(Buffer) = j))
\end{conj}
\end{noj}
\end{disj}
\end{noj}
\end{noj}
$}
\begin{proof}
\pf\
This step is justified by an application
of WF1 (and simple propositional logic).
The substeps consist of the premises of WF1, with the instantiations \\
${\cal N} == :E: a : CB.push(a) \/ :E: k : CB.move(k)$, \\
${\cal A} == :E: k : CB.move(k)$,\\
$P == \begin{conj}
NotEmpty \\
NotStuffed \\
Descending(Buffer) = i
\end{conj}\\$
and
$Q == \begin{disj}
FirstFull \\
\begin{noj}
:E: j :in: 0 :dd: MaxDescending :
\begin{conj}
j < i \\
NotEmpty \\
NotStuffed \\
Descending(Buffer) = j
\end{conj}
\end{noj}
\end{disj}$.
%
\vs{1.2}
First, some trivial simplification of the consequent, from
\[:E: j :in: 0 :dd: MaxDescending : j < i /\ \Theta\]
to $:E: j < i : \Theta$,
for appropriate $\Theta$.
\vs{1.2}
%
\step{lively-2.4.1.1.1.prelim}{$
\begin{conj}
\begin{noj}
:E: j < i : Descending(Buffer) = j \\
__ :equiv: \\
__ \begin{noj}
:E: j :in: 0 :dd: MaxDescending : \\
__ \begin{conj}
j < i \\
Descending(Buffer) = j
\end{conj}
\end{noj}
\end{noj} \\
\begin{noj}
:E: j < i : Descending(Buffer') = j \\
__ :equiv: \\
__ \begin{noj}
:E: j :in: 0 :dd: MaxDescending : \\
__ \begin{conj}
j < i \\
Descending(Buffer') = j
\end{conj}
\end{noj}
\end{noj}
\end{conj}
$}
\begin{proof}
\pf\ Follows from \stepref{lively-2.4.1.0y}
and its substeps by predicate logic.~\qed
\end{proof}%lively-2.4.1.1.1.prelim
%
\vs{1.2}
The next step is the first premise of WF1.
\vs{1.2}
%
\step{lively-2.4.1.1.1.x1}{$
\begin{conj}
NotEmpty \\
NotStuffed \\
Descending(Buffer) = i \\
\begin{disj}
:E: a : CB.push(a) \\
:E: k : CB.move(k) \\
Buffer' = Buffer
\end{disj}
\end{conj} \\
__ => \\
__ \begin{disj}
\begin{conj}
NotEmpty' \\
NotStuffed' \\
Descending(Buffer') = i
\end{conj}\\
FirstFull' \\
:E: j :in: 0 :dd: MaxDescending :
\begin{conj}
j < i \\
NotEmpty' \\
NotStuffed' \\
Descending(Buffer') = j
\end{conj}
\end{disj}
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.1.1.x1.1}{$
\begin{conj}
NotEmpty \\
NotStuffed \\
Descending(Buffer) = i \\
Buffer' = Buffer
\end{conj} \\
__ => \\
__ \begin{conj}
NotEmpty' \\
NotStuffed' \\
Descending(Buffer') = i
\end{conj}
$}
\begin{proof}
\pf\ Immediate by substitution of $Buffer'$ for $Buffer$.~\qed
\end{proof}%lively-2.4.1.1.1.x1.1
%
\vs{1.2}
The following step is also used later.
\vs{1.2}
%
\step{lively-2.4.1.1.1.x1.2}{$
\begin{conj}
NotEmpty \\
NotStuffed \\
Descending(Buffer) = i \\
:E: k : CB.move(k)
\end{conj} \\
__ => \\
__ \begin{disj}
FirstFull' \\
:E: j < i :
\begin{conj}
NotEmpty' \\
NotStuffed' \\
Descending(Buffer') = j
\end{conj}
\end{disj}
$}
%
\begin{proof}
\pf\
%
\step{lively-2.4.1.1.1.x1.2.u}{$
\begin{conj}
Descending(Buffer) = i \\
:E: k : CB.move(k)
\end{conj} \\
__ => \\
__ :E: j < i : Descending(Buffer') = j
$}
\begin{proof}
\pf\
%
\vs{1.2}
The following two steps use some more easy arithmetic. They rely on
properties of $\sum$ whose proofs depend on the
exact definitions of arithmetic and of $\sum$; different
ways of specifying these properties might lead to different proofs.
The proof below is intended as an example
of how, and at what length, such a proof would proceed. Any given
proof depends on what is defined, and how, in the $Naturals$ module,
which is not included here. One possibility for defining $\sum$ is to
use a recursive definition, since $\sum$ is parameterised by $N$.
\vs{1.2}
%
\pflet{$ k : \CONSTANT $}
%
\step{lively-2.4.1.1.1.x1.2.1}{$
\begin{conj}
Descending(Buffer) = i \\
CB.move(k)
\end{conj} \\
__ => \\
__ Descending(Buffer') = i + (2^{N-k} - 2^{N-(k-1)})
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.1.1.x1.2.1.w}{$
CB.move(k) \\
__ => \\
__ \begin{noj}
Descending(Buffer') = \\
Descending(Buffer)
+ (2^{N-k} - 2^{N-(k-1)})
\end{noj}
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.1.1.x1.2.1.w.1}{$
CB.move(k) \\
__ => \\
__ \begin{noj}
\sum_{m=1}^N (Buffer'[m] = \bot).2^{N-m} = \\
\sum_{m=1}^N (Buffer[m] = \bot).2^{N-m}
+ (2^{N-k} - 2^{N-(k-1)})
\end{noj}
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.1.1.x1.2.1.w.1.y}{$
CB.move(k) \\
__ => \\
__ \begin{conj}
\begin{noj}
\sum_{m=1}^{k-2} (Buffer'[m] = \bot).2^{N-m} = \\
__ \sum_{m=1}^{k-2} (Buffer[m] = \bot).2^{N-m}
\end{noj} \\
\begin{noj}
\sum_{m=(k+1)}^{N} (Buffer'[m] = \bot).2^{N-m} = \\
__ \sum_{m=(k+1)}^{N} (Buffer[m] = \bot).2^{N-m}
\end{noj}
\end{conj}
$}
\begin{proof}
\pf\
%
\pflet{$ m : \CONSTANT $}
%
\step{lively-2.4.1.1.1.x1.2.1.w.1.y.1}{$
CB.move(k) \\
__ => \\
__ \begin{noj}
\begin{disj}
m :in: 1 :dd: (k-2) \\
m :in: (k+1) :dd: N
\end{disj}
__ => \\
__ Buffer'[m] = Buffer[m]
\end{noj}
$}
\begin{proof}
\pf\ Follows immediately
from the definition of $CB.move(k)$.~\qed
\end{proof}%lively-2.4.1.1.1.x1.2.1.w.1.y.1
%
\qedstep%lively-2.4.1.1.1.x1.2.1.w.1.y
\begin{proof}
\pf\ Follows from
\stepref{lively-2.4.1.1.1.x1.2.1.w.1.y.1},
given that a $\sum$-expression is a term, and
therefore admits replacement of subterms by
provably equal subterms by Leibniz's Law of
predicate logic.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.2.1.w.1.y
%
\end{proof}%lively-2.4.1.1.1.x1.2.1.w.1.y
%
\step{lively-2.4.1.1.1.x1.2.1.w.1.z}{$
CB.move(k) \\
__ => \\
__ \begin{conj}
(Buffer'[k] = \bot).2^{N-k} = 2^{N-k}\\
(Buffer'[k-1] = \bot).2^{N-(k-1)} = 0 \\
(Buffer[k] = \bot).2^{N-k} = 0 \\
(Buffer[k-1] = \bot).2^{N-(k-1)} = 2^{N-(k-1)} \\
\end{conj}
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.1.1.x1.2.1.w.1.z.1}{$
CB.move(k) \\
__ => \\
__ \begin{conj}
(Buffer'[k] = \bot) = 1\\
(Buffer'[k-1] = \bot) = 0 \\
(Buffer[k] = \bot) = 0 \\
(Buffer[k-1] = \bot) = 1 \\
\end{conj}
$}
\begin{proof}
\pf\ Immediate from the definition of $CB.move(k)$.~\qed
\end{proof}%lively-2.4.1.1.1.x1.2.1.w.1.z.1
%
\step{lively-2.4.1.1.1.x1.2.1.w.1.z.2}{$
__ \begin{conj}
1.2^{N-k} = 2^{N-k}\\
0.2^{N-(k-1)} = 0 \\
0.2^{N-k} = 0 \\
1.2^{N-(k-1)} = 2^{N-(k-1)} \\
\end{conj}
$}
\begin{proof}
\pf\ Instantiations of standard axioms for $1$ and $0$
in arithmetic.~\qed
\end{proof}%lively-2.4.1.1.1.x1.2.1.w.1.z.2
%
\qedstep%lively-2.4.1.1.1.x1.2.1.w.1.z
\begin{proof}
\pf\ Follows immediately by substitution
and propositional logic
using \stepref{lively-2.4.1.1.1.x1.2.1.w.1.z.1} and
\stepref{lively-2.4.1.1.1.x1.2.1.w.1.z.2}.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.2.1.w.1.z
%
\end{proof}%lively-2.4.1.1.1.x1.2.1.w.1.z
%
\vs{1.2}
The following step follows from various properties of $\sum$,
including those which follow from a recursive definition, and
properties of `$+$' and `$-$'. We state the step and simply
assert that it follows. It should be clear that it is
semantically correct, although syntactically an atomic step.
\vs{1.2}
%
\step{lively-2.4.1.1.1.x1.2.1.w.1.x}{$
\begin{noj}
\sum_{m=1}^N (Buffer'[m] = \bot).2^{N-m} = \\
__ \begin{noj}
\sum_{m=1}^{k-2} (Buffer'[m] = \bot).2^{N-m} + \\
(Buffer'[k-1] = \bot).2^{N-(k-1)} + \\
(Buffer'[k] = \bot).2^{N-k} + \\
\sum_{m=(k+1)}^{N} (Buffer'[m] = \bot).2^{N-m}
\end{noj} \\
__ = \\
__ \begin{noj}
\sum_{m=1}^{k-2} (Buffer[m] = \bot).2^{N-m} + \\
0 + 2^{N-k} + \\
\sum_{m=(k+1)}^{N} (Buffer[m] = \bot).2^{N-m}
\end{noj} \\
__ = \\
__ \begin{noj}
\sum_{m=1}^{k-2} (Buffer[m] = \bot).2^{N-m} + \\
2^{N-(k-1)} + 0 + \\
\sum_{m=(k+1)}^{N} (Buffer[m] = \bot).2^{N-m} + \\
(2^{N-k} - 2^{N-(k-1)}) \\
\end{noj} \\
__ = \\
__ \begin{noj}
\sum_{m=1}^N (Buffer[m] = \bot).2^{N-m} + \\
(2^{N-k} - 2^{N-(k-1)}) \\
\end{noj}
\end{noj}
$}
\begin{proof}
\pf\ Follows from the definition of $\sum$,
and the associativity and commutativity
of $\sum$ and `$+$', and other
standard arithmetic axioms,
including those for `$-$',
using \stepref{lively-2.4.1.1.1.x1.2.1.w.1.y}
and \stepref{lively-2.4.1.1.1.x1.2.1.w.1.z}.~\qed
\end{proof}%lively-2.4.1.1.1.x1.2.1.w.1.x
%
\qedstep%lively-2.4.1.1.1.x1.2.1.w.1
\begin{proof}
\pf\ Follows immediately from
\stepref{lively-2.4.1.1.1.x1.2.1.w.1.x}
using transitivity of equality.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.2.1.w.1
%
\end{proof}%lively-2.4.1.1.1.x1.2.1.w.1
%
\qedstep%lively-2.4.1.1.1.x1.2.1.w
\begin{proof}
\pf\ Immediate from
\stepref{lively-2.4.1.1.1.x1.2.1.w.1} by substitution of
$Descending(Buffer')$ and $Descending(Buffer)$
for the $\sum$'s.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.2.1.w
%
\end{proof}%lively-2.4.1.1.1.x1.2.1.w
%
\qedstep%lively-2.4.1.1.1.x1.2.1
\begin{proof}
\pf\ Immediate from \stepref{lively-2.4.1.1.1.x1.2.1.w}
using the definition $i == Descending(Buffer)$.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.2.1
%
\end{proof}%lively-2.4.1.1.1.x1.2.1
%
\step{lively-2.4.1.1.1.x1.2.2}{$
i + (2^{N-k} - 2^{N-(k-1)}) < i
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.1.1.x1.2.2.1}{$
i + (2^{N-k} - 2^{N-(k-1)}) = \\
i + (2^{N-k} - 2^{N-k+1}) = \\
i + (1.2^{N-k} - 2.2^{N-k}) = \\
i + 2^{N-k}(1 - 2) = \\
i - 2^{N-k} < i
$}
\begin{proof}
\pf\ Follows by axioms of arithmetic for
`-', for `1', for exponent, the distributive law
of multiplication over `-', the equation $1 - 2 = -1$,
and the law $x + (-1).y = x - y$.~\qed
\end{proof}%lively-2.4.1.1.1.x1.2.2.1
%
\qedstep%lively-2.4.1.1.1.x1.2.2
\begin{proof}
\pf\ Follows immediately from \stepref{lively-2.4.1.1.1.x1.2.2.1}
by transitivity of equality.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.2.2
%
\end{proof}%lively-2.4.1.1.1.x1.2.2
%
\qedstep%lively-2.4.1.1.1.x1.2.u
\begin{proof}
\pf\ Follows from \stepref{lively-2.4.1.1.1.x1.2.1}
and \stepref{lively-2.4.1.1.1.x1.2.2}
by setting $j == i + (2^{N-k} - 2^{N-(k-1)})$,
using
$Descending(Buffer') = j => j :in: 0 :dd: MaxDescending$,
the primed analogue of \stepref{lively-2.4.1.0.1}, then
existential quantification over $j$,
followed by
existential quantification over $k$.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.2.u
%
\end{proof}%lively-2.4.1.1.1.x1.2.u
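As a concrete sanity check of steps \stepref{lively-2.4.1.1.1.x1.2.1} and \stepref{lively-2.4.1.1.1.x1.2.2} (again outside the proof): by the substep labelled `w.1.z`, a move makes slot $k$ empty and slot $k-1$ full, so $Descending$ changes by $2^{N-k} - 2^{N-(k-1)} = -2^{N-k}$. The list encoding and the `move` function below are illustrative assumptions, not the module's definitions.

```python
def descending(buffer):
    """Descending(Buffer): sum over slots k of (Buffer[k] empty) * 2^(N-k)."""
    N = len(buffer)
    return sum((1 if buffer[k - 1] is None else 0) * 2 ** (N - k)
               for k in range(1, N + 1))

def move(buffer, k):
    """CB.move(k), sketched: slot k holds an item and slot k-1 is empty;
    the item advances from slot k to slot k-1 (slots are 1-based)."""
    assert buffer[k - 1] is not None and buffer[k - 2] is None
    new = list(buffer)
    new[k - 2], new[k - 1] = new[k - 1], None
    return new

N, k = 5, 3
buf = [None, None, "c", None, None]      # slot 3 full, slot 2 empty
before, after = descending(buf), descending(move(buf, k))

# Step ...x1.2.1: the change is 2^(N-k) - 2^(N-(k-1)) ...
assert after == before + (2 ** (N - k) - 2 ** (N - (k - 1)))
# ... and step ...x1.2.2: that equals -2^(N-k), so Descending decreases.
assert after == before - 2 ** (N - k) and after < before
```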
%
\vs{1.2}
%
\step{lively-2.4.1.1.1.x1.2.v}{$
\begin{conj}
:E: k : CB.move(k) \\
~(NotEmpty' /\ NotStuffed')
\end{conj} \\
__ => \\
__ FirstFull'
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.1.1.x1.2.v.1}{$
:E: k : CB.move(k) \\
__ => \\
__ NotEmpty'
$}
\begin{proof}
\pf\ Follows immediately by predicate logic from the definitions of
$CB.move$ and $NotEmpty$.~\qed
\end{proof}%lively-2.4.1.1.1.x1.2.v.1
%
\step{lively-2.4.1.1.1.x1.2.v.2}{$
\begin{conj}
:E: k : CB.move(k) \\
~(NotEmpty' /\ NotStuffed')
\end{conj} \\
__ => \\
__ ~NotStuffed'
$}
\begin{proof}
\pf\ Follows immediately by propositional logic from
\stepref{lively-2.4.1.1.1.x1.2.v.1}.~\qed
\end{proof}%lively-2.4.1.1.1.x1.2.v.2
%
\step{lively-2.4.1.1.1.x1.2.v.3}{$
~NotStuffed' \\
__ => \\
__ FirstFull'
$}
\begin{proof}
\pf\ Follows immediately from the definitions.~\qed
\end{proof}%lively-2.4.1.1.1.x1.2.v.3
%
\qedstep%lively-2.4.1.1.1.x1.2.v
\begin{proof}
\pf\ Follows immediately by propositional logic from
\stepref{lively-2.4.1.1.1.x1.2.v.2} and
\stepref{lively-2.4.1.1.1.x1.2.v.3}.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.2.v
%
\end{proof}%lively-2.4.1.1.1.x1.2.v
%
\qedstep%lively-2.4.1.1.1.x1.2
\begin{proof}
\pf\ Follows immediately from \stepref{lively-2.4.1.1.1.x1.2.u}
and \stepref{lively-2.4.1.1.1.x1.2.v}
by propositional logic and by moving the quantifier
$:E: j $.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.2
%
\end{proof}%lively-2.4.1.1.1.x1.2
%
\vs{1.2}
%
\step{lively-2.4.1.1.1.x1.3.x}{$
\begin{conj}
NotEmpty \\
NotStuffed \\
Descending(Buffer) = i \\
:E: a : CB.push(a)
\end{conj} \\
__ => \\
__ :E: j < i : \begin{conj}
NotEmpty' \\
NotStuffed' \\
Descending(Buffer') = j
\end{conj}
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.1.1.x1.3}{$
\begin{conj}
NotEmpty \\
NotStuffed \\
Descending(Buffer) = i \\
:E: a : CB.push(a)
\end{conj} \\
__ => \\
__ \begin{conj}
NotEmpty' \\
NotStuffed' \\
:E: j < i : Descending(Buffer') = j
\end{conj}
$}
\begin{proof}
\pf\
%
\vs{1.2}
The following step also uses some simple arithmetic.
\vs{1.2}
%
\step{lively-2.4.1.1.1.x1.3.1}{$
\begin{conj}
Descending(Buffer) = i \\
:E: a : CB.push(a)
\end{conj} \\
__ => \\
__ :E: j < i: Descending(Buffer') = j
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.1.1.x1.3.1.1}{$
\begin{conj}
Descending(Buffer) = i \\
:E: a : CB.push(a)
\end{conj} \\
__ => \\
__ Descending(Buffer') = i - 1
$}
\begin{proof}
\pf\ Follows from the definitions of $Descending$
and $CB.push$ using $2^{N-N} = 1$
and other arithmetic axioms.~\qed
\end{proof}%lively-2.4.1.1.1.x1.3.1.1
%
\qedstep%lively-2.4.1.1.1.x1.3.1
\begin{proof}
\pf\ Follows immediately
from \stepref{lively-2.4.1.1.1.x1.3.1.1}
by substituting $j == i-1$ and existentially
quantifying over $j$.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.3.1
%
\end{proof}%lively-2.4.1.1.1.x1.3.1
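Step \stepref{lively-2.4.1.1.1.x1.3.1.1} can likewise be checked concretely under the same illustrative encoding. The assumption that $CB.push(a)$ fills slot $N$, the slot that $NotStuffed$ guarantees empty, is mine, inferred from the use of $2^{N-N} = 1$ in the step's proof.

```python
def descending(buffer):
    """Descending(Buffer): sum over slots k of (Buffer[k] empty) * 2^(N-k)."""
    N = len(buffer)
    return sum((1 if buffer[k - 1] is None else 0) * 2 ** (N - k)
               for k in range(1, N + 1))

def push(buffer, a):
    """CB.push(a), sketched: the new item enters at slot N, which is
    empty when the buffer is not stuffed."""
    assert buffer[-1] is None
    return buffer[:-1] + [a]

buf = ["x", None, None, None]            # N = 4, slot N empty
# Step ...x1.3.1.1: a push decreases Descending by 2^(N-N) = 1.
assert descending(push(buf, "y")) == descending(buf) - 1
```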
%
\step{lively-2.4.1.1.1.x1.3.3}{$
\begin{conj}
NotStuffed \\
NotEmpty \\
:E: a : CB.push(a)
\end{conj} \\
__ => \\
__ \begin{conj}
NotStuffed' \\
NotEmpty' \\
\end{conj}
$}
\begin{proof}
\pf\ Follows from definition of $NotStuffed$ and
$NotEmpty$ by using predicate logic rules
for quantifiers.~\qed
\end{proof}%lively-2.4.1.1.1.x1.3.3
%
\qedstep%lively-2.4.1.1.1.x1.3
\begin{proof}
\pf\ Follows immediately by propositional logic from
\stepref{lively-2.4.1.1.1.x1.3.1} and
\stepref{lively-2.4.1.1.1.x1.3.3}.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.3
%
\end{proof}%lively-2.4.1.1.1.x1.3
%
\qedstep%lively-2.4.1.1.1.x1.3.x
\begin{proof}
\pf\ Follows immediately by moving the quantifier
$:E: j < i$ outside the conjunction
in the consequent.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1.3.x
%
\end{proof}%lively-2.4.1.1.1.x1.3.x
%
\vs{1.2}
%
\qedstep%lively-2.4.1.1.1.x1
\begin{proof}
\pf\ Follows immediately
by propositional logic from
\stepref{lively-2.4.1.1.1.x1.1},
\stepref{lively-2.4.1.1.1.x1.2} and
\stepref{lively-2.4.1.1.1.x1.3.x},
using \stepref{lively-2.4.1.1.1.prelim}
to substitute the quantifier
$:E: j :in: 0 :dd: MaxDescending : j < i /\ \ldots$ for
the quantifier $:E: j < i : \ldots$.~\qed
\end{proof}%qed.lively-2.4.1.1.1.x1
%
\end{proof}%lively-2.4.1.1.1.x1
%
\vs{1.2}
End of proof of first premise of WF1 application.
The next step is the statement and proof of the second premise of WF1.
\vs{1.2}
%
\step{lively-2.4.1.1.1.2}{$
\begin{conj}
NotStuffed \\
NotEmpty \\
Descending(Buffer) = i \\
<<\begin{conj}
\begin{disj}
:E: a : CB.push(a) \\
:E: k : CB.move(k)
\end{disj} \\
:E: k : CB.move(k)>>_{Buffer}
\end{conj}
\end{conj}\\
__ => \\
__ \begin{disj}
FirstFull' \\
:E: j :in: 0 :dd: MaxDescending :
\begin{conj}
j < i \\
NotStuffed' \\
NotEmpty' \\
Descending(Buffer') = j
\end{conj}
\end{disj}
$}
\begin{proof}
\pf\
%
\step{lively-2.4.1.1.1.2.w}{$
\begin{conj}
\begin{disj}
:E: a : CB.push(a) \\
:E: k : CB.move(k)
\end{disj} \\
:E: k : CB.move(k)
\end{conj} \\
__ :equiv: \\
__ :E: k : CB.move(k)
$}
\begin{proof}
\pf\ Follows by propositional logic
from the definitions of $CB.pop$
and $CB.move$.~\qed
\end{proof}%lively-2.4.1.1.1.2.w
%
\step{lively-2.4.1.1.1.2.x}{$
\begin{conj}
NotStuffed \\
NotEmpty \\
Descending(Buffer) = i \\
<<:E: k : CB.move(k)>>_{Buffer}
\end{conj}\\
__ => \\
__ \begin{disj}
FirstFull' \\
:E: j :in: 0 :dd: MaxDescending :
\begin{conj}
j < i \\
NotStuffed' \\
NotEmpty' \\
Descending(Buffer') = j
\end{conj}
\end{disj}
$}
\begin{proof}
\pf\ Follows by propositional logic, by
strengthening the antecedent, from the second
substep of \stepref{lively-2.4.1.1.1.x1},
using \stepref{lively-2.4.1.1.1.prelim}.~\qed
\end{proof}%lively-2.4.1.1.1.2.x
%
\qedstep%lively-2.4.1.1.1.2
\begin{proof}
\pf\ Follows by propositional logic from
\stepref{lively-2.4.1.1.1.2.w} and
\stepref{lively-2.4.1.1.1.2.x}.~\qed
\end{proof}%qed.lively-2.4.1.1.1.2
%
\end{proof}%lively-2.4.1.1.1.2
%
\vs{1.2}
The next step is the statement and proof of the third premise of WF1.
\vs{1.2}
%
\step{lively-2.4.1.1.1.3}{$
\begin{conj}
NotStuffed \\
NotEmpty \\
Descending(Buffer) = i
\end{conj}\\
__ => \\
__ Enabled<<:E: k : CB.move(k)>>_{Buffer}
$}
\begin{proof}
\pf\ Follows by quantifier rules of predicate logic
from definition of
$NotStuffed$ and $NotEmpty$.~\qed
\end{proof}%lively-2.4.1.1.1.3
%
\vs{1.2}
%
\qedstep%lively-2.4.1.1.1
\begin{proof}
\pf\ Immediate from TLA Rule WF1 with hypotheses
\stepref{lively-2.4.1.1.1.x1},
\stepref{lively-2.4.1.1.1.2} and
\stepref{lively-2.4.1.1.1.3}
under the substitutions \\
$P == \begin{conj}
NotEmpty \\
NotStuffed \\
Descending(Buffer) = i
\end{conj}$\\
${\cal N} == \begin{disj}
:E: a : CB.push(a) \\
:E: k : CB.move(k)\mbox{\rm ,}
\end{disj}$ \\
$f == Buffer$,\\
$Q == \begin{disj}
FirstFull \\
:E: j :in: 0 :dd: MaxDescending :
\begin{conj}
j < i \\
NotEmpty \\
NotStuffed \\
Descending(Buffer) = j,
\end{conj}
\end{disj}$ \\
$A == :E: k : CB.move(k) $,\\
and the definition
$(P \leadsto Q) :equiv: [](P => <>Q)$.
Finally, the statement $i :in: 0 :dd: MaxDescending$
is conjoined to the hypotheses.~\qed
\end{proof}%qed.lively-2.4.1.1.1
%
\end{proof}%lively-2.4.1.1.1
%
%
\vs{1.2}
This completes the application of WF1 to obtain the premise of the
Lattice Rule. The Lattice Rule may now be applied; the
existential hypothesis in its conclusion is eliminated in the
steps that follow.
\vs{1.2}
%
\qedstep%lively-2.4.1.1
\begin{proof}
\pf\ Immediate from \stepref{lively-2.4.1.1.1} by the
Lattice Rule of TLA with substitutions
${\sf c} == i$,
${\sf S} == 0 :dd: MaxDescending$,
$\succ == >$,\\
$H_{\sf c} == \begin{conj}
NotEmpty \\
NotStuffed \\
Descending(Buffer) = i,
\end{conj}$ \\
$G == FirstFull$, and \\
$F ==
\begin{conj}
[][:E: a : CB.push(a) \/ :E: k : CB.move(k)]_{Buffer} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)).
\end{conj}$
\end{proof}%qed.lively-2.4.1.1
%
\end{proof}%lively-2.4.1.1
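The well-founded descent that the Lattice Rule exploits can also be illustrated by simulation (a sketch on the same illustrative list encoding, not the module's operators): from any buffer that is non-empty but whose first slot is still free, repeatedly taking an enabled move strictly decreases $Descending$, so $FirstFull$ is reached after finitely many moves.

```python
def descending(buf):
    """Descending(Buffer): sum over slots k of (Buffer[k] empty) * 2^(N-k)."""
    N = len(buf)
    return sum((1 if buf[k - 1] is None else 0) * 2 ** (N - k)
               for k in range(1, N + 1))

def step(buf):
    """Take some enabled move: advance an item from slot k to empty slot
    k-1.  A move is enabled whenever the buffer is non-empty and slot 1
    is free (take the lowest-indexed full slot)."""
    for k in range(2, len(buf) + 1):
        if buf[k - 1] is not None and buf[k - 2] is None:
            new = list(buf)
            new[k - 2], new[k - 1] = new[k - 1], None
            return new
    raise AssertionError("no move enabled")

buf = [None, None, "a", None, "b"]       # NotEmpty, slot 1 still free
trace = [descending(buf)]
while buf[0] is None:                    # run until FirstFull holds
    buf = step(buf)
    trace.append(descending(buf))

# Descending strictly decreases along the run (the Lattice Rule's
# well-founded descent), so the run terminates with FirstFull.
assert all(a > b for a, b in zip(trace, trace[1:]))
assert buf[0] is not None
```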
%
\vs{1.2}
%
\step{lively-2.4.1.2}{$
\begin{conj}
[][:E: a : CB.push(a) \/ :E: k : CB.move(k)]_{Buffer} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)) \\
\end{conj} \\
__ => \\
__ \begin{noj}
[](:E: i :in: 0 :dd: MaxDescending: Descending(Buffer) = i \\
___ => \\
___ \begin{noj}
(NotEmpty /\ NotStuffed) \\
__ => \\
__ <>FirstFull)
\end{noj}
\end{noj}
$}
\begin{proof}
\pf\ Follows from \stepref{lively-2.4.1.1} by temporal
logic, replacing a formula of the form $A /\ B => C$
with one of the form $A => (B => C)$ in the consequent,
in which \\
$A == :E: i :in: 0 :dd: MaxDescending: Descending(Buffer) = i$, \\
$B == (NotEmpty /\ NotStuffed)$, and
$C == <>FirstFull$.~\qed
\end{proof}%lively-2.4.1.2
%
\vs{1.2}
%
\step{lively-2.4.1.3}{$
\begin{conj}
[][:E: a : CB.push(a) \/ :E: k : CB.move(k)]_{Buffer} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)) \\
[](:E: i :in: 0 :dd: MaxDescending: Descending(Buffer) = i)
\end{conj} \\
__ => \\
__ \begin{noj}
[]( (NotEmpty /\ NotStuffed) \\
__ => \\
__ <>FirstFull)
\end{noj}
$}
\begin{proof}
\pf\ Follows from \stepref{lively-2.4.1.2} by temporal
logic, using the following temporal logic rule:
%
\vs{1.2}
%
\pflet{$ X, Y, Z : \PROPVARIABLE $}
%
\step{lively-2.4.1.3.1}{$
\proofrule{[]X => [](Z => Y)}{[]X /\ []Z => []Y}
$}
\begin{proof}
\pf\ This is a simple calculation in
propositional temporal logic (PTL).~\qed
\end{proof}%lively-2.4.1.3.1
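One validity underlying this rule, $[](Z => Y) \;/\backslash\; []Z => []Y$, can be sanity-checked exhaustively over finite traces (an informal check only: TLA semantics is over infinite behaviours, and the encoding below is mine).

```python
from itertools import product

def always(pred, trace):
    """[]P over a finite trace: P holds in every state of the trace."""
    return all(pred(s) for s in trace)

def rule_holds(trace):
    """Check  [](Z => Y) /\ []Z  =>  []Y  on one trace of (Z, Y) pairs."""
    box_impl = always(lambda s: (not s[0]) or s[1], trace)
    box_z = always(lambda s: s[0], trace)
    box_y = always(lambda s: s[1], trace)
    return not (box_impl and box_z) or box_y

# Exhaustive check over all traces of length 1..4 of (Z, Y) valuations.
states = list(product((False, True), repeat=2))
ok = all(rule_holds(t)
         for n in range(1, 5)
         for t in product(states, repeat=n))
assert ok
```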
%
\qedstep%lively-2.4.1.3
\begin{proof}
\pf\ Follows from \stepref{lively-2.4.1.2}
using \stepref{lively-2.4.1.3.1}, with $[]X$ the antecedent
of \stepref{lively-2.4.1.2}, \\
$Z == (:E: i :in: 0 :dd: MaxDescending: Descending(Buffer) = i)$,\\
and $Y == (NotEmpty /\ NotStuffed) => <>FirstFull$.~\qed
\end{proof}%qed.lively-2.4.1.3
%
\end{proof}%lively-2.4.1.3
%
\vs{1.2}
%
\qedstep%lively-2.4.1.w
\begin{proof}
\pf\ Follows by propositional logic from
\stepref{lively-2.4.1.3} using
\stepref{lively-2.4.1.0y} to eliminate
a conjunct in the antecedent.~\qed
\end{proof}%qed.lively-2.4.1.w
%
\end{proof}%lively-2.4.1.w
%
\vs{1.2}
%
\qedstep%lively-2.4.1.x
\begin{proof}
\pf\ Follows immediately by replacing the consequent
of \stepref{lively-2.4.1.w} by an
equivalent formula from
\stepref{lively-2.4.1.x.2}.~\qed
\end{proof}%qed.lively-2.4.1.x
%
\end{proof}%lively-2.4.1.x
%
\vs{1.2}
This completes the proof of the second hypothesis of the Bridge rule
\stepref{lively-TL-rule}. We now use the Bridge rule to infer its
(instantiated) conclusion, which is the desired simplified
version of the fourth premiss of
WF2, step \stepref{lively-2.4.x}.
\vs{1.2}
%
\qedstep%lively-2.4.x
\begin{proof}
\pf\ Follows from the Bridge rule
\stepref{lively-TL-rule} under the instantiation \\
$[]X == \begin{conj}
[][\begin{disj}
:E: a : CB.push(a) \\
:E: k : CB.move(k)]_{Buffer}
\end{disj} \\
WF_{Buffer}(CB.pop) \\
WF_{Buffer}(:E: k : CB.move(k)) \\
\end{conj} $, \\
$NE == NotEmpty$,\\
$FF == FirstFull$,
with first hypothesis \stepref{lively-2.4.1.y-sup}
and second hypothesis \stepref{lively-2.4.1.x}.~\qed
\end{proof}%qed.lively.2.4.x
%
\end{proof}%lively-2.4.x
%
\vs{1.2}
We now conclude the validity of
the fourth premiss of WF2 from its simplified version
\stepref{lively-2.4.x}.
\vs{1.2}
%
\qedstep%lively-2.4
\begin{proof}
\pf\ Follows immediately from \stepref{lively-2.4.x}
using \stepref{lively-2.4.v}
and \stepref{lively-2.4.w}.~\qed
\end{proof}%qed.lively-2.4
%
\end{proof}%lively-2.4
%
\vs{1.2}
We now conclude the validity of the conclusion of WF2, which gives the
desired liveness property of $AB.pop$ from the specification of $CB$.
\vs{1.2}
%
\qedstep%lively-2
\begin{proof}
\pf\ Follows from \stepref{lively-2.1}, \stepref{lively-2.2},
\stepref{lively-2.3} and \stepref{lively-2.4} using
TLA Proof Rule WF2 with
${\cal N} == (:E: a : CB.push(a) \/ CB.pop
\/ :E: k : CB.move(k))$, $f == Buffer$,
${\cal A} == {\cal B} == CB.pop$,
${\cal M} == AB.pop$, $g == buffer$,
$P == FirstFull$,
$[]F == WF_{Buffer}(:E: k : CB.move(k))$.~\qed
\end{proof}%qed.lively-2
%
\end{proof}%lively-2
%
\vs{1.2}
Finally, we conclude the main theorem from the steps demonstrating the
proof obligations for the
initial condition, the safety property, and the liveness property.
\vs{1.2}
%
\qedstep%theorem-2
\begin{proof}
\pf\ Follows from \stepref{init-impl-2},
\stepref{safety-2} and
\stepref{lively-2} by Theorem \ref{the:sufficient}, using
$Inv == \mbox{\bf True}$.~\qed
\end{proof}%qed.theorem-2
\end{proof}%theorem-2
%
\end{proof}%both-theorems
\normalsize
\subsection*{Acknowledgements}
Paul Gibson and Abdellilah Mokkedem suggested this example.
I started this work while visiting INRIA Lorraine, where I was
hosted by Dominique M\'ery and Patrick Rambert. Bob Johnston told
me about the F3314 chip in which this concrete buffer design was
implemented. And Leslie
Lamport developed TLA, the support tool {\tt pf.sty} for hierarchical
proofs, the macros {\tt tla.sty} for writing TLA specifications and of
course \LaTeX \hspace{2pt}, without which we'd {\it all} be barbarians.
Hearty thanks to all.
\bibliographystyle{alpha}
\bibliography{liter}
\end{document}