On Needing Models

Peter B. Ladkin

Research Report RVS-RR-96-11

Limiting Program Verification

In (1), Brian Cantwell Smith wonders about certain catastrophic computer system failures, such as the 1960 American Ballistic Missile Early Warning System mistaking a moonrise for a massive missile attack. We should certainly be concerned about such failures and what we can do to prevent them. The RISKS Compendium (3), compiled by Peter Neumann from 1985, contains examples of many computer system failures: some mundane, some social in nature whose root causes are human imprecision or carelessness, some more subtle that only experience alerted us to. I'm interested in airplanes, for example; compendia of interesting incidents with computer control of research aircraft may be found in John Rushby's anomalies list (4), and discussion from the RISKS newsletter, together with papers on incidents and accidents involving fly-by-wire commercial transport aircraft, may be found in my hypertext compendium (5).

Program verification is one technique used in analysis of critical software and other parts of computer systems. Smith wants to draw a drastic conclusion about verification.

Do the techniques of program verification hold enough promise so that, if these new systems could all be proven correct, we could all sleep more easily at night? [..] And my answer, to give away the punch-line, is no.

He bases this conclusion on a conception of how computer systems function, based on how they are designed and built:

When you design and build a computer system, you first formulate a model of the problem you want it to solve, and then construct a computer program in its terms.

Smith thinks that basing a system design on a model precludes the possibility of rigorous mathematical verification, because mathematical verification says nothing about the relationship between the model and the `real world'.

What about the [relationship between model and real-world]? The answer, and one of the main points I hope you will take away from this discussion, is that, at this point in intellectual history, we have no theory of this [..] relationship.

He regards formulating a model as a form of abstraction. He alludes briefly to another, more general, argument about abstraction:

What is important, for our purposes, is that every model deals with its subject matter at some particular level of abstraction, paying attention to certain details, throwing away others, grouping together similar aspects into common categories, and so forth. [..] Models have to ignore things exactly because they view the world at a level of abstraction. [..] it would not be too much to say that every act of conceptualization, analysis, categorization, does a certain amount of violence to its subject matter, in order to get at the underlying regularities that group things together.

So Smith believes that computer systems must be designed based on a model of the problem, that verification can only determine the logical relationship between a system and the model on which it's based, that we have no theory of the relationship between such a model and the `real world' (except maybe for that part of the `real world' which is mathematical, an exception he may allow - see below), alternatively that a model is an abstraction which must `do violence' to its subject matter, and that therefore verification cannot assure that systems function as intended.

Many computer scientists, as well as others, find these kinds of considerations and conclusions quite reasonable. I want to doubt both the validity of Smith's main conclusion and the accuracy of many of his arguments. Although many computer systems are designed based on models, not all of them are. Abstraction is talking about some aspects of what you see before you while omitting others; far from `doing violence' to the subject matter, it may in some circumstances be logically quite rigorous. And whether a system can rigorously be proven to fulfil its requirements depends on the logical form of those requirements. Some kinds of systems can be proven to fulfil their requirements. The interesting question is: which kinds?

Models and Logic

Many computer scientists believe that when they are describing and specifying a computer system, they are `building a model'. This has always seemed to me an odd use of language. Those who write specifications in logic are writing sentences in a language that purport to describe a feature of the world, involving the computer system they intend to build. They are writing formal sentences (or, rather, predicates with one free variable). Why would this be called `model-building'? The answer lies in the meaning given to those sentences. Behind the formal sentences may lie a formal semantics, which is a specific mathematical construction. A formal specification in logic thus would assert something rigorously about a specific mathematical construction. This mathematical construction is a form of abstraction, and it's tempting to call it a `model'.

For example, Georg Henrik von Wright analyses the logic of causality in (6). He assumes a version of logical atomism in which there are a finite number, n, of logically independent propositions, each of which may be either true or false independently of the others. A state of the world is completely described by a truth-value assignment to all of the propositions - one says for each one whether it is true or false. Thus there are 2^n states. He then considers that as time progresses states follow each other in sequence. For any time, there is a next state. Thus for m time steps there are 2^(m.n) logically possible histories. However, not all of these 2^(m.n) logically possible histories may be possible histories:

this `logical freedom' of development can, for non-logical reasons, be restricted. I shall call these non-logical reasons (restrictions) `causal'. [..] The picture which we have been entertaining concerning the logical build of the world is a version of the position called logical atomism. All of its basic assumptions may be questioned. [..] I shall regard the logico-atomistic structure, which we have just characterized, as being a `fiction' or `model' of what a world may be like, logically speaking.
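
To make the counting concrete, here is a small worked example of my own, not in von Wright: with n = 2 atomic propositions p and q, there are 2^2 = 4 states, namely the four truth-value assignments to p and q. A history over m = 3 time steps is a sequence of three states, so

    n = 2 propositions:   2^2 = 4 states
    m = 3 time steps:     (2^2)^3 = 2^(3.2) = 64 logically possible histories

and a causal restriction - say, `p never changes from true to false' - strikes some of these 64 from the list of really possible histories.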

Von Wright simplifies as he abstracts, and these simplifications are based on assumptions which may be questioned. Smith might say he's `doing violence' to his subject matter. But sometimes mathematical abstraction may avoid questionable assumptions. Penelope Maddy would look at a bowl of five oranges and perceive directly a set corresponding to the natural number 5 (as described in (9)). For a realist philosopher of mathematics such as Maddy, mathematical abstraction at this level is a matter of paying attention to some features of the world and not others. (Of course, a much extended story has to be told for the development of such concepts as higher orders of infinity.) Others may consider the number 5 in a different sort of way, maybe as a denizen of some `abstract world', and imagine that somehow we `map' scenes with oranges in them into the set of natural numbers so that our bowl becomes associated with the number 5.

Compare this with building a model of an aircraft for testing in a wind tunnel. The model is a facsimile, something similar to but distinct from the real thing, which shares some properties, but not necessarily all, with the object or situation it is supposed to imitate. A tenth-scale facsimile of an aircraft is put into a wind tunnel to see how it behaves. The airflow pattern over the scale model is known to share its most significant properties with the airflow over the same shape at ten times the size. But the facsimile doesn't share all properties: its ratio of size to weight could be different, but that is known not to affect the properties of airflow. The art of developing facsimiles for use in computer systems is subtle. Smith uses the example of a patient drug-regimen controller in a hospital. Such a system must be based on a model, a facsimile, of the patient's physical responses, so that it may regulate release of the drugs into the bloodstream. The facsimile must be simple, to ease implementation and to make it easier to discover errors in the system, and yet must not be so simple that it omits significant features of the problem.

So we have a real airplane with a facsimile which shares some but not all of its properties with the `real thing'. The situation looks superficially similar with the bowl of oranges and the number 5. For example, the number 5 doesn't have five orange parts (if you don't like the von Neumann definition of numbers, it doesn't even have five parts). Since the number 5 has some but not all properties of the bowl of five oranges, and vice versa, they must be different sorts of things. Thus there is an analogy with the airplane and its model, which also share some but not all properties. One could be tempted to conclude that the relation of airplane to facsimile is as the bowl of oranges to the number 5, and thus use the same language when talking about both.

Note, however, a crucial difference. For a Maddian realist, the number 5 inheres in the bowl of oranges - it can even be regarded as a part of the bowl in a strict technical sense of the word part (10) - whereas the airplane facsimile could in no way be considered a part of the real airplane. It's a completely distinct entity with no part in common. It matters not so much here whether Maddian realism may be sustained. One may be far from the Maddian crowd and still grant that it makes sense to ask whether the number 5 is part of the bowl of oranges, whereas it makes no sense at all to ask that of the airplane and its facsimile. The relation of airplane to facsimile is not very much like the relation of number to bowl of oranges, whichever philosophy of mathematics one accepts.

Compare now with the situation von Wright constructs. It could be that his simplifications are true abstractions of the Maddian sort, that he's focusing on features of the world which are really there. His simplifications would be literally true, while not describing the whole caboodle. Or it could be that his simplifications are just approximate, that the state of the world that he cares about can't be described by holding some finite number of propositions to be true or false, but it sort of can, just as Euclidean geometry doesn't literally describe the geometry of space, but it sort of does (so it seems to us at the moment). A bit like facsimile airplanes, in fact. But notice that if we perform arithmetic to three significant digits and only consider surfaces of one square meter at one time, Euclidean geometry exactly describes the world. The relativistic corrections disappear - they simply have no bearing on the results when we stay in the realm of thousandth parts. Maybe von Wright could yet perform analogous operations under which his `questionable assumptions' simply become accurate descriptions of a certain situation, and thus his logic becomes a means of inference amongst literally true statements. But this speculation about the logic of abstraction strays from our main topic.

The main point is that extracting the mathematical features of something is not necessarily like making facsimile airplanes. Sometimes it may be, but sometimes it ain't. Nevertheless, we could describe all of this as `building a model' if we liked. In this sense Smith's explanation of system construction is uninterestingly true. The interesting question is what metaphysical status our model-building has. If we are making airplane facsimiles, or simulating the drug-absorption behavior of a human being for a medical system, then it may be appropriate to doubt with Smith that we have exact theories of the relation between our model and the world. But when we are primarily interested in mathematical properties, if we were Maddian realists those mathematical properties might very well inhere in the real-world subject. Then, as we perform mathematical calculations we are making logical inferences as to how the world really is, in respect of its numerical features at least. This form of abstraction would not be `doing violence' to its subject matter - it is perception and inference, and therefore exact knowledge about the world as it is. As Smith admits (below), we would have a `relatively unproblematic' understanding.

The contrast becomes starker when we consider a system made to perform pure mathematical calculations. Its `subject matter' is then mathematics itself - we no longer need to consider the relationship of the mathematics to the bowl of oranges. We may write formal sentences which can be proven to describe correctly the operations performed by a computer chip when doing certain mathematical calculations and which can be proven to perform those calculations as required by the mathematics, all subject to the supposition that the physical chip hardware behaves precisely as specified (11).

Smith's Punch Line

But Smith may have decisively refuted all of my reasoning above. He may have shown Maddian realism false, and abstraction always to deviate from truth. I must consider his actual arguments. I'll start with his `punch line'.

Do the techniques of program verification hold enough promise so that, if these new systems could all be proven correct, we could all sleep more easily at night? [..] And my answer, to give away the punch-line, is no.

That's a radical stance. Let's take it literally. Suppose, as Smith does counterfactually, that all the systems he's talking about could be proven correct. Would you sleep more easily? I sure would. A proof is a watertight logical demonstration, and if there were a proof that a system were correct, and that proof were correct (it wouldn't be a proof if it weren't), then by goodness the system is correct. That's just the meaning of proof. And it isn't relative. Wiles has proven Fermat's Last Theorem. It makes no sense to say `That proof is only good in the US. It doesn't work in Europe'. Now, it is a pipe-dream to wish that all Smith's example systems could be proven correct, but were the wish fulfilled, we would all have cause to rejoice (except, maybe, Smith). So his statement is false when taken literally. But it does warn us to take liberal rather than literal interpretations of what he says.

So much for literal readings. He claims nevertheless that we should all be able to understand his argument (and presumably agree with it):

For fundamental reasons - reasons that anyone can understand - there are inherent limitations to what can be proven about computers and computer programs. Although program verification is an important new technology, [..] it should definitely not be called verification. Just because a program is `proven correct', in other words, you cannot be sure that it will do what you intend.

This last statement, that a correctness proof of a program does not guarantee that it does what we intend, seems to be true for programs in general. This is accepted by almost everybody working in software engineering, including those working on program verification, and we know where the problems lie - there's a large technical literature on the subject. And what one means by the word `verification' seems to be a topic for committee meetings, not for a philosophical paper. Smith has so far explained his purpose by making a false statement followed by others that are agreed to be true by almost everyone. That's not going to buy him much. We'll have to dig deeper to get at his real arguments.

Four Main Points

He delineates four general issues. First, that only simple programs can yet be proven correct, and current technology may not `scale up' to systems with multiple millions of lines of subtle code. Second, that systems which require substantial human interaction are not favored targets for formal verification, since `not much can be `proven', let alone specified formally, about actual human behavior'. Third, `complex computer systems must work at many different levels. It follows they can fail at many different levels, too.' Fourth, he asks, what does `correctness' mean?

Smith's first point is a relative one. Program verification technology now works on fairly complex algorithms and `programs' (whatever you think `programs' may be). He's correct to note the problems of scale; however, the verification technology is constantly pushing the boundaries back and I am not aware of any good argument giving inherent reasons why `really big' systems should not yield to future theorem proving techniques. Arguments from complexity, for example, assume fundamentally similar technology working on statistically similar systems. It seems appropriate to remain agnostic on problems of scale.

His second point could be historically correct if you weren't to regard, say, aircraft as `systems which require substantial human interaction' (fly-by-wire control systems have been one of the major applications of verification since the 1970s). But that would be simply comic. I dispute his further claim that not much can be specified formally about human behavior. For example, portions of the Flight Crew Operating Manual for the Airbus A320 aircraft have been specified formally (12). It is of course harder to ensure that human behavior actually fulfils its specification (for example, the Habsheim accident in (5)) - old hat to aircraft accident investigators. The appropriate form of specification of a system involving human interaction is a conditional: if the human operator fulfils his or her specification, then the system and its environment satisfy such-and-such predicates. If the specification of the human role is rigorous, there is thus no reason to hold that the specification of a system involving human interaction need be any less rigorous than one involving none. It may be trickier to guarantee that the antecedent of the conditional is always true - but that's not a matter concerning logical rigor.
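
Schematically - the predicate names here are mine, purely for illustration - such a specification has the conditional form

    OperatorFulfilsRole  ->  SystemAndEnvironmentOK

and what one proves is that the middleware, together with the assumption OperatorFulfilsRole, logically implies SystemAndEnvironmentOK. The conditional is exactly as rigorous as the two predicates from which it is built.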

Elaborating his third point, Smith notes that

Computer systems [..] can be `correct' at one level - say, in terms of hardware - but fail at another (i.e. the systems built on top of the hardware can do the wrong thing even if the chips are fine). Sometimes, when people talk about computers failing, they seem to think only the hardware needs to work. And hardware does from time to time fail, causing the machine to come to a halt, or yielding errant behaviour [..] And the connections between the computers and the world can break [..] But the more important point is that, in order to be reliable, a system has to be correct at every relevant level: the hardware is just the starting place [..] Unfortunately, however, we don't even know what all the relevant levels are.

I think we do. Requirements, `middleware', and hardware. (Middleware consists of what is traditionally called software, and also a significant part of hardware design - that part that's like software but just happens to refer to physical things rather than code.) I argued in the Section Faults in Systems of (13) that a computer system has precisely these three parts - is the mereological sum of these three parts, in the terminology of (10) and (14) - and that these three facets of a computer system have their own different criteria for correctness. Calling them `parts' is appropriate, since they fulfil the conditions of being parts (14), whereas Smith leaves his term `level' undefined. We can agree with much of what Smith says here - for example, that some parts of a system may fail while others remain intact, or that a failure of a part may cause a failure of the whole. This is so for all functioning things which have parts. We might quibble that until he clarifies what a `level' is, he cannot substantiate his claim that all levels must be `correct' for the whole to be correct. Fault-tolerant systems can tolerate certain kinds of failures - parts of them that no longer fulfil their individual function - and still function as intended. So whether he can substantiate his claim would depend upon which parts he takes as `levels' and which not.

Smith's fourth point, the question of what `correct' means, is fundamental. There is an answer for two of the three parts of a system, namely, middleware and hardware. Correctness with regard to middleware is a binary relation. A piece of middleware X is correct precisely in case X fulfils its specification Y. Y may belong to middleware or may be the requirements specification. In middleware, `fulfils' means `logically implies'. X is expressed in a logical language and this statement may be proven logically to imply the statement Y, also expressed in a logical language. (Precise specifications in other formal languages can be translated into the language of formal logic, so this is not a strong restriction. In particular, I note that procedural programs, which belong to middleware, can be regarded equally as specifications of behavior, and that there are equivalent specifications which are written in a logical language as I require. Annotating procedural programs with pre- and postconditions is one way for serial programs. For concurrent programs there are more possibilities.) Middleware is the part in which program verifiers work and at which program verification is directed. Hardware, as both Smith and I agree, fails in just the same way that any other piece of engineering hardware may fail. Such failures are relatively well understood, and it's not generally thought that they submit to mathematical proof, although mathematical methods and reasoning are used to study and predict such failures. Requirements are more tricky. They cannot be `verified' in anything like the same sense as middleware, and they don't `fail' in anything like the same way as raw hardware. When building requirements specifications, the intentions of the users have to be measured and some match has to be attempted between user intention and formal statement of requirement. I discussed this in the Section Faults in Systems of (13) and in (14), and will not repeat the arguments here. Smith rightly focuses on requirements as the potential weak point between designer/user intention and fulfilment of that intent.
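
As a minimal sketch of the pre/postcondition style just mentioned (a toy example of my own, not drawn from the cited literature): one annotates a program fragment with a predicate that must hold before execution and one that must hold afterwards, and what one proves is the annotated triple.

    { x >= 0 }               precondition
    y := x + 1               program fragment (middleware)
    { y > x  and  y >= 1 }   postcondition

A proof that the fragment, started in any state satisfying the precondition, ends in a state satisfying the postcondition is precisely a proof that this piece of middleware fulfils, that is logically implies, its specification.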

In a later section, Correctness and Relative Consistency, Smith considers general questions concerning the relation between requirements specification and middleware, which he calls specification and program. He suggests that a specification is essentially declarative, describing what proper behavior would be, and the program procedural, describing how the behavior is to be achieved. I doubt whether this distinction can be sustained. One way of specifying what behaviors are allowed is to provide a step-by-step recipe for generating them, namely a procedure; this happens in number theory, for example, where it is known as mathematical induction. Similarly, there are many declarative programming languages, in which built-in search mechanisms, of which the programmer needs to know little, search out a set of values fulfilling the program. So specifications may be procedural and programs declarative. Furthermore, most programs, even those in Fortran, can be considered a specification of output, so it's not always obvious what's specification and what's program. Smith uses his distinction to make the pertinent point that detecting faults in requirements, even those of mathematical programs, can be hard. This point may be made independently of any potential distinction between declarative and procedural specification; see (13) and (14).
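
An example of the generating style of specification (schematic, and my own): the allowed outputs of a program might be specified by the recipe

    0 is an allowed output;
    if k is an allowed output, then so is k + 2;
    nothing else is an allowed output

- a step-by-step generation procedure which is nonetheless a perfectly declarative answer to the question `which outputs are allowed?' (here, exactly the even numbers).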

We may logically insulate a system specification from hardware failure by a simple logical device. Suppose that when the hardware is working correctly, the system S must fulfil condition P, and that the middleware has been formally proven to satisfy P. Then the `insulated' specification of S is Q: either the hardware has failed, or P. This is more than a cheap trick. Suppose that S satisfies Q, but that P is not fulfilled. Then it follows that the hardware has failed, so one calls a hardware engineer to change a board or the like. It would do no good at all to take the problem to the programming team and ask them to look at it.
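
Written out, with HWfailed a proposition name I introduce for illustration:

    Q  =  HWfailed  or  P

If S satisfies Q, and one observes that P does not hold, then HWfailed follows by disjunctive syllogism - which is just the inference that sends one to the hardware engineer rather than to the programmers.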

So, middleware may be logically verified, and a system specification may accommodate hardware failure. Requirements are the logical weak point. There is no similar logical device which we may apply to insulate from requirements `failure' (see (14) for a discussion of what this means). However, it is by no means the case that all systems must lack correctness proofs. There are cases in which the requirements may be rigorously ascertained to be correct: for example, the trivial case in which the requirement is a simple logically true predicate, or the case in which the requirement is a purely mathematical function, such as in the small-whole-number gcd algorithm discussed in (14). There are, of course, still cases such as Smith mentions - the drug protocol dispenser, the missile early-warning system - in which the match between user intentions and system function may not be precisely ascertained. But this is not unique to computer systems - in fact, none of the questions of requirements have much to do with whether the system is computational or not. It's just that requirements become much harder accurately to ascertain with computational systems, and user adjustments to system inadequacy are also much harder than they are with, say, bridges.
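
For concreteness, here is a sketch of such a purely mathematical requirement together with an implementation meeting it - a generic Euclidean algorithm written in Python, and not necessarily the version discussed in (14):

    def gcd(a, b):
        # Requirement (purely mathematical): for whole numbers a and b,
        # not both zero, return the greatest d dividing both a and b.
        assert a >= 0 and b >= 0 and (a > 0 or b > 0)  # precondition
        while b != 0:
            # Invariant: gcd(a, b) equals the gcd of the original inputs,
            # since gcd(a, b) = gcd(b, a mod b).
            a, b = b, a % b
        return a

The requirement mentions only numbers and divisibility; no facsimile of any non-mathematical world-feature enters into it.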

To summarise, middleware may in principle be proven correct (feasibility aside) for any computational system; whether requirements may be proven correct or not depends on what kind of requirements there are; and hardware may not be proven correct, since proofs aren't the kinds of things that apply to bits and pieces. Some complete systems (for example, those with logically true requirements) can be proven correct, hardware and all; others (with necessarily purely mathematical conditions as requirements) can be proven correct down to the hardware, and may thus be logically insulated from hardware failure by the device of disjoining the phrase `or the hardware is faulty' to the requirements.

The Permeating Talk of Models

Smith hasn't yet come to his point. He wishes to discuss (Section 3) The Permeating Use of Models, and it is these comments about the relation of specifications to the world that I want to show here are misguided. One main thesis is in the first sentence.

When you design and build a computer system, you first formulate a model of the problem you want it to solve, and then construct a computer program in its terms.
We have already considered that this claim may be uninterestingly true. But to argue from this to a conclusion that verification can't work, Smith needs the stronger meaning, the claim that we must base systems on facsimiles or von-Wrightian logical approximations only. I want to show that this stronger claim cannot be sustained.

First, I'll reserve the word `model' for the strong sense which Smith needs, the facsimile or von-Wrightian approximation. One way of describing the construction of a computer system is as follows. One of the first steps is to formulate the requirements. The requirements are a predicate, in a language with a precise semantics - either a logical language or one which can be translated exactly into a logical language - stating how the system is supposed to behave. Then one constructs the middleware, and then assembles the hardware and has a system. Either the system fulfils its requirements predicate or it does not. Where is the Smithian model? In order for Smith's claim to be valid, we need to find one.

A Smithian model is a facsimile or von-Wrightian approximation of whatever it is supposed to model. It would be ludicrous after Frege to claim that a predicate is a facsimile of anything. Maybe one could claim that the standard semantics for logical predicates is given by (logical) model theory, and the relevant Smithian model is some logical model-theoretic model which fulfils the predicate. This would be hard to sustain. Very few predicates have unique logical models (up to isomorphism), and those that describe computer systems hardly ever do, so the question would arise, which of these models was meant? One could say that the specification was then ambiguous, and try to refine it such that it were categorical (the technical term for only having one model-theoretic model up to isomorphism), but this is not done, for good reason. When one specifies the communication in a banking transaction system, one cannot possibly predict which customers will come in which order and make which transactions for which sums, so one wants it to work for many possible sequences of such actions. Thus categoricity of specifications is highly undesirable. This notion of model cannot be what is meant.

But maybe we can't avoid using logical model theory. So Smith:

what model theory does is to tell you how your descriptions, representation, and programs correspond to your [Smithian] model.

Logical model theory is thought by many besides Smith to tell us what statements in a logical language mean. However, this view is misleading. Models are mathematical objects that allow the quantifiers `for all things x:' and `there exists something x such that:' to consider possible values of x from a restricted domain: say, just the positive whole numbers, if one is talking about number theory; or just the collection of all objects in this room, if one is doing blocks-world planning. Furthermore, model theory acts as an `oracle' for the semantics of predicate logic: if one statement doesn't follow from others, there's a logical model in which the latter are all true but the former is false; and if the statements are inconsistent, there is no logical model in which they are all true. (These follow from the Completeness Theorem for predicate logic, Gödel 1930.) But we can also interpret `for all things x:' as meaning simply `for everything', and `there exists something x such that:' to mean simply `there is something that'. Then the question is whether the predicate is true or not. Not true-in-a-logical-model, simply true. Logical models may still be talked about under such an interpretation, but instead of interpreting an unrestricted quantifier over a bounded domain, one applies the well-known syntactic device of restricting the quantifier explicitly to elements of that domain. A position interpreting quantifiers over everything would be held, for example, by proponents of logical atomism, from the early Wittgenstein and Russell to maybe even von Wright's (6). Smith provides no argument against bona fide logical atomism (von Wright without the caveats), or against any similar realist interpretation of quantifiers or logical languages.
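
The device is the standard relativization of quantifiers. For a domain D,

    for all x in D: phi(x)     becomes    for all x: (x is in D -> phi(x))
    for some x in D: phi(x)    becomes    for some x: (x is in D and phi(x))

so that quantifiers read as ranging over absolutely everything can still express whatever the domain-restricted quantifiers expressed.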

Smith's example of the drug regimen, and my example of the airplane scale-model, show that Smithian modelling can sometimes be helpful, even necessary, in building a particular system. One needs to have some idea, not necessarily a logically exact idea, of how a patient reacts to drugs, or of how the air flows over a shape, in order to figure out how to build such a system. But the claim we are considering is not that we sometimes use Smithian modelling but that we always do. Smith bases his argument that we cannot verify computer systems on his claim about models, and he could not draw the universal conclusion he does about verification unless all systems are based on Smithian modelling.

But is a Fallacy Real if There's No-one There to See?

We have been unable to find a general method for uncovering Smithian models at the basis of all system-building. But were we to have done so, this would not by itself justify Smith's position, because there's a further difficulty. A claim for all-pervasive Smithian modelling charges headlong into an old philosophical discussion. John Locke's theory of perception is often taken to be that we perceive, not objects themselves, but ideas of the objects that are `inside' our heads somehow. He held that the objects cause these ideas. He was criticised by Bishop Berkeley, who pointed out that this explanation does not suffice. Even if the ideas are caused by the objects, how do we have any guarantee that the ideas correctly represent the objects in any relevant respect? Could it not be that this coffee cup in front of me actually causes the visual `idea' of a cauliflower to be the object of my perception? Locke had not explained how the idea comes to resemble the object in any relevant respect. And Berkeley pointed out that, if we really had no access whatever to the object, but only to our `ideas' of it - facsimiles in our minds - we would have absolutely no means of determining what relation the idea really had to the object in any case.

Berkeley's solution was straightforward. He believed perception was direct, i.e., we have direct access to the objects of perception, and since Locke had argued that these objects were ideas in our minds, he took it that that's what objects are. Mental constructions. But how can we then talk about reality, if everything's just a mental construction (rudely put, a figment of our imagination)? In particular, how can an object persist if no-one is perceiving it (the `tree in the quad')? Berkeley, the consummate salesman, didn't lose an opportunity to promote the company product. Everything is perceived all the time in the mind of God. Reality is the sum total of what's in God's immediate perception, and each of ours is a subset of that. That explains not only persistent objects, but also how perception can be mistaken, and how there is a reality to be mistaken about.

The mechanics of Berkeley's solution don't appeal to modern philosophical aesthetes, who prefer to explain affairs without invoking God if possible. But his main point - that perception is direct - and his reason for it - that intermediating perception raises more puzzles than it answers - are the foundation of most modern views, such as that of Austin. (It is widely recognised now that Locke's arguments are more involved than my short sketch of them shows, and some have argued that Berkeley erected a straw man. Nevertheless, the Lockish argument that I have presented is regarded as having been canonically refuted by Berkeley.)

How does this apply to models? Apply Berkeley's criticism of Locke now, not just to `things', but to the entire relational structure of the world - relations between things, functions of things, relations of relations, and so forth. (If you feel nervous with this, pretend you're a logical atomist, or even a logical-atomist.) Smith's claim is that to build any computer program, we must first devise a facsimile of some feature of the world we wish the program to reckon with. Why? What would compel us to do this? If we had direct epistemological access to the world-feature, we could simply specify the relevant characteristics of this feature directly in the specification. So if we cannot do this, if we must use a facsimile, it must be because we have no such access to the world-as-is.

But if we have no access to the world-as-is, how then can we determine the relation of facsimile to world-feature (as I shall call it) that we are facsimulating? Since we have no access to the world-feature directly, we cannot say that this aspect of the world-feature corresponds to this aspect of the facsimile, as we can of the airplane and its model. If we could, the facsimile would be a heuristic, and we would have no need in principle (just occasionally in practice) of the facsimile to describe the world-feature. But Smith cannot allow us to avoid passing via the facsimile. His argument against verification depends on the properties of facsimilitude.

The role of Smithian models corresponds to the role of Lockian `ideas'. We can build programs which calculate according to the facsimile (we can perceive `ideas' directly), but we cannot tell anything about the world-feature (we have no access to the objects that cause the `idea'). Berkeley's criticism applies. We cannot determine what correspondence holds between the world-feature and the facsimile, because to do that we would have to have access to the world-feature. Since we don't, we have no means of assessing whether the facsimile resembles the world-feature in any useful way at all. But without being able to determine some correspondence, the facsimile venture is hopeless. The correspondence is what makes the facsimile a facsimile: the airplane model is a facsimile of an airplane and not a facsimile of a turnip. Thus for the facsimile approach to work, we must be able somehow to judge the correspondence; this entails that we have some epistemological access directly to the world-feature itself; and this entails that we cannot rule out that we could describe the world-feature directly in the specification of the program and avoid the facsimile. Not in every case maybe, but in some.

Later on, in a section on Computers and Models, Smith agrees with the main point that I have argued his Smithian-model argument commits him to:

What about the [relationship between model and real-world]? The answer, and one of the main points I hope you will take away from this discussion, is that, at this point in intellectual history, we have no theory of this [..] relationship.

This is actually a weaker claim than that we cannot have a theory. But he needs the stronger claim, else he could only argue that hope of verification is illusory pending a theory of the model-world relationship, and that leaves him open to the response that in the case of mathematical software, we have such a theory. A comment about mathematics suggests, though, that he may be prepared to agree:

Mathematics is unique with respect to models, because (at least to a first level of approximation) its subject matter is the world of models and abstract structures, and therefore the world-model relationship is relatively unproblematic.

If we eventually do have a general theory of the model-world relationship, then we will even be able to eliminate any need for models altogether - simply by composing the relation between specification and model with the relation between model and real world. How likely that is, I wouldn't like to speculate. In the case of mathematics, it seems that Smith agrees with me that Smithian models are not really needed.

Granted that some systems don't need facsimiles, the interesting question becomes: which sorts of computer system do not need facsimiles? That question I have begun to answer in (14). A partial answer is: some programs which perform calculations in the realm of mathematics, as well as some other boring systems whose specifications contain only safety properties. Given the logical devices above by which we may obtain rigorous specifications also for systems including unreliable hardware and for systems including human interaction, Smith's thumb may well have popped out of the dyke and Smithian models washed ever further away.

A Plea for Non-Violent Abstractionism

It looks as though Smith's argument, that every specification of a system proceeds through a Smithian model, can't be sustained by the routes we've considered. But he has one argument left: a rather different argument about abstraction.

What is important, for our purposes, is that every model deals with its subject matter at some particular level of abstraction, paying attention to certain details, throwing away others, grouping together similar aspects into common categories, and so forth. So the drug model mentioned above would probably pay attention to the patients' weights, but ignore their tastes in music. [..] Models have to ignore things exactly because they view the world at a level of abstraction. [..] it would not be too much to say that every act of conceptualization, analysis, categorization, does a certain amount of violence to its subject matter, in order to get at the underlying regularities that group things together. [..] To capture all this in a word, we will say that models are inherently partial. All thinking, and all computation, are similarly partial. [..] thinking and computation have to be partial: that's how they are able to work.

We may agree with much of this, as a comment on Smithian modelling. However, there is a further argument against verification buried in here: the claim that any form of abstraction `does violence' to its subject matter. This is somewhat different from claiming that everything works through Smithian models, but it serves no less to derive Smith's conclusion that there can be no true verification. Let's look at it.

If one is thinking of Smithian modelling as the relation of the tenth-scale airplane to the full airplane, or the ideal patient model to the real live person, then we have noted that the Smithian model actually has the properties that are being ignored, but the values of these properties may be different in the facsimile from those in the world-feature being modelled. For instance, the parts of the model airplane are one tenth as long as those of the real thing, and the material of which they are made is different. But abstraction means that one simply says nothing about such divergent values. It does not follow that one loses rigor by thus remaining silent. If I say that my jacket is red, the truth or falsity of this is unaffected by whether I choose also to say what size my jacket is, how it's buttoned, or where the pockets are.

The discussion above following Maddy (9) is closely relevant. A Maddian argument holds that the von Neumann set 5 (= the set of the numbers 0-4) is actually there and observable in a basket of 5 eggs. This is not `doing violence to [the] subject matter'. According to a Maddian it is, on the contrary, part of the subject matter itself. And the logical relations of this set 5 to those of other sets are exactly as set theory says they are. If Maddy is right, I could specify `some object x which holds five objects' - trivial to write this predicate in first-order logic with a bit of mereology - and reason with this specification in set theory. And I would be reasoning about actual observable properties of the basket with five eggs. How is what I do `doing violence to its subject matter'? Smith provides no argument for his claim that `abstraction does violence' other than simply asserting it. It seems it needs at least some.
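
For instance, one standard way of writing the predicate, with `part of' the mereological primitive:

    Holds5(x)  =  there are y1, ..., y5 such that
                  y1, ..., y5 are pairwise distinct objects,
                  each yi is a part of x,
                  and every object which is a part of x is one of y1, ..., y5

- a first-order predicate with exactly one free variable, as required above.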

Limiting the Limits of Program Verification

So are there limits to program verification? Yes there are. Is it impossible for roughly Smithian reasons to prove some programs (or systems) correct? Yes, it could well be impossible - for some cases in which Smithian models are used essentially.

But there are likewise some programs, indeed systems, which do not use Smithian models essentially, and which one may prove correct. These programs do not require facsimiles to be constructed to aid in their specification: for example, some mathematical calculations designed for a particular processor. A specification of such a system refers directly to, and can be checked against, knowable mathematical features of the world (pick your favorite alternative phrasing if you're not sympathetic to Maddian realism). To argue against this conclusion would require, for example, a refutation of logical atomism - which most practising computer scientists would not grant as easily as Smith supposes - and also a refutation of Maddian realism.

We have seen that an argument for pervasive Smithian modelling yields to Berkeleian arguments; and that his claim that abstraction `does violence to' its subject matter is prima facie implausible and lacks an argument; and these are the two arguments on which Smith bases his claim for the hopelessness of verification. Smith's pertinent comments on how hard it is to get requirements right can be derived from considerations about requirements specification, middleware and hardware, without yielding any conclusion about models. Thus Smith's arguments fall well short of his target, and we may remain sceptical of his broad conclusion that verification can never ever succeed in proving that a system fulfils its intentions. It can, sometimes.


Acknowledgements

Thank you to Dafydd Gibbon for discussing these arguments with me and persuading me of the need explicitly to consider what I've called von-Wrightian models.


References


(1): Brian Cantwell Smith, Limits of Correctness in Computers, Report CSLI-85-36, Center for the Study of Language and Information, Stanford University, California, October 1985. Also in (2), 275-293.
(2): Timothy R. Colburn, James H. Fetzer and Terry L. Rankin, eds., Program Verification, Kluwer Academic Publishers, 1993.
(3): Peter G. Neumann, ed., Risks to the Public in the Use of Computer and Related Systems.
(4): John Rushby, Anomalies in Digital Flight Control Systems (DFCS).
(5): Peter Ladkin and others, Incidents and Accidents with Fly-By-Wire Commercial Airplanes.
(6): G. H. von Wright, On the Logic and Epistemology of the Causal Relation, in (7), also reprinted in (8).
(7): P. Suppes et al., eds., Logic, Methodology and Philosophy of Science IV, North-Holland, 1973.
(8): Ernest Sosa and Michael Tooley, eds., Causation, Oxford Readings in Philosophy Series, Oxford University Press, 1993.
(9): Penelope Maddy, Realism in Mathematics, Oxford University Press, 1990.
(10): Peter Simons, Parts: A Study in Ontology, Oxford University Press, 1987.
(11): Robert S. Boyer and Yuan Yu, Automated Correctness Proofs of Machine Code Programs for a Commercial Microprocessor, in Deepak Kapur, ed., Automated Deduction - CADE-11, Lecture Notes in Computer Science 607, pp. 416-430, Springer-Verlag, 1992.
(12): Peter B. Ladkin, Analysis of a Technical Description of the Airbus A320 Braking System, High Integrity Systems 1(4), 331-350, 1995.
(13): Peter Ladkin, The X-31 and A320 Warsaw Crashes: Whodunnit?
(14): Peter Ladkin, Correctness in System Engineering.