Abstracts of Research Projects, Papers and Other Writing

Foundations of System Correctness and Failure

This research area examines some of the philosophical and logical issues behind system correctness - what it means, for example - and system failure, which is in some sense the `converse' of correctness. Reasoning about failure tells us much about what it is for a system to be correct, and aircraft accidents are amongst the most carefully investigated incidents of system failure. Consequently, the writings here are divided into Some Plane Prose and Foundations. Both topics are treated in the book, Success and Failure of Artifacts.


The Success and Failure of Artifacts
Peter Ladkin
Abstract: I study some of the philosophical and logical questions in the foundations of engineering, such as: what is the logical situation of specification, requirements, design, software, and digital hardware, with respect to `the world' - or even with respect to each other, as components of an artifact? What can we prove mathematically, and what features stand no chance of being mathematically proven? What does it mean for a system to fail, and how do we tell? Conversely, what does it mean for a system to be `correct'? How can we explain or reason about failure? What are the features of the world that are relevant in considering these questions? What ontology do we need; what reasoning power do we need; what is the logical form of our assertions about systems? All these are natural questions to a philosophical logician, which have not in my view really begun to be answered. I offer views, and arguments for those views, in this series of essays, which include many of those appearing as individual reports below. (HTML)

Some Plane Prose

EMI and TWA800: Critique of a Proposal
Peter Ladkin 10 April 1998
Abstract: Elaine Scarry proposed that high intensity radio frequency fields (HIRF), possibly generated by the U.S. Military, be investigated as a possible causal factor in the TWA800 accident. Her argument appears to be very poor. I state it, and critique it. (HTML, 16K)

The Ariane 5 Accident: A Programming Problem?
Peter Ladkin 20 March 1998
Abstract: I consider three papers on the Ariane 5 first-flight accident, by Jézéquel and Meyer suggesting that the problem was one of using the appropriate system design techniques; by Garlington on the culture of flight-control and avionics software engineering; and by Baber on use of pre- and post-conditions in programming. I conclude that Jézéquel and Meyer's argument is fundamentally mistaken; and that although Garlington's (reconstructed) critique of them is broadly correct, a conclusion closer to that of Jézéquel and Meyer is most appropriate, following from a different argument from that which they gave. (HTML, 24K)

Evidence to the Transport Subcommittee on NERC/NSC, Wednesday 11 March, 1998
Peter B. Ladkin 8 March 1998
Abstract: Great Britain is building what was billed as the most advanced En-Route Air Traffic Control system in the world at the National En-Route Center (NERC) in Swanwick, Hampshire, to control traffic in the London Flight Information Region (FIR), which covers southern British airspace. The £350+ million system has run into problems, experiencing successive delivery delays. The contractor has succeeded in getting it to run on some 35 workstations but not the 200 for which it was designed. I was invited to give oral evidence before the Transport Subcommittee of the House of Commons on the issues: (a) how long an audit of the NERC system, whose purpose would be to determine whether the current NERC system could ever be made to work, would take; and (b) whether Sir Ronald Mason was right to assert that dual-sourcing, i.e. awarding the New Scottish Center contract (for the other en-route center in Britain, to cover the Scottish FIR) to another systems supplier/integrator, is a `basic principle' of safety-critical system development. (HTML, 34K)

Letter to the Transport Subcommittee on NERC/NSC, Monday 17 November, 1997
Peter B. Ladkin 17 November 1997
Abstract: Great Britain is building what was billed as the most advanced En-Route Air Traffic Control system in the world at the National En-Route Center (NERC) in Swanwick, Hampshire, to control traffic in the London Flight Information Region (FIR), which covers southern British airspace. The £350+ million system has run into problems, experiencing successive delivery delays. The contractor has succeeded in getting it to run on some 35 workstations but not the 200 for which it was designed. I wrote to the Transport Subcommittee on 9 November, 1997, detailing my concern with the project, based on public evidence concerning technical problems, and with plans for its completion as briefed by National Air Traffic Services (NATS), the NERC client, in September 1997. I was briefed by specialists on the new Canadian ATC system, CAATS, and the new Australian ATC system, TAAATS, and included their comments in my letter. (HTML, 28K)

The Crash of Flight CI676, a China Airlines Airbus A300, Taipei, Taiwan, Monday 16 February, 1998: What We Know So Far
Peter B. Ladkin Latest version: 19 February 1998
Abstract: The publicly-available facts concerning the accident to CI676 are given, and it is suggested where they point - and where they don't point. Specifically, they cannot show a repeat of the Nagoya accident, because of mandatory A300 system modifications since that accident. This article will be continually modified as information is gathered. (HTML, 41K)

Risks of Technological Remedy, Inside Risks, Communications of the ACM 40(11):160, Nov. 1997
Peter B. Ladkin 10 September 1997
Abstract: This guest column was invited by Peter Neumann for his Inside Risks in the November 1997 CACM. I argue by example that because of the resilience of some forms of human error, attempted technological solutions may miss the mark. The first example compares the misidentification of Flight KAL007 in 1983 with that of IR655 in 1988, and the second considers CFIT accidents and their avoidance through GPWS and EGPWS. (HTML, 6.5K)

The Crash of Flight KE801, a Boeing B747-300, Guam, Wednesday 6 August, 1997:
What We Know So Far

Peter B. Ladkin
11 September 1997
Abstract: The publicly-available facts concerning the accident to KE801 are given, along with references to sources, and the features of the accident that arise from these facts are discussed and analysed, along with various pieces of reasoning and commentary from the press. (HTML, 59K)

Analysing the 1993 Warsaw Accident With a WB-Graph
Michael Höhl and Peter B. Ladkin
8 September 1997
Abstract: We analyse the final report of the 1993 Lufthansa A320 accident in Warsaw using the WB-Graph method, and conclude that some fundamental causal factors are missing from the report's conclusions, although mentioned in the body of the report. (HTML, 30K)

Controlled Flight Into Terrain: What is Being Done?
Peter B. Ladkin
21 August 1997
Abstract: Controlled flight into terrain (CFIT) remains the greatest cause of fatalities in commercial air transportation. What is it, and what is being done to reduce its incidence? In the era of the lowest-ever aviation fatality rates, eliminating CFIT altogether poses a new challenge. (HTML, 46K)

Flying An ILS or Localiser Approach - An Example
Peter B. Ladkin
25 August 1997
Abstract: In order to understand how an aircraft flying under instrument flight rules approaches an airport for landing, one must understand standard instrument approach procedures and documentation. I introduce the procedures and charts used when flying an ILS or localiser instrument approach to an airport, as examples of a precision and non-precision approach respectively. I follow the ILS and localiser-only approaches to Won Pat International Airport in Agana, Guam. There are other traditional non-precision approach types on which I do not comment, such as VOR, VOR/DME, NDB and RNAV. There are also approach types using new-generation avionics technology which are in development but which I don't discuss: Microwave Landing System (MLS) and GPS approaches in particular. (HTML, 61K + GIF images)

Traditional Aviation Radio Navigation: An Introduction
Peter B. Ladkin
20 August 1997
Abstract: I describe the basics of aircraft land-based radio navigation with VOR and NDB reception equipment. (HTML, 39K + GIF images)

To Drive or To Fly - Is That Really The Question?
Peter B. Ladkin
24 July 1997
Abstract: Statistical comparisons of fatal-accident risk on commercial jet flights and on rural interstate driving were published during 1989-1991. They largely use data from the mid 70's through the 80's, and show that the risk of dying on a commercial jet flight was uncorrelated with the length of the flight, and that for trips of longer than 303 miles, flying was safer than driving if you were in the statistical group of `safe' drivers. The age and the type of the data warrant some comments on the effectiveness of such statistical comparisons, in particular on how commercial passenger flying may have changed in the 90's. (HTML, 36K)

Electromagnetic Interference with Aircraft Systems: why worry?
Peter B. Ladkin with colleagues
13 July 1997
Abstract: There are worries about suspected electromagnetic interference with aircraft systems from electronic devices used by passengers. Some first-hand incident reports from colleagues are included. The phenomenon seems to be hard to pin down -- colleagues explain why. It may be that the current regulatory situation affects reporting and investigation of suspected incidents. Finally, I suggest some ways in which the regulatory environment could be changed to aid investigation. (HTML)

Formalising Failure Analysis
Thorsten Gerdsmeier, Peter Ladkin and Karsten Loer
4 June 1997
Abstract: We consider systems which are artifacts designed to fulfil a goal. Most computer systems are of this sort. Criteria for success have been well-studied in the domain of specification and verification. Failure has been less well studied. The study of failure is normally exemplified by case histories. In other work, we have devised a method for determining and representing causality in case histories, called the WB-graph method. Methods need tools. We describe a tool for building and analysing WB-graphs built in DATR, a language devised for computational linguistics. (HTML)

Analysing the Cali Accident With a WB-Graph
Thorsten Gerdsmeier, Peter Ladkin and Karsten Loer
14 January 1997
Abstract: We analyse the Cali accident to American Airlines Flight 965, a Boeing 757, on 20 Dec 1995. We take the events and states from the accident report, construct a WB-graph (`Why?...Because...' graph) in both textual and graphical form of the 58 events and states, and compare this representation favorably with the Findings section of the original report. We conclude that the WB-graph is a useful structure for representing explanations of accidents. (HTML)
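The WB-graph structure described in this abstract can be illustrated with a minimal sketch. The node labels and edge structure below are invented for illustration, not taken from the Cali report; only the idea - events and states as nodes, with a directed `because' edge recording each answer to a `Why?' question - is from the method.

```python
# Minimal sketch of a WB-graph: nodes are events/states, and an edge
# A -> B records "Why A? Because B". The node names here are invented
# for illustration and are not drawn from the Cali report.

class WBGraph:
    def __init__(self):
        self.because = {}  # node -> list of nodes that causally explain it

    def add(self, node, because_of=()):
        self.because[node] = list(because_of)

    def explanation(self, node):
        """All causal factors reachable from `node` via `because' edges."""
        seen = set()
        stack = [node]
        while stack:
            n = stack.pop()
            for cause in self.because.get(n, []):
                if cause not in seen:
                    seen.add(cause)
                    stack.append(cause)
        return seen

g = WBGraph()
g.add("terrain impact", because_of=["descent off course"])
g.add("descent off course",
      because_of=["wrong beacon selected", "descent continued"])
g.add("wrong beacon selected", because_of=["ambiguous waypoint identifier"])
g.add("descent continued")
g.add("ambiguous waypoint identifier")

print(sorted(g.explanation("terrain impact")))
```

Querying the graph for a node's explanation yields exactly the set of causal factors the textual WB-graph form would list, which is what makes omission checks against a report's Findings section mechanical.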

News and Comment on the AeroPeru B757 Accident
Peter Ladkin
8 November 1996
Abstract: Not only was this accident very disturbing - the third fatal accident to a B757 in 10 months, after a 13-year perfect safety record - but news reports and statements in the early days were also highly misleading. The aftermath of the Birgenair accident had sensitised me to these unfortunate social processes. The purpose of this essay is thus twofold: technically, to provide continuing timely factual information on, and some analysis of, the AeroPeru 603 accident of October 2, 1996; and, sociologically, to provide a history of and commentary on what was said about the crash, when, and by whom.

Some Dubious Theses in the Tense Logic of Accidents
Peter Ladkin
27 September 1996
Abstract: Some authors propose to use forms of linear time temporal logic to describe and explain accidents. Common themes are: (1) that the 'original causes' of an accident precede all other relevant events; (2) how to pick the temporal point of view from which the accident can be explained; (3) that it is a criterion of successful explanation that an assertion of the accident event should be logically derivable from assertions of preceding causally-relevant events; (4) that the causal relation is expressed by the tense-logical 'leads to' operator. Some authors have suggested (5) that an accident is explained by exhibiting the proof of the weak fairness of the accident-action. I have explained elsewhere (ad 2) how to pick a temporal point of view, and (ad 4) that causality is not suitably expressed by 'leads to'. I show here that the other three theses are implausible; that (1) may be rescued only at great cost to intuition and common practice; that (3) may be modified in the direction of a deductive-nomological model of explanation; and that (5), if at all worthy, has a much simpler equivalent. (HTML)

Explaining Failure with Tense Logic
Peter Ladkin
10 September 1996
Abstract: Accidents involve events, and events causing other events, laid out in time until the final disaster. Boolean logic alone is insufficient to describe such failure, although its use in the shape of fault trees has been very successful. Reasoning about objects and their properties is the domain of predicate logic, and reasoning about events and their temporal relations is the province of tense logic. I show how the reasoning involved in accident analysis may be formalised in tense logic, using the past-time operator since. I also show how a fault tree may be derived from a temporal logic description without use of frame axioms. I critique proposals to use purely future operators, and to identify the causality relation with a purely temporal logic operator. Finally, I consider the important concepts of interactive consistency and tight-coupling not to be worth formalising inside a tense logic. (HTML)
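The past-time operator since mentioned in this abstract can be evaluated mechanically over a finite trace of states. The sketch below assumes the common reading (variants exist): (p Since q) holds at position i iff q held at some j <= i and p held at every position k with j < k <= i. The fault/warning trace is invented for illustration.

```python
# Sketch: evaluating the past-time operator Since over a finite trace.
# Reading assumed (variants exist): (p Since q) holds at i iff q held
# at some j <= i and p held at all k with j < k <= i.

def holds_since(trace, i, p, q):
    for j in range(i, -1, -1):        # scan backwards for the latest q
        if q(trace[j]):
            return all(p(trace[k]) for k in range(j + 1, i + 1))
    return False

# Invented example: "the warning light has been on since the fault occurred".
trace = [
    {"fault": False, "warn": False},
    {"fault": True,  "warn": False},
    {"fault": False, "warn": True},
    {"fault": False, "warn": True},
]
results = [holds_since(trace, i, lambda s: s["warn"], lambda s: s["fault"])
           for i in range(len(trace))]
print(results)
```

Note that checking only the latest position where q holds suffices: if p fails somewhere after that point, it fails in the longer interval from any earlier q as well.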

Formalism Helps in Describing Accidents
Peter Ladkin
4 September 1996
Abstract: I analyse the 'probable cause' of the 1979 Chicago DC-10 accident using a minimal formalism, and find an omission. The omission is contained in the body of the report. McDonnell Douglas's statement barely addresses the point contained in the omission. I conclude that formalism helps in accident reporting by enabling simple consistency and omission checks. (HTML)

Comments on Confusing Conversation at Cali
Dafydd Gibbon and Peter Ladkin
7 February 1996
Abstract: There are interesting linguistic aspects of the conversation between Air Traffic Control and American Airlines flight 965 shortly before the crash of AA965 near Buga, Colombia, on 20 December 1995. We discuss two features, lexical ambiguity and direction-naming conventions, as well as two linguistically incongruent features of ATC radio transmissions. (This abstract is almost longer than the article itself.)

The X-31 and A320 Warsaw Crashes: Whodunnit?
Peter Ladkin with links to others
Revised version 28 January 1996
Abstract: Recent discussions in the RISKS newsgroup asked whether there was a software or design problem involved in the crash of the X-31 research plane in January 1995. Discussants also introduced the Warsaw crash of the Airbus A320 in order to talk generally about system faults and mishaps. I proceed with an analysis of the concepts used, give my list of causal factors, and explain how I got them, and (hopefully) clarify what bits systems have, how they have problems, and whether it makes sense to blame the usual suspects for mishaps. I include information from people who really know. (HTML)

Reasons and Causes
Peter Ladkin
31 January 1996
Abstract: I discuss the art of reasoning about reasons and causes in accident investigations, in the belief that some formal understanding may improve our ability to analyse. I discuss a recent paper of Moffett, Hall, Combes, and McDermid in which they suggest a formal theory for reasoning about causes, and apply it to a partial analysis of the Lufthansa A320 accident in Warsaw. (HTML)

Also concerned with aviation are

Foundations

Hazards, Risk and Incoherence
Peter Ladkin
15 June 1998
Abstract: I consider some standard engineering definitions of hazard, accident and risk, and show on a simple example how the definitions of hazard and risk are deficient. (HTML, 35K)

Abstraction and Modelling
Peter Ladkin
16 April 1997
Abstract: Engineers talk of abstractions and models. I define both, consistently with the way that engineers, computer scientists, and formal logicians use the terms. I propose that both abstractions and models have what Searle calls an agentive function, namely that both models and abstractions are models/abstractions of something, for a purpose. It follows that both abstraction and modelling are most fruitfully thought of as ternary relations (or, when the purpose is forgotten, binary). A canonical example of abstraction arises in considering logical form. The criteria used to determine logical form are used mutatis mutandis to define abstraction (the purpose of logical form is given: to explicate a sentence's/proposition's role in inference; the purpose for a general abstraction remains to be selected by the abstracting agent). One may therefore consider a generalised, Wittgensteinian notion of logical form as what both abstractions and models have in common with their targets. Abstraction is closely related to the idea of describing, a relation between sentences and states of affairs; in fact it can be considered to be a precise version of description. (HTML)

Logical Form as a Binary Relation
Peter Ladkin
16 April 1997
Abstract: I consider the notion of logical form, and argue that there are considerable difficulties with defining a canonical logical form of sentences and propositions: for given A, finding a unique B such that A has the logical form of B. The purpose of logical form is unproblematically served by allowing for a given A varied B such that A has the logical form of B. I give a criterion for the relation of having the logical form of in terms of the notions of syntactic transformation and inferential equivalence. (HTML)

On Needing Models
Peter Ladkin
22 February 1996
Abstract: In his paper, Limits of Correctness in Computers, Brian Cantwell Smith proposed that program verification couldn't work, because computer systems are based on models of the `real world' and there's no general theory of how these models relate to what they're modelling. I think he's wrong, both in his arguments and his conclusions. This paper sets out my reasons.

Correctness in System Engineering
Peter Ladkin
2 April 1995
I analyse what is meant by correctness and failure of systems, using techniques from philosophical logic. I contradict a well-known claim of Fetzer. I look at the role played by specification, and simple arguments concerning failure, and try to clarify the boundary between system properties that may be proved and those that cannot be, by using two examples, one due to Boyer and Yu.
Draft Paper
HTML - Postscript - Dvi

Commentary and Resources on Commercial Aviation Accidents

This project aims to collect resources and reliable technical comment on aspects of computer-related commercial aviation safety on the World Wide Web. The hypertext compendium below is the result. Accidents which have a computer-related component are commented on, and where possible the text of the final report is made available. Aviation organisations concerned with safety are linked, including both administration and research. Electronic resources such as the RISKS newsgroup are consulted; opinions from experts are solicited and included. Social aspects are not given as much emphasis as technical, although many would argue they play a major role.


Computer-Related Incidents with Commercial Aircraft
Overview by Peter Ladkin with links to others
Ongoing
Abstract: There are now many commercial air transport aircraft whose primary flight controls are operated by computers, to which pilot commands and other data are input. Such aircraft include the Airbus A319/320/321/330/340 and the Boeing B777. The use of computers for such safety-critical tasks as airplane flight control is a suitable topic for computer scientists and others to consider. This is a commented Hypertext compendium of reports, papers and pertinent short observations, many from Peter Neumann's RISKS-Digest, the Forum on Risks to the Public in Computers and Related Systems, an on-line news journal which has established a reputation as the premier electronic source for news and discussion of these topics over the last ten years. I expect this page to grow continually.

Analysis and Generation of Operating Handbooks

This project, joint with Prof. Harold Thimbleby of Middlesex University in London, attempts to produce clear, correct, consistent and complete operating manuals for human-machine interaction tasks. Our basic point of view is that the machine-part of the task behaves as a state machine, and the human-part follows a `user manual', which consists of a procedural description of the state machine in some form or other, according to intended use. Formal specification of such interfaces can lead to automatic generation of operating manuals guaranteed to fulfil certain formal desiderata such as consistency and completeness. Since the quality of operating manuals in commercial aviation is widely acknowledged to be a problem, we hope this work will support the forward-generation of such manuals. The reverse-engineering of such manuals has also been addressed in one publication, the Analysis of a Technical Description of the Airbus A320 Braking System.
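The state-machine view of the machine-part can be sketched as follows. The device, its transition table, and the prose template are all invented for illustration; they stand in for, and are much simpler than, the specification language of the papers below.

```python
# Sketch: generating a user-manual fragment from a state-machine
# description of a device, plus a completeness check. The device,
# states and wording are invented for illustration.

transitions = {
    ("standby", "press POWER"): "ready",
    ("ready",   "press START"): "running",
    ("running", "press STOP"):  "ready",
    ("ready",   "press POWER"): "standby",
}

def generate_manual(transitions):
    """One manual sentence per documented transition."""
    return "\n".join(
        f"When the unit is in {state} mode, {action} to put it in {target} mode."
        for (state, action), target in transitions.items())

def undocumented(transitions, states, actions):
    """Completeness check: (state, action) pairs the manual says nothing about."""
    return [(s, a) for s in states for a in actions
            if (s, a) not in transitions]

print(generate_manual(transitions))
missing = undocumented(transitions,
                       ["standby", "ready", "running"],
                       ["press POWER", "press START", "press STOP"])
print(missing)
```

Because the manual text is generated from the same table that defines the machine's behaviour, the two cannot drift apart, and gaps such as the undocumented pairs above surface mechanically rather than in service.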


From Logic To Manuals Again
Harold Thimbleby and Peter Ladkin
published in IEE Proceedings - Software Engineering, June 1997
Abstract: A republication of From Logic To Manuals, below, necessitated by the manual processing of the paper at the IEE, which resulted in egregious typographical mistakes in the SEJ 11(6) (November 1996) published version, and a comment from us as to how much easier it would have been to ensure correctness had the journal worked with our HTML version that was output from the program. Ironically, the republication was marred by a mistake made by the (new) typesetters, which changed certain fonts. A corrected republication of the June 1997 IEE Proceedings on Software Engineering was sent to all subscribers. It took us almost as much work to get the publishers to publish a correct version as it did to write the original paper, thus proving our point in spades -- unfortunately only to us.
From Logic To Manuals
Harold Thimbleby and Peter Ladkin
5 March 1996, revised 5 September 1996
Abstract: We demonstrate a simple language that combines specifications and manuals. This shows: first, that a manual can be automatically reconstructed from a logic specification that is effectively identical to the original logic (up to ambiguities in natural language); second, that such an automated process can help detect errors. The paper contains examples of use of the language in manual generation. It was created by running a source version (with the bulk of the text as meta-comment, and with input but no output to any examples) through the language processor. The output for the examples was automatically generated and inserted by the processor. Here are the source file and the output version with examples.

Analysis of a Technical Description of the Airbus A320 Braking System
Peter Ladkin
28 October 1994
I analyse the description of the operation of the Airbus A320 braking systems contained in the Flight Crew Operating Manual. I use the predicate-action diagrams of Lamport to express and to complete the description, and give reasons why such a more rigorous expression is preferable.
Here is a Postscript paper which appeared in High Integrity Systems 1(4), and also a short Postscript version for those who want an overview.

A proper explanation when you need one
Harold Thimbleby and Peter Ladkin
10 May 1995
Abstract: Quality program design has received considerable attention from the software engineering community. Quality user manual design has received considerable attention from the human computer interaction community. Yet manuals and systems are often independently conceived, and thus do not well complement each other. This paper shows one method of easily obtaining correct and complete user manuals guaranteed to correspond with the system they document.
Postscript - DVI
Appeared in People and Computers X, Proceedings of the BCS Conference on Human-Computer Interaction, HCI'95, ed. M. A. R. Kirby, A. J. Dix and J. E. Finlay, Cambridge University Press 1995.

Discussions

I have also written essays on topical themes, discussed with others. Some such discussions are collected here.


University Education in the US, UK and Germany: A Quick Comparison
Peter Ladkin
11 December 1997
Abstract: A point-by-point comparison (somewhat superficial) of university teaching, learning, and assessment in these three countries. This précis was written as notes for an extra-curricular lecture to Bielefeld students on December 16 1997 during the demonstration concerning university reform. (HTML)

On Keeping Your Mouth Shut: Discussing Aircraft Accidents in Public
Dick Mills, Robert Dorsett and Peter Ladkin
9-12 September 1996, revised 22 October
Abstract: Dick Mills proposed in RISKS-18.42 that speculation and discussion of aircraft accidents in public as they happened had negative consequences, including continuing distress to sufferers, and suggested it should be voluntarily limited to the time after the accident report is published. Robert Dorsett cited some examples of timely discussion which, although not always accurate, had beneficial effects, and Peter Ladkin gave six reasons why he thought Mills's proposed restriction would not be beneficial. Start here.

Future University Computing Resources
Peter Ladkin
7 November 1995
Abstract: I consider what kinds of computing resources a University will be providing to students and the community. I hope that some of the ideas will have wider application than just to Universities. This is an HTML script. I'd be very glad to receive your comments.

A Debate on Social Issues with Use of the Internet
Harold Thimbleby, Peter Ladkin, Brian Randell and others
23 September 1995
Abstract: Harold Thimbleby gave a talk to the British Association for the Advancement of Science 1995 Annual Meeting on what he regards as some current unhealthy developments on the Internet. The talk got extensive coverage in the British press. Brian Randell strenuously disagreed at the talk with both Harold's presentation and his views. Brian's views did not get similar media coverage. But here on the Web, all sides get debated. Start here.

Analysis of Communication in Parallel and Distributed Environments

This project, joint work with Dr. Barbara Simons of IBM Santa Teresa Labs, analyses the communication between parallel loop processes. Aims are to provide combinatorial criteria for successful parallel programs and processes - those that run without deadlock - and algorithms to serialise clusters of such processes, the important `recombination' step in parallel compilation, after decomposition and processor assignment. Work so far is collected in one monograph, which will be extended and completed in early 1997.


Static Analysis of Communicating Processes
Peter Ladkin and Barbara Simons
22 April 1995

Abstract: We define the Message Flow Graph, a graph that shows the point-to-point communications between code for concurrent processes. Trouble is, communication usually isn't point-to-point -- on different executions of the same send, the message may be received by execution of a different receive statement in the target process. For simple non-terminating processes, so-called loop processes, we show how to build a Message Flow Graph from the source code, and when one can be built. If the processes deadlock, such a graph may not be built, so our construction and associated mathematics may be used for compile-time deadlock analysis. Two mathematical conditions, the Numerical Matching Condition and the Sequence Condition are particularly important, and needed to prove the algorithm correct. We show how to perform the analysis for communications of many sorts: synchronous (rendezvous), asynchronous, and multiway synchronous - the basic analysis is performed for two-process synchronous communication and then extended.

This 139-page draft version of the manuscript does not yet include certain of our theorems on polynomial-time deadlock detection, which will be added shortly. Read it at your peril - but please tell us what you think! The monograph has been accepted for publication in the series Lecture Notes in Computer Science from Springer-Verlag.

Postscript
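A much-simplified count check in the spirit of the Numerical Matching Condition mentioned above can be sketched for two loop processes: per loop iteration, the sends on each channel in one process must match the receives in the other, otherwise messages accumulate or a receive blocks forever. This is only a necessary condition, and the process representation below is invented; the full analysis in the monograph also involves the Sequence Condition.

```python
# Simplified sketch in the spirit of the Numerical Matching Condition:
# for two communicating loop processes, the sends on each channel in one
# loop body must be matched by receives in the other. A necessary (not
# sufficient) condition for deadlock-freedom; the statement-list
# representation of a loop body is invented for illustration.
from collections import Counter

def io_counts(body):
    """Count send/receive statements per channel in one loop iteration.
    `body` is a list of ('send', chan) / ('recv', chan) statements."""
    sends, recvs = Counter(), Counter()
    for op, chan in body:
        (sends if op == "send" else recvs)[chan] += 1
    return sends, recvs

def numerically_matched(proc_a, proc_b):
    sends_a, recvs_a = io_counts(proc_a)
    sends_b, recvs_b = io_counts(proc_b)
    # each channel's sends in one process must equal receives in the other
    return sends_a == recvs_b and sends_b == recvs_a

p = [("send", "c1"), ("recv", "c2")]
q = [("recv", "c1"), ("send", "c2")]
print(numerically_matched(p, q))                 # balanced loop bodies
print(numerically_matched(p, [("recv", "c1")]))  # c2 is never received
```

A compile-time analysis can run such checks on the source code alone, which is what makes static deadlock detection for loop processes feasible.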


Static Deadlock Analysis for CSP-Type Communications
Peter Ladkin and Barbara Simons
April 1995

Abstract: We present two tests for analyzing deadlock for a class of communicating sequential processes. The tests can be used for deadlock detection in parallel and distributed programs at compile-time, or for debugging purposes at run-time. They can also be used in conjunction with an algorithm we have for constructing valid execution traces for this class.
This paper is a resumé of Chapter 5 of the monograph for Responsive Computer Systems, ed. D.S. Fussell and M. Malek, Kluwer, 1995.

Work With and On Lamport's TLA, the Temporal Logic of Actions

This project concerns the development and application of Lamport's TLA and the specification language TLA+. It includes both algorithm verifications, and applications such as specification and analysis of aircraft systems and joint human-machine tasks in aviation. I work with various colleagues using TLA, including Leslie Lamport of Digital Systems Research Center in Palo Alto, California and Jean-Charles Grégoire of Laboratoire INRS-Télécommunications in Montréal.


Using the Temporal Logic of Actions: A Tutorial on TLA Verification
Peter Ladkin
17 June 1997
Abstract: This tutorial on TLA verification starts by motivating the syntax and the logic of TLA by means of a natural deduction system -- for hierarchical proofs in TLA are (inverse) natural deductions -- and works through a TLA specification and verification using the buffer example from Formal But Lively Buffers..., RVS-RR-96-07. The explanations are need-driven - TLA concepts are introduced only where necessary for working through the example - and there's a little hand-waving here and there for the same reason.
Postscript, gzip-ed, 169K - DVI, gzip-ed, 59K

Beschreibung eines vagen Echtzeit-Hybrid-Systems in TLA+
Lutz Sommerfeld and Peter Ladkin
17 June 1997
Abstract: This specification describes a technical device for which no physically or mathematically exact description is available. Only a vague, partly colloquial description of its behaviour exists, since the exact process parameters for carrying out the cultivation are unknown and determining them is unnecessary. In this respect the task differs from the `Steam Boiler Control Problem'. The solution presented is given at as high a level of abstraction as possible, so that both the process-environment modelling and the theories of the logical form of artifacts could be taken into account. TLA+ was chosen as the specification language. (In German.)
Postscript

Formal but Lively Buffers in TLA+
Peter Ladkin
7 January 1996 - New Version
Abstract: I specify abstract and concrete buffers in TLA+, the specification language of Lamport's Temporal Logic of Actions, and provide a fully formal, complete proof in TLA that the one implements the other. The proof is written in Lamport's hierarchical style. The paper may be read as both a tutorial and a commentary on the use of TLA.
Postscript - DVI

The LaTeX source (200K uncompressed) could be useful as one person's view on how to write `literate proofs' in TLA using Lamport's proof style pf.sty. The source also needs the Lamport style files tla.sty and abbrev.sty to compile.

Accompanying the paper are copies of an older version of the proof to
depth 4: Postscript - DVI
depth 6: Postscript - DVI
depth 8: Postscript - DVI
obtained from the original using pf.sty commands, which show the higher-level structure of the proof.


Lazy Caching in TLA
Peter Ladkin, Leslie Lamport, Bryan Olivier, and Denis Roegel
18 January 1996 - Final Version
Assertional methods, formalized by TLA, are used to specify and verify the correctness of a cache coherency algorithm of Afek, Brown, and Merritt. The algorithm is described informally in
Verifying Sequential Consistent Memory Problem Definition
Rob Gerth
April 1993
Postscript
which should be read first.
Postscript - DVI - LaTeX

Lazy Cache Implements Complete Cache
Peter Ladkin
7 January 1996
The formal TLA proof that the Lazy Caching Algorithm implements a simplified algorithm called the Complete Cache, in which all caches contain entries for all memory addresses, there is no separate memory, and there are no memory refresh operations. This proof is a step in proving that Lazy Caching is sequentially consistent. The choice of the invariant for the safety proof is the main issue. Other parts of the Lazy Cache proof will appear here as they become available.
Postscript - DVI

The analysis of an operating manual in
Analysis of a Technical Description of the Airbus A320 Braking System
is also accomplished using TLA Predicate-Action Diagrams.

Formal Definition of Message Sequence Charts

Interest is growing in the telecommunications community in the use of Message Sequence Charts (MSCs), an ITU-T standard (Z.120) language. This joint work with Prof. Stefan Leue of the University of Waterloo in Canada defines a straightforward interpretation of both `basic' MSCs and `high-level' MSCs into state machines. Various semantic problems with the MSC standard arise, which can be seen graphically (literally!) in our interpretation of MSCs into the automated verification tool Promela/XSpin from Gerard Holzmann's group at Lucent Technologies' Bell Laboratories.


Implementing and Verifying Message Sequence Chart Specifications Using Promela/XSpin
Stefan Leue and Peter Ladkin
11 September 1996
Abstract: We discuss a translation of Message Sequence Charts (MSCs) into the language Promela (we call this translation an `implementation') that is consistent with the formal semantics we previously defined for Message Flow Graphs and Message Sequence Charts, which handled the syntactic features with mathematical import from ITU-T Recommendation Z.120. We report on experiments executing the Promela code using the Xspin simulator and validator. In previous work we found that potential process divergence and non-local choice situations pose problems for implementations of MSCs; we discuss how these affect our Promela translation and suggest solutions. Finally, we show how to model-check liveness requirements imposed on MSC specifications. We use the Promela models obtained from our implementation, describe how to define control state propositions based on these models, use Linear Time Temporal Logic formulas to specify the liveness properties, and demonstrate the use of XSpin as a model checker for these properties. (compressed Postscript, 1.5Kb) To appear in the Proceedings of the 2nd SPIN Workshop, Rutgers, N.J., DIMACS Series, American Mathematical Society Press, 1996.

Interpreting Message Flow Graphs
Peter Ladkin and Stefan Leue
28 October 1994
Abstract: We interpret Message Flow Graphs as finite state machines, in which the states are the global states of the system. One important application of MFGs is as Message Sequence Charts, MSCs. Liveness properties of MSCs are underdefined by the Z.120 standard. We show how to compose MFGs (insofar as sense can be made of this operation without considerable qualification) and what composition means, and we show how to use temporal logic to specify liveness properties precisely.
Formal Aspects of Computing 7(5):473-509
Postscript
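The global-state interpretation can be sketched in a few lines of Python. The example MSC and the representation below are my own invention, not the paper's construction: events carry their predecessors (process-local order plus send-before-receive), global states are the downward-closed sets of occurred events, and transitions fire one atomic send or receive at a time.

```python
# Toy MSC: process P sends m1 to Q; after receiving m1, Q sends m2 to P.
# preds maps each event to the events that must occur before it.
preds = {
    "send_m1": set(),
    "recv_m1": {"send_m1"},             # a message is received only after it is sent
    "send_m2": {"recv_m1"},             # Q's local order: receive m1, then send m2
    "recv_m2": {"send_m2", "send_m1"},  # P's local order: send m1, then receive m2
}

def enabled(cfg):
    """Events not yet occurred whose predecessors all lie in configuration cfg."""
    return [e for e in preds if e not in cfg and preds[e] <= cfg]

# Breadth-first enumeration of the (finite) set of reachable global states.
states, frontier = {frozenset()}, [frozenset()]
while frontier:
    nxt = set()
    for c in frontier:
        for e in enabled(c):
            s = c | {e}
            if s not in states:
                nxt.add(s)
    states |= nxt
    frontier = nxt

print(len(states))  # 5: in this toy chart every run is the same linear order
```

The finiteness of this state set for any MFG (under the global-state assumption) is one of the points argued in the papers below.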

Four Issues Concerning the Semantics of Message Flow Graphs
Peter Ladkin and Stefan Leue
Abstract: We discuss four issues concerning the semantics of Message Flow Graphs (MFGs). MFGs are extensively used as pictures of message-passing behavior. One type of MFG, the Message Sequence Chart (MSC), is ITU Standard Z.120. We require that a system described by an MFG have global states with respect to its message-passing behavior, with transitions between these states effected by atomic message-passing actions. Under this assumption, we argue (a) that the collection of global message states defined by an MFG is finite (whether for synchronous or partially-asynchronous message-passing); (b) that the unrestricted use of `conditions' requires processes to keep control history variables of potentially unbounded size; (c) that allowing `crossing' messages of the same type implies certain properties of the environment that are neither explicit nor desirable; and (d) that liveness properties of MFGs are more easily expressed by temporal logic formulas over the control states than by Büchi acceptance conditions over the same set of states.
Appeared in Formal Description Techniques VII, ed. D. Hogrefe and S. Leue, IFIP Series, Chapman and Hall, 1995.
This paper is currently being rewritten to address High-Level MSCs
[ Postscript ]

Constraint Satisfaction and Temporal Reasoning

I have had an ongoing interest for a decade in temporal reasoning using intervals, and in constraint satisfaction problems in general. Here are the results.


A Note on a Note on a Lemma of Ladkin
Peter Ladkin
13 September 1996, revised 14 October 1996
Abstract: Antony Galton recently published a paper correcting a Lemma of mine - well, sort of, because the Lemma was not false. However, it was useless as stated. In this note, I put the record straight(er), and offer a few comments on the social background.

Simple Reasoning With Time-Dependent Propositions
Maroua Bouzid and Peter Ladkin
4 November 1995
Simple practical reasoning with propositions whose truth values depend on time is a matter of logical engineering. Here's one approach, along with some algorithms for implementing it. We also consider reified and non-reified logics, and show that a reified logic is more appropriate than its non-reified equivalent, when time references are interpreted as union-of-convex intervals.
To appear in the Journal of the IGPL.
Postscript - DVI
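One ingredient of this approach can be illustrated concretely: if a time-dependent proposition holds on a union of disjoint convex intervals, then conjunction of two such propositions corresponds to intersection of their interval unions. The sketch below is my own, assuming a half-open interval representation; it is not the paper's algorithm:

```python
def intersect(u, v):
    """Intersect two unions of disjoint, sorted half-open intervals [a, b).

    Classic two-pointer merge: at each step take the overlap of the current
    pair of intervals, then advance past whichever interval ends first.
    """
    out, i, j = [], 0, 0
    while i < len(u) and j < len(v):
        a = max(u[i][0], v[j][0])
        b = min(u[i][1], v[j][1])
        if a < b:
            out.append((a, b))
        if u[i][1] < v[j][1]:
            i += 1
        else:
            j += 1
    return out

p = [(0, 5), (10, 15)]  # hypothetical: times at which "the valve is open" holds
q = [(3, 12)]           # hypothetical: times at which "pressure is high" holds
print(intersect(p, q))  # [(3, 5), (10, 12)]
```

The union-of-convex representation is closed under this operation, which is one reason it makes a convenient interpretation for time references.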

Fast Algebraic Methods for Interval Constraint Problems
Peter Ladkin and Alexander Reinefeld
21 August 1996
We implement a constraint satisfaction technique for qualitative relations on intervals over dense linear orderings (sometimes called "time intervals"), and show how to solve such problems fast, where the bottlenecks are, and what the problem space looks like.
This is an Invited Paper for the Annals of Mathematics and Artificial Intelligence, based on a shorter paper in Springer LNCS 737, Artificial Intelligence and Symbolic Mathematical Computing, ed. Calmet and Campbell, 1993, with some new results on speed-up of the composition operation.
Postscript - DVI - LaTeX (needs prepictex.tex, pictex.tex, and postpictex.tex)

On Binary Constraint Problems
Peter Ladkin and Roger Maddux
25 January 1995
The concepts of binary constraint satisfaction problems can be naturally generalized to the relation algebras of Tarski. We do that. Among other things, we give an example of a 4-by-4 matrix of infinite relations on which no iterative local path-consistency algorithm terminates; and we give a class of examples over a fixed finite algebra on which all iterative local algorithms, whether parallel or sequential, must take quadratic time. Specific relation algebras arising from interval constraint problems are also studied: the Interval Algebra, the Point Algebra, and the Containment Algebra.
Appeared in the Journal of the ACM 41(3), May 1994, 435-469.
Postscript - DVI - LaTeX (needs prepictex.tex, pictex.tex, and postpictex.tex)
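To make the flavour of iterative local path consistency concrete, here is a small Python sketch over the Point Algebra, one of the algebras studied in the paper. The encoding and names are mine, not the paper's notation: a relation between two time points is a subset of the base relations {<, =, >}, and the network is tightened by composition until a fixed point.

```python
# Iterative local path consistency over the Point Algebra (illustrative sketch).
FULL = frozenset("<=>")
CONV = {"<": ">", "=": "=", ">": "<"}
COMP = {  # composition table for the base relations
    ("<", "<"): "<", ("<", "="): "<", ("<", ">"): "<=>",
    ("=", "<"): "<", ("=", "="): "=", ("=", ">"): ">",
    (">", "<"): "<=>", (">", "="): ">", (">", ">"): ">",
}

def converse(r):
    return frozenset(CONV[a] for a in r)

def compose(r, s):
    return frozenset().union(*(COMP[a, b] for a in r for b in s))

def path_consistent(n, cons):
    """cons maps ordered point pairs (i, j) to relations; missing pairs are
    unconstrained. Tighten R(i,j) by R(i,k) composed with R(k,j) until a
    fixed point is reached; relations only shrink, so this terminates."""
    R = {(i, j): cons.get((i, j), FULL) & converse(cons.get((j, i), FULL))
         for i in range(n) for j in range(n) if i != j}
    changed = True
    while changed:
        changed = False
        for (i, j) in R:
            for k in range(n):
                if k in (i, j):
                    continue
                t = R[i, j] & compose(R[i, k], R[k, j])
                if t != R[i, j]:
                    R[i, j] = t
                    changed = True
    return R

# Transitivity falls out: x < y and y < z forces x < z.
R = path_consistent(3, {(0, 1): frozenset("<"), (1, 2): frozenset("<")})
print(sorted(R[0, 2]))  # ['<']
```

For the Point Algebra this procedure behaves well; the paper's point is that over other relation algebras such iterative local algorithms can fail to terminate, or are forced to take quadratic time.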

An Algebraic Approach to General Boolean Constraint Problems
Hans-Werner Güsgen and Peter Ladkin
23 April 1995
The work of Ladkin and Maddux concerned binary CSPs. We show in this paper how to treat general CSPs also by algebraic logic, this time using Tarski's cylindric algebra. We formulate all the usual concepts of CSPs in this framework, including k-consistency, derived constraints, and backtrack-freeness, give an algorithm scheme for k-consistency, and prove its correctness.
Postscript - DVI

Integrating Metric and Qualitative Temporal Reasoning
Henry Kautz and Peter Ladkin
July 1991
We show how metric and Allen-style constraint networks can be integrated in a constraint-based reasoning system. We use a simple but powerful logical language for expressing both quantitative and qualitative information; develop translation algorithms between the metric and Allen sublanguages that entail minimal loss of information; and explain a constraint-propagation procedure for solving problems expressed in a combination of metric and Allen constraints.
Appeared in the Proceedings of AAAI-91 (Anaheim, CA)
MIT/AAAI Press, July 1991.

Postscript - DVI
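The paper's translation runs between metric networks and Allen's interval relations; here is a much simplified point-based analogue of the qualitative direction, purely for illustration (the function and its encoding are mine, not the paper's): a metric constraint bounding the difference y - x induces the subset of {<, =, >} consistent with those bounds.

```python
def metric_to_qualitative(lo, hi):
    """Translate the metric constraint lo <= y - x <= hi into the set of
    base point relations between x and y consistent with it."""
    rel = set()
    if hi > 0:
        rel.add("<")  # y - x can be positive: x before y
    if lo <= 0 <= hi:
        rel.add("=")  # zero difference is allowed: x equals y
    if lo < 0:
        rel.add(">")  # y - x can be negative: x after y
    return rel

print(metric_to_qualitative(5, 10))          # {'<'}: x strictly precedes y
print(sorted(metric_to_qualitative(-2, 2)))  # ['<', '=', '>']: unconstrained
```

The other direction (qualitative to metric) loses information in general, which is why the paper is careful about translations entailing minimal loss of information.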

My Thesis

The Logic of Time Representation
Peter Ladkin
November 1987

Every so often, I get requests for my thesis or for some Kestrel Institute Technical Reports on work reported in my thesis, so here it is. It includes work published in papers in AAAI-86, AAAI-87, IJCAI-87, ICSE-87 and AAAI-88 (a longer version with proofs), plus a little more. It's about 130 pages long.

Postscript - DVI


Specification and Verification of the Radio Link Protocol in TLA


This project is joint with the Laboratoire INRS-Télécommunications at the Université du Québec à Montréal. I am working primarily with Prof. Jean-Charles Grégoire on a hierarchical specification and verification of the RLP protocol in TLA. RLP is a modified sliding-window protocol to be used in the TDMA cellular phone systems being developed jointly by Nortel, Ericsson and Nokia. A hierarchical specification is being fine-tuned and the proofs of correctness are being completed.

Formal Analysis of Aviation Discourse


Pilot-controller communication is one of the most formal natural languages, designed for unambiguous and clear communication of aviation control tasks and intentions. Nevertheless, communication may go awry, as in the unfortunate American Airlines accident of 20 December 1995 on approach to Cali, Colombia. This project, joint with Prof. Dafydd Gibbon of the University of Bielefeld Faculty of Linguistics and Literature, aims to analyse formally aspects of the language and its use, with the goal of contributing both to aviation safety and to formal linguistics. The project is in its very early stages.

One result so far is
Comments on Confusing Conversation at Cali