(or, Massively Parallel Requirements Engineering)
University of Cambridge Computer Laboratory
New Museums Site, Pembroke Street, Cambridge CB2 3QG, UK
Ross.Anderson@cl.cam.ac.uk
Abstract:
Collaborative software projects such as Linux and Apache have shown
that a large, complex system can be built and maintained by many
developers working in a highly parallel, relatively unstructured way.
In this note, I report an experiment to see whether a high quality
system specification can also be produced by a large number of people
working in parallel with a minimum of communication.
(This paper appeared as an invited talk at the 1999 Computer Security Applications Conference.)
Experienced software engineers know that perhaps 30% of the cost of a
software product goes into specifying it, 10% into coding, and the
remaining 60% into maintenance. This has profound effects on computer
science. For example, when designing new programming languages the
motive nowadays is mostly not to make coding easier, but to cut the
costs of maintenance. There has also been massive interest in open
source software products such as Linux and Apache, whose maintenance
is undertaken by thousands of programmers working worldwide in a
voluntary and cooperative way.
Open source software is not entirely a recent invention; in the early
days of computing most system software vendors published their source
code. This openness started to recede in the early 1980s when pressure
of litigation led IBM to adopt an `object-code-only’ policy for its
mainframe software, despite bitter criticism from its user
community. The pendulum now seems to be swinging back, with Linux and
Apache gaining huge market share.
In his influential paper `The Cathedral and the Bazaar’ [1],
Eric Raymond compares the hierarchical organisation of large software
projects in industry (`the cathedral’) with the more open,
unstructured approach of cooperative developers (`the bazaar’). He
makes a number of telling observations about the efficiency of the
latter, such as that “Given enough eyeballs, all bugs are shallow”.
His more recent paper, `The Magic Cauldron’ [2], explores the
economic incentives that for-profit publishers have found to publish
their source code, and concludes that IBM’s critics were right: where
reliability is paramount, open source is best, as users will cooperate
in finding and removing bugs.
There is a corollary to this argument, which I explore in this paper:
the next priority after cutting the costs of maintenance should be
cutting the costs of specification.
Specification is not only the second most expensive item in the system
development life cycle, but is also where the most expensive things go
wrong. The seminal study by Curtis, Krasner and Iscoe of large
software project disasters found that failure to understand the
requirements was mostly to blame [3]: a thin spread of
application domain knowledge typically led to fluctuating and
conflicting requirements which in turn caused a breakdown in
communication. They suggested that the solution was to find an
`exceptional designer’ with a deep understanding of the problem who
would assume overall responsibility.
But there are many cases where an established expert is not available,
such as when designing a new application from scratch or when building
a competitor to a closed, proprietary system whose behaviour can only
be observed at a distance.
There are also some particular domains in which specification is well
known to be hard. Security is one example; the literature has many
examples of systems which protected the wrong thing, or protected the
right thing but using the wrong mechanisms. Most real life security
failures result from the opportunistic exploitation of elementary
design flaws rather than `high-tech’ attacks such as
cryptanalysis [4]. The list of possible attacks on a typical
system is long, and people doing initial security designs are very
likely to overlook some of them. Even in a closed environment, the use
of multiple independent experts is recommended [5].
Security conspicuously satisfies the five tests which Raymond
suggested would identify the products most likely to benefit from an
open source approach [2]. It is based on common engineering
knowledge rather than proprietary techniques; it is sensitive to
failure; it needs peer review for verification; it is business
critical; and its economics include strong network effects. Its own
traditional wisdom, going back at least to Auguste Kerckhoffs in 1883,
is that cryptographic systems should be designed in such a way that
they are not compromised if the opponent learns the technique being
used. In other words, the security should reside in the choice of key
rather than in obscure design features [6].
It therefore seemed worthwhile to see if a high quality security
specification could be designed in a highly parallel way, by getting a
lot of different people to contribute drafts in the hope that most of
the possible attacks would be considered in at least one of them.
The opportunity to test this idea was provided by the fact that I
teach courses in cryptography and computer security to second and
third year undergraduates at Cambridge. By the third year, students
should be able to analyse a protection problem systematically by
listing the threats, devising a security policy and then recommending
mechanisms that will enforce it. (The syllabus and lecture notes are
available online at [7].)
By a security policy, we mean a high level specification which sets
out the threats to which a system is assumed to be exposed and the
assurance properties which are to be provided in response. Like most
specifications, it is a means of communication between the users (who
understand the environment) and the system engineers (who will have to
implement the encryption, access control, logging or other mechanisms).
So it must be clearly comprehensible to both communities; it should
also be concise.
The students see, as textbook examples of security policy:
- the Bell-LaPadula model, which is commonly used by governments
to protect classified information and which states that information
can only flow up the classification hierarchy, and never down. Thus a
civil servant cleared to `Secret’ can read files at `Secret’ or below,
but not `Top Secret’, while a process running at `Secret’ can write at
the same level or above, but never down to `Unclassified’;
- the Clark-Wilson model, which provides a reasonably formal
description of the double-entry bookkeeping systems used by large
organisations to detect fraud by insiders;
- the Chinese Wall model, which models conflicts of interest in
professional practice. Thus an advertising account executive who has
worked on one bank’s strategy will be prevented from seeing the files
on any other banking client for a fixed period of time afterwards;
- the British Medical Association model, which describes how flows
of personal health information must be restricted so as to respect the
established ethical norms for patient privacy. Only people involved
directly in a patient’s care should be allowed to access their medical
records, unless the patient gives consent or the records are
de-identified effectively.
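The first of these models is simple enough to sketch in a few lines of code. The level names, their ordering, and the function interface below are my own illustrative assumptions; they are not part of the model's formal statement:

```python
# A minimal sketch of the two Bell-LaPadula rules described above:
# `no read up' and `no write down'. Levels and API are illustrative.
LEVELS = {"Unclassified": 0, "Secret": 1, "Top Secret": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    """No read up: a subject may read only at its level or below."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """No write down: a subject may write only at its level or above."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```

So a subject cleared to `Secret’ passes `can_read` for `Unclassified’ objects but fails it for `Top Secret’ ones, matching the civil-servant example above.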
The first three of these are documented in [8] and the fourth
in [9]. Further examples of security policy models are always
welcome, as they help teach the lesson that `security’ means radically
different things in different applications. However, developing a
security policy is usually hard work, involving extensive consultation
with domain experts and successive refinement until a model emerges
that is compact, concise and agreed by all parties.
Exceptions include designing a policy for a new application, and for a
competitor to a closed system. In such cases, the best we can do may
be to think long and hard, and hope that we will not miss anything
important.
I therefore set the following exam question to my third year students:
You have been hired by a company which is bidding to take
over the National Lottery when Camelot’s franchise expires,
and your responsibility is the security policy. State the
security policy you would recommend and outline the
mechanisms you would implement to enforce it.
For the benefit of overseas readers, I will now give a simplified
description of our national lottery. (British readers can skip the
next two paragraphs.)
The UK’s national lottery is operated by a consortium of companies
called Camelot which holds a seven year licence from the government.
This licence is up for renewal, which makes the question topical; and
presumably Camelot will refuse to share its experience with potential
competitors. A large number of franchised retail outlets sell tickets.
The customer marks six out of 49 numbers on a form which he hands with
his money to the operator; she passes it through a machine that scans
it and prints a ticket containing the choice of numbers plus some
further coded information to authenticate it.
Twice a week there is a draw on TV at which a machine selects seven
numbered balls from 49 in a drum. The customers who have predicted the
first six share a jackpot of several million pounds; the chance of any
one ticket winning is one in (49 choose 6), or one in 13,983,816, so
with much of the population playing there are several winners in a typical draw.
(Occasionally there are no winners and the jackpot is `rolled over’ to
the next draw, giving a pot of many millions of pounds which whips
the popular press into a frenzy.) There are also smaller cash prizes for
people who guessed only some of the numbers. Half the takings go on
prize money; the other half gets shared between Camelot, the taxman
and various charitable good causes.
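The jackpot odds quoted above are easy to check; a one-line calculation in Python (my choice of language, purely for illustration):

```python
import math

# Number of ways to choose 6 numbers out of 49, as quoted above.
jackpot_combinations = math.comb(49, 6)
print(jackpot_combinations)  # 13983816
```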
The model answer I had prepared had a primary threat model that
attackers, possibly in cahoots with insiders, would try to place bets
once the result of the draw is known, whether by altering bet records
or forging tickets. The secondary threats were that bets would be
placed that had not been paid for, and that attackers might operate
bogus vending stations which would pay small claims but disappear if a
client won a big prize.
The security policy that follows logically from this is that bets
should be registered online with a server which is secured prior to
the draw, both against tampering and against the extraction of
sufficient information to forge a winning ticket; that there should be
credit limits for genuine vendors; and that there should be ways of
identifying bogus vendors. Once the security policy has been developed
in enough detail, designing enforcement mechanisms should not be too
hard for someone skilled in the art – though there are some
subtleties, as we shall see below.
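One way to picture the requirement that bets be registered online before the draw is a keyed authenticator over each bet record, computed by the central server and printed on the ticket. The sketch below is mine, not part of the model answer: the use of HMAC-SHA256, the field names, and the key handling are all illustrative assumptions (in practice the key would have to be protected against insiders, e.g. in tamper-resistant hardware):

```python
import hashlib
import hmac
import json

# Hypothetical server-side secret; its protection before the draw is
# exactly what the security policy above demands.
SERVER_KEY = b"example-key-kept-in-tamper-resistant-hardware"

def register_bet(numbers, draw_id, terminal_id):
    """Record a bet before the draw; return the canonical record and an
    authenticator to print on the ticket. Field names are illustrative."""
    record = json.dumps(
        {"numbers": sorted(numbers), "draw": draw_id, "terminal": terminal_id},
        sort_keys=True,
    )
    mac = hmac.new(SERVER_KEY, record.encode(), hashlib.sha256).hexdigest()
    return record, mac

def verify_ticket(record, mac):
    """Check a claimed winning ticket against its authenticator."""
    expected = hmac.new(SERVER_KEY, record.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)
```

Under this sketch, forging a winning ticket after the draw requires either the server key or a way to alter the registered bet records, which is precisely what the policy says must be prevented.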
The exam was set on the first of June 1999 [10], and when the
scripts were delivered that evening, I was eager to find out what the
students might have come up with.
Thirty four candidates answered the question, and five of their papers
were good enough to be kept as model answers. All of these candidates
had original ideas which are incorporated in this paper, as did a
furthe