by Edsger W. Dijkstra
In the world around us we encounter two radically different views of programming:
View A: Programming in essence is very easy.
View B: Programming is intrinsically very difficult.
One can dismiss this discrepancy by concluding that, apparently, in the two views the same word “programming” is used in two quite different meanings, and then return to the order of the day. Whether view A or view B is the predominant one, however, has a deep influence, not only on the personnel policy of computer-using organizations and on the curriculum policy of our educational institutes, but even on the direction of development and research in computing science itself. It seems, therefore, worthwhile to explore the nature of the difference between the two meanings and to identify, if possible, the underlying assumptions that would make each of them appropriate. To do so is the purpose of this paper.
In this exploration I have what could be regarded as a handicap: in the controversy I am not neutral at all. I am a strong supporter of view B and regard view A as the underlying cause of many mistakes. On the other hand I don’t think that having an opinion disqualifies me as an author, certainly not when I warn my readers in advance and do not feign neutrality. As our analysis proceeds we shall discover how these different views of programming (which is a human activity!) are related to different types of Man. This, all by itself, is already a valuable insight, as it explains the nearly religious fervour with which the battle between the defenders of the opposing views —creeds?— is sometimes fought.
*         *         *
The early history of automatic computing makes view A only too understandable. Before we had computers, programming was no problem at all. Then came the first machines: compared with the machines we have now they were mere toys, and, compared with what we try to do now, they were used for “micro-applications” only. If at that stage programming was a problem, it was only a mild one. Add to this the sources of difficulties that at that time absorbed —or should we say in retrospect: usurped?— the major part of our attention:
1) arithmetic units were slow with respect to what we wanted to do: that shoe pinched nearly always, and in the name of program efficiency all possible coding tricks were permitted (and very few were not applied);
2) design and construction of arithmetic units were such a novel and, therefore, difficult task that if the next anomaly in the instruction code could save a number of flip-flops, the flip-flops were usually saved —also, of course, because we had so little programming experience that we could not recognize “anomalies in the instruction code” very well—; as a result there was, besides pressure to apply coding tricks, also a great opportunity for doing so;
3) stores were always too small, a pinching shoe that, together with the general unreliability of the first hardware, prohibited more sophisticated ways of machine usage.
In that time programming presented itself primarily as a battle against the machine’s limitations, a battle that was to be won by a cunning, albeit not very systematic, exploitation of each machine’s specific properties: it was the heyday of the virtuoso coder.
In the next ten to fifteen years processing units became a thousand times faster, stores became a thousand times larger, and high-level programming languages came into general use. And it was during that period, when on the one hand programming was still firmly associated with the pinching shoe, while on the other hand the shoe was felt to pinch less and less, that it was expected that with another five years of technical progress the problems of programming would have disappeared. It was during that period that view A was born. It was at the end of that period that, inspired by view A, COBOL was designed with the avowed intention that it should