In "No Silver Bullet: Essence and Accident of Software Engineering,...
> How much of what software engineers now do is still devoted to ...
Fred Brooks (born in 1931) is an American computer scientist. Brook...
Germ theory, developed in the late 19th century, marks a pivotal sh...
Brook’s is alluding to Moore’s law. Moore's Law, coined by Gordon M...
Aristotle differentiated between the "essential" qualities that mak...
Fred Brooks would have spent a considerable amount of time programm...
Time-sharing systems marked a significant shift from the era of bat...
InterLisp, was an integrated programming environment (IPE) built ar...
The Ada programming language, named after English mathematician Ada...
MIPS stands for Million instructions per second
Edward Feigenbaum was a pioneering figure in the field of artificia...
VLSI stands for Very Large Scale Integration, a process by which in...
In the context of their work with some of the earliest computing sy...
No Silver Bullet
—Essence and Accident in Software Engineering
Frederick P. Brooks, Jr.
University of North Carolina at Chapel Hill
There is no single development, in either technology or management
technique, which by itself promises even one order-of-magnitude
improvement within a decade in productivity, in reliability, in simplicity.
Abstract [1]
All software construction involves essential tasks, the fashioning of the complex
conceptual structures that compose the abstract software entity, and accidental tasks, the
representation of these abstract entities in programming languages and the mapping of
these onto machine languages within space and speed constraints. Most of the big past
gains in software productivity have come from removing artificial barriers that have
made the accidental tasks inordinately hard, such as severe hardware constraints,
awkward programming languages, lack of machine time. How much of what software
engineers now do is still devoted to the accidental, as opposed to the essential? Unless it
is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not
give an order of magnitude improvement.
Therefore it appears that the time has come to address the essential parts of the
software task, those concerned with fashioning abstract conceptual structures of great
complexity. I suggest:
Exploiting the mass market to avoid constructing what can be bought.
Using rapid prototyping as part of a planned iteration in establishing software
requirements.
Growing software organically, adding more and more function to systems as they are
run, used, and tested.
Identifying and developing the great conceptual designers of the rising generation.
Introduction
Of all the monsters who fill the nightmares of our folklore, none terrify more than
werewolves, because they transform unexpectedly from the familiar into horrors. For
these, we seek bullets of silver that can magically lay them to rest.
[1] Reproduced from: Frederick P. Brooks, The Mythical Man-Month, Anniversary edition with 4 new chapters, Addison-Wesley (1995), itself reprinted from the Proceedings of the IFIP Tenth World Computing Conference, H.-J. Kugler, ed., Elsevier Science B.V., Amsterdam, NL (1986), pp. 1069-76.
The familiar software project has something of this character (at least as seen by the
non-technical manager), usually innocent and straightforward, but capable of becoming a
monster of missed schedules, blown budgets, and flawed products. So we hear desperate
cries for a silver bullet, something to make software costs drop as rapidly as computer
hardware costs do.
But, as we look to the horizon of a decade hence, we see no silver bullet. There is no
single development, in either technology or management technique, which by itself
promises even one order of magnitude improvement in productivity, in reliability, in
simplicity. In this chapter we shall try to see why, by examining both the nature of the
software problem and the properties of the bullets proposed.
Skepticism is not pessimism, however. Although we see no startling breakthroughs,
and indeed, believe such to be inconsistent with the nature of software, many
encouraging innovations are under way. A disciplined, consistent effort to develop,
propagate, and exploit them should indeed yield an order-of-magnitude improvement.
There is no royal road, but there is a road.
The first step toward the management of disease was replacement of demon theories
and humours theories by the germ theory. That very step, the beginning of hope, in itself
dashed all hopes of magical solutions. It told workers that progress would be made
stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a
discipline of cleanliness. So it is with software engineering today.
Does It Have To Be Hard? – Essential Difficulties
Not only are there no silver bullets now in view, the very nature of software makes it
unlikely that there will be any: no inventions will do for software productivity,
reliability, and simplicity what electronics, transistors, and large-scale integration did for
computer hardware. We cannot expect ever to see twofold gains every two years.
First, we must observe that the anomaly is not that software progress is so slow but
that computer hardware progress is so fast. No other technology since civilization began
has seen six orders of magnitude price-performance gain in 30 years. In no other
technology can one choose to take the gain in either improved performance or in reduced
costs. These gains flow from the transformation of computer manufacture from an
assembly industry into a process industry.
Second, to see what rate of progress we can expect in software technology, let us
examine its difficulties. Following Aristotle, I divide them into essence, the difficulties
inherent in the nature of the software, and accidents, those difficulties that today attend
its production but that are not inherent.
The accidents I discuss in the next section. First let us consider the essence.
The essence of a software entity is a construct of interlocking concepts: data sets,
relationships among data items, algorithms, and invocations of functions. This essence is
abstract, in that the conceptual construct is the same under many different
representations. It is nonetheless highly precise and richly detailed.
I believe the hard part of building software to be the specification, design, and testing
of this conceptual construct, not the labor of representing it and testing the fidelity of the
representation. We still make syntax errors, to be sure; but they are fuzz compared to the
conceptual errors in most systems.
If this is true, building software will always be hard. There is inherently no silver
bullet.
Let us consider the inherent properties of this irreducible essence of modern software
systems: complexity, conformity, changeability, and invisibility.
Complexity. Software entities are more complex for their size than perhaps any other
human construct, because no two parts are alike (at least above the statement level). If
they are, we make the two similar parts into one, a subroutine, open or closed. In this
respect software systems differ profoundly from computers, buildings, or automobiles,
where repeated elements abound.
Digital computers are themselves more complex than most things people build; they
have very large numbers of states. This makes conceiving, describing, and testing them
hard. Software systems have orders of magnitude more states than computers do.
Likewise, a scaling-up of a software entity is not merely a repetition of the same
elements in larger size; it is necessarily an increase in the number of different elements.
In most cases, the elements interact with each other in some nonlinear fashion, and the
complexity of the whole increases much more than linearly.
The complexity of software is an essential property, not an accidental one. Hence
descriptions of a software entity that abstract away its complexity often abstract away its
essence. Mathematics and the physical sciences made great strides for three centuries by
constructing simplified models of complex phenomena, deriving properties from the
models, and verifying those properties experimentally. This worked because the
complexities ignored in the models were not the essential properties of the phenomena. It
does not work when the complexities are the essence.
Many of the classical problems of developing software products derive from this
essential complexity and its nonlinear increases with size. From the complexity comes
the difficulty of communication among team members, which leads to product flaws, cost
overruns, schedule delays. From the complexity comes the difficulty of enumerating,
much less understanding, all the possible states of the program, and from that comes the
unreliability. From the complexity of the functions comes the difficulty of invoking those
functions, which makes programs hard to use. From complexity of structure comes the
difficulty of extending programs to new functions without creating side effects. From the
complexity of structure come the unvisualized states that constitute security
trapdoors.
Not only technical problems but management problems as well come from the
complexity. This complexity makes overview hard, thus impeding conceptual integrity.
It makes it hard to find and control all the loose ends. It creates the tremendous learning
and understanding burden that makes personnel turnover a disaster.
Conformity. Software people are not alone in facing complexity. Physics deals with
terribly complex objects even at the “fundamental” particle level. The physicist labors
on, however, in a firm faith that there are unifying principles to be found, whether in
quarks or in unified field theories. Einstein repeatedly argued that there must be
simplified explanations of nature, because God is not capricious or arbitrary.
No such faith comforts the software engineer. Much of the complexity he must
master is arbitrary complexity, forced without rhyme or reason by the many human
institutions and systems to which his interfaces must conform. These differ from interface
to interface, and from time to time, not because of necessity but only because they were
designed by different people, rather than by God.
In many cases the software must conform because it has most recently come to the
scene. In others it must conform because it is perceived as the most conformable. But in
all cases, much complexity comes from conformation to other interfaces; this cannot be
simplified out by any redesign of the software alone.
Changeability. The software entity is constantly subject to pressures for change. Of
course, so are buildings, cars, and computers. But manufactured things are infrequently
changed after manufacture; they are superseded by later models, or essential changes are
incorporated in later serial-number copies of the same basic design. Callbacks of
automobiles are really quite infrequent; field changes of computers somewhat less so.
Both are much less frequent than modifications to fielded software.
Partly this is because the software in a system embodies its function, and the function
is the part that most feels the pressures of change. Partly it is because software can be
changed more easily: it is pure thought-stuff, infinitely malleable. Buildings do in fact
get changed, but the high costs of change, understood by all, serve to dampen the whim
of the changers.
All successful software gets changed. Two processes are at work. As a software
product is found to be useful, people try it in new cases at the edge of, or beyond, the
original domain. The pressures for extended function come chiefly from users who like
the basic function and invent new uses for it.
Second, successful software also survives beyond the normal life of the machine
vehicle for which it is first written. If not new computers, then at least new disks, new
displays, new printers come along; and the software must be conformed to its new
vehicles of opportunity.
In short, the software product is embedded in a cultural matrix of applications, users,
laws, and machine vehicles. These all change continually, and their changes inexorably
force change upon the software product.
Invisibility. Software is invisible and unvisualizable. Geometric abstractions are
powerful tools. The floor plan of a building helps both architect and client evaluate
spaces, traffic flows, and views. Contradictions become obvious, omissions can be
caught. Scale drawings of mechanical parts and stick-figure models of molecules,
although abstractions, serve the same purpose. A geometric reality is captured in a
geometric abstraction.
The reality of software is not inherently embedded in space. Hence it has no ready
geometric representation in the way that land has maps, silicon chips have diagrams,
computers have connectivity schematics. As soon as we attempt to diagram software
structure, we find it to constitute not one, but several, general directed graphs,
superimposed one upon another. The several graphs may represent the flow of control,
the flow of data, patterns of dependency, time sequence, name-space relationships. These
are usually not even planar, much less hierarchical. Indeed, one of the ways of
establishing conceptual control over such structure is to enforce link cutting until one or
more of the graphs becomes hierarchical. [2]
In spite of progress in restricting and simplifying the structures of software, they
remain inherently unvisualizable, thus depriving the mind of some of its most powerful
conceptual tools. This lack not only impedes the process of design within one mind, it
severely hinders communication among minds.
Past Breakthroughs Solved Accidental Difficulties
If we examine the three steps in software technology that have been most fruitful in the
past, we discover that each attacked a different major difficulty in building software, but
they have been the accidental, not the essential, difficulties. We can also see the natural
limits to the extrapolation of each such attack.
High-level languages. Surely the most powerful stroke for software productivity,
reliability, and simplicity has been the progressive use of high-level languages for
programming. Most observers credit that development with at least a factor of five in
productivity, and with concomitant gains in reliability, simplicity, and comprehensibility.
What does a high-level language accomplish? It frees a program from much of its
accidental complexity. An abstract program consists of conceptual constructs:
operations, data types, sequences, and communication. The concrete machine program is
concerned with bits, registers, conditions, branches, channels, disks, and such. To the
extent that the high-level language embodies the constructs wanted in the abstract
program and avoids all lower ones, it eliminates a whole level of complexity that was
never inherent in the program at all.
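
To make the contrast concrete, here is a minimal sketch (mine, not the paper's) in Ada, a language discussed later in this essay: the loop states the abstract intent of summing an array, while the accidental machine-level bookkeeping of registers, branches, and addressing is left entirely to the compiler.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Sum_Demo is
   type Vector is array (1 .. 5) of Integer;
   V     : constant Vector := (3, 1, 4, 1, 5);
   Total : Integer := 0;
begin
   for I in V'Range loop       -- no index registers, no branch targets,
      Total := Total + V (I);  -- no load/store sequencing to get wrong
   end loop;
   Put_Line ("Total =" & Integer'Image (Total));
end Sum_Demo;
```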
The most a high-level language can do is to furnish all the constructs the programmer
imagines in the abstract program. To be sure, the level of our sophistication in thinking
about data structures, data types, and operations is steadily rising, but at an ever-
decreasing rate. And language development approaches closer and closer to the
sophistication of users.
Moreover, at some point the elaboration of a high-level language becomes a burden
that increases, not reduces, the intellectual task of the user who rarely uses the esoteric
constructs.
Time-sharing. Most observers credit time-sharing with a major improvement in the
productivity of programmers and in the quality of their product, although not so large as
that brought by high-level languages.
Time-sharing attacks a distinctly different difficulty. Time-sharing preserves
immediacy, and hence enables us to maintain an overview of complexity. The slow
turnaround of batch programming means that we inevitably forget the minutiae, if not the
very thrust, of what we were thinking when we stopped programming and called for
compilation and execution. This interruption of consciousness is costly in time, for we
must refresh. The most serious effect may well be the decay of grasp of all that is going
on in a complex system.
[2] Parnas, D.L., “Designing software for ease of extension and contraction,” IEEE Trans. on SE, 5, 2 (March, 1979), pp. 128-138.
Slow turn-around, like machine-language complexities, is an accidental rather than an
essential difficulty of the software process. The limits of the contribution of time-sharing
derive directly. The principal effect is to shorten system response time. As it goes to
zero, at some point it passes the human threshold of noticeability, about 100 milliseconds.
Beyond that no benefits are to be expected.
Unified programming environments. Unix and Interlisp, the first integrated
programming environments to come into widespread use, are perceived to have improved
productivity by integral factors. Why?
They attack the accidental difficulties of using programs together, by providing
integrated libraries, unified file formats, and pipes and filters. As a result, conceptual
structures that in principle could always call, feed, and use one another can indeed easily
do so in practice.
This breakthrough in turn stimulated the development of whole toolbenches, since
each new tool could be applied to any programs by using the standard formats.
Because of these successes, environments are the subject of much of today’s software
engineering research. We will look at their promise and limitations in the next section.
Hopes for the Silver
Now let us consider the technical developments that are most often advanced as
potential silver bullets. What problems do they address? Are they the problems of
essence, or are they remainders of our accidental difficulties? Do they offer
revolutionary advances, or incremental ones?
Ada and other high-level language advances. One of the most touted recent
developments is the programming language Ada, a general-purpose, high-level language
of the 1980s. Ada indeed not only reflects evolutionary improvements in language
concepts but embodies features to encourage modern design and modularization
concepts. Perhaps the Ada philosophy is more of an advance than the Ada language, for
it is the philosophy of modularization, of abstract data types, of hierarchical structuring.
Ada is perhaps over-rich, the natural product of the process by which requirements were
laid on its design. That is not fatal, for subset working vocabularies can solve the
learning problem, and hardware advances will give us the cheap MIPS to pay for the
compiling costs. Advancing the structuring of software systems is indeed a very good
use for the increased MIPS our dollars will buy. Operating systems, loudly decried in the
1960s for their memory and cycle costs, have proved to be an excellent form in which to
use some of the MIPS and cheap memory bytes of the past hardware surge.
Nevertheless, Ada will not prove to be the silver bullet that slays the software
productivity monster. It is, after all, just another high-level language, and the biggest
payoff from such languages came from the first transition, up from the accidental
complexities of the machine into the more abstract statement of step-by-step solutions.
Once those accidents have been removed, the remaining ones are smaller, and the payoff
from their removal will surely be less.
I predict that a decade from now, when the effectiveness of Ada is assessed, it will be
seen to have made a substantial difference, but not because of any particular language
feature, nor indeed because of all of them combined. Neither will the new Ada
environment prove to be the cause of the improvements. Ada’s greatest contribution will
be that switching to it occasioned training programmers in modern software design
techniques.
Object-oriented programming. Many students of the art hold out more hope for object-
oriented programming than for any of the other technical fads of the day. [3] I am among
them. Mark Sherman of Dartmouth notes that we must be careful to distinguish two
separate ideas that go under that name: abstract data types and hierarchical types, also
called classes. The concept of the abstract data type is that an object’s type should be
defined by a name, a set of proper values, and a set of proper operations, rather than its
storage structure, which should be hidden. Examples are Ada packages (with private
types) or Modula’s modules.
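
As a concrete illustration of the abstract-data-type idea, here is a minimal sketch of an Ada package with a private type (the Counter example is invented, not taken from the paper): clients see a name, a set of proper values, and a set of proper operations, while the storage structure stays hidden behind the private part.

```ada
package Counters is
   type Counter is private;                      -- a name and proper values
   procedure Increment (C : in out Counter);     -- proper operations
   function Value (C : Counter) return Natural;
private
   type Counter is record                        -- storage structure, hidden from clients
      Count : Natural := 0;
   end record;
end Counters;

package body Counters is
   procedure Increment (C : in out Counter) is
   begin
      C.Count := C.Count + 1;
   end Increment;

   function Value (C : Counter) return Natural is
   begin
      return C.Count;
   end Value;
end Counters;
```

A client can declare a Counter and call Increment and Value, but any attempt to touch the Count field directly is rejected by the compiler, so the representation can later change without disturbing client code.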
Hierarchical types, such as Simula-67’s classes, allow the definition of general
interfaces that can be further refined by providing subordinate types. The two concepts
are orthogonalthere may be hierarchies without hiding and hiding without hierarchies.
Both concepts represent real advances in the art of building software.
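
For hierarchical types, a minimal sketch follows, using Ada 95 tagged types (a feature that postdates this 1986 paper; Simula-67 classes play the same role). The Shape and Circle names are invented for illustration: a general interface is refined by a subordinate type, and no hiding is involved, since the two ideas are orthogonal.

```ada
package Shapes is
   type Shape is abstract tagged null record;            -- the general interface
   function Area (S : Shape) return Float is abstract;

   type Circle is new Shape with record                  -- a subordinate refinement
      Radius : Float := 0.0;
   end record;
   function Area (C : Circle) return Float;
end Shapes;

package body Shapes is
   function Area (C : Circle) return Float is
   begin
      return 3.14159 * C.Radius * C.Radius;
   end Area;
end Shapes;
```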
Each removes one more accidental difficulty from the process, allowing the designer
to express the essence of his design without having to express large amounts of syntactic
material that add no new information content. For both abstract types and hierarchical
types, the result is to remove a higher-order sort of accidental difficulty and allow a
higher-order expression of design.
Nevertheless, such advances can do no more than to remove all the accidental
difficulties from the expression of the design. The complexity of the design itself is
essential; and such attacks make no change whatever in that. An order-of-magnitude gain
can be made by object-oriented programming only if the unnecessary underbrush of type
specification remaining today in our programming language is itself responsible for nine-
tenths of the work involved in designing a program product. I doubt it.
Artificial intelligence. Many people expect advances in artificial intelligence to provide
the revolutionary breakthrough that will give order-of-magnitude gains in software
productivity and quality. [4] I do not. To see why, we must dissect what is meant by
“artificial intelligence” and then see how it applies.
Parnas has clarified the terminological chaos:
Two quite different definitions of AI are in common use today. AI-1: The use of
computers to solve problems that previously could only be solved by applying human
intelligence. AI-2: The use of a specific set of programming techniques known as
heuristic or rule-based programming. In this approach human experts are studied to
determine what heuristics or rules of thumb they use in solving problems. . . . The
program is designed to solve a problem the way that humans seem to solve it.
[3] Booch, G., “Object-oriented design,” in Software Engineering with Ada. Menlo Park, Calif.: Benjamin Cummings, 1983.
[4] Mostow, J., ed., Special Issue on Artificial Intelligence and Software Engineering, IEEE Trans. on SE, 11, 11 (Nov. 1985).
The first definition has a sliding meaning. . . . Something can fit the definition of AI-1
today but, once we see how the program works and understand the problem, we will
not think of it as AI anymore. . . . Unfortunately I cannot identify a body of technology
that is unique to this field. . . . Most of the work is problem-specific, and some
abstraction or creativity is required to see how to transfer it. [5]
I agree completely with this critique. The techniques used for speech recognition
seem to have little in common with those used for image recognition, and both are
different from those used in expert systems. I have a hard time seeing how image
recognition, for example, will make any appreciable difference in programming practice.
The same is true of speech recognition. The hard thing about building software is deciding
what to say, not saying it. No facilitation of expression can give more than marginal
gains.
Expert systems technology, AI-2, deserves a section of its own.
Expert systems. The most advanced part of the artificial intelligence art, and the most
widely applied, is the technology for building expert systems. Many software scientists
are hard at work applying this technology to the software-building environment. [6] What is
the concept, and what are the prospects?
An expert system is a program containing a generalized inference engine and a rule
base, designed to take input data and assumptions and explore the logical consequences
through the inferences derivable from the rule base, yielding conclusions and advice, and
offering to explain its results by retracing its reasoning for the user. The inference
engines typically can deal with fuzzy or probabilistic data and rules in addition to purely
deterministic logic.
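
A toy version of that architecture may make the separation clearer. The sketch below is my construction, not any system Brooks describes: a two-rule rule base held as plain data, driven by a few lines of "inference engine" that keep firing rules until no new conclusions appear. The fact and rule names are invented for illustration.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Tiny_Expert is
   type Fact is (Test_Fails, Input_Invalid, Recent_Change,
                 Suspect_New_Code, Suspect_Input_Handling);

   Known : array (Fact) of Boolean := (others => False);

   type Rule is record
      If_A, If_B : Fact;  -- both premises must already be established
      Then_C     : Fact;  -- conclusion added to the fact base
   end record;

   --  The rule base is data, kept separate from the engine below.
   Rules : constant array (1 .. 2) of Rule :=
     ((Test_Fails, Recent_Change, Suspect_New_Code),
      (Test_Fails, Input_Invalid, Suspect_Input_Handling));

   Changed : Boolean := True;
begin
   --  Input data and assumptions supplied to the advisor.
   Known (Test_Fails)    := True;
   Known (Recent_Change) := True;

   --  The engine: apply rules until no new conclusion can be drawn.
   while Changed loop
      Changed := False;
      for I in Rules'Range loop
         if Known (Rules (I).If_A) and then Known (Rules (I).If_B)
           and then not Known (Rules (I).Then_C)
         then
            Known (Rules (I).Then_C) := True;
            Changed := True;
            Put_Line ("Concluded: " & Fact'Image (Rules (I).Then_C));
         end if;
      end loop;
   end loop;
end Tiny_Expert;
```

Real inference engines of course handle variables, uncertainty, and explanation; the point here is only that the changeable, application-specific knowledge lives in the rule table, not in the control code.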
Such systems offer some clear advantages over programmed algorithms for arriving
at the same solutions to the same problems:
Inference engine technology is developed in an application-independent way, and then
applied to many uses. One can justify much more effort on the inference engines.
Indeed, that technology is well advanced.
The changeable parts of the application-peculiar materials are encoded in the rule base
in a uniform fashion, and tools are provided for developing, changing, testing, and
documenting the rule base. This regularizes much of the complexity of the application
itself.
Edward Feigenbaum says that the power of such systems does not come from ever-
fancier inference mechanisms, but rather from ever-richer knowledge bases that reflect
the real world more accurately. I believe the most important advance offered by the
technology is the separation of the application complexity from the program itself.
How can this be applied to the software task? In many ways: suggesting interface
rules, advising on testing strategies, remembering bug-type frequencies, offering
optimization hints, etc.
[5] Parnas, D.L., “Software aspects of strategic defense systems,” Communications of the ACM, 28, 12 (Dec., 1985), pp. 1326-1335. Also in American Scientist, 73, 5 (Sept.-Oct., 1985), pp. 432-440.
[6] Balzer, R., “A 15-year perspective on automatic programming,” in Mostow, op. cit.
Consider an imaginary testing advisor, for example. In its most rudimentary form,
the diagnostic expert system is very like a pilot’s checklist, fundamentally offering
suggestions as to possible causes of difficulty. As the rule base is developed, the
suggestions become more specific, taking more sophisticated account of the trouble
symptoms reported. One can visualize a debugging assistant that offers very generalized
suggestions at first, but as more and more system structure is embodied in the rule base,
becomes more and more particular in the hypotheses it generates and the tests it
recommends. Such an expert system may depart most radically from the conventional
ones in that its rule base should probably be hierarchically modularized in the same way
the corresponding software product is, so that as the product is modularly modified, the
diagnostic rule base can be modularly modified as well.
The work required to generate the diagnostic rules is work that will have to be done
anyway in generating the set of test cases for the modules and for the system. If it is done
in a suitably general manner, with a uniform structure for rules and a good inference
engine available, it may actually reduce the total labor of generating bring-up test cases,
as well as helping in lifelong maintenance and modification testing. In the same way, we
can postulate other advisors, probably many of them and probably simple ones, for the
other parts of the software construction task.
Many difficulties stand in the way of early realization of useful expert advisors to the
program developer. A crucial part of our imaginary scenario is the development of easy
ways to get from program structure specification to the automatic or semi-automatic
generation of diagnostic rules. Even more difficult and important is the twofold task of
knowledge acquisition: finding articulate, self-analytical experts who know why they do
things; and developing efficient techniques for extracting what they know and distilling it
into rule bases. The essential prerequisite for building an expert system is to have an
expert.
The most powerful contribution of expert systems will surely be to put at the service
of the inexperienced programmer the experience and accumulated wisdom of the best
programmers. This is no small contribution. The gap between the best software
engineering practice and the average practice is very wide, perhaps wider than in any
other engineering discipline. A tool that disseminates good practice would be important.
“Automatic” programming. For almost 40 years, people have been anticipating and
writing about “automatic programming”, the generation of a program for solving a
problem from a statement of the problem specifications. Some people today write as if
they expected this technology to provide the next breakthrough. [7]
Parnas implies that the term is used for glamour and not semantic content, asserting,
In short, automatic programming always has been a euphemism for programming
with a higher-level language than was presently available to the programmer. [8]
He argues, in essence, that in most cases it is the solution method, not the problem,
whose specification has to be given.
[7] Mostow, op. cit.
[8] Parnas, 1985, op. cit.
Exceptions can be found. The technique of building generators is very powerful, and
it is routinely used to good advantage in programs for sorting. Some systems for
integrating differential equations have also permitted direct specification of the problem.
The system assessed the parameters, chose from a library of methods of solution, and
generated the programs.
These applications have very favorable properties:
The problems are readily characterized by relatively few parameters.
There are many known methods of solution to provide a library of alternatives.
Extensive analysis has led to explicit rules for selecting solution techniques, given
problem parameters.
It is hard to see how such techniques generalize to the wider world of the ordinary
software system, where cases with such neat properties are the exception. It is hard even
to imagine how this breakthrough in generalization could conceivably occur.
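
The "explicit rules for selecting solution techniques" can be pictured with a small sketch (mine, and deliberately simplistic): the problem is characterized by a couple of parameters, and a rule chooses a method from a fixed library. The parameter names, thresholds, and methods are all invented for illustration.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Select_Method is
   type Sort_Method is (Insertion_Sort, Quicksort, External_Merge_Sort);

   --  Selection rules over a few problem parameters.
   function Choose (N : Natural; Fits_In_Memory : Boolean) return Sort_Method is
   begin
      if not Fits_In_Memory then
         return External_Merge_Sort;  -- rule: data larger than main memory
      elsif N <= 32 then
         return Insertion_Sort;       -- rule: very small inputs
      else
         return Quicksort;            -- rule: the general in-memory case
      end if;
   end Choose;
begin
   Put_Line ("Chosen: " & Sort_Method'Image (Choose (1_000_000, True)));
end Select_Method;
```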
Graphical programming. A favorite subject for Ph.D. dissertations in software
engineering is graphical, or visual, programming, the application of computer graphics to
software design. [9] Sometimes the promise of such an approach is postulated from the
analogy with VLSI chip design, where computer graphics plays so fruitful a role.
Sometimes the approach is justified by considering flowcharts as the ideal program
design medium, and providing powerful facilities for constructing them.
Nothing even convincing, much less exciting, has yet emerged from such efforts. I
am persuaded that nothing will.
In the first place, as I have argued elsewhere, the flow chart is a very poor abstraction
of software structure. [10] Indeed, it is best viewed as Burks, von Neumann, and
Goldstine’s attempt to provide a desperately needed high-level control language for their
proposed computer. In the pitiful, multipage, connection-boxed form to which the flow
chart has today been elaborated, it has proved to be essentially useless as a design tool;
programmers draw flow charts after, not before, writing the programs they describe.
Second, the screens of today are too small, in pixels, to show both the scope and the
resolution of any serious detailed software diagram. The so-called “desktop metaphor”
of today’s workstation is instead an “airplane-seat” metaphor. Anyone who has shuffled a
lapful of papers while seated in a coach between two portly passengers will recognize the
difference: one can see only a very few things at once. The true desktop provides
overview of and random access to a score of pages. Moreover, when fits of creativity run
strong, more than one programmer or writer has been known to abandon the desktop for
the more spacious floor. The hardware technology will have to advance quite
substantially before the scope of our scopes is sufficient to the software design task.
More fundamentally, as I have argued above, software is very difficult to visualize.
Whether we diagram control flow, variable scope nesting, variable cross-references, data
flow, hierarchical data structures, or whatever, we feel only one dimension of the
intricately interlocked software elephant. If we superimpose all the diagrams generated
by the many relevant views, it is difficult to extract any global overview. The VLSI
[9] Raeder, G., “A survey of current graphical programming techniques,” in R. B. Grafton and T. Ichikawa, eds., Special Issue on Visual Programming, Computer, 18, 8 (Aug., 1985), pp. 11-25.
[10] Brooks 1995, op. cit., chapter 15.
analogy is fundamentally misleading: a chip design is a layered two-dimensional object
whose geometry reflects its essence. A software system is not.
Program verification. Much of the effort in modern programming goes into the testing
and repair of bugs. Is there perhaps a silver bullet to be found by eliminating the errors at
the source, in the system design phase? Can both productivity and product reliability be
radically enhanced by following the profoundly different strategy of proving designs
correct before the immense effort is poured into implementing and testing them?
I do not believe we will find the magic here. Program verification is a very powerful
concept, and it will be very important for such things as secure operating system kernels.
The technology does not promise, however, to save labor. Verifications are so much
work that only a few substantial programs have ever been verified.
Program verification does not mean error-proof programs. There is no magic here,
either. Mathematical proofs also can be faulty. So whereas verification might reduce the
program-testing load, it cannot eliminate it.
More seriously, even perfect program verification can only establish that a program
meets its specification. The hardest part of the software task is arriving at a complete and
consistent specification, and much of the essence of building a program is in fact the
debugging of the specification.
Environments and tools. How much more gain can be expected from the exploding
researches into better programming environments? One’s instinctive reaction is that the
big-payoff problems were the first attacked, and have been solved: hierarchical file
systems, uniform file formats so as to have uniform program interfaces, and generalized
tools. Language-specific smart editors are developments not yet widely used in practice,
but the most they promise is freedom from syntactic errors and simple semantic errors.
Perhaps the biggest gain yet to be realized in the programming environment is the use
of integrated database systems to keep track of the myriads of details that must be
recalled accurately by the individual programmer and kept current in a group of
collaborators on a single system.
Surely this work is worthwhile, and surely it will bear some fruit in both productivity
and reliability. But by its very nature, the return from now on must be marginal.
Workstations. What gains are to be expected for the software art from the certain and
rapid increase in the power and memory capacity of the individual workstation? Well,
how many MIPS can one use fruitfully? The composition and editing of programs and
documents is fully supported by today’s speeds. Compiling could stand a boost, but a
factor of 10 in machine speed would surely leave think-time the dominant activity in the
programmer’s day. Indeed, it appears to be so now.
More powerful workstations we surely welcome. Magical enhancements from them
we cannot expect.
Promising Attacks on the Conceptual Essence
Even though no technological breakthrough promises to give the sort of magical
results with which we are so familiar in the hardware area, there is both an abundance of
good work going on now, and the promise of steady, if unspectacular, progress.
All of the technological attacks on the accidents of the software process are
fundamentally limited by the productivity equation:
$$\text{Time of task} = \sum_i (\text{Frequency})_i \times (\text{Time})_i$$
If, as I believe, the conceptual components of the task are now taking most of the
time, then no amount of activity on the task components that are merely the expression of
the concepts can give large productivity gains.
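
To put numbers on that claim (the figures are mine, spelling out the abstract's 9/10 remark): if a fraction *a* of total effort goes to the accidental activities, then shrinking all of them to zero time leaves the essential fraction untouched, so the best possible speedup is

$$
\text{speedup} = \frac{1}{1 - a}, \qquad a = 0.5 \Rightarrow 2\times, \qquad a = 0.9 \Rightarrow 10\times.
$$

An order-of-magnitude gain from attacking the accidents alone therefore requires that they already consume more than nine-tenths of all effort.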
Hence we must consider those attacks that address the essence of the software
problem, the formulation of these complex conceptual structures. Fortunately, some of
these are very promising.
Buy versus build. The most radical possible solution for constructing software is not to
construct it at all.
Every day this becomes easier, as more and more vendors offer more and better
software products for a dizzying variety of applications. While we software engineers
have labored on production methodology, the personal computer revolution has created
not one, but many, mass markets for software. Every newsstand carries monthly
magazines which, sorted by machine type, advertise and review dozens of products at
prices from a few dollars to a few hundred dollars. More specialized sources offer very
powerful products for the workstation and other Unix markets. Even software tools and
environments can be bought off-the-shelf. I have elsewhere proposed a marketplace for
individual modules.
Any such product is cheaper to buy than to build afresh. Even at a cost of $100,000,
a purchased piece of software is costing only about as much as one programmer-year.
And delivery is immediate! Immediate at least for products that really exist, products
whose developer can refer the prospect to a happy user. Moreover, such products tend to
be much better documented and somewhat better maintained than homegrown software.
The development of the mass market is, I believe, the most profound long-run trend
in software engineering. The cost of software has always been development cost, not
replication cost. Sharing that cost among even a few users radically cuts the per-user
cost. Another way of looking at it is that the use of n copies of a software system
effectively multiplies the productivity of its developers by n. That is an enhancement of
the productivity of the discipline and of the nation.
The key issue, of course, is applicability. Can I use an available off-the-shelf package
to do my task? A surprising thing has happened here. During the 1950s and 1960s, study
after study showed that users would not use off-the-shelf packages for payroll, inventory
control, accounts receivable, etc. The requirements were too specialized, the case-to-case
variation too high. During the 1980s, we find such packages in high demand and
widespread use. What has changed?
Not really the packages. They may be somewhat more generalized and somewhat
more customizable than formerly, but not much. Not really the applications, either. If
anything, the business and scientific needs of today are more diverse, more complicated
than those of 20 years ago.
The big change has been in the hardware/software cost ratio. The buyer of a $2-
million machine in 1960 felt that he could afford $250,000 more for a customized payroll
program, one that slipped easily and nondisruptively into the computer-hostile social
environment. Buyers of $50,000 office machines today cannot conceivably afford
customized payroll programs; so they adapt their payroll procedures to the packages
available. Computers are now so commonplace, if not yet so beloved, that the adaptations
are accepted as a matter of course.
There are dramatic exceptions to my argument that the generalization of the software
packages has changed little over the years: electronic spreadsheets and simple database
systems. These powerful tools, so obvious in retrospect and yet so late appearing, lend
themselves to myriad uses, some quite unorthodox. Articles and even books now abound
on how to tackle unexpected tasks with the spreadsheet. Large numbers of applications
that would formerly have been written as custom programs in Cobol or Report Program
Generator are now routinely done with these tools.
Many users now operate their own computers day in and day out on varied
applications without ever writing a program. Indeed, many of these users cannot write
new programs for their machines, but they are nevertheless adept at solving new
problems with them.
I believe the single most powerful software productivity strategy for many
organizations today is to equip the computer-naïve intellectual workers on the firing line
with personal computers and good generalized writing, drawing, file and spreadsheet
programs, and turn them loose. The same strategy, with simple programming
capabilities, will also work for hundreds of laboratory scientists.
Requirements refinement and rapid prototyping. The hardest single part of building a
software system is deciding precisely what to build. No other part of the conceptual work
is so difficult as establishing the detailed technical requirements, including all the
interfaces to people, to machines, and to other software systems. No other part of the
work so cripples the resulting system if done wrong. No other part is more difficult to
rectify later.
Therefore the most important function that software builders do for their clients is the
iterative extraction and refinement of the product requirements. For the truth is, the
clients do not know what they want. They usually do not know what questions must be
answered, and they almost never have thought of the problem in the detail that must be
specified. Even the simple answer, “Make the new software system work like our old
manual information-processing system,” is in fact too simple. Clients never want
exactly that. Complex software systems are, moreover, things that act, that move, that
work. The dynamics of that action are hard to imagine. So in planning any software
activity, it is necessary to allow for an extensive iteration between the client and the
designer as part of the system definition.
I would go a step further and assert that it is really impossible for clients, even those
working with software engineers, to specify completely, precisely, and correctly the exact
requirements of a modern software product before having built and tried some versions of
the product they are specifying.
Therefore one of the most promising of the current technological efforts, and one
which attacks the essence, not the accidents, of the software problem, is the development
of approaches and tools for rapid prototyping of systems as part of the iterative
specification of requirements.
A prototype software system is one that simulates the important interfaces and
performs the main functions of the intended system, while not being necessarily bound by
the same hardware speed, size, or cost constraints. Prototypes typically perform the
mainline tasks of the application, but make no attempt to handle the exceptions, respond
correctly to invalid inputs, abort cleanly, etc. The purpose of the prototype is to make
real the conceptual structure specified, so that the client can test it for consistency and
usability.
Much of present-day software acquisition procedures rests upon the assumption that
one can specify a satisfactory system in advance, get bids for its construction, have it
built, and install it. I think this assumption is fundamentally wrong, and that many
software acquisition problems spring from that fallacy. Hence they cannot be fixed
without fundamental revision, one that provides for iterative development and
specification of prototypes and products.
Incremental developmentgrow, not build, software. I still remember the jolt I felt in
1958 when I first heard a friend talk about building a program, as opposed to writing one.
In a flash he broadened my whole view of the software process. The metaphor shift was
powerful, and accurate. Today we understand how like other building processes the
construction of software is, and we freely use other elements of the metaphor, such as
specifications, assembly of components, and scaffolding.
The building metaphor has outlived its usefulness. It is time to change again. If, as I
believe, the conceptual structures we construct today are too complicated to be accurately
specified in advance, and too complex to be built faultlessly, then we must take a
radically different approach.
Let us turn to nature and study complexity in living things, instead of just the dead
works of man. Here we find constructs whose complexities thrill us with awe. The brain
alone is intricate beyond mapping, powerful beyond imitation, rich in diversity, self-
protecting, and self-renewing. The secret is that it is grown, not built.
So it must be with our software systems. Some years ago Harlan Mills proposed that
any software system should be grown by incremental development. [11] That is, the system
should first be made to run, even though it does nothing useful except call the proper set
of dummy subprograms. Then, bit-by-bit it is fleshed out, with the subprograms in turn
being developed into actions or calls to empty stubs in the level below.
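
A minimal sketch of that growth pattern, with names invented for illustration: the top-level program runs on day one, calling dummy subprograms, and each stub is later developed into real actions or into calls on further stubs one level down.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Grow_Demo is

   --  Dummy subprograms: the system runs before any of them does real work.
   procedure Read_Input is
   begin
      Put_Line ("Read_Input: stub, to be fleshed out in a later increment");
   end Read_Input;

   procedure Compute is
   begin
      Put_Line ("Compute: stub");
   end Compute;

   procedure Write_Report is
   begin
      Put_Line ("Write_Report: stub");
   end Write_Report;

begin
   --  The control structure is real from the start; growth happens in the stubs.
   Read_Input;
   Compute;
   Write_Report;
end Grow_Demo;
```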
I have seen the most dramatic results since I began urging this technique on the
project builders in my software engineering laboratory class. Nothing in the past decade
has so radically changed my own practice, or its effectiveness. The approach necessitates
top-down design, for it is a top-down growing of the software. It allows easy
backtracking. It lends itself to early prototypes. Each added function and new provision
for more complex data or circumstances grows organically out of what is already there.
[11] Mills, H. D., “Top-down programming in large systems,” Debugging Techniques in Large Systems, R. Rustin, ed., Englewood Cliffs, N.J., Prentice-Hall, 1971.
The morale effects are startling. Enthusiasm jumps when there is a running system,
even a simple one. Efforts redouble when the first picture from a new graphics software
system appears on the screen, even if it is only a rectangle. One always has, at every
stage in the process, a working system. I find that teams can grow much more complex
entities in four months than they can build.
The same benefits can be realized on large projects as on my small ones. [12]
Great designers. The central question of how to improve the software art centers, as it
always has, on people.
We can get good designs by following good practices instead of poor ones. Good
design practices can be taught. Programmers are among the most intelligent part of the
population, so they can learn good practice. Thus a major thrust in the United States is to
promulgate good modern practice. New curricula, new literature, new organizations such
as the Software Engineering Institute, all have come into being in order to raise the level
of our practice from poor to good. This is entirely proper.
Nevertheless, I do not believe we can make the next step upward in the same way.
Whereas the difference between poor conceptual designs and good ones may lie in the
soundness of design method, the difference between good designs and great ones surely
does not. Great designs come from great designers. Software construction is a creative
process. Sound methodology can empower and liberate the creative mind; it cannot
enflame or inspire the drudge.
The differences are not minor; it is rather like Salieri and Mozart. Study after study
shows that the very best designers produce structures that are faster, smaller, simpler,
cleaner, and produced with less effort. The differences between the great and the average
approach an order of magnitude.
A little retrospection shows that although many fine, useful software systems have
been designed by committees and built by multipart projects, those software systems that
have excited passionate fans are those that are the products of one or a few designing
minds, great designers. Consider Unix, APL, Pascal, Modula, the Smalltalk interface,
even Fortran; and contrast with Cobol, PL/I, Algol, MVS/370, and MS-DOS (Fig. 1).

| Yes | No |
| --- | --- |
| Unix | Cobol |
| APL | PL/I |
| Pascal | Algol |
| Modula | MVS/370 |
| Smalltalk | MS-DOS |
| Fortran | |

Fig. 1 Exciting products
Hence, although I strongly support the technology transfer and curriculum
development efforts now underway, I think the most important single effort we can
mount is to develop ways to grow great designers.
[12] Boehm, B. W., “A spiral model of software development and enhancement,” Computer, 20, 5 (May, 1985), pp. 43-57.
No software organization can ignore this challenge. Good managers, scarce though
they be, are no scarcer than good designers. Great designers and great managers are both
very rare. Most organizations spend considerable effort in finding and cultivating the
management prospects; I know of none that spends equal effort in finding and developing
the great designers upon whom the technical excellence of the products will ultimately
depend.
My first proposal is that each software organization must determine and proclaim that
great designers are as important to its success as great managers are, and that they can be
expected to be similarly nurtured and rewarded. Not only salary, but the perquisites of
recognition (office size, furnishings, personal technical equipment, travel funds, staff
support) must be fully equivalent.
How to grow great designers? Space does not permit a lengthy discussion, but some
steps are obvious:
Systematically identify top designers as early as possible. The best are often not the
most experienced.
Assign a career mentor to be responsible for the development of the prospect, and keep
a careful career file.
Devise and maintain a career development plan for each prospect, including carefully
selected apprenticeships with top designers, episodes of advanced formal education,
and short courses, all interspersed with solo design and technical leadership
assignments.
Provide opportunities for growing designers to interact with and stimulate each other.

Discussion

Brooks is alluding to Moore's Law. Moore's Law, formulated by Gordon Moore in 1965, observes that the number of transistors on a microchip doubles approximately every two years, while the cost per unit of computing falls. This prediction not only described the exponential increase in computing power but also set expectations for rapid technological advancement. It became a driving force for the industry, pushing for continual, predictable improvements in processing speed and efficiency. Brooks points out, however, that software productivity has seen no such predictable, consistent advancement: unlike hardware, software complexity does not yield to a formulaic progression, hence no "Moore's Law" for software development.

Aristotle differentiated between the "essential" qualities that make something what it fundamentally is (its essence) and "accidental" qualities that it might have but that do not define its essence (i.e. a property that it happens to have but that it could lack).

Fred Brooks would have spent a considerable amount of time programming in assembly for the IBM 360, thus witnessing a phenomenal evolution with the emergence of high-level languages. By the time he wrote this paper, the initial versions of C++ and Objective-C had recently been released. Here is a rough timeline of some key high-level languages and when they were first introduced:

- 1957: Fortran
- 1959: COBOL
- 1959: LISP
- 1964: BASIC
- 1964: PL/I
- 1969: B
- 1970: Pascal
- 1972: C
- 1972: Smalltalk
- 1972: Prolog
- 1983: Ada
- 1983: C++
- 1984: Objective-C

Germ theory, developed in the late 19th century, marks a pivotal shift in medical science from speculative causes of disease to evidence-based understanding. Pioneered by scientists like Louis Pasteur and Robert Koch, it replaced theories like the miasma theory of disease, which held that diseases such as cholera were caused by miasma, a noxious form of "bad air", with the knowledge that microorganisms are responsible for many illnesses.

Edward Feigenbaum was a pioneering figure in the field of artificial intelligence (AI), known for his work on expert systems. He co-developed the first expert system, DENDRAL (1965), a program designed to apply knowledge of organic chemistry to identify the structure of chemical compounds. His work helped to demonstrate the potential for AI applications in real-world problems, leading to the development of other expert systems designed to replicate the decision-making abilities of human experts.

InterLisp was an integrated programming environment (IPE) built around a version of Lisp. Unlike many languages that offered only the core language features, InterLisp provided a suite of development tools integrated into its environment. These included an editor, debugger, and interactive execution capabilities, which supported a rapid, iterative programming style. UNIX, on the other hand, is an operating system rather than a language, but it has been historically significant for its comprehensive suite of software development tools, like compilers, text editors, and debuggers, along with its powerful shell scripting capabilities. The combination of these tools within the UNIX environment allowed it to act as an integrated environment for software development.

MIPS stands for million instructions per second, a measure of raw processor speed.

VLSI stands for Very Large Scale Integration, a process by which integrated circuits (ICs) are created by combining millions of transistors into a single chip.
The term emerged in the 1970s and represents a significant leap forward from earlier integration levels, such as Small Scale Integration (SSI) and Medium Scale Integration (MSI). VLSI chip design involves the use of computer-aided design (CAD) tools to create the schematic diagrams and layouts of these complex circuits. Given the intricacy and the miniaturization of the components involved, computer graphics play a crucial role in the design and visualization of VLSI circuits.

![](https://i.imgur.com/kVYswHD.png)

Fred Brooks (born in 1931) is an American computer scientist. Brooks, best known as the author of "The Mythical Man-Month", earned a physics degree from Duke University and then entered the newly created computer science program at Harvard University, where he earned a PhD in 1956. At Harvard he was a student of Howard Aiken, who during WWII had developed the Harvard Mark I, the first automatic digital calculator built in the United States. After graduation Brooks was recruited by IBM. There he would eventually manage the development of OS/360, the operating system of IBM System/360, a family of mainframe computers introduced in 1964. The Mythical Man-Month describes many of the lessons Brooks learned during this project, which was perhaps the largest operating system project of its time.

![](https://i.imgur.com/WOecNKz.jpg)

*Fred Brooks*

In "No Silver Bullet: Essence and Accident of Software Engineering," Frederick P. Brooks, Jr. delivers a critical examination of the persistent challenges in software development. Confronting the era's optimism for quick advancements, he presents a stark view: the complexity of software engineering cannot be easily conquered. Distinguishing between the immutable complexities ("essence") and the incidental ones ("accidents"), Brooks' analysis confronts the industry's pursuit of a panacea. The paper, still relevant decades after its publication, warns against oversimplifying the intricate nature of software engineering, underlining that true progress hinges on grappling with its fundamental difficulties. Celebrated for its enduring insights, Brooks' work remains a touchstone in the field, inviting continued reflection on the intricate realities of software development.

Time-sharing systems marked a significant shift from the era of batch processing, fundamentally changing the way computers were used. Batch processing required programmers to submit jobs to operators who would queue them for sequential execution. This process could take hours or even days before the programmer received the output, with no opportunity for real-time interaction or error correction during execution. Time-sharing, introduced in the late 1950s and becoming more widespread in the 1960s, allowed multiple users to interact with a computer concurrently, sharing processor time so that the system could handle multiple interactive jobs at once. This capability dramatically improved productivity by reducing the wait times associated with batch processing. It facilitated an interactive environment where programmers could directly engage with their running programs, make quick alterations, and immediately see the results. This direct interaction was pivotal in managing the complexity of software development as it allowed for faster iteration, debugging, and understanding of complex systems.
Time-sharing effectively transformed computers from being mere computational devices to interactive tools, enhancing the problem-solving capabilities of programmers and making the development process more efficient and intuitive.

In the context of their work with some of the earliest computing systems like the ENIAC and EDVAC, Goldstine, von Neumann and Burks presented flowcharts as a formal method to describe the sequence of operations for solving computational problems. They wrote a report entitled "Planning and Coding of Problems for an Electronic Computing Instrument", which can be found [here](https://www.ias.edu/sites/default/files/library/pdfs/ecp/planningcodingof0103inst.pdf).

![](https://i.imgur.com/hD8Pok9.png)

The Ada programming language, named after English mathematician Ada Lovelace, was designed to support large, long-lived applications where safety and security are critical. Its features include strong typing, modularity, run-time checking, parallel processing, exception handling, and generics. Ada's support for real-time systems and concurrent programming, and its emphasis on software engineering, made it a rather innovative language when it was introduced in the 1980s.

```
with Ada.Text_IO;         use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;

procedure Fibonacci is
   A, B, Temp : Integer := 0;
   N          : Integer := 10;  -- Set N to the number of Fibonacci numbers to print
begin
   A := 0;
   B := 1;
   for I in 1 .. N loop
      Put (A'Img);
      Temp := A + B;
      A := B;
      B := Temp;
   end loop;
   New_Line;
end Fibonacci;
```

> How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.

At first reading it is not obvious what must take "more than 9/10" of all effort. The point is arithmetic: only if the accidental work already accounts for more than nine-tenths of total effort would shrinking it to zero time make the whole task ten times faster. If a project is dominated by accidental difficulties, say building an application on a newly introduced machine with primitive tools, then removing those difficulties would improve productivity enormously. But for a typical project today, say an ordinary Android application, coding or compiling faster yields only modest gains, because most of the effort goes into the essential problem of deciding what the software should do.