human control to machine control to be quite sudden and
rapid: the possibilities of computing overhang and recursive
self-improvement.
What is computing overhang? Suppose that computing
power continues to double according to Moore’s law, but
figuring out the algorithms for human-like general
intelligence proves to be fiendishly difficult. When the software
for general intelligence is finally realized, there could exist
a ‘computing overhang’: tremendous amounts of cheap
computing power available to run human-level artificial
intelligences (AIs). AIs could be copied across the hardware
base, causing the AI population to quickly surpass the
human population. These digital minds might run thou-
sands or millions of times faster than human minds. AIs
might have further advantages, such as superior
communication speed, transparency and self-editability, goal
coordination, and improved rationality.
And what is recursive self-improvement? We can predict
that advanced AIs will have instrumental goals to preserve
themselves, acquire resources, and self-improve, because
those goals are useful intermediaries to the achievement of
almost any set of final goals. Thus, when we build an AI
that is as skilled as we are at the task of designing AI systems,
we may thereby initiate a rapid, AI-motivated cascade of self-
improvement cycles. Now when the AI improves itself, it
improves the intelligence that does the improving, quickly
leaving the human level of intelligence far behind.
A superintelligent AI might thus quickly become superior
to humanity in harvesting resources, manufacturing,
scientific discovery, social aptitude, and strategic action, among
other abilities. We might not be in a position to negotiate
with it or its descendants, just as chimpanzees are not in a
position to negotiate with humans.
At the same time, the convergent instrumental goal of
acquiring resources poses a threat to humanity, for it
means that a superintelligent machine with almost any final
goal (say, of solving the Riemann hypothesis) would want
to take the resources we depend on for its own use. Such
Muehlhauser and Bostrom Why We Need Friendly AI