A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence
August 31, 1955
John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon
AI Magazine Volume 27 Number 4 (2006) (© AAAI)

The 1956 Dartmouth summer research project on artificial intelligence was initiated by this August 31, 1955 proposal, authored by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The original typescript consisted of 17 pages plus a title page. Copies of the typescript are housed in the archives at Dartmouth College and Stanford University. The first 5 pages state the proposal, and the remaining pages give qualifications and interests of the four who proposed the study. In the interest of brevity, this article reproduces only the proposal itself, along with the short autobiographical statements of the proposers.

[Photo courtesy Dartmouth College: page 1 of the original proposal.]
We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

The following are some aspects of the artificial intelligence problem:

1. Automatic Computers

If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.
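The opening claim, that once a machine's behaviour is precisely described a programmed calculator can simulate it, can be made concrete with a modern toy example. The sketch below is not part of the proposal; the rule table (a unary incrementer) is made up purely for illustration of simulating a machine from its description.

```python
def simulate(rules, tape, state="start", head=0, max_steps=1000):
    """Run a rule table of the form (state, symbol) -> (new_state, symbol_to_write, move)."""
    cells = dict(enumerate(tape))            # sparse tape; unwritten cells read as blank "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy machine description: append one "1" to a block of 1s (unary increment).
rules = {
    ("start", "1"): ("start", "1", "R"),     # scan right over the existing 1s
    ("start", "_"): ("halt",  "1", "R"),     # write a 1 on the first blank cell and halt
}

print(simulate(rules, "111"))                # -> "1111"
```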
2. How Can a Computer be Programmed to Use a Language

It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others. This idea has never been very precisely formulated nor have examples been worked out.
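As a modern gloss only (the proposal itself notes that no precise examples had been worked out), the sketch below shows the flavor of "admitting a new word and some rules whereby sentences containing it imply and are implied by others". The word "mammal" and the toy implication rules are invented for illustration; the program simply forward-chains over them.

```python
# Each rule says: a sentence "<subject> <premise>" implies "<subject> <conclusion>".
rules = [
    ("is a dog", "is a mammal"),         # sentences that imply the new word
    ("is a cat", "is a mammal"),
    ("is a mammal", "is warm-blooded"),  # a sentence implied by the new word
]

def consequences(facts):
    """Forward-chain: keep applying rules until no new sentences appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for subject, predicate in list(facts):
                if predicate == premise and (subject, conclusion) not in facts:
                    facts.add((subject, conclusion))
                    changed = True
    return facts

print(consequences({("Fido", "is a dog")}))
# -> {('Fido', 'is a dog'), ('Fido', 'is a mammal'), ('Fido', 'is warm-blooded')} (in some order)
```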
3. Neuron Nets

How can a set of (hypothetical) neurons be arranged so as to form concepts? Considerable theoretical and experimental work has been done on this problem by Uttley, Rashevsky and his group, Farley and Clark, Pitts and McCulloch, Minsky, Rochester and Holland, and others. Partial results have been obtained but the problem needs more theoretical work.
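For a concrete, if anachronistic, picture of "arranging neurons to form concepts", here is a minimal sketch of a single threshold neuron learning the concept "both inputs are on" (logical AND). The perceptron-style update rule is a later development used only as an illustration; nothing in the proposal specifies it.

```python
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # the target concept: AND
w = [0.0, 0.0]
bias = 0.0

def fire(x):
    """The neuron fires (outputs 1) when its weighted input sum crosses the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):                       # a few passes over the examples
    for x, target in examples:
        error = target - fire(x)
        w[0] += 0.1 * error * x[0]        # strengthen or weaken each connection
        w[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([fire(x) for x, _ in examples])     # -> [0, 0, 0, 1]
```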
4. Theory of the Size of a Calculation

If we are given a well-defined problem (one for which it is possible to test mechanically whether or not a proposed answer is a valid answer) one way of solving it is to try all possible answers in order. This method is inefficient, and to exclude it one must have some criterion for efficiency of calculation. Some consideration will show that to get a measure of the efficiency of a calculation it is necessary to have on hand a method of measuring the complexity of calculating devices which in turn can be done if one has a theory of the complexity of functions. Some partial results on this problem have been obtained by Shannon, and also by McCarthy.
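The "try all possible answers in order" method, and the cost measure needed to rule it out, can be made concrete with a small sketch. The example problem (subset sum, where a proposed answer can be checked mechanically by adding it up) is chosen purely for illustration; the count of candidates examined is the kind of efficiency measure the section asks for.

```python
from itertools import combinations

def is_valid(candidate, target):
    """Mechanical test of a proposed answer: does the chosen subset sum to the target?"""
    return sum(candidate) == target

def brute_force(numbers, target):
    tried = 0
    for size in range(len(numbers) + 1):
        for candidate in combinations(numbers, size):   # enumerate all 2^n subsets
            tried += 1
            if is_valid(candidate, target):
                return candidate, tried
    return None, tried

answer, tried = brute_force([3, 9, 8, 4, 5, 7], 15)
print(answer, "found after", tried, "of", 2 ** 6, "candidates")  # (8, 7) found after 19 of 64 candidates
```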
5. Self-Improvement

Probably a truly intelligent machine will carry out activities which may best be described as self-improvement. Some schemes for doing this have been proposed and are worth further study. It seems likely that this question can be studied abstractly as well.
6. Abstractions

A number of types of “abstraction” can be distinctly defined and several others less distinctly. A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.
7. Randomness and Creativity

A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch includes controlled randomness in otherwise orderly thinking.
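A minimal sketch of "controlled randomness in otherwise orderly thinking", under a modern reading of the conjecture, is stochastic hill climbing: random guesses are injected, but a guiding score, the stand-in for intuition, decides which guesses to keep. The objective below is arbitrary and exists only to make the loop runnable.

```python
import random

def score(x):
    """Higher is better; this plays the role of the educated guess or hunch."""
    return -(x - 3.0) ** 2               # best possible value at x = 3

def guided_random_search(start=0.0, steps=1000, step_size=0.5, seed=0):
    random.seed(seed)
    best = start
    for _ in range(steps):
        guess = best + random.uniform(-step_size, step_size)  # the injected randomness
        if score(guess) > score(best):                        # the guiding criterion
            best = guess
    return best

print(round(guided_random_search(), 3))   # converges close to 3.0
```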
The Proposers

Claude E. Shannon

Claude E. Shannon, Mathematician, Bell Telephone Laboratories. Shannon developed the statistical theory of information, the application of propositional calculus to switching circuits, and has results on the efficient synthesis of switching circuits, the design of machines that learn, cryptography, and the theory of Turing machines. He and J. McCarthy are coediting an Annals of Mathematics study on “The Theory of Automata”.
Marvin L. Minsky

Marvin L. Minsky, Harvard Junior Fellow in Mathematics and Neurology. Minsky has built a machine for simulating learning by nerve nets and has written a Princeton Ph.D. thesis in mathematics entitled “Neural Nets and the Brain Model Problem,” which includes results in learning theory and the theory of random neural nets.
Nathaniel Rochester

Nathaniel Rochester, Manager of Information Research, IBM Corporation, Poughkeepsie, New York. Rochester was concerned with the development of radar for seven years and computing machinery for seven years. He and another engineer were jointly responsible for the design of the IBM Type 701, which is a large scale automatic computer in wide use today. He worked out some of the automatic programming techniques which are in wide use today and has been concerned with problems of how to get machines to do tasks which previously could be done only by people. He has also worked on simulation of nerve nets with particular emphasis on using computers to test theories in neurophysiology.
John McCarthy

John McCarthy, Assistant Professor of Mathematics, Dartmouth College. McCarthy has worked on a number of questions connected with the mathematical nature of the thought process including the theory of Turing machines, the speed of computers, the relation of a brain model to its environment, and the use of languages by machines. Some results of this work are included in the forthcoming “Annals Study” edited by Shannon and McCarthy. McCarthy’s other work has been in the field of differential equations.

Discussion

Is there any way I could get hold of the PhD thesis of Marvin Minsky titled "Neural Nets and the Brain Model Problem"?

Even in 1956 (roughly a decade before Moore's law was formulated), these researchers were seeing that electronic capacity and functionality were doubling approximately every eighteen months, and that this rate of improvement was not slowing down. This conference was one of the first serious attempts to consider the consequences of that exponential growth.

Claude Elwood Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon is noted for having founded information theory with a landmark paper, "A Mathematical Theory of Communication", which he published in 1948. He is perhaps equally well known for founding digital circuit design theory in 1937, when, as a 21-year-old master's degree student at the Massachusetts Institute of Technology (MIT), he wrote his thesis demonstrating that electrical applications of Boolean algebra could construct any logical, numerical relationship. Shannon contributed to the field of cryptanalysis for national defense during World War II, including his fundamental work on codebreaking and secure telecommunications. https://en.wikipedia.org/wiki/Claude_Shannon

![Imgur](https://i.imgur.com/vUenlpo.png)

Nathaniel Rochester designed the IBM 701, wrote the first assembler, and participated in the founding of the field of artificial intelligence. https://en.wikipedia.org/wiki/Nathaniel_Rochester_(computer_scientist)

In 2006, 50 years after the Dartmouth summer research project, John McCarthy reflected on its significance: “What came out of Dartmouth? I think the main thing was the concept of artificial intelligence as a branch of science. Just this inspired many people to pursue AI goals in their own ways. My hope for a breakthrough towards human-level AI was not realized at Dartmouth, and while AI has advanced enormously in the last 50 years, I think new ideas are still required for the breakthrough.” - John McCarthy, 2006. http://www-formal.stanford.edu/jmc/slides/dartmouth/dartmouth/node1.html

From John McCarthy in 2006: “The original idea of the proposal was that the participants would spend two months at Dartmouth working collectively on AI, and we hoped would make substantial advances. It didn't work that way for three reasons. First, the Rockefeller Foundation only gave us half the money we asked for. Second, and this is the main reason, the participants all had their own research agendas and weren't much deflected from them. Therefore, the participants came to Dartmouth at varied times and for varying lengths of time. Two people who might have played important roles at Dartmouth were Alan Turing, who first understood that programming computers was the main way to realize AI, and John von Neumann. Turing had died in 1954, and by the summer of 1956 von Neumann was already ill from the cancer that killed him early in 1957.”

The Dartmouth Summer Research Project on Artificial Intelligence was a 1956 summer workshop at Dartmouth that is considered to be the seminal event for artificial intelligence as a field. The project lasted approximately 6 to 8 weeks and was essentially an extended brainstorming session. Eleven mathematicians and scientists were originally planned as attendees; the list of attendees, according to the notes of Solomonoff (one of the attendees), is below:

![Imgur](https://i.imgur.com/lPY0puW.png)

John McCarthy was an American computer scientist and cognitive scientist.
McCarthy was one of the founders of the discipline of artificial intelligence. He coined the term "artificial intelligence" (AI), developed the Lisp programming language family, significantly influenced the design of the ALGOL programming language, popularized timesharing, and was very influential in the early development of AI. McCarthy received many accolades and honors, such as the Turing Award for his contributions to the topic of AI, the United States National Medal of Science, and the Kyoto Prize. https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)

Marvin Lee Minsky was an American cognitive scientist concerned largely with research on artificial intelligence, co-founder of the Massachusetts Institute of Technology's AI laboratory, and author of several texts concerning AI and philosophy. Minsky wrote the book Perceptrons (with Seymour Papert), which became the foundational work in the analysis of artificial neural networks. The book is at the center of a controversy in the history of AI, as some claim it had great importance in discouraging research on neural networks in the 1970s and contributed to the so-called "AI winter". He also developed several other well-known AI models. His paper "A Framework for Representing Knowledge" created a new paradigm in programming. While Perceptrons is now more a historical than a practical book, the theory of frames is in wide use. Minsky also wrote on the possibility that extraterrestrial life may think like humans, permitting communication. https://en.wikipedia.org/wiki/Marvin_Minsky