Statistical physics and biology

Giorgio Parisi

Physics began with the study of simple models that became more complicated as they became more realistic. Biology has followed the opposite path, but the two disciplines are now converging in the study of complex systems.

The relationship between biology and physics has often been close and, at times, uneasy. During this century many physicists have moved to work in biology. Amongst the most famous are Francis Crick (the joint discoverer of the DNA double helix with Jim Watson), and Max Delbrück and Salvatore Luria (Nobel prize-winners for their work on mutations). However, after these scientists changed their research field, they worked in the same way as other biologists and used their physics training to a reduced extent.
An intermediate discipline is biophysics, but here physics
is often used as a tool to serve biology (a similar situation
exists in biochemistry). In both cases, chemistry and
physics provide explanations of what is happening at the
lowest level, that of molecules and forces, but these are
used in a biological framework.
The situation is now changing and scientists are using
physics methods and developments in theoretical physics,
such as statistical mechanics, to study certain fundamental
problems in biology. This phenomenon arises from reasons
common to both disciplines.
Current physics
One cycle in the history of physics has finished and a new
one with different problems is emerging. One of the key
problems in physics has been to discover the fundamental
laws of nature, that is the elementary constituents of matter
and the forces between them. Twenty years ago the
structure of the components of the nucleus (protons and
neutrons) and the origin of nuclear forces were unknown.
Intense debates took place about whether quarks were the
constituents of the proton and there was no clear idea of
the nature of the forces between these hypothetical quarks.
Now almost everything is known about quarks and their interactions. The laws of physics, from the atomic nucleus to the galaxy, appear to be firmly worked out and most scientists do not expect the future to hold many surprises.
However, at scales much smaller than the atomic nucleus and as large as the entire Universe, many things are still not understood. In some cases we are still in almost total ignorance. In physics possibly the greatest mystery still to be unravelled is the origin of gravitational forces and their behaviour over very short distances. This is a difficult problem, since the crucial experiments may involve particles with energies many billions of times greater than currently produced in the laboratory. But, within the range that affects normal human activities, from the physics of elementary particles to the study of stellar evolution, we have a satisfactory formulation of the laws.
However, a knowledge of the laws that govern the behaviour of the constituent elements of the system does not necessarily imply an understanding of the overall behaviour. For example, it is not easy to deduce from the forces that act between molecules of water why ice is lighter than water.
The answer to such questions can be obtained from
statistical mechanics. This discipline, which arose in the
late 19th century from the work of Boltzmann and Gibbs,
studies systems of many particles using probability
methods rather than by determining the trajectories of
the individual particles.
Statistical mechanics has provided the fullest possible
understanding of the emergence of collective behaviour.
We cannot say whether a few atoms of water will form a
solid or a liquid and what the transition temperature is.
Such statements only become precise when many atoms are
being considered (or, more accurately, when the number of
atoms tends to infinity). Phase transitions therefore emerge
as the effect of the collective behaviour of many
components - for example, the transition in which a
normal metal suddenly becomes a superconductor - as the
temperature is lowered. Statistical mechanics has been used to study a variety of phase transitions from which a range of collective behaviours emerge.
The predictive capabilities of statistical mechanics have
increased greatly over the last 20 years, both through
refinements of the theoretical analyses and through the use
of computers. In particular, interesting results have been
obtained from the study of systems in which the laws have
been selected by chance - so-called disordered systems.
Effects of computers
The computer has produced great changes in theoretical
physics. Current computers can perform about a billion
operations a second on seven figure numbers, and tasks
that were once considered impossible have now become
routine. If one wanted to calculate theoretically the liquefaction temperature of a gas (argon, for example), and one already knew the form of the forces between the atoms, one had to make very rough approximations; only then could a prediction of the liquefaction temperature be produced by simple calculations. However, the prediction would not agree with the experimental data (typically the error was about 10%).
The only way to improve the agreement between the predictions and experiment was to remove the approximations gradually. This theoretical approach, known as perturbation theory, produced highly complex expressions and tedious calculations which could be done by hand or, if necessary, by computer.
The new approach, which spread in the 1970s, together with the intensive use of computers, was to cease making approximations on the motion of particles and instead to calculate their trajectories exactly. For example, we can simulate the behaviour of a certain number of argon atoms (let's say 8000) inside a box of variable dimensions and observe what happens - whether the argon behaves as a liquid or as a gas. The computational load of such a simulation is enormous. The trajectories of 8000 particles have to be calculated and these have to be followed for long enough for them to overcome the dependence on the initial configuration. Such an approach could not even be imagined without a computer.
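To make the kind of calculation described here concrete, below is a minimal sketch of a Lennard-Jones simulation of argon-like atoms in a periodic box. It is written for this commentary rather than taken from the article: the particle number, density, temperature and run length are arbitrary illustrative choices (far smaller than the 8000-atom runs mentioned in the text), not a realistic study of liquefaction.

```python
import numpy as np

# Minimal molecular-dynamics sketch: N Lennard-Jones "argon" atoms in a periodic box,
# integrated with the velocity-Verlet scheme. Reduced units (sigma = epsilon = mass = 1);
# every parameter below is an illustrative assumption, not a value from the article.

N = 64            # far fewer than the 8000 atoms mentioned in the text
rho = 0.6         # number density; varying rho and T moves the system between liquid and gas
T0 = 1.0          # initial temperature
dt = 0.005        # time step
L = (N / rho) ** (1.0 / 3.0)

# Start on a simple cubic lattice to avoid overlapping atoms, with random velocities.
n = int(np.ceil(N ** (1.0 / 3.0)))
lattice = [(i, j, k) for i in range(n) for j in range(n) for k in range(n)][:N]
pos = (np.array(lattice, dtype=float) + 0.5) * (L / n)
rng = np.random.default_rng(0)
vel = rng.normal(0.0, np.sqrt(T0), size=(N, 3))
vel -= vel.mean(axis=0)                      # remove centre-of-mass drift

def forces(pos):
    """Lennard-Jones forces and potential energy with minimum-image periodic boundaries."""
    d = pos[:, None, :] - pos[None, :, :]    # displacement vectors r_i - r_j
    d -= L * np.round(d / L)                 # minimum-image convention
    r2 = (d ** 2).sum(axis=-1)
    np.fill_diagonal(r2, np.inf)             # no self-interaction
    inv6 = 1.0 / r2 ** 3
    pot = 4.0 * (inv6 ** 2 - inv6)
    fmag = (48.0 * inv6 ** 2 - 24.0 * inv6) / r2
    return (fmag[:, :, None] * d).sum(axis=1), 0.5 * pot.sum()

f, _ = forces(pos)
for step in range(2001):
    vel += 0.5 * dt * f                      # first half kick
    pos = (pos + dt * vel) % L               # drift, wrapped back into the box
    f, epot = forces(pos)
    vel += 0.5 * dt * f                      # second half kick
    if step % 500 == 0:
        ekin = 0.5 * (vel ** 2).sum()
        print(f"step {step:4d}  E = {epot + ekin:8.2f}  T = {2 * ekin / (3 * N):.2f}")
```

Deciding whether such a run behaves as a liquid or a gas would require monitoring something like the pair-correlation function at different densities and temperatures; that diagnostic is omitted here for brevity.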
One of the reasons for the interest aroused by the simulations is that systems that do not exist in nature, but which are simpler from a theoretical point of view, can also be simulated. Simulations thus become the prime laboratory to examine new theories that could not be verified directly in the real, and too complex, world. Obviously the final aim is to apply such ideas to real cases, but this may be done only after the theory has been sufficiently strengthened and has gained in confidence from being matched against simulations.
Complex systems
Computers have played a fundamental role in the development of modern theories of disordered systems and complex systems. For complex systems the situation is delicate since this is a newer field with interdisciplinary characteristics - it has connections with biology, information technology, system theory and ecology. It is also a fashionable field, despite the fact that anyone who tries to define "complex" finds it hard to do. Sometimes its sense of complicated is emphasised, meaning that it is composed of many elements (a nuclear power station is a complex system insofar as it is composed of many thousands of different pieces). At other times the sense of incomprehensible is stressed (the atmosphere is a complex system in that one cannot make long-range forecasts). Often every speaker at conferences on complex systems uses the word complex with a different emphasis (see Livi et al. and Peliti and Vulpiani in Further reading). The real problems emerge when, having declared that a given system is complex, one wishes to use this statement to obtain results and not restrict oneself to saying that the system is complex and therefore no prediction is possible.
The aim of a theory of complex systems is to find the laws which govern the overall behaviour of such systems. These are phenomenological laws which cannot be easily deduced from the laws that govern the individual components. For example, the behaviour of individual neurons is probably well understood but it is far from clear how ten billion neurons, linked by a hundred trillion synapses, form a brain that thinks. As we have already seen, the emergence of collective behaviours is a phenomenon that has already been much studied by physics in other contexts: the interaction of large numbers of atoms and molecules is responsible for phase transitions (of the water-ice kind). Nevertheless, for complex systems, the overall behaviour of the system is not as simple as that of water, which, at a given temperature, may be in one, or at most two, states (if one ignores the critical point).

[Figure: Computer graphic of myoglobin, the oxygen-storage protein found in muscle. Biology must now move forward from a knowledge of the basic biochemical constituents of organisms to an understanding of such systems' overall behaviour.]
If we assume that a complex system must display complex behaviour, then most of the systems studied in the past by physicists have displayed simple behaviour and cannot be regarded as complex. However, physicists, with the aim of understanding the behaviour of some disordered systems, for example the spin glasses (alloys of gold and iron that display anomalous magnetic behaviour at low temperature), have recently begun to obtain results on the properties of relatively simple systems which display complex behaviour (see Mezard et al. in Further reading). The techniques used during this research were more general than one might expect given their rather esoteric origin and are now being applied to the study of neural networks.
Frequently the study of complex systems has been advanced either by analysis or by computer simulation. Moreover, these two approaches can be combined synergistically. Computers enable simulations of systems of moderate complexity to be carried out so that quantitative and qualitative laws can be extracted from these systems before tackling truly complex systems. The mixture of analytical results from simpler systems and numerical simulations performed on systems of intermediate complexity is very effective.
A profound understanding of the behaviour of complex systems would be extremely important. Attention has been focused on systems composed of many elements of different types which interact on the basis of more or less complicated laws and in which there are many feed-back circuits that stabilise the collective behaviour. In such cases a traditional reductionist point of view appears to lead nowhere. An overall approach, in which the nature of the interactions between the constituents is ignored, also appears to be useless in that the nature of the constituents is crucial in determining the overall behaviour.
The emerging theory of complex systems takes an intermediate point of view. It starts from the behaviour of the individual components, as in a reductionist approach, but incorporates the idea that the minute details of the properties of the components are irrelevant and that the collective behaviour does not change if the laws governing the conduct of the components are changed slightly. The ideal would be to classify the types of collective behaviour and to see what position a system would occupy in this classification if the components were changed (see Rammal et al. and Parisi 1992b in Further reading). In other words, collective behaviours should be structurally stable and therefore susceptible to classification, unfortunately a much more complicated one than that made by René Thom of the Institut des Hautes Études Scientifiques, Paris, in the book Structural Stability and Morphogenesis (1975, Benjamin, Reading, MA).
Spin glasses: complex system exemplified
The theoretical study is not carried out on an individual system, but instead we study a whole class of systems simultaneously, which differ from each other in a random component. This approach may be understood more easily if I give a concrete example. Imagine a group of people in a room who know and either like or dislike each other. Suppose we divide them at random into two groups and then ask each person whether they want to move from one group to another (the person will answer yes or no according to their preferences). If a person says yes they change groups immediately. After the first round, some people will still not be satisfied. Some of those who moved (or who did not move after the first round) might wish to reconsider their own position after the moves made by others. A second round of changes is made and so on until there are no further requests from individuals. In a further stage, we could try moving not just individuals but whole groups of people, since a change of group may only be attractive if made in the company of others, until a state of general satisfaction is achieved.
The fact that, after a finite number of rounds, all the requests from individuals have been satisfied depends crucially on the hypothesis that relationships of like or dislike are symmetrical, i.e. that if Caio likes Tizio then Tizio likes Caio. We could have asymmetric situations: Caio likes Tizio but Tizio cannot bear Caio. If such relationships are common then the procedure above will never achieve a stable state - Caio pursues Tizio, who flees, and the two never stop. It is only when the relationships are symmetrical that the desires of the individuals are all directed towards the same objective - the optimisation of general satisfaction.
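The round-by-round procedure just described is easy to turn into a toy program. The sketch below is my own illustration, with sizes and sign conventions chosen for readability rather than taken from the article: a symmetric matrix of likes and dislikes, a random initial split into two groups, and repeated rounds in which any unhappy person changes group. With symmetric relationships the loop always stops.

```python
import numpy as np

# Illustrative sketch (not from the article) of the "two groups in a room" dynamics.
# J[i, k] = +1 if persons i and k like each other, -1 if they dislike each other, and
# s[i] = +1 or -1 is the group person i currently belongs to. Person i is unhappy if
# moving to the other group would increase their personal satisfaction s[i] * sum_k J[i, k] s[k].

rng = np.random.default_rng(1)
N = 40
J = rng.choice([-1, 1], size=(N, N))
J = np.triu(J, 1)
J = J + J.T                                   # symmetric relationships, no self-term

s = rng.choice([-1, 1], size=N)               # random initial split into two groups

for round_no in range(1, 1000):
    moved = 0
    for i in range(N):                        # ask each person in turn
        satisfaction = s[i] * (J[i] @ s)      # how happy person i is with the current split
        if satisfaction < 0:                  # happier in the other group: move immediately
            s[i] = -s[i]
            moved += 1
    print(f"round {round_no}: {moved} people changed group")
    if moved == 0:                            # nobody wants to move: a stable state
        break
```

With an asymmetric matrix (for example, J[i, k] = -J[k, i] for some pairs) the same loop can cycle forever, which is the Caio-and-Tizio situation described above.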
Obviously, the final configuration will depend upon the
relationships between the various people and the initial
configuration, and it cannot therefore be calculated directly
without such information. Nevertheless, we can calculate
approximately the general properties of this process (for example, how many rounds will be needed before everybody is happy) on the basis of the distribution of friendships and dislikes being random. More precisely, let us suppose that we know the probability distribution of friendships. For example, the probability that two people selected at random will be friends is p, where p is a number between 0 and 1. We need to establish whether this relationship model is true or if the real situation is more complicated (for example, we could introduce degrees of dislike and suppose that people in real life who live near each other have stronger relationships, whereas those who live far apart are more indifferent). Once we have achieved an accurate model of the probability distribution of likes and dislikes, we can make predictions for the system. These will be purely in terms of probability and will become increasingly accurate (that is, with a decreasing relative error) as the system becomes larger and the number of components tends to infinity.
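As a small illustration of this last point (my example, not the article's), consider the simplest version of the random-friendship model, in which each pair is independently friendly with probability p, and measure the fraction of friendly pairs across many random realisations: its relative fluctuation shrinks as the group grows.

```python
import numpy as np

# Illustration of the statement above: a random friendship model where any two people
# are friends with probability p. The fraction of friendly pairs varies from one random
# realisation to another, but its relative error decreases as the group size N grows.

rng = np.random.default_rng(3)
p = 0.3

for N in (10, 100, 500):
    fractions = []
    for _ in range(100):                      # 100 random realisations of the system
        friends = rng.random((N, N)) < p
        upper = np.triu(friends, 1)           # count each pair once
        n_pairs = N * (N - 1) // 2
        fractions.append(upper.sum() / n_pairs)
    fractions = np.array(fractions)
    rel_err = fractions.std() / fractions.mean()
    print(f"N = {N:4d}: mean fraction = {fractions.mean():.3f}, relative error = {rel_err:.3f}")
```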
In this situation, probability is used differently from how
we are used to. Whilst, classically, it was supposed that the
dynamics by which systems evolved were so complicated
that one could assume that the configurations were entirely
random (like a coin being tossed), in the case that we are
studying, the laws that govern the system themselves are
chosen at random before the dynamics are studied.
The symmetrical system corresponds exactly (from a mathematical point of view) to the spin glasses of physicists. If we were to replace the words "like" and "dislike" by "ferromagnetic" and "antiferromagnetic" and we identify the two groups with spins oriented in different directions, we would produce a precise description of spin glasses. At low temperatures a physical system evolves so as to minimise its energy, and this dynamic is very similar to the optimisation process described above, provided that physical energy is identified with general dissatisfaction. In the last decade, spin glasses have been studied intensively and we can now answer many of the questions that have been raised.
Possibly the most interesting characteristics of these systems are that many different equilibrium states exist (if we move just one person at a time) and that it is difficult to find the optimum configuration with respect to changes of an arbitrarily high number of people. Spin glasses are one of the best known examples of those optimisation problems (termed "NP-complete problems" for reasons that need not be discussed; see Garey and Johnson in Further reading) whose optimum solution can be found only after a high number of passes (which increases exponentially with the number of elements in the system).
The presence of many different equilibrium states may be regarded as one of the most typical characteristics of complex systems (at least in one meaning of the word). A system which does not change with time is not complex, whilst one that can assume many different forms is certainly complex. The unexpected discovery made in the last 10 years is that the complexity emerges naturally from the disorder of the laws of motion. To obtain a complex system we do not have to force ourselves to choose particular laws. They can be chosen at random but with well determined probability distributions. This result has opened up a new perspective in the study of complex systems. It is a new and fascinating field in which results are obtained slowly, partly because of the newness of the field and partly because of the need to use new mathematical tools. For example, the replica theory, which has been used to obtain the most interesting results, has no mathematical proof at present.
Detail of spin glasses
Spin glasses may be regarded as an overall optimisation process in which the minimising function corresponds to the system energy. I will now lay out in precise form some of the results that have been obtained on spin glasses (see Mezard et al. in Further reading).
More precisely, we may consider a system described by N variables s_i (i varies from 1 to N), which can have two possible values (+1 or -1) corresponding to the two possible spin orientations (using the language of the analogy above, they correspond to the two possible choices of group). For every possible pair (i, k) a variable J_ik is introduced, which is chosen at random (+1 or -1). The energy E is given by the sum over all the pairs:

E = Σ_(i,k) J_ik s_i s_k

If J_ik is negative, the minimisation of the contribution to the energy E of the pair (i, k) is obtained by choosing the same value for the two spins s_i and s_k. If, on the other hand, J_ik is positive, the minimisation of the contribution of the pair (i, k) to the energy E is achieved by selecting opposite values for the spins s_i and s_k. Different choices of J_ik correspond to different systems.
The theory of spin glasses provides accurate estimates of the behaviour of such systems in the limit where N tends to infinity. Many of the properties become independent of the choice of these variables for practically all realisations of the system. In particular, the minimum energy can be approximated by -0.7633 N^(3/2).
The process of successive minimisation described above corresponds to choosing sequentially, for each i, the variable s_i so as to minimise its contribution to the energy. If one starts from a random configuration, this evolution results in a final energy proportional to -0.73 N^(3/2) and therefore markedly greater than the minimum. A procedure of sequentially searching for the minimum, carried out on one variable after another, cannot therefore attain the overall minimum but merely a local minimum, that is a configuration whose energy will not decline by changing one individual spin at a time. The fact that the evolution of the system becomes stalled in a local minimum is also connected with the existence of many local minima. The total number of different local minima increases as 2^(0.3N) (almost in line with the total number of configurations, which rises as 2^N).
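The single-spin search described here can be illustrated with a short program. This is an illustrative toy, not the calculation behind the quoted numbers (those come from analytical work on large systems): it builds random ±1 couplings, lowers the energy E = Σ J_ik s_i s_k greedily one spin at a time, and, for a very small N, compares the resulting local minimum with the true ground state found by brute force.

```python
import numpy as np
from itertools import product

# Toy spin glass in the article's convention: energy E = sum over pairs of J_ik * s_i * s_k,
# with couplings J_ik chosen at random as +1 or -1. N is kept tiny so that the true
# minimum can be found by exhaustive search and compared with the greedy local search.

rng = np.random.default_rng(0)
N = 16
J = rng.choice([-1, 1], size=(N, N))
J = np.triu(J, 1)            # keep each pair (i, k) once, no self-coupling
J = J + J.T                  # symmetric couplings

def energy(s):
    return 0.5 * s @ J @ s   # each pair counted once

# Greedy single-spin minimisation: flip any spin whose flip lowers the energy,
# and stop when no such spin exists (a local minimum).
s = rng.choice([-1, 1], size=N)
improved = True
while improved:
    improved = False
    for i in range(N):
        if s[i] * (J[i] @ s) > 0:    # flipping s[i] lowers E by 2 * s[i] * (J[i] @ s)
            s[i] = -s[i]
            improved = True

local_min = energy(s)

# Exhaustive search over all 2^N configurations (only feasible for very small N).
global_min = min(energy(np.array(c)) for c in product([-1, 1], repeat=N))

print(f"greedy local minimum: {local_min}")
print(f"true ground state   : {global_min}")
```

For N this small the two values sometimes coincide; the systematic gap quoted in the text (-0.73 versus -0.7633 times N^(3/2)) only emerges as an average over large systems.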
The calculation of the energy minimum is not simple and is based upon several hypotheses which, though reasonable, have not been proven rigorously. To calculate the minimum energy one must calculate simultaneously the number of configurations with energy close to the minimum, and the correlations existing between the difference in energy and the distance between two configurations, defined as the percentage of different spins. A detailed analysis shows that the configurations with energy close to the minimum can be classified in a tree classification similar to that used in biology. This totally unexpected result demonstrates the enormous richness and complexity of such an apparently simple system. A more detailed description would take us too far from the subject of this article, but see Parisi 1992b in Further reading.
The dilemma of biology
Biology faces several problems which result from spectacular recent successes. Developments in molecular biology and genetic engineering have provided a detailed understanding of the basic biochemical mechanisms. In many cases we know the identity of the molecules on the cell membrane that receive a message (in the form of a chemical transmitter) sent by other cells, and how this message is transmitted to the cell nucleus via the activation of a chain of chemical reactions. We can often identify the genes that code for a particular character or which control the growth of an organism or the development of limbs.

The Human Genome Project has been launched with the aim of determining the sequence of human DNA. This colossal project will cost roughly one billion dollars.
[Figure: Electron micrograph of human nerve cells (neurons). Each neuron consists of a cell body (green) with branching processes extending from it. The arrangement of neurons in the brain is an excellent example of a disordered system.]
The difficulties of exploiting these results to the full lie in the enormous differences that exist between the knowledge of the underlying biochemical reactions and the understanding of the overall behaviour of living organisms. Consider one of the simplest living organisms, Escherichia coli, a small bacterium little more than a micron in length that is present in large numbers in the human intestine. E. coli contains about 3000 different types of protein which interact extensively with each other. Some proteins play essential roles in the cell's metabolism; other proteins regulate the production of these proteins, either by inhibiting their synthesis (suppressors) or stimulating it (operons). The synthesis of operon or suppressor proteins is itself controlled by other proteins. There is little doubt that within a few years the complete list of the proteins of E. coli will be available. It might even be possible to know the functions of each of these proteins and the mechanisms that control their synthesis. We could envisage that in a few decades we might have an enormous computer program which could successfully simulate the behaviour of a real E. coli cell in terms of the total amount and spatial distribution of each kind of chemical.

However, it is not clear that such knowledge would be sufficient to understand how the living organism functioned. We could eventually understand the various feed-back circuits, but it does not seem that such an approach would enable us to grasp the underlying reasons by which the system behaves overall as a living organism. In other words, even if we succeed in modelling a unicellular living organism with a system of differential equations with N variables (the appropriate value of N for a single cell is not clear, but would be between 10^4, the number of different chemical substances, and 10^12, the number of atoms), we would still have to deduce the overall characteristics of the behaviour of the system with more sophisticated techniques than numerical simulations. It is the same problem as in statistical mechanics, where a knowledge of the laws of motion does not directly imply an understanding of the collective behaviour. One could therefore postulate that once the techniques of molecular biology had reached a sufficiently detailed level of knowledge of the molecular phenomena,
an understanding of the collective behaviour (and therefore of life) should be obtained by applying, to the basic biochemical mechanisms, techniques similar to those of statistical mechanics.
The same problem presents itself at a higher level in the study of the vertebrate brain. We might soon know all the functional details of the behaviour of individual neurons, but this information alone would not enable us to understand how several billion such neurons, linked in a disordered manner, form a brain that can think.

A similar approach could be taken for many biological systems, for example the study of the dynamics of protein synthesis in the cell, ontogenesis, natural evolution, and hormone balance in mammals (see Kauffman in Further reading). A characteristic of all these systems is that they
comprise many different types of elements which interact
amongst themselves on the basis of more or less complex
laws.
Consider all the effects that a hormone can have on
the production of other hormones. Furthermore, such
systems have many feed-back circuits that stabilise the collective behaviour (the production of a given hormone is set by homeostatic mechanisms). In such cases, the traditional reductionist point of view does not appear to lead anywhere. The number of hormones, for example, is so high that the interactions of each hormone with the others cannot be determined fully and therefore a precise model of the system cannot be constructed. An overall approach, in which the nature of the interactions between the components is ignored, would also not seem to be appropriate. The hormonal system behaves in a different way from a cell (a cell divides in two, the former does not) and the differences in the nature of the components are crucial to determining the differences in overall behaviour.
The problem that biology must confront is how to move forward from a knowledge of the behaviour of the basic constituents (proteins, neurons etc) to deducing the system's overall behaviour. Fundamentally, this is the same kind of problem that confronts statistical mechanics and this is why attempts are being made to adapt to biological systems the same techniques that were developed to study physical systems composed of many components of different types with laws chosen at random. The theories of the complex behaviour of disordered systems could thus be used to study biological complexity (see Anderson, and Kauffman and Levin in Further reading).
Chance and necessity
A proposal to study the laws that govern the interactions
between the various components can make a biologist
nervous. A first reaction is for them to regard the entire
programme as idiocy put forward by someone with no clear
idea of the nature of biology. The main objection is that
existing biological systems result from a natural selection
over many millions of years and that the components of a
living organism have therefore been selected carefully so
that it functions. It is not clear whether the probability
methods of statistical mechanics (in the sense that the laws
of motion have been chosen by chance) could be
successfully applied to living organisms, in that components that have been selected for a purpose have not been chosen by chance.
The objection is not as strong as it may first appear. To
state that something has been chosen by chance does not mean that it has been chosen completely by chance, but on the basis of laws that are in part deterministic and in part random. The real problem is to understand whether the chance element in the structure of a living organism is crucial to its proper behaviour.

[Figure: Electron micrograph of Escherichia coli. Even if we eventually manage to model this unicellular organism numerically, more sophisticated techniques will be necessary to deduce overall system behaviour.]
Much depends upon the underlying structure of the
living organism, although a major disagreement exists over
this point between the various approaches. The cell is often seen as a large computer, with DNA representing the program (software) and the proteins representing the electrical circuits (hardware). If this idea were not wide of the mark, there would be no point in using statistical mechanics to study biology, just as there is no point in using it to study a real computer. A computer is built to a design and the connections are not made at random but in accordance with a firm layout. Living organisms are not made in a totally random fashion but equally they are not designed on paper. They have evolved via a process of random mutation and selection.

These two aspects are crucial to the study of protein dynamics. On the one hand, it is clear that proteins have a well defined purpose and have been designed to achieve it. However, proteins were initially generated in a random manner and perhaps some of the physical properties of proteins (particularly those which have not been selected against) still reflect the properties of a polypeptide chain with elements chosen at random along the chain.
This marriage of determinism and chance can be found
if we study the development of a single individual. For
example, the brains of two twins may appear completely
identical if not examined under the microscope. However,
the positions and connections of the neurons are
completely different. Individual neurons are created in
one part of the cranium, migrate to their final position and
send out nerve fibres that attach themselves to the first
target that they reach. Without specific signals on the
individual cells such a process is inevitably unstable and
therefore the slightest disturbance leads to systems with
completely differing results. The metaphor of the computer does not seem adequate, in that the description of the fine detail (the arrangement and connection of the individual elements) is not laid down in the initial design. Moreover, the number of bits of information required to code the connections in a mammal brain is of the order of 10^15, far greater than the 10^9 bits of information contained within DNA.
The arrangement of the neurons and their connections in the brain is an excellent example of a disordered system, in which there is a deterministic, genetically controlled component (all that is the same in the brains of two twins, i.e. the external form, weight, possibly the hormonal balances) and a chance element which differs from twin to twin. Our attitude to the methodology that should be used to achieve an understanding of the behaviour of the brain changes completely depending upon whether we consider the variable (and therefore chance) part to be a non-essential, non-functional accident or if we think that some characteristics of the variable part are crucial for proper function.
The need to use techniques from the statistical mechanics of disordered systems also stems from the fact that biological systems interact extensively with the outside world and that this interaction has both a deterministic and a chance component. For example, the faces of the people that we know have a constant component (they are faces) and a variable (chance) one, the characteristics of each individual.

It has perhaps not been fortuitous that the greatest success in biology of the use of the statistical techniques of disordered systems has been the study of neuronal networks and their capacity to function as memories, to memorise and to subsequently recognise several types of input (see Hopfield; Amit; Parisi 1992a in Further reading).
The chance nature of the events being memorised reflects the random nature of the growth of the synapses between the various neurons and therefore a disordered synaptic structure. In this field, various models of associative memory have been constructed, whose dynamics are well understood theoretically, and a sufficient level of analysis has been reached to begin to compare the predictions of the theory (which has been made increasingly realistic) with the experimental data from recording the activity of individual neurons.
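For readers who want a concrete picture of the kind of associative-memory model being referred to, here is a minimal Hopfield-style network in the spirit of the Hopfield 1982 reference; the sizes, the Hebbian storage rule and the recall loop are standard textbook choices, not a reproduction of any specific model discussed by Parisi.

```python
import numpy as np

# Minimal Hopfield-style associative memory (illustrative only; parameters arbitrary).
# P random patterns are stored in a symmetric synaptic matrix via the Hebb rule, and a
# noisy version of one pattern is cleaned up by repeated threshold updates.

rng = np.random.default_rng(2)
N, P = 200, 10                                   # neurons and stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns).astype(float) / N    # Hebb rule: W_ij = (1/N) sum_mu xi_i xi_j
np.fill_diagonal(W, 0.0)                         # no self-connections; W is symmetric

def recall(state, sweeps=10):
    """Asynchronous updates: each neuron aligns with its local field."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 20% of the bits of the first stored pattern, then let the network relax.
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
probe[flip] *= -1

retrieved = recall(probe)
overlap = (retrieved * patterns[0]).mean()       # +1 means perfect recall
print(f"overlap with stored pattern: {overlap:.2f}")
```

With the number of stored patterns kept well below the capacity of roughly 0.14 N, the corrupted probe relaxes back to the stored pattern, which is the "memorise and subsequently recognise" behaviour described in the text.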
Convergence
The attempt to bring together physics and biology described here entails a change in the outlook of both the physicist and the biologist. As a result of training, the theoretical physicist tends to be more concerned with general principles (for example, in seeking to understand how a system that only distantly resembles E. coli could be regarded as living), whereas the biologist remains wedded to what is; that is, to understanding the real E. coli, not a hypothetical system that could not be produced in nature.
Physics has a strong simplifying tradition and tends to concentrate on certain aspects whilst neglecting other elements, even essential ones. Modern physics was born with Galileo, who founded mechanics by ignoring friction, despite the fact that friction is an essential part of everyday life. The object that is not subject to any forces and which moves in a uniform straight line (as in Newton's First Law) is a purely abstract concept (leaving aside billiard balls).
Physics was born with a backward step, with the renunciation of the aim of understanding everything about what is, and with the proposal to study just a small, and initially tiny, corner of nature. Physicists were well aware that they were studying an idealised, simplified world. Niccolò Tartaglia, the 16th-century Italian mathematician who published the first book on projectiles, wrote: "Here we will study the motion of objects subject to the force of gravity, ignoring friction: and if real cannon balls do not follow these laws, so much the worse for them; it means we will talk no more of them." This step backwards, this break with the tradition of attempting to understand the real in its entirety, has enabled physics to provide a solid base for all the subsequent structures to be built on.
The same kind of backward step was made when the techniques of statistical mechanics were introduced to study neuronal networks (see Hopfield in Further reading). The physico-mathematical techniques then available only enabled systems to be studied where the interaction was symmetrical - where the influence of neuron A on neuron B was equal to that of neuron B on neuron A. In this case, the system behaves extremely simply and neither oscillations nor chaotic behaviour are possible. This symmetrical hypothesis is false from a physiological point of view. Nevertheless, its introduction allowed the problem to be seen in a form that could be studied in detail. Subsequently, by using the advances made in such study, the symmetry hypothesis could be removed and a relatively realistic model of neuronal networks built.
The tendency of physicists to simplify clashes with the biological tradition of studying the living organism as it is observed, not how it could or should be. Without certain evidence of exobiology, we only have this Earth at our disposal, and thus one of the physicist's objectives, that of identifying the constant characters in all possible living forms, is empty in the eyes of a biologist, who only knows the few kingdoms that exist on Earth. Physics is an axiomatic science (axioms selected by experiments), in which all the laws are deducible, even if with difficulty, from a few basic principles, whilst biology is a historical science, in which the products of history on this planet are studied.
These different concepts of science make the collaboration of physics and biology problematic but not impossible. The new way of using physics in biology is making the laborious first steps and it will be many years before we know if it will be successful. The advances are slow, because most problems, even after they have been cast in mathematically precise terms, are technically difficult and have still not been resolved theoretically.
I am convinced that the introduction of probability techniques to study living matter will be crucial in the near future, particularly in those systems in which the existence of a random element is essential. The true unknown is whether this phenomenon will only occur in some fields, or whether the techniques of mathematical physics based on the study of disordered systems will become the conceptual framework for understanding the dynamics of living organisms, particularly at the system level.
Further reading
D J Amit 1989 Modelling Brain Functions (CUP, Cambridge)
P W Anderson 1983 Suggested models for prebiotic evolution: the use of chaos Proc. Nat. Acad. Sci. USA 80 3386-3390
M R Garey and D S Johnson 1979 Computers and Intractability: A Guide to the Theory of NP-Completeness (Freeman, New York)
J J Hopfield 1982 Neural networks and physical systems with emergent collective computational abilities Proc. Nat. Acad. Sci. USA 79 2554-2558
S Kauffman 1992 Origins of Order: Self-Organisation and Selection in Evolution (OUP, Oxford)
S Kauffman and S Levin 1987 Toward a general theory of adaptation on a rugged fitness landscape J. Theor. Biol. 128 11-22
R Livi, S Ruffo, S Ciliberto and M Buiatti (ed) 1988 Chaos and Complexity (World Scientific, Singapore)
M Mezard, G Parisi and M A Virasoro 1987 Spin Glass Theory and Beyond (World Scientific, Singapore)
G Parisi 1992a On the classification of learning machines Network 3 259-265
G Parisi 1992b Order, Disorder and Simulations (World Scientific, Singapore)
L Peliti and A Vulpiani (ed) 1988 Measures of Complexity (Springer, Berlin)
R Rammal, G Toulouse and M A Virasoro 1986 Ultrametricity for physicists Rev. Mod. Phys. 58 765-795
Giorgio Parisi is in the Department of Physics, University of Rome "La Sapienza", P.le Aldo Moro 5, 00185 Rome, Italy. An extended version of this article appeared as "La nuova fisica statistica e la biologia" in Sistemi Intelligenti August 1992 pp247-62.

Discussion

E. coli was one of the first organisms to have its genome sequenced; the complete genome of E. coli K-12 was published in Science in 1997. You can learn about it here: [The Complete Genome Sequence of Escherichia coli K-12](https://www.science.org/lookup/doi/10.1126/science.277.5331.1453)

!["e coli gene"](https://www.science.org/cms/10.1126/science.277.5331.1453/asset/e220b39e-ec5b-465d-bb1a-c7e4803d9453/assets/graphic/se3275565001.jpeg)

*The overall structure of the E. coli genome*

> ***"The problem that biology must confront is how to move forward from a knowledge of the behaviour of the basic constituents (proteins, neurons etc) to deducing the system's overall behaviour. Fundamentally, this is the same kind of problem that confronts statistical mechanics and this is why attempts are being made to adapt to biological systems the same techniques that were developed to study physical systems composed of many components of different types with laws chosen at random. The theories of the complex behaviour of disordered systems could thus be used to study biological complexity."***

The **Standard Model** of particle physics is the theory describing three of the four known fundamental forces in the Universe:

- the electromagnetic interaction
- the weak interaction
- the strong interaction

It also provides a framework for classifying all known elementary particles. You can learn more here: [CERN - The Standard Model](https://home.cern/science/physics/standard-model)

!["standard model"](https://upload.wikimedia.org/wikipedia/commons/0/00/Standard_Model_of_Elementary_Particles.svg)

A problem is called NP (nondeterministic polynomial) if its solution can be guessed and verified in polynomial time; nondeterministic means that no particular rule is followed to make the guess. If a problem is NP and all other NP problems are polynomial-time reducible to it, the problem is NP-complete. No polynomial-time algorithm has yet been discovered for any NP-complete problem, nor has anybody yet been able to prove that no polynomial-time algorithm exists for any of them. You can learn more here: [NP-complete Problem](https://www.britannica.com/science/NP-complete-problem)

**Perturbation theory** refers to methods used for finding approximate solutions to complex problems by starting from the exact solution of a related, simpler problem. You can learn more about perturbation theory here: [Quantum Chemistry - Perturbation Theory](https://www.youtube.com/watch?v=Scv--cfY0xw)

Giorgio Parisi is an Italian theoretical physicist and Nobel laureate whose research has focused on quantum field theory, statistical mechanics and complex systems. He was awarded the 2021 Nobel Prize in Physics jointly with Klaus Hasselmann and Syukuro Manabe for groundbreaking contributions to the theory of complex systems, in particular ***"for the discovery of the interplay of disorder and fluctuations in physical systems from atomic to planetary scales."*** You can learn more here: [The Nobel Prize - Giorgio Parisi Facts](https://www.nobelprize.org/prizes/physics/2021/parisi/facts/)

!["parisi"](https://i.imgur.com/nceewsw.jpg)

The Human Genome Project was a 13-year-long international scientific research project with the goal of determining the base pairs that make up human DNA, and of identifying and mapping all of the genes of the human genome from both a physical and a functional standpoint. It was initiated in 1990, just two years before the publication of this paper, and completed in 2003. Learn more here: [The Human Genome Project](https://www.genome.gov/human-genome-project)

> ***The aim of a theory of complex systems is to find the laws which govern the overall behaviour of such systems. These are phenomenological laws which cannot be easily deduced from the laws that govern the individual components.***

### TL;DR

In this paper Giorgio Parisi attempts to bring together physics and biology. He wants to change the outlook of both the physicist, who is more concerned with general principles, and the biologist, who remains interested in what really is. Physics began with the study of simple models that became more complicated as they became more realistic, whereas biology has followed the opposite path. Physics has a strong simplifying tradition and tends to concentrate on certain aspects whilst neglecting other elements. Physics was born with a backward step, with the renunciation of the aim of understanding everything about what is, and with the proposal to study just a small, and initially tiny, corner of nature. Galileo founded mechanics by ignoring friction, despite the fact that friction is an essential part of everyday life. Parisi believes that the two disciplines are now converging in the study of complex systems. He believes that the introduction of probability techniques to study living matter will be crucial and inevitable.