ACM SIGCOMM -6- Computer Communication Review
strategies, must be implemented in the host rather than
in the network. Initially, to programmers who were not
familiar with protocol implementation, the effort of
doing this seemed somewhat daunting. Implementors
tried such things as moving the transport protocols to a
front end processor, with the idea that the protocols
would be implemented only once, rather than again for
every type of host. However, this required the invention
of a host-to-front-end protocol, which some thought
almost as complicated to implement as the original
transport protocol. As experience with protocols
increases, the anxieties associated with implementing a
protocol suite within the host seem to be decreasing,
and implementations are now available for a wide
variety of machines, including personal computers and
other machines with very limited computing resources.
A related problem arising from the use of host-resident
mechanisms is that poor implementation of the
mechanism may hurt the network as well as the host.
This problem was tolerated, because the initial
experiments involved a limited number of host
implementations which could be controlled. However,
as the use of the Internet has grown, this problem has
occasionally surfaced in a serious way. In this respect,
the goal of robustness, which led to the method of fate-
sharing, which led to host-resident algorithms,
contributes to a loss of robustness if the host misbehaves.
The last goal was accountability. In fact, accounting
was discussed in the first paper by Cerf and Kahn as an
important function of the protocols and gateways.
However, at the present time, the Internet architecture
contains few tools for accounting for packet flows. This
problem is only now being studied, as the scope of the
architecture is being expanded to include non-military
consumers who are seriously concerned with
understanding and monitoring the usage of the resources
within the internet.
8. Architecture and Implementation
The previous discussion clearly suggests that one of the
goals of the Internet architecture was to provide wide
flexibility in the service offered. Different transport
protocols could be used to provide different types of
service, and different networks could be incorporated.
Put another way, the architecture tried very hard not to
constrain the range of service which the Internet could
be engineered to provide. This, in turn, means that to
understand the service which can be offered by a
particular implementation of an Internet, one must look
not to the architecture, but to the actual engineering of
the software within the particular hosts and gateways,
and to the particular networks which have been
incorporated. I will use the term "realization" to
describe a particular set of networks, gateways and
hosts which have been connected together in the context
of the Internet architecture. Realizations can differ by
orders of magnitude in the service which they offer.
Realizations have been built out of 1200 bit per second
phone lines, and out of networks all with speeds
greater than 1 megabit per second. Clearly, the
throughput expectations which one can have of these
realizations differ by orders of magnitude. Similarly,
some Internet realizations have delays measured in tens
of milliseconds, while others have delays measured in
seconds. Certain applications such as real time speech
work fundamentally differently across these two
realizations. Some Internets have been engineered so
that there is great redundancy in the gateways and paths.
These Internets are survivable, because resources exist
which can be reconfigured after failure. Other Internet
realizations, to reduce cost, have single points of
connectivity through the realization, so that a failure
may partition the Internet into two halves.
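The survivability distinction can be made concrete: a realization partitions on a single gateway failure exactly when its connectivity graph has an articulation point. The sketch below (the topology is a made-up illustration, not one taken from the text) finds such single points of failure with the classic depth-first-search low-link method:

```python
# Find gateways whose failure would partition a realization.
# A node is an articulation point iff removing it disconnects the graph.
# The topology at the bottom is a made-up illustration.

def articulation_points(graph):
    """Return the articulation points of an undirected graph given as
    {node: [neighbors]}, using the standard DFS low-link technique."""
    disc, low, visited, result = {}, {}, set(), set()
    timer = [0]

    def dfs(u, parent):
        visited.add(u)
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v == parent:
                continue
            if v in visited:
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # u separates v's subtree from the rest of the graph
                if parent is not None and low[v] >= disc[u]:
                    result.add(u)
        # A DFS root is an articulation point iff it has >1 DFS child
        if parent is None and children > 1:
            result.add(u)

    for node in graph:
        if node not in visited:
            dfs(node, None)
    return result

# Two redundant rings joined only through gateway "C":
topology = {
    "A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D", "E"],
    "D": ["C", "E"], "E": ["C", "D"],
}
print(articulation_points(topology))  # "C" is a single point of failure
```

Each ring survives any one internal failure, yet the realization as a whole partitions if "C" fails; adding one more link between the rings would remove the articulation point.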
The Internet architecture tolerates this variety of
realization by design. However, it leaves the designer of
a particular realization with a great deal of engineering
to do. One of the major struggles of this architectural
development was to understand how to give guidance to
the designer of a realization, guidance which would
relate the engineering of the realization to the types of
service which would result. For example, the designer
must answer the following sort of question. What sort of
bandwidths must be in the underlying networks, if the
overall service is to deliver a throughput of a certain
rate? Given a certain model of possible failures within
this realization, what sorts of redundancy ought to be
engineered into the realization?
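Neither question has a closed-form answer within the architecture itself, but the first yields to a back-of-envelope calculation of the kind the designer of a realization must perform. The sketch below (the parameter values are illustrative assumptions, not figures from the text) estimates the raw link bandwidth needed if the overall service is to deliver a target throughput, once per-packet header overhead and a utilization ceiling are accounted for:

```python
# Rough sizing sketch: what raw link bandwidth must the underlying
# networks provide to deliver a target end-to-end throughput?
# All parameter values are illustrative assumptions, not figures
# from the paper.

def required_link_bandwidth(target_bps, packet_payload=512,
                            header_bytes=40, max_utilization=0.5):
    """Raw bandwidth (bits/s) each underlying link must provide.

    target_bps      -- throughput the overall service should deliver
    packet_payload  -- user data per packet, in bytes (assumed)
    header_bytes    -- per-packet header overhead, in bytes (assumed)
    max_utilization -- fraction of the link steady traffic may occupy
                       before queueing delay grows sharply (assumed)
    """
    # Headers inflate every packet, and the utilization ceiling
    # further raises the raw capacity the link must supply.
    overhead = (packet_payload + header_bytes) / packet_payload
    return target_bps * overhead / max_utilization

# A phone-line-class target versus a megabit-class target:
print(required_link_bandwidth(1200))
print(required_link_bandwidth(1_000_000))
```

Even this crude model shows the gap between service rate and required link rate; the second question, on redundancy, needs an explicit failure model before any comparable calculation can be made.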
Most of the known network design aids did not seem
helpful in answering these sorts of questions. Protocol
verifiers, for example, assist in confirming that
protocols meet specifications. However, these tools
almost never deal with performance issues, which are
essential to the idea of the type of service. Instead, they
deal with the much more restricted idea of logical
correctness of the protocol with respect to specification.
While tools to verify logical correctness are useful, both
at the specification and implementation stage, they do
not help with the severe problems that often arise
related to performance. A typical implementation
experience is that even after logical correctness has
been demonstrated, design faults are discovered that
may cause a performance degradation of an order of
magnitude. Exploration of this problem has led to the
conclusion that the difficulty usually arises, not in the
protocol itself, but in the operating system on which the
protocol runs. This being the case, it is difficult to
address the problem within the context of the