If the owner of the service pays for resources consumed
by the service, information can also be encoded in the
amount of time or other resources used. This can be
avoided if the customer pays for resources.
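As a toy illustration (ours, not the paper's): a confined service
could leak a byte simply by burning processor time proportional to
its value, since that consumption appears on the bill. A minimal
Python sketch, with an arbitrary scaling constant:

    import time

    def leak_byte_via_bill(secret_byte):
        # Burn roughly secret_byte * 10 ms of processor time; whoever
        # sees the bill can read the byte back off its magnitude.  The
        # constant is arbitrary and would be calibrated in practice.
        deadline = time.process_time() + secret_byte * 0.01
        while time.process_time() < deadline:
            pass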
5. If the system has interlocks which prevent files
from being open for writing and reading at the same
time, the service can leak data if it is merely allowed to
read files which can be written by its owner. The inter-
locks allow a file to simulate a shared Boolean variable
which one program can set and the other can test.
Given a procedure open (file, error) which does goto
error if the file is already open, the following pro-
cedures will perform this simulation:
procedure settrue (file);
  begin loop1: open (file, loop1) end;

procedure setfalse (file);
  begin close (file) end;

Boolean procedure value (file);
  begin
    value := true;
    open (file, loop2);
    value := false;
    close (file);
  loop2:
  end
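For concreteness, here is a rough modern analogue of these three
procedures, sketched in Python for a Unix-like system with flock
standing in for the exclusive-open interlock. The names, the polling
interval, and the descriptor-passing interface are our choices, not
the paper's:

    import fcntl
    import os
    import time

    def settrue(path):
        # Spin until we hold the exclusive lock (the paper's loop1);
        # holding it open makes the "variable" read as true.
        fd = os.open(path, os.O_RDWR | os.O_CREAT)
        while True:
            try:
                fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
                return fd
            except BlockingIOError:
                time.sleep(0.001)

    def setfalse(fd):
        # Releasing the lock (the paper's close) makes it read false.
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

    def value(path):
        # Try to grab the lock: failure means someone holds it (true);
        # success means it was free (false), so release immediately.
        fd = os.open(path, os.O_RDWR | os.O_CREAT)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            os.close(fd)
            return True
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
        return False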
Using these procedures and three files called data,
sendclock, and receiveclock, a service can send a
stream of bits to another concurrently running program.
Referencing the files as though they were variables of
this rather odd kind, we can describe the sequence of
events for transmitting a single bit:
sender:   data := bit being sent; sendclock := true;
receiver: wait for sendclock = true; received bit := data; receiveclock := true;
sender:   wait for receiveclock = true; sendclock := false;
receiver: wait for sendclock = false; receiveclock := false;
sender:   wait for receiveclock = false;
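Continuing the illustrative Python sketch above (so settrue,
setfalse, value, and the time import are in scope), sender and
receiver would run in separate processes and follow these five steps
literally; the file names are our choices, and the busy-waiting makes
this timing-sensitive rather than robust:

    DATA, SENDCLOCK, RECEIVECLOCK = "data", "sendclock", "receiveclock"

    def wait_for(path, desired):
        # Busy-wait until the file-Boolean reads as desired.
        while value(path) != desired:
            time.sleep(0.001)

    def send_bit(bit):
        data_fd = settrue(DATA) if bit else None   # data := bit being sent
        clock_fd = settrue(SENDCLOCK)              # sendclock := true
        wait_for(RECEIVECLOCK, True)
        setfalse(clock_fd)                         # sendclock := false
        wait_for(RECEIVECLOCK, False)
        if data_fd is not None:                    # reset data for next bit
            setfalse(data_fd)

    def receive_bit():
        wait_for(SENDCLOCK, True)
        bit = value(DATA)                          # received bit := data
        clock_fd = settrue(RECEIVECLOCK)           # receiveclock := true
        wait_for(SENDCLOCK, False)
        setfalse(clock_fd)                         # receiveclock := false
        return bit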
6. By varying its ratio of computing to input/output or
its paging rate, the service can transmit information
which a concurrently running process can receive by
observing the performance of the system. The com-
munication channel thus established is a noisy one, but
the techniques of information theory can be used to
devise an encoding which will allow the information to
get through reliably no matter how small the effects of
the service on system performance are, provided they
are not zero. The data rate of this channel may be very
low, of course.
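To make this concrete, here is a toy Python sketch (ours, not the
paper's) in which the sender modulates CPU load in fixed time slots
and the receiver recovers bits by measuring how much work it can do
in the same slots. The threshold is machine-dependent and must be
calibrated, and real use of this noisy channel would need the
error-correcting codes just mentioned:

    import time

    SLOT = 0.5   # seconds per bit; an arbitrary choice for this sketch

    def send(bits):
        # Load the machine during a slot for a 1, stay idle for a 0.
        for b in bits:
            deadline = time.time() + SLOT
            if b:
                while time.time() < deadline:
                    pass
            else:
                time.sleep(SLOT)

    def receive(nbits, threshold):
        # Measure how much work fits in each slot; less work than the
        # calibrated idle-machine threshold means the sender was busy.
        bits = []
        for _ in range(nbits):
            deadline = time.time() + SLOT
            work = 0
            while time.time() < deadline:
                work += 1
            bits.append(1 if work < threshold else 0)
        return bits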
Confinement Rules
We begin by observing that a confined program must
be memoryless. In other words, it must not be able to
preserve information within itself from one call to
another. Most existing systems make it quite easy to
enforce this restriction. In ALGOL, for example, a pro-
cedure which has no own variables and which refer-
ences no global variables will be memoryless.
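In Python terms (our analogy, since the paper's example is ALGOL),
the distinction looks like this:

    calls_so_far = 0               # plays the role of an ALGOL own variable

    def not_memoryless(x):
        global calls_so_far
        calls_so_far += 1          # state survives from one call to the
        return x + calls_so_far    # next, so one caller's data can leak

    def memoryless(x):
        return x + 1               # depends only on its arguments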
Taking this for granted, it is easy to state a rule
which is sufficient to ensure confinement.
Total isolation: A confined program shall make no calls
on any other program.
By a "call" we mean any transfer of control which is
caused by the confined program. Unfortunately, this
rule is quite impractical. Example 5 above shows that
supervisor calls must be forbidden, since quite in-
nocuous looking ones can result in leaks. Example 6
shows that even the implicit supervisor calls resulting
from input/output, paging or time-slicing can cause
trouble.
To improve on this situation, we must make a dis-
tinction between programs which are confined and those
which are not. Recall that being confined is not an
intrinsic property of a program but a constraint which
can be imposed on it when it runs. Hence if every
program called by a confined program is also confined,
there can be no leakage. This is still not good enough,
since the supervisor, for example, cannot be confined.
It is at least conceivable, however, that it is trusted, i.e.
that the customer believes it will not leak his data or
help any confined program which calls it to do so.
Granting this, we can formulate a new confinement rule.
Transitivity: If a confined program calls another pro-
gram which is not trusted, the called program must also
be confined.
Examples 5 and 6 show that it is hard to write a trust-
worthy supervisor, since some of the paths by which
information can leak out from a supervisor are quite
subtle and obscure. The remainder of this note argues
that it is possible.
A trustworthy program must guard against any
possible leakage of data. In the case of a supervisor, the
number of possible channels for such leakage is sur-
prisingly large, but certainly not infinite. It is necessary
to enumerate them all and then to block each one.
There is not likely to be any rigorous way of identifying
every channel in any system of even moderate com-
plexity. The six examples above, however, were chosen
to illustrate the full range of possibilities which I have
been able to think of, and for the present plausibility
argument they will be regarded as a sufficient enumera-
tion. The channels used there fall into three categories:
Storage of various kinds maintained by the supervisor
which can be written by the service and read by an
unconfined program, either shortly after it is written
or at some later time.

Legitimate channels used by the confined service, such
as the bill.

Covert channels, i.e. those not intended for informa-
tion transfer at all, such as the service program's effect
on the system load.
All the examples except 4 and 6 use storage channels.
Fairly straightforward techniques can be used by the
supervisor to prevent the misuse of storage as an escape
route for a confined program. (For instance, example 5
can be taken care of by making a copy of a file which is
being read by a confined program if someone else tries
to write it; the confined program can then continue to
read the copy, and the writer can proceed without
receiving any indication that the file was being read.)
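A minimal sketch of that copy-on-write fix, in Python with invented
names (a real supervisor would retarget file descriptors directly
rather than go through this reader-side indirection):

    import shutil
    import uuid

    class ConfinedReader:
        # Reads through one level of indirection so the supervisor
        # can silently retarget it at a snapshot.
        def __init__(self, path):
            self.path = path
            self.pos = 0
        def read(self, n):
            with open(self.path, "rb") as f:
                f.seek(self.pos)
                data = f.read(n)
                self.pos = f.tell()
            return data

    readers = {}                   # path -> list of ConfinedReader

    def confined_read_open(path):
        r = ConfinedReader(path)
        readers.setdefault(path, []).append(r)
        return r

    def write_open(path):
        # If confined programs are reading this file, snapshot it and
        # retarget them; the writer gets no error and no hint.
        rs = readers.pop(path, [])
        if rs:
            snapshot = path + "." + uuid.uuid4().hex
            shutil.copyfile(path, snapshot)
            for r in rs:
                r.path = snapshot
        return open(path, "r+b")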
The main difficulty, as example 5 illustrates, is to