[cap-talk] Description of s13 - in context, confinement addendum
Jonathan S. Shapiro
shap at eros-os.com
Wed Jul 18 10:26:43 EDT 2007
Okay. A couple of minor editorial comments for the benefit of future
writing, but I now think that I understand the scheme. Some of my
editorial comments probably reflect how I read more than how you should
write. Take them to the extent that you find them helpful, and leave the
rest.
On Tue, 2007-07-17 at 23:47 -0700, Jed Donnelley wrote:
> Firstly I think you need to understand the environment at LLNL in...
I truly enjoyed reading that, and I am interested, but I found it a bit
distracting in the present context. We are not presently trying to examine
this system in the context of its invention. We are presently trying to
examine it in the context of a modern, distributed capability system. To
that extent, the environment at LLNL isn't really germane.
I suppose that I am a depressingly linear being.
> C. Our systems were separated by an air gap from any
> other hardware or software...
> So ... When you suggested that we should have done end-to-end
> encryption, you didn't understand the environment of the time.
> At the time there was no value to us in end-to-end encryption.
> It wouldn't have protected us from any legitimate threat and
> of course it would have come at a significant cost.
For purposes of our current discussion, I would characterize this as
link-layer security achieved by physical means. When you have total
control over everything read and written on the wire, link layer and
transport layer encryption are largely unnecessary. In my own comments
about transport layer security, I would have done better to emphasize
that transport layer security should be solved as a separate problem,
rather than saying that transport layer encryption (a particular
solution) should be assumed.
> What did concern us was that with the password capabilities
> that we were using, every time one of those dumps was sitting
> in one of those boxes, anybody could pick up the dump and
> with a little work extract the password capability for the
> "home" directory for the user.
Yes. And I find this point useful. Whether the problem is paper in boxes
or tapes getting mis-shipped, the essence of the issue is that
a) we need a backup mechanism that goes to some media
b) offline media gets stolen or misplaced, and
c) the combination of (a) and (b) can lead to security issues.
My personal view today is that anybody who fails to encrypt tapes
(media) that hold critical data should be fired. This reduces the
problem to preserving the recovery key, which is more manageable
primarily because it can be handled at human scale where mass archives
cannot. Still, there is a major issue to be addressed here.
Dumps are a different problem, because they are not useful unless they
appear in clear text. But they seem to be a very unusual example -- I
cannot think of a *second* example requiring offline clear text of raw
memory. Thankfully (at least in my opinion) memories today are too large
for dumps to be useful.
> I remember having an interesting debate with Andy Tanenbaum
> (Amoeba, Minix: http://en.wikipedia.org/wiki/Andrew_S._Tanenbaum )
> when I was in London to give the Managing Domains paper.
> Andy argued strongly that there was no need to protect
> capabilities specially in the memory of processes because
> the processes memory space already needed to be protected.
> If some of the memory of processes was being exposed
> improperly, then that was the problem that should be
> addressed, not giving extra protection to the more sensitive...
I tend to agree with Andy on this issue, but part of the reason may be
that I matured (well, I allege that I matured :-) professionally after
dumps had gone the way of the diplodocus (sorry). I *do* recall fast
line printers. I suspect that the alt.sex.posters newsgroup extended the
life of the band-driven line printer business for several years.
[For those of you who don't recall alt.sex.posters, suffice it to say
that the adoption of commodity bitmap displays could be reliably tracked
by the growing weekly bandwidth of the alt.sex.pictures newsgroups, and
the parallel bandwidth decline in alt.sex.posters.]
Ancient history aside, can you think of a second example in the "dump"
category?
> The private encryption key and the intermediate results can be
> protected in several ways. For example, in a multiprogrammed OS
> component the transformations can be performed by the OS kernel in
> response to a virtual user instruction (only the kernel knows the
> process's private decryption key). In a smaller single-domain system
> (e.g., a microprocessor system), it might prove effective to have the
> transformations performed in a hardware device that alone knows the
> system's private decryption key.
> The above seems to me to make the questions you ask fairly
> clearly answered, including:
Quite the contrary. The paragraph above was the root cause of most of my
confusion, precisely because it does *not* answer my question. It uses
descriptive words like "may" and "can". The description is open ended,
in that it fails to describe what the requirements of a suitable
implementation are. It does not adopt any specific position even as an
illustrative example. As a result, it doesn't preclude any
implementation having private keys. There is nothing in this
description, for example, which would preclude private keys held by
their respective domains, and the diagram layout encouraged me in this
misreading.
> >2. Which parts of which keys are held by whom? Which parts by the
> >respective domains and which parts by their supervisors? Without this,
> >it is not possible to assess what protection is actually achieved.
> The private key is only available to the supervisor.
Good. That *is* a definitive statement of an acceptable implementation.
It definitely wasn't clear to me that this was so from the text.
I found your email description of the OS instructions clear. The
description of #s13 would have benefitted from their inclusion as well.
Perhaps they were included implicitly from the larger document context
that I did not have time to review.
It is also now clear to me who is responsible for what, with the caveat
that in an efficient implementation the supervisor will probably have to
apply public keys as well, because the transforms are expensive enough
to warrant partial precomputation.
> >4. The notation of the diagram is confusing. I understand the
> >annotations on the arcs and the "in memory" notations (though the "in
> >memory" notations appear poorly chosen to me).
> In what sense? I don't see how there could be any other notation
> used. Perhaps you are referring to the label "In memory"?..
No no. That label and its associated text was clear. The problem was
that I had to go back and forth several times between the text and the
diagram to understand the head/tail annotations. Nothing in the diagram
alone suggested a transformative operator to me.
I suspect this is more a layout problem than anything else, and the
graphics options at the time you drew that diagram were, at best,
limited. I remember in the 1980s developing a quad-tree subdivision
algorithm in order to do depth-respecting rendering on a pen plotter for
some diagram I wanted. Nowadays we take that sort of stuff completely
for granted.
> >5. Issues of transmission are being horribly conflated with issues of
> >in-memory capability protection.
> I'm not sure how you got that conflation out of the paper.
I think this relates to my great confusion about the entry/exit
transforms. They had the look of the sort of transforms that
applications might use to implement link encryption at the application
layer, and I misread them entirely.
> >If these assumptions are correct...
> They aren't. I hope you'll take the time to reconsider the
> mechanism with the correct assumptions.
Gladly. That is why I asked about them! I will respond to the rest
separately.
> What interested me about this whole discussion is that
> one could even get the effective value of a descriptor
> based capability system (both capability protection
> as I believe you've described and confinement, along
> the lines of DVH, RATS, etc.) with capabilities pretty
> much as data...
I disagree. You are getting *some* of the properties of a descriptor
system, but not all.
> Of course the protection in such a scheme depends on the
> safety of the cryptographic operations - which a classic
> c-list system does not.
I confess to a blind spot here. If cryptography fails, I will be much
more worried about the banking system than I am about my personal or
business data. On the other hand, that might not be true if I worked in
certain sections of LLNL.