Against Opaque Boxes
Mark S. Miller
Wed, 03 Nov 1999 11:27:52 -0800

I doubt very much that I would feel safer running EROS on "tamper resistant hardware" than on a conventional, non-tamper-resistant hardware platform. Since you apparently can't say very much about the kind of tamper resistance you have in mind, I'll speak for the "opaque box" model that Drexler & I describe. I realize my criticisms may not apply to the system you have in mind, and that you may not be able to say so. Nevertheless, my points are fairly generic and apply to all the tamper-resistant systems I am aware of. For contrast, and for reasons I will explain, I will refer to conventional non-tamper-resistant systems as "transparent boxes".

The Danger: How do we know the manufacturer has not installed a trap door?

This is the "over a barrel" problem that Ben referred to before. Both opaque and transparent boxes have some of this problem. However, every level of the TCB I am standing on in the transparent-box world can be multiply sourced. Every level above the hardware is open to inspection, given that I choose open-sourced & widely reviewed software for my TCB. There is the Ken Thompson attack -- how do I know that the binary I'm running corresponds to the widely reviewed source? -- but I can compile it myself through compilers obtained from a number of vendors. How do I know that the platform I'm running the compiler on hasn't been compromised? I don't, but I have many choices of platform on which to run the compiler. In any case, where things get interesting, and interestingly different between our two scenarios, is at the hardware level.
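To make the multi-vendor defense against the Thompson attack concrete, here is a deliberately toy sketch. All names are illustrative: "compilers" are just functions from source text to "binary" text. (In reality, two honest compilers produce differing binaries, so the real check is behavioral equivalence or recompiling a compiler through an independent one; this toy uses a canonical translation purely to keep the idea visible.)

```python
# Toy model of the Thompson attack and the multi-vendor defense.
# Everything here is illustrative, not a real toolchain.

def honest_compiler(source: str) -> str:
    # An honest toolchain: a faithful (here, canonical) translation.
    return "BIN:" + source

def subverted_compiler(source: str) -> str:
    binary = "BIN:" + source
    if "login" in source:            # trigger: recognize the target
        binary += "+TRAPDOOR"        # silently insert a trap door
    return binary

tcb_source = "login: check_password(user)"

# With a single supplier the discrepancy is invisible; with an
# independently sourced compiler we can diff the two results.
a = subverted_compiler(tcb_source)
b = honest_compiler(tcb_source)
print("outputs differ -> inspect!" if a != b else "outputs agree")
```

The point is not the diff itself but the independence: the attack survives only as long as every toolchain you can obtain colludes.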

Let's say Intel is manufacturing compromised chips. A first question that may arise is, could they put in a trapdoor that they (or bad guys collaborating with them) can use, despite the fact that they have no influence on the OSes that we (the good guys) use? Furthermore, can this trapdoor escape detection by good-guy OS kernel hackers (such as yourself)? Unfortunately, the answer (as I suspect you already know) is yes. An example of such an attack from Norm:

The bad guys generate a random 128-bit string. From earlier discussions, some of us believe this to be long enough to evade brute-force search. Cutting it in two, we have two 64-bit double-precision floating point numbers, A and B. The bad guys (Intel themselves, or a conspiracy within Intel) arrange that when the CPU is asked to divide A by B, it also puts the CPU into supervisor mode. Since divide is just a normal user-mode instruction, the bad guys know that, no matter what OS people write for the chip -- EROS included -- they can still subvert its security.
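Norm's attack is easy to express as a toy software CPU, which also shows why the same trapdoor is visible in an emulator's source but invisible in silicon. The secret key below is a stand-in, not a real exploit:

```python
import struct

# Toy model of the divide-instruction trap door described above.
# The "magic" operands come from a (here, fixed stand-in) secret
# random 128-bit string split into two 64-bit doubles.
SECRET = bytes(range(16))                 # attacker's 128-bit key
A, = struct.unpack(">d", SECRET[:8])      # first 64 bits as a double
B, = struct.unpack(">d", SECRET[8:])      # second 64 bits as a double

class ToyCPU:
    def __init__(self):
        self.supervisor = False           # boots in user mode

    def fdiv(self, x: float, y: float) -> float:
        result = x / y                    # the documented behavior...
        # ...plus the hidden trigger: dividing A by B silently
        # escalates privilege, invisible to any OS running on top.
        if (x, y) == (A, B):
            self.supervisor = True
        return result

cpu = ToyCPU()
cpu.fdiv(10.0, 2.0)
print(cpu.supervisor)        # False: ordinary divides are unaffected
cpu.fdiv(A, B)
print(cpu.supervisor)        # True: the trap door has fired
```

Since only one operand pair in 2^128 fires the trigger, no amount of black-box testing by OS authors will stumble onto it.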

What defense do we have against this problem? The first-order answer is "not much, it sucks". However, Intel has no monopoly on the x86 platform, and I am more confident that Intel, AMD, and SoftPC are not in a conspiracy together than I am that Intel isn't subverted. So the second-order answer is that we might lose most of the world in a horrible blight, but not all of it. Plagues are bad, but extinction is worse.

Actually, SoftPC, though a competitive x86 platform, is software, so mischief there is more easily detectable. The mischief to worry about would still be in the underlying hardware, such as the PowerPC chip, but this opens up all CPU manufacturers as sources of competitive supply and separate trust. Instruction-set-independent software does likewise.

The third-order answer is that people still pop the tops off chips and put them under electron microscopes. If bad guys at Intel put this trapdoor in, there's always the possibility someone could find it.

                                   The Opaque Box Alternative

The purpose of opaque boxes, in the Drexler & MarkM proposal and in the literature I have seen, is to protect the integrity of the hardware from someone who has physical access to that hardware. First, let me say that I think this is of tremendous practical benefit within companies: none of the employees of a company is the company, the company's computers (especially servers serving the outside world) are supposed to act in the interests of the company, yet only employees of the company, not the company itself, are ever given physical access to those computers. (The company is also vulnerable to software written by the company's programmers, but this latter is, at high cost, reviewable by others within the company.) This is apparently hard to solve with conventional checks and balances: 80% of bank thefts are inside jobs, and banks have, shall we say, a large incentive to think about this issue.

However, computers increasingly pervade our personal lives as well, and companies are getting smaller. So let's examine personal scenarios, as Jonathan does as well. If I run EROS on an opaque box sealed by Intel, I am just as vulnerable to trapdoors by Intel as I would be in a transparent-box scenario. However, if my box & TCB engage in Jonathan's suggested handshake before distributing capabilities, then my box can only communicate with other boxes trusted by my box. Would a box sealed by Intel trust one sealed by AMD? How about one sealed by Joe's garage and CPU shop?
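The balkanization worry can be sketched in a few lines. The handshake details (and the cryptography behind a "seal") are omitted; the manufacturer names and trust lists below are hypothetical, chosen only to show how capability flow fragments along sealer lines:

```python
# Sketch of the sealed-box handshake's trust problem: each box
# carries a "seal" naming its manufacturer, and will exchange
# capabilities only with boxes whose sealer it trusts.

class Box:
    def __init__(self, sealer: str, trusted_sealers: set):
        self.sealer = sealer
        self.trusted = trusted_sealers

    def handshake(self, other: "Box") -> bool:
        # Capabilities flow only if each side trusts the other's sealer.
        return other.sealer in self.trusted and self.sealer in other.trusted

intel_box = Box("Intel", {"Intel"})
amd_box   = Box("AMD",   {"AMD", "Intel"})
garage    = Box("Joe's garage", {"Intel", "AMD", "Joe's garage"})

print(intel_box.handshake(amd_box))   # False: Intel's list omits AMD
print(amd_box.handshake(garage))      # False: AMD's list omits Joe's
```

Notice that trust here is decided by the trust lists the sealers ship, not by my judgment of the person I'm actually talking to -- which is exactly the objection developed below.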

If I can't talk to you because my box doesn't trust that your box has prevented you from compromising it, who is my box protecting? Perhaps my actual trust that you haven't compromised your transparent box exceeds my trust that Intel hasn't put in a trapdoor. Or (the conventional distributed capability assumption) maybe the rights I am giving to software on your box are rights I am willing to let you use for other purposes; otherwise I wouldn't be giving them to software running on hardware that you control.

Notice that if they are running EROS on transparent but unsubverted hardware, they have little incentive to take any action that unknowingly compromises their own software. And I find it hard to imagine tricking someone into unknowingly compromising their hardware.

I submit that I am safer by separately trusting individuals with subvertable access to their own separately trustable hardware than I am trusting a monopolistic opaque box manufacturer, or a mutually-trusting consortium of such manufacturers. The coordinating activity to bring about such a consortium destroys the confidence we may have had that there is no comprehensive conspiracy. I would find a world of EROS on transparent boxes best, but I'll take the current Windows world over one in which EROS runs on opaque boxes, and only communicates to boxes that it trusts.

A world in which our security requires us all to trust one central organization, whether a world govt, a CA-root, or such a consortium, is a world vulnerable to extinction rather than merely to plagues.

Again, my statements about opaque boxes may not apply to what Jonathan has in mind, or they may. Until more information is public, we work with what we can.