[cap-talk] Virtual Machine Based Rootkits
jed at nersc.gov
Thu Sep 14 19:56:16 CDT 2006
At 10:48 PM 9/13/2006, Bill Frantz wrote:
>donnelley1 at webstart.com (Jed at Webstart) on Thursday, August 3, 2006 wrote:
> >My understanding is that all it
> >takes to be "fully virtualizable" is to have all privileged operations
> >trap in "user" mode.
>[Sorry to be so late replying. I've been traveling.]
>Having all privileged operations trap in "user" mode is necessary but
>not sufficient. On some Intel architectures, there were instructions
>that executed differently in privileged mode and in user mode. If I
>remember correctly, some extra information was returned in privileged
>mode.
I was considering such an instruction "privileged" in the above sentence,
though I was of course being brief and informal.
>To be fully virtualizable, these instructions would also have to
>trap. I would say an additional criterion is, "All user mode
>instructions must have the same specification in both privileged and
>user mode."
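The condition we're circling around here is, in essence, a set inclusion: every "sensitive" instruction (one whose behavior or visible results differ between modes) must also be privileged, i.e. must trap in user mode. A toy sketch, with purely illustrative instruction names (not any real ISA):

```python
# Popek & Goldberg's core condition sketched as a set check.
# "Sensitive" = behaves differently (or exposes mode state) in user
# mode; "privileged" = traps when executed in user mode.
# Instruction names are invented for illustration.

SENSITIVE = {"load_psw", "set_timer", "read_mode_flags"}
PRIVILEGED = {"load_psw", "set_timer", "read_mode_flags", "halt"}

def classically_virtualizable(sensitive, privileged):
    """True iff every sensitive instruction traps in user mode,
    i.e. the sensitive set is a subset of the privileged set."""
    return sensitive <= privileged

print(classically_virtualizable(SENSITIVE, PRIVILEGED))       # True

# The kind of counterexample Bill mentions: an instruction that
# silently does something different in user mode instead of trapping.
SENSITIVE_BAD = SENSITIVE | {"popf"}
print(classically_virtualizable(SENSITIVE_BAD, PRIVILEGED))   # False
```

The second check fails because "popf" is sensitive but not privileged, so a VMM can never regain control when the guest executes it.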
At 11:36 PM 9/13/2006, Mark S. Miller wrote:
>Section 10.4 of my thesis summarizes the classic paper on this topic:
>Popek and Goldberg's "Formal Requirements for Virtualizable
>Third Generation Architectures" [PG74] explains the
>conditions needed for a hardware architecture to be cleanly
>virtualized.
I well remember that work by Popek. He was working with DEC
under an ARPA contract to use a virtual machine architecture to
develop a secure operating system. I remember it well because I
was working in that area on the RISOS (Research Into the Security
of Operating Systems) project and considered the development of
such a secure OS difficult - VMM or no VMM. One of the things I
remember is that every year for at least two and perhaps three years
Dr. Popek spoke at the Fall Joint Computer Conference
suggesting that by the next year he would have an operating
system proven secure.
As far as I know it never happened. I don't remember if the VMM for
the PDP-11/45 ever happened, though I assume something was
demonstrated, since there is a paper on it:
"The PDP-11 virtual machine architecture: A case study"
Ah, I see from rereading the above that they learned some things:
"Architectural changes contain pitfalls for the unwary. Desires to
slide hardware changes "under" an existing architecture
arise in a number of other areas. When protection and security are
important, for example, capability and domain
architectures are often proposed. Proponents are advised however
that, despite considerable early effort to
foresee difficulties, not all problems were uncovered by the UCLA
project until large portions of the detailed design were
nearly complete. A few of the hardware peculiarities mentioned in
the appendix were not noticed, and the magnitude of
certain of the sources of performance overhead were inaccurately estimated."
That suggests to me that their work stayed an academic one-off, but
at least they got that
far and learned what the pitfalls were. He also later says:
"It has been argued that one of the most promising application areas
for program verification, at least with
respect to cost effectiveness, is in code that is a) frequently
executed for many users, and b) whose failure has
significant consequences: in other words, segments of operating
systems. However, verification methods first
require an axiomatic representation of the environment in which the
programs of interest run. Operating system
code has hardware details, rather than high level programming
language constraints alone, as part of its relevant
[environment. These details] complicate the verification task considerably, precisely at one of
the points where it could be so useful. Until hardware
architectures are simplified, this impediment is likely to limit the
utility of operating system verification."
which again suggests some learning, more than was evident in the talks I heard.
It's also interesting to me to hear him extol the virtues of the
UNIBUS I/O architecture in the
Formal Requirements paper, e.g.
"One key restriction in the model is the exclusion of I/O devices and
instructions. While it is commonplace
now to provide users with an extended software machine without
explicit I/O devices or instructions, there is one
late third generation hardware machine that exhibits this appearance.
In the DEC PDP-11, I/O devices are
treated as memory cells and I/O operations are performed by doing the
proper memory transfer to the [appropriate locations]."
(back when he was trying to get access to such machines and
cooperation from DEC) and later bemoan
that same architecture once the full implications of that
architecture for a virtual machine monitor
were better understood, e.g.
"Virtualization of the PDP-11/45 is practical, although with more effort than
might be expected." and "UNIBUS style I/O architectures are generally
[troublesome] with respect to virtualization."
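The trouble is easy to see once you note that a memory-mapped device is driven by ordinary loads and stores, which are not privileged instructions and so never trap on their own. A VMM must instead leave the device pages unmapped and emulate each faulting access. A minimal sketch of that idea (the addresses and register layout are invented, not the real PDP-11 I/O page):

```python
# Why UNIBUS-style memory-mapped I/O complicates a VMM: plain MOVs to
# device addresses don't trap, so the VMM must catch them as memory
# faults and emulate the device. Addresses here are hypothetical.

DEVICE_BASE = 0o160000  # illustrative start of an "I/O page"

class TinyVMM:
    def __init__(self):
        self.ram = {}           # guest "physical" memory
        self.device_regs = {}   # emulated device registers

    def store(self, addr, value):
        if addr >= DEVICE_BASE:
            # Device page is unmapped, so the access faults into the
            # VMM, which emulates the device register write.
            self.device_regs[addr] = value
        else:
            self.ram[addr] = value            # ordinary memory, no trap

    def load(self, addr):
        if addr >= DEVICE_BASE:
            return self.device_regs.get(addr, 0)
        return self.ram.get(addr, 0)

vmm = TinyVMM()
vmm.store(0o1000, 42)          # plain memory: VMM need not intervene
vmm.store(0o160010, 0o177777)  # "device register": must be emulated
print(vmm.load(0o160010))      # 65535
```

The cost is that every device access takes a fault and an emulation round trip, which is one source of the performance overhead the UCLA paper says was underestimated.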
We also had a PDP-11/45 at LLNL, though without the VM support features
they added at UCLA. That was the system that the RATS OS ran on
before we moved to a PDP-11/70.
You may recall that during that time frame the VMM support for the
IBM 360 line was already pretty well established. I think by the
1973/1974 time frame the virtual machine concepts were already
pretty well developed. There was even a conference or two devoted
just to that topic. As with many of the formalization efforts
of the time, what Popek and Goldberg did was to try to formalize
the requirements for a VMM, as they say:
"Formal techniques are used to derive precise sufficient conditions
to test whether such an
architecture can support virtual machines."
That was an interesting time period in computer science. It still
amazes me that virtualization
went so far underground for so many years, though perhaps I should
be surprised that it
arose again at all. In any case it gives me hope that capability
architectures (not necessarily
hardware) can arise again.