[cap-talk] Another "core" principle - virtualizing memory
capability at webstart.com
Tue Jan 2 02:16:26 CST 2007
At 09:52 PM 1/1/2007, Jonathan S. Shapiro wrote:
>On Mon, 2007-01-01 at 21:07 -0800, Jed Donnelley wrote:
> > At 08:59 AM 1/1/2007, Jonathan S. Shapiro wrote:
> > >Jed: if this is part of the Process object, please explain how processes
> > >can share memory,
> > This is what I described before that you criticized as merely a mechanism
> > for distributed shared memory. It can be distributed or not, it works
> > locally or remotely. It works because this is how processors must work.
> > Just to repeat the overview (for anybody that didn't read the other
> > message in detail), the basic idea is that data storage objects
> > (call them files, or segments, or whatever) are passed to the Process
> > object for mapping into memory. They need appropriate locking with
> > notification (write locks and read/write locks) for the shared
> > memory application. This is all rather simple.
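To make the mechanism concrete, here is a minimal sketch of the idea
in Python. All names (DataObject, Process, the lock/unlock shape) are
hypothetical illustrations of what I described, not any real system's
API; a real implementation would block conflicting lock holders, which
is elided here.

```python
class DataObject:
    """A storage object ("file, segment, or whatever") offering read
    and write plus lock operations with change notification."""
    def __init__(self, size):
        self.data = bytearray(size)
        self.watchers = []          # callbacks notified on write-lock release

    def read(self, offset, length):
        return bytes(self.data[offset:offset + length])

    def write(self, offset, payload):
        self.data[offset:offset + len(payload)] = payload

    def lock(self, kind, on_release):
        # kind: "read" (shared) or "write" (exclusive); blocking of
        # conflicting holders is omitted from this sketch.
        self.watchers.append(on_release)

    def unlock(self):
        for notify in self.watchers:
            notify()                # tell sharers their cached view is stale
        self.watchers.clear()


class Process:
    """Process object: mapping is an operation on the process, taking
    the data object to be mapped as an argument."""
    def __init__(self):
        self.mappings = {}          # virtual address -> DataObject

    def map(self, addr, obj):
        self.mappings[addr] = obj
```

Two processes that map the same DataObject share it: writes by one,
bracketed by the write lock, are visible to the other, and the unlock
notification tells sharers to refresh.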
>You have reduced the problem to a previously unsolved problem, which is
>the construction of "files, segments, or whatever". As I stated in my
>earlier mail, each of these is an example of an address space. So your
>answer amounts to "you build an address space by mapping address
>spaces".
No. Just because a data segment has an address space (0...n)
doesn't mean that it can't map into a separate process address space.
>So you seem to propose that there is a relatively high-level kernel
>operation "map", which accepts as arguments:
> a process (implicitly: the invoking process),
> an address relative to the process's address space, and
> an address space to be mapped at that address (a "file, segment,
> or whatever") whose construction happens by unspecified means.
No. I propose that there is an operation on a Process capability,
"map", that accepts a data object (one that has read and write
operations, and as I suggest lock operations). The operation
specifies where the data in the data object should be mapped
into the Process's address space. Of course a process might
have access to its own capability, but it also may not.
In this discussion I admit that I'm not on as firm ground as
in the data buffering discussion (where I had many years
experience with a working system). The NLTSS system was
not a virtual memory system (the hardware didn't support
virtual memory), so the processes were fixed at a single
location in real memory.
My thoughts on this mapping of virtual memory go back to the
RATS system that was a virtual memory system. All this amounts
to is what I regard as a relatively minor "flipping" of
the "attach" operation from being on a "file" object to
being an invocation on a process capability passing in
a file (or any object that acts like a file).
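The "flip" amounts to no more than the following two interface shapes,
sketched in Python with invented names (neither is a real system's
API):

```python
class Process:
    def __init__(self):
        self.mappings = {}          # virtual address -> file-like object

    # Flipped shape: "map" is an invocation on the Process capability,
    # and the file (or anything file-like) is passed in as an argument.
    def map(self, addr, file_like):
        self.mappings[addr] = file_like


class FileWithAttach:
    # Conventional shape: "attach" is an operation on the file object,
    # which must therefore be handed authority over a process.
    def attach(self, process, addr):
        process.mappings[addr] = self
```

The flipped form puts control of the address space with whoever holds
the Process capability; the file object itself needs no authority over
processes at all.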
>This operation is somehow entitled to charge a process for the storage
>that is used by the low-level data structures that record the desired
>mapping.
Of course, though I view the term "somehow" as meaninglessly
pejorative in the above.
>How does the kernel know what storage pool should be used as
>the source of this storage (which is the essence of storage
>accountability)? Since the lower-level mapping data structures are
>pageable, this is an allocation of *real* resource, and it really needs
>to come from an explicitly designated storage pool.
I'm sorry, but the term "storage pool" is meaningless to me. From
my perspective both file objects and process objects use rotating
storage for their data. In the NLTSS system the Process server
used a file to store the metadata and real data (e.g. register
and memory state) for a Process. There really is no difficulty
there. The storage is charged to whatever account was used to
create the object.
>Your solution does not satisfy the accountability requirement.
Can you define your "accountability requirement"? In my solution
all storage devolves to rotating storage that is accounted for
as is any object. Real memory use for us was tied into "CPU"
charges and depended on real memory residency * time.
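As a toy illustration of that residency * time charging (the rate and
units are invented; the function name is hypothetical):

```python
def memory_charge(samples, rate_per_page_second):
    """Charge real-memory use as residency * time.

    samples: list of (resident_pages, seconds) intervals.
    """
    return sum(pages * secs for pages, secs in samples) * rate_per_page_second

# e.g. 100 pages resident for 2 s, then 50 pages for 4 s:
# 100*2 + 50*4 = 400 page-seconds at the given rate
```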
>Can you explain how your DSM-style solution accounts for the behavior of
>load and store instructions, which *must* be reifiable as capability
>invocations if the system is to remain a pure object-capability system?
Load and store instructions act according to the processor architecture
on real memory - or trap. I fail to understand why you seem to
argue that each individual load and store instruction must
act as a capability invocation. From my perspective it's perfectly
adequate to have the invocations on the storage (rotating storage)
objects whose data is mapped into memory appear as capability
invocations when rotating storage is read into or written from
memory. This happens at a larger granularity (generally at
page faults). This approach mirrors what's actually going on.
What's the problem?
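The path I have in mind might be sketched like this (all names are
hypothetical): individual loads and stores hit real memory directly,
and only a page fault crosses into a capability invocation on the
backing storage object.

```python
PAGE = 4096

class BackingObject:
    """Capability-protected storage object; read_page is the only
    capability invocation, occurring at page granularity."""
    def __init__(self, size):
        self.store = bytearray(size)
        self.invocations = []              # log of capability invocations

    def read_page(self, page_no):
        self.invocations.append(("read", page_no))
        return self.store[page_no * PAGE:(page_no + 1) * PAGE]


class AddressSpace:
    def __init__(self, backing):
        self.backing = backing
        self.resident = {}                 # page_no -> resident page copy

    def load(self, addr):
        page_no, off = divmod(addr, PAGE)
        if page_no not in self.resident:   # fault: one capability invocation
            self.resident[page_no] = bytearray(self.backing.read_page(page_no))
        return self.resident[page_no][off] # ordinary loads hit real memory
```

Once a page is resident, any number of loads touch it without further
invocations; only the first touch of each page shows up as an
invocation on the storage object, mirroring what the hardware actually
does.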