[cap-talk] capability systems and the orange book
btulloh at gmail.com
Thu Apr 21 10:43:11 EDT 2005
I stumbled across this paper, which gives an interesting perspective on
the reception of capability systems (PSOS, GNOSIS/KeyKOS) among the
NSA/DoD trusted-system folk during the early days of the development
of the Orange Book.
Schaefer, Marvin (2004) "If A1 is the Answer, What was the Question?
An Edgy Naïf's Retrospective on Promulgating the Trusted Computer
Systems Evaluation Criteria" Proceedings of the 20th Annual Computer
Security Applications Conference (ACSAC'04)
This clash of paradigms and general skepticism towards capabilities
seems to permeate the post-PSOS line of research that starts with
Boebert's Secure Ada Target and continues with today's Security-Enhanced
Linux:

PSOS -> SAT -> LOCK -> DTOS -> Flask -> SELinux

See, for example, the discussion of KeyKOS in the "DTOS General System
Security Assurability Assessment Report,"
and the discussion of capability-based systems in "The Flask Security
Architecture" paper.
From Schaefer, pages 12-13:
2.6.2. Distributed mediation, capabilities, PSOS, and Gnosis.
Still, the prejudice against the centralized security kernel concept
manifested itself in an altogether different way. It was argued that
automated formal code verification (or mechanical "proof of
correctness") was closer to becoming available, and soon all operating
system code – and then hardware design and implementation –
correctness could be established as mathematical fact. Thus, it would
be possible to include all required security checking as part of each
system module or function, thereby eliminating needless
access-checking function calls and their costly context switching.
Moreover, there would be no separation of call from function, and
hence no need for the access-checking functions to derive or establish
the relevant context of the requested operation in concert with the
semantics of the application.
And so, a movement gained momentum to design and field systems
structured along the lines of distributed mediation and that had no
distinct security perimeter other than the [most] privileged (or most
primitive) part of the operating system itself. The first such
research study, the capability-based Provably Secure Operating System
(PSOS) project, yielded:
• A methodology for the design, implementation, and proof of properties of large computing systems
• The design of a secure operating system using this methodology
• The security properties to be proven about the system
• Formal verification methods and tools that came to be known as the Hierarchical Development Methodology (HDM) and the formal specification language SPECIAL
• Considerations for implementing such a system, and
• An approach to monitoring security and performance.
PSOS was rigorously decomposed into a hierarchical specification that
had no upward functional- or data-dependencies. The unique protection
mechanism was a capability, a form of unforgeable, immutable token,
possession of which granted a set of specific access rights to the
object to which it was linked. The PSOS concept yielded considerable
new research, but left open the question of how a secure system is to
be initially configured, how the first capability was to be created,
and how one could algorithmically examine a capability distribution
and determine whether or not a system was in a secure state. In
addition, there were no efficient means of determining which users
possessed capabilities to which objects. Despite the open questions,
it was asserted that PSOS and its proven design could implement a
secure multilevel operating system.
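The capability model Schaefer describes — an unforgeable, immutable token whose possession grants a fixed set of rights to one object — can be made concrete with a small sketch. This is my illustration of the general idea, not PSOS's actual mechanism; the names (`Capability`, `restrict`, `invoke`) are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen => immutable; the instance itself is the token
class Capability:
    obj: object        # the object this capability designates
    rights: frozenset  # e.g. frozenset({"read", "write"})

    def restrict(self, *keep):
        # Derive a weaker capability; rights can only shrink, never grow.
        return Capability(self.obj, self.rights & frozenset(keep))

def invoke(cap, right, op):
    # The access check is purely possession-based: no ambient user
    # identity, no central reference monitor -- just the token in hand.
    if right not in cap.rights:
        raise PermissionError(f"capability lacks {right!r}")
    return op(cap.obj)

# Usage: the holder of a read-only capability cannot write.
doc = {"text": "secret"}
full = Capability(doc, frozenset({"read", "write"}))
ro = full.restrict("read")
invoke(ro, "read", lambda d: d["text"])   # allowed
```

Note how the open question Schaefer raises falls out of the design: since authority lives in whoever happens to hold a token, answering "which users possess capabilities to which objects" requires enumerating every holder's tokens rather than consulting one central table.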
Norm Hardy, Charlie Landau and Bill Frantz designed the Great New
Operating System In the Sky (GNOSIS) while at Tymshare, Inc.
GNOSIS, unlike PSOS, was commercially developed and implemented a
capability-based time-sharing environment similar to that of VM/370's
Cambridge Monitor System (CMS) interface. Questions similar to those
raised in PSOS remained to be answered in GNOSIS and its successor,
KeyKOS.
From Schaefer, page 15:
From the Pentagon, Steve Walker had put together a few assorted teams
of experts from academia and industry with the intention of providing
assistance to vendors who were interested in developing trusted
products that could be used by the DoD. Ted Lee and I participated in
several of these efforts along with a seasoned group of security
practitioners like Pete Tasker, John Woodward, Anne-Marie Claybrook,
Susan Rajunas, and Grace Nibaldi from MITRE Bedford. Under
nondisclosure agreements, the teams were also performing ad hoc
product "evaluations" using the Nibaldi draft criteria.
One of the products under consideration didn't appear to fit Nibaldi's
working criteria at all well. This was Tymshare Corporation's
capability-based Gnosis system. Susan Rajunas, who had been leading
the evaluation, was particularly articulate about the Gnosis design
and strength of its mechanisms. But there were numerous open questions
about the definition of secure state, of how one attained an initial
secure state, how individual accountability could be established in an
environment where capabilities were inscrutable, and where possession
of a capability could conceivably be used by a Trojan horse. Rajunas
was funded to assemble a workshop to investigate assembling a set of
interpreted criteria for evaluating a trusted capability base.
I requested that Earl Boebert, who led a project to develop a system
based on PSOS, the Secure Ada Target (SAT), write a paper for an NCSC
Conference showing that multilevel security confinement could not be
assured in a pure capability-based operating system. A year
earlier, Paul Karger had written a paper on a design that
augmented capabilities to overcome such intrinsic shortcomings.
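The confinement worry behind Boebert's argument can be sketched briefly. This is my own toy illustration, not Boebert's actual construction: in a pure capability system, capabilities travel over the same channels as ordinary data, so a Trojan horse holding a capability can both exercise it and smuggle it out. All names here are invented for the example.

```python
# The secret a high-clearance subject is supposed to keep confined.
secret = {"data": "TOP SECRET"}

def reader(obj):
    # Capability-as-closure: possessing this function *is* the
    # authority to read `obj`; nothing external mediates its use.
    return lambda: obj["data"]

# Any object both subjects can write and read acts as a data channel.
shared_channel = []

# High-side Trojan horse: since a capability is just a value, it can
# be written to the channel exactly like ordinary data.
shared_channel.append(reader(secret))

# Low side: whoever pulls the capability off the channel can use it;
# possession alone suffices, so confinement is violated.
leaked_read = shared_channel.pop()
leaked_read()
```

Karger's augmented design, as Schaefer notes, addressed exactly this class of leak by adding checks beyond bare possession.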
About this time, I heard Butler Lampson's observation: "Capability
based systems are the way of the future—and they always will be."