[cap-talk] Opinions of oauth?
dmbarbour at gmail.com
Thu Jan 5 17:53:24 PST 2012
On Thu, Jan 5, 2012 at 2:05 PM, Jonathan S. Shapiro <shap at eros-os.org>wrote:
> In my experience there is a scaling problem here. When a new document gets
> added to a group registry, there is a problem of discovery on the part of
> the impacted users (or rather, their clients).
> We really don't want them polling the group registries to get this information.
I agree. Polling is bad. It has awful latency to deliver updates
(round-trip and polling period). Reducing time between queries consumes
more bandwidth and CPU. Gathering data just in case it will be polled
becomes a potential waste of computing resources. Initializing and
finalizing a data-gathering process introduces resource management risks.
Caching, as you say, is tricky.
I do not use polling. Instead, I favor reactive programming models. All
queries are `live` queries, i.e. long term subscriptions. This gets me
latency close to a one-way trip, near-optimal bandwidth, and the
subscriptions themselves become a foundation for resource management (i.e.
just gather the data that is actively needed). Caches are easily associated
with the subscriptions.
Registries fit quite naturally into reactive systems. No need for polling.
Maybe you should look at some publish/subscribe systems for inspiration.
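To make the contrast with polling concrete, here is a toy sketch in Python of a registry with `live` queries - the registry pushes updates to subscribers rather than being polled. All the names (Registry, subscribe, publish) are illustrative, not from any particular system:

```python
class Registry:
    def __init__(self):
        self._entries = {}
        self._subscribers = []

    def subscribe(self, callback):
        """A live query: callback fires now and on every later change."""
        self._subscribers.append(callback)
        callback(dict(self._entries))  # deliver an initial snapshot
        # the unsubscribe closure doubles as a resource-management handle
        return lambda: self._subscribers.remove(callback)

    def publish(self, name, value):
        self._entries[name] = value
        for cb in list(self._subscribers):  # push; no one polls
            cb(dict(self._entries))

reg = Registry()
seen = []
cancel = reg.subscribe(seen.append)
reg.publish("doc", "v1")
cancel()                    # stop gathering data no one needs
reg.publish("doc", "v2")    # not observed after unsubscribe
```

Note how the subscription itself tells the registry exactly which data is still wanted - the resource-management story falls out of the same mechanism.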
> directory sizes and query result sizes matter. If the use of fine-grain
> caps means that I need to carry two caps where I previously needed one, and
> I've got hundreds of millions or billions of objects kicking around, and
> I'm running a search with a potentially large result set...
> Obviously there is a scaling limit here with or without the new capability
> types. At some point the search is just returning too much stuff to handle.
I tend to favor designs that are close to capability-per-method. This
simplifies facet patterns and composition. Capabilities can still be
grouped, of course, into records or collections. I'm willing to favor
fine-grained capabilities even when it means I have an order of magnitude
more capabilities to manage.
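Since bound methods are first-class in most object languages, capability-per-method costs very little to express. A toy sketch (the Counter and facet names are made up):

```python
class Counter:
    def __init__(self):
        self._n = 0

    def increment(self):
        self._n += 1

    def read(self):
        return self._n

c = Counter()

# Each bound method is itself a capability; a facet is just a record
# grouping the methods you choose to share.
read_facet = {"read": c.read}

c.increment()
value = read_facet["read"]()  # holder can observe but has no path to increment
```

The read-only facet here is exactly the kind of grouping into records mentioned above: composition stays simple because the unit of authority is the method, not the whole object.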
With regards to searches and directories: registries must be designed so
that most searches can be performed without a human in the loop. This
suggests searches should express preferences, taking the first match by
default. From a security perspective, it matters that developers and
users have sufficient control over their results - e.g. to favor results
from one registry before another. For scalability, it helps if the queries
can be lazy - i.e. computation proportional to the number of results
actually observed rather than possible.
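A generator gives a minimal sketch of such a lazy, preference-ordered search - registries are consulted in the caller's preferred order, and computation is proportional to the results actually consumed. The registry contents below are invented for illustration:

```python
def search(registries, predicate):
    """Lazily yield matches, honoring the caller's registry preference order."""
    for registry in registries:       # e.g. favor one registry over another
        for entry in registry:
            if predicate(entry):
                yield entry           # one result at a time, on demand

preferred = ["gtk-button", "gtk-label"]
fallback  = ["qt-button", "qt-label"]

results = search([preferred, fallback], lambda e: "button" in e)
first = next(results)  # only as much work as one result requires
```

Because the first match wins, reordering the registries (GTK before QT, say) reconfigures the system implicitly, without touching the query.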
A large result set can be leveraged as an advantage - i.e. another word for
`ambiguous` is `fallback`. Multiple options also support implicit system
configuration by manipulating the heuristic preferences of the registries
(e.g. to favor GTK instead of QT). I see a potential basis for resilient,
dynamic, adaptive software. Graceful degradation and self-healing are
within reach, though some more work is necessary to separate state
management from the transient services.
>> I actually approach this from the opposite direction: rather than
>> explicitly `revoking` capabilities, developers must logically continue to
>> `provide` a capability in order to maintain the grant. Revocation is
>> implicit - just stop granting.
> This is hard to accomplish in an intermittently connected distributed system.
I don't see why. The period of disrupted communication is simply the same
as `no longer granting` the capability.
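One way to model `grant by continued provision` is a lease: the grantor must keep renewing, and ceasing to renew - including by losing the connection - *is* the revocation. A toy sketch (the Lease class and TTL value are illustrative):

```python
import time

class Lease:
    """A grant that exists only while the grantor keeps providing it."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.renewed_at = time.monotonic()

    def renew(self):
        # the grantor logically continues to provide the capability
        self.renewed_at = time.monotonic()

    def is_granted(self):
        return time.monotonic() - self.renewed_at < self.ttl

lease = Lease(ttl=0.05)
granted_initially = lease.is_granted()  # actively provided
time.sleep(0.06)                        # grantor stops renewing, or the link drops
granted_after = lease.is_granted()      # grant lapses; no explicit revoke message
```

Disconnection and deliberate revocation are indistinguishable to the grantee, which is the point: there is no separate revocation protocol to get wrong.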
> Also, we need to handle the case where the publisher of an object goes away
> and the object needs to live on.
You are correct. I handle this with a design pattern:
* Someone creates and shares a service that can receive code and keep it
running.
* The publisher requests that service host an object with the given code on
its behalf.
IDEs might use this pattern for distributed live programming. But it is not
a pattern developers will often use directly. Managing resources or
activities that outlive their publisher is a challenge to be avoided if
possible.
> The problem with the object graph is that it can't be allowed to hold any
> of my capabilities in the first place, because my capabilities don't
> authorize *your* traversals through the graph. So now that I think about
> it, the problem of graph damage doesn't occur at revocation. It occurs when
> a reference of the wrong "flavor" is inserted.
Consider your ability to contribute to the object graph. You use your own
authority to create a service then integrate it with the system. Other
people may later grow to depend on your service. Your authority may be
revoked, thus breaking these dependencies. This is the damage I'm concerned
with.
One way to avoid the damage is to forbid or constrain your contribution.
This is the approach you suggest now - keeping your capabilities out of my
traversals. But I feel it is too restrictive for my own vision of
distributed systems programming.
I opt to control damage rather than avoid it. The temporal consistency
model at least ensures a `clean break` - i.e. at a particular logical
instant. Usefully, it also supports a seamless replacement with a fallback
service (or the same service via a different authority), assuming one is
available and the replaced service did not keep essential state.
> In the end, I think that when dynamic groups have to traverse shared
> mutable graphs, the recording of permissions needs to be separated from the
> recording of the object references. The presence of connectivity does not,
> in such systems, imply permission for any given user.
Well, we can certainly separate capabilities to observe a system from those
to manipulate it. Perhaps the separation you're looking for is between
object identity and references...