[e-lang] Decision Alignment (was: E research topics)

Mark S. Miller markm at cs.jhu.edu
Sun Apr 15 00:39:58 CDT 2007


More thoughts on research topics.

Decision Alignment <http://erights.org/decision/>.
Bill Tulloh and I have been talking about a larger taxonomic framework in 
which to organize plan coordination issues, both to emphasize the analogies 
between the human world and the computational world and to draw attention to 
the continuity of these issues across the interface between those worlds.

Say we have a system complex enough to describe in terms of interacting 
intentional entities, say Alice and Bob, and of potential cooperative 
interactions between them. Imagine that Alice is attempting to 
subcontract/delegate some portion of her job to Bob. In any such system, we 
need to examine how Alice can influence Bob to behave in a manner more likely 
to serve Alice's interests.

In the human world, when the question is put this way, the conventional 
economic point of view too quickly focuses almost exclusively on incentives as 
the answer. Indeed, our overall label, "Decision Alignment", is a takeoff on 
"Incentive Alignment".

I say "too quickly" because incentives only become the limiting issue after 
various more fundamental issues have been dealt with. For example, how hard is 
it for Alice to explain to Bob what Alice needs, in terms Bob can understand 
and act on? By jumping to incentive issues, the conventional economic 
perspective assumes away the difficulty of these logically prior issues.

Even after explanations are adequate, incentives are rarely sufficient by 
themselves for Alice to shape Bob's behavior, because Bob may have too many 
"moral hazard" opportunities. By contrast, if Alice can grant Bob only the 
narrow, least authority his task needs (a principle as relevant in the human 
world as we hope it will be in the computational world), she will often reduce 
her risks from Bob more than incentive alignment can.
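In the computational world, granting least authority can be sketched as 
attenuation: rather than handing Bob her full capability, Alice wraps it in a 
facet that exposes only what Bob's task requires. A minimal Python sketch of 
this idea (the Store class and the function names here are hypothetical 
illustrations, not anything from E):

```python
class Store:
    """Alice's full-authority resource: both read and write."""
    def __init__(self):
        self._data = {}

    def read(self, key):
        return self._data.get(key)

    def write(self, key, value):
        self._data[key] = value


def read_only_facet(store):
    """Attenuate: return a facet granting read authority only.

    Bob receives this facet, not the store itself, so he cannot
    write regardless of what incentives might tempt him to.
    """
    class ReadOnly:
        def read(self, key):
            return store.read(key)
    return ReadOnly()


# Alice keeps full authority; Bob gets only the narrow facet.
alice_store = Store()
alice_store.write("task", "spec")
bob_view = read_only_facet(alice_store)
assert bob_view.read("task") == "spec"
assert not hasattr(bob_view, "write")
```

The point of the sketch is that the narrowing happens structurally, before any 
question of Bob's motives arises.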

In the human world, the distinction between what we call "inspection" vs 
"monitoring" is often fuzzy. But the logical distinction is still meaningful. 
By "monitoring", we mean watching (sampling) Bob's output, how he behaves.  By 
"inspection", we mean examining Bob's internal mechanism, how he's 
constituted, to figure out if the logic of Bob's internal mechanism means he's 
unlikely to misbehave in certain ways.
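In code terms the distinction can be sketched roughly as follows: monitoring 
wraps Bob and samples his outputs as he runs, while inspection examines Bob's 
mechanism before running him at all. A toy Python sketch (the names monitor 
and inspect_mechanism, and the use of a code object's referenced names as a 
stand-in for "internal mechanism", are my illustrative assumptions):

```python
import random


def bob(x):
    """Bob's behavior, as subcontracted by Alice."""
    return x * 2


def monitor(f, check, rate=0.5):
    """Monitoring: sample f's outputs and test each sampled one."""
    def wrapped(x):
        result = f(x)
        if random.random() < rate and not check(x, result):
            raise RuntimeError("misbehavior observed")
        return result
    return wrapped


def inspect_mechanism(f, forbidden=("open", "eval")):
    """Inspection: examine f's internal mechanism (here, the global
    names its code refers to) for constructs that would enable
    certain kinds of misbehavior."""
    return not any(name in forbidden for name in f.__code__.co_names)


watched_bob = monitor(bob, check=lambda x, r: r == x * 2)
assert watched_bob(3) == 6       # output sampled; the check passes
assert inspect_mechanism(bob)    # mechanism refers to nothing forbidden
```

Monitoring only ever tells Alice about the behavior she happened to observe; 
inspection, when the mechanism is legible, can rule out whole classes of 
misbehavior in advance.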

Once we see that Alice can use all five techniques to shape Bob's behavior, we 
can examine how Alice can create synergy among them. For example, when Alice 
couples least authority with incentive alignment, she only has to worry about 
moral hazards within the narrower set of choices left open to Bob.

-- 
Text by me above is hereby placed in the public domain

     Cheers,
     --MarkM

