[e-lang] Introducing Emily, a High Performance Language For Secure
daw at cs.berkeley.edu
Thu Feb 23 16:06:09 EST 2006
Constantine Plotnikov writes:
>This is a good analogy. If you close one door, you have one less door to
>watch out for during program design.
But now I come back to the distinction I was trying to draw between
unintentional leakage of secrets vs deliberate leakage of secrets.
I agree that there is value in looking for programming language mechanisms
that reduce the likelihood of unintentional leakage of secrets. But what
I'm arguing is that there's no point trying to forbid malicious code from
deliberately leaking secrets. Given the existence of covert channels, you
probably can't prevent it anyway. "Don't forbid what you can't prevent."
It sounds like maybe we're going in circles here.
>Yes. But you have to consider other factors like communication time and
>amount of possible communication attempts.
Sure. Maybe it takes 2048 attempts and 20 milliseconds to leak my
RSA private key instead of 1 attempt and 10 microseconds. Gee, thanks!
That's not much comfort.
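To make that rate argument concrete, here is a minimal sketch (Python, with purely illustrative names and a toy 4-bit "secret") of a timing covert channel: a malicious callee leaks one bit per invocation by modulating how long it takes to return, and a colluding caller recovers the secret by timing the calls. Slowing the channel down changes the constants, not the outcome.

```python
import time

SECRET = 0b1011  # toy stand-in for a private key

def malicious_service(call_index):
    # Leak one bit of SECRET per call: a deliberate delay encodes a 1.
    bit = (SECRET >> call_index) & 1
    if bit:
        time.sleep(0.01)  # "slow" response encodes 1; "fast" encodes 0
    return "ok"

def colluding_observer(n_bits):
    # Reconstruct the secret one bit at a time by timing each call.
    recovered = 0
    for i in range(n_bits):
        start = time.monotonic()
        malicious_service(i)
        elapsed = time.monotonic() - start
        if elapsed > 0.005:  # threshold between "fast" and "slow"
            recovered |= 1 << i
    return recovered

print(colluding_observer(4))  # prints 11, i.e. 0b1011
```

The point of the sketch is that nothing here uses a forbidden reference or an overt return value; any shared clock suffices, which is why rate-limiting merely multiplies the attack time by a constant.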
>It is possible to design a program that is more resistant to such an attack.
>For example careful usage of the Factory pattern can limit an attacker
>to leaking exactly one bit with an exception, in case of some algorithms.
Well, with careful design you may be able to limit the attacker to
zero bits (through this channel) by catching all exceptions and refusing
to propagate any of them, but none of these patterns will eliminate
covert channels.
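A minimal sketch of that catch-everything pattern (Python, illustrative names; think of it as Bob's boundary around a call into untrusted code): every outcome of the untrusted call, normal return or exception of any type, is collapsed to one fixed result, so the exception channel carries zero bits. Note the comment about what it does not fix.

```python
def guarded_call(untrusted_fn, *args):
    """Invoke untrusted_fn, collapsing every outcome to a fixed result.

    Whether untrusted_fn returns, raises ValueError, or raises some
    custom exception type, the caller observes exactly the same thing,
    so no bit is encoded in exception presence or identity.
    (Timing and other covert channels remain untouched.)
    """
    try:
        untrusted_fn(*args)
    except BaseException:  # deliberately broad: swallow everything
        pass
    return None  # uniform result regardless of outcome

def leaky(x):
    # Untrusted code trying to signal one bit via an exception.
    if x:
        raise ValueError("one bit, encoded as an exception")

print(guarded_call(leaky, True))   # None
print(guarded_call(leaky, False))  # None
```

Catching `BaseException` rather than `Exception` is what makes the channel truly zero-bit in this sketch; in production code that breadth has its own costs (it also swallows `KeyboardInterrupt`), which is the usual trade-off with this pattern.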
I think what we should be talking about is the potential for a programmer
to unintentionally leak secrets that her code has access to and that
she didn't mean to reveal to others.
Let me put this another way. Alice, Bob, and Charlie are three objects.
Suppose Alice calls Bob, who calls Charlie. Let's try to be clear about
what the threat model is.
1) One possible scenario: Alice and Charlie are malicious and colluding,
and Bob is the good guy. Can Bob prevent Charlie from communicating with
Alice? Given the existence of covert channels, nope, there's no hope for
that. So forget about that one.
2) Another possible scenario: Alice and/or Bob are malicious, and Charlie
is the good guy. However, Charlie's code might be buggy (after all, the
programmer who implemented Charlie isn't perfect and might occasionally
make mistakes). Can we reduce the likelihood that Charlie inadvertently
leaks his secrets to Alice or to Bob?
I think scenario 2) is the right scenario to be thinking about.
But if you don't agree, I wonder whether you can explain in these kinds
of terms what scenario and what threat model you had in mind.
>If it comes to guarantees, there is no meaningful guarantee that NP != P.
>However we are not throwing our crypto away. Even more, crypto is
>used just to slow down an attacker even if we assume NP != P.
Apples and oranges. Crypto is useful because, once you make some
plausible assumptions, the reasoning is sound. Those assumptions take
the form of claims about mathematics. They are not assumptions of the
form "I assume the attacker is incompetent or ignorant or won't bother
to exploit that known security hole over there". Your objection that
exploiting covert channels would require more skill amounts to making
an implicit assumption that the attacker is ignorant or unskilled, and
that's a lousy sort of assumption to be making when it comes to security.