[cap-talk] [friam] RE: Hoare's 1980 Turing Award Lecture ... and program verification mea culpa

Ben Kloosterman bklooste at gmail.com
Tue Oct 4 00:39:42 PDT 2011

Hi Ken,

Capability-based HW? Sounds interesting, any examples? How does the HW handle stack corruption for calls? Do you use some register window scheme?

I may have it wrong, but the idea of changing every procedure call into an IPC call seems expensive to me, and specialized HW has not done well over the years. I have seen something similar attempted in some (poor) SOA implementations where every class becomes a service, and it tends to be a disaster. That said, with multi-cores, async runtimes, and fast IPC, it may be possible.
I do believe the future is in breaking apps down into smaller services that communicate asynchronously via IPC, which a runtime manages, load-balances, schedules, etc. These services would consist of several methods; classes with heavy communication between them would normally live in the same service. This is already happening now: witness browsers using multiple processes for pages, and either banning (IE10/Apple) or isolating add-ins.

Also, if we have a verified compiler (a BIG if), then wouldn't a language guarantee be enough? Isn't this easier than new forms of HW?


From: Ken 
Sent: Friday, September 30, 2011 2:40 AM
To: Ben Kloosterman ; General discussions concerning capability systems. 
Cc: John R. Strohm ; General discussions concerning capability systems. 
Subject: Re: [cap-talk] [friam] RE: Hoare's 1980 Turing Award Lecture ... and program verification mea culpa

Ben et al
I agree with all the points you make. I would go a step further. The only way to do what is actually needed, in a solid and consistent way, is to inherit both verification and validation from capability-based hardware. I know because I have seen it work.
The alignment of goals can be reliably achieved when the hardware guarantees subroutine-level containment and detailed, context-sensitive protection from the hard edge of cyberspace. Linear VM must be replaced as part of this change. Cyberspace is atomic, not linear.
Everything today is based on trust, without any big stick to enforce best practices, and only superhuman special cases come close from time to time, in point solutions that do not pass the test of time.
In a dynamically evolving world, specifications of new requirements arrive daily, so V&V must come for free and be enforced by a common denominator that cannot be bypassed; this means hardware enforcement of best practices. Programming must focus on new features. The platform must guarantee cyber security for all to use, and to misuse, at the same time.
I see this only getting more critical as technology changes ever faster. 
The good solutions will rise to the top in the end. 
Sent from my iPhone

On Sep 29, 2011, at 11:03 AM, "Ben Kloosterman" <bklooste at gmail.com> wrote:

  Nothing casual about the financial markets, where the wrong code can cost millions but not being able to rapidly make changes can also cost big time. And specs change quickly; even in the military, the AWACS failed on day 1 of the Gulf War and the specs were changed immediately, i.e. lives depended on a new spec being written and implemented in hours.

  If you're talking space, how about Apollo 13? A modern version might benefit from a new computer upload to re-manage power: compare, make a few changes, run your unit tests, and upload, all within 12 hours. This is difficult to do with verification, and if you skipped the verification you would be left with no automated testing to rely on.

  Also, many critical systems are very simple and often implemented in HW circuits, not code, so verification is not an option.

  What I'm really saying is that the arguments for verification are not that simple:
    a.. For 100K of C + verification vs Java/C# and TDD, a type-safe and memory-safe language will have far fewer issues given equal resources.
    b.. Absolutely key conditions like you mention can be tested with TDD or ensured with a few guards in the code. 
    c.. Verification does not protect against compiler errors; TDD does! So your 600 lines of verified code have a 300K unverified dependency.
    d.. Verification provides an expectation that it will work (TDD does not), which may lead to reduced testing and false confidence. There can be errors in the compiler, the specification, assumptions about 3rd-party systems, or the verification itself.
    e.. Verification makes change very expensive, which means important things don't get done. Look at nuclear power plants: errors happened in Japan and Russia which a more modern software system might have prevented, i.e. the static nature cost lives.
    f.. TDD can evolve to handle the behaviour of other systems, which may change (and may not be verified).
    g.. Verification works on older languages/assembly, which have more security holes, e.g. they don't even come close to ocap, and the programs don't degrade nicely.
    h.. Has anyone written a verifier which supports multi-threaded code? Having 2 cores and threads (especially with a scheme like E) can provide resiliency and cover HW failures without switching to a backup unit (which is always problematic). Likewise, simple stateless code and simple clusters can provide excellent resilience; a load balancer would be something nice that should be proven and verified. Then several embedded machines would provide far greater continuity of work.

  OK, I wouldn't do this for mission-critical code, because I'm not that brave and few people have done it, but more should.


  From: John R. Strohm 
  Sent: Thursday, September 29, 2011 9:06 PM
  To: Ben Kloosterman ; General discussions concerning capability systems. 
  Subject: Re: [cap-talk] [friam] RE: Hoare's 1980 Turing Award Lecture ... and program verification mea culpa

  If your domain of discussion is CASUAL business programming, this may be true.

  When your domain of discussion is safety-critical systems, with human lives riding on them, those specs do not change anywhere near as fluidly.

  The airplane must fly.  It must operate in coordinated controlled flight.

  The rocket motor must not be commanded to start unless BOTH keys have been turned and BOTH launch switches depressed and the silo door is OPEN.  Observe that a system that never started the motor under any circumstances would meet a casual interpretation of that specification.  This is almost exactly the system that NASA managed to build for one of the Gemini launches.  An unexpected failure left them with an interesting piece of modern art and a major exercise in defeating their own safety interlocks.

  The implanted pacemaker must pulse correctly, continuously.  Rebooting is not an option.
  cap-talk mailing list
  cap-talk at mail.eros-os.org
