Monday, August 06, 2007

Vardi on Process Equivalences

Last Saturday, Moshe Vardi posted the paper Branching vs. Linear Time: Semantical Perspective (invited ATVA'07 paper with Sumit Nain). This is a nicely written and balanced paper distilling Moshe's thoughts on the choice of semantics for concurrent processes. I have no doubt that it will generate some healthy discussion in the concurrency theory community. I strongly encourage my two readers to have a look at it.

Oversimplifying the message of the paper, Moshe and Sumit hold that the plethora of process semantics is due to "semantic underspecification". To overcome this underspecification, they propose developing process semantics based on the following three principles.

  1. Principle of Contextual Equivalence: Two processes are equivalent if they behave the same in all contexts, which are processes with “holes”.
  2. Principle of Comprehensive Modeling: A process description should model all relevant aspects of process behaviour.
  3. Principle of Observable I/O: The observable behaviour of a tested process is precisely its input/output behaviour.
In the paper, Moshe and Sumit apply their approach to transducers and show that, once the three principles above are adopted, one obtains a trace-based equivalence that is both adequate and fully abstract.
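
To see concretely why the choice of equivalence matters, consider the classic pair of processes a.(b + c) and a.b + a.c: they have exactly the same traces, yet no bisimulation relates them. The following Python sketch (my own toy encoding; the state names and functions are hypothetical and not taken from the paper) checks both facts by brute force on the two finite labelled transition systems.

# Toy labelled transition systems: state -> list of (action, successor).
# P encodes a.(b + c); Q encodes a.b + a.c.
P = {"p0": [("a", "p1")],
     "p1": [("b", "p2"), ("c", "p3")],
     "p2": [], "p3": []}
Q = {"q0": [("a", "q1"), ("a", "q2")],
     "q1": [("b", "q3")],
     "q2": [("c", "q4")],
     "q3": [], "q4": []}

def traces(lts, state):
    # All finite traces from `state`; terminates because the LTSs are acyclic.
    result = {()}
    for action, successor in lts[state]:
        result |= {(action,) + t for t in traces(lts, successor)}
    return result

def bisimilar(l1, s, l2, t):
    # Naive strong-bisimilarity check for acyclic LTSs: every move of one
    # state must be matched by an equally labelled move of the other,
    # leading to bisimilar successors (and vice versa).
    forth = all(any(a == b and bisimilar(l1, s2, l2, t2) for b, t2 in l2[t])
                for a, s2 in l1[s])
    back = all(any(a == b and bisimilar(l1, s2, l2, t2) for a, s2 in l1[s])
               for b, t2 in l2[t])
    return forth and back

print(traces(P, "p0") == traces(Q, "q0"))  # True: trace-equivalent
print(bisimilar(P, "p0", Q, "q0"))         # False: not bisimilar

A trace-based, I/O-oriented semantics of the kind Moshe and Sumit advocate identifies such a pair, whereas branching-time equivalences like bisimilarity tell them apart.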

As the authors state in the paper: "In conclusion, this paper puts forward an, admittedly provocative, thesis, which is that process-equivalence theory allowed itself to wander in the “wilderness” for lack of accepted guiding principles. The obvious definition of contextual equivalence was not scrupulously adhered to, and the underspecificity of the formalisms proposed led to too many interpretations of equivalence. While one may not realistically expect a single paper to overwrite about 30 years of research, a more modest hope would be for a renewed discussion on the basic principles of process-equivalence theory."

What do you think?

1 comment:

Anonymous said...

The principle of contextual equivalence makes a lot of sense for security. (Security can often be defined as indistinguishability between instantiations of a parameterized process.) Bisimulation is indeed too fine an equivalence for security; weaker notions of observational congruence are often defined instead, as in the applied pi calculus.

However, deriving (denotational) semantics from equivalences seldom helps proofs. I find proofs easier in a setting that starts off with an operational semantics and then defines congruences.

I don't think bisimulation is a good technique for proving implementations correct (given a specification) in any case; a much better approach is proving full abstraction, i.e., preservation of observational congruence. However, without bisimulation, proving full abstraction can be tedious. I usually show simulations both ways modulo "wrappers", and then show that the wrappers in the two directions "cancel out", thus doing three proofs for one. Is there a better technique for this setting?

The principle of observable I/O is also obvious. However, I don't quite understand the principle of comprehensive modeling. To me, it seems to sweep under the rug the very reasons for having so many equivalences.