Sunday, October 07, 2007

Frits Vaandrager's Answers

I have received a contribution from Frits Vaandrager to the discussion arising from the CONCUR workshop, and I am very pleased to post his answers here.
Enjoy!


# What is the role/importance of real-time in modelling? Does industry want dense-time or discrete-time models?

When modelling embedded systems and/or protocols, real-time aspects have to be dealt with very frequently. If one prefers not to use a real-time model checker, one can often encode real-time constraints in finite-state model checkers. For instance, in an STTT paper from 2002, Brinksma, Mader and Fehnker analyzed a PLC control schedule using SPIN, and obtained results that were (at that time) competitive with Uppaal. Their trick was to advance the discrete clock variable not one tick at a time, but directly to the time of the next event.
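
To make the trick concrete, here is a minimal sketch in Python (my own illustration with made-up events and deadlines, not the actual Promela model from the paper): instead of producing one successor state per clock tick, the exploration advances the clock directly to the earliest pending deadline.

```python
# Hypothetical sketch of the "jump to the next event" encoding of
# discrete time. A state is (clock, pending), where pending maps each
# event name to its absolute deadline.

def successors(state):
    clock, pending = state
    enabled = [e for e, t in pending.items() if t == clock]
    if enabled:
        # Fire each currently enabled event (one interleaving per event).
        for e in enabled:
            rest = {k: v for k, v in pending.items() if k != e}
            yield ("fire " + e, (clock, rest))
    elif pending:
        # Nothing enabled now: jump straight to the earliest deadline,
        # instead of exploring clock+1, clock+2, ... one tick at a time.
        yield ("delay", (min(pending.values()), dict(pending)))

# A tick-by-tick encoding would visit 1000 idle states before the first
# event here; the jump encoding needs a single delay transition.
init = (0, {"timeout_a": 1000, "timeout_b": 1500})
stack, seen = [init], set()
while stack:
    state = stack.pop()
    for label, nxt in successors(state):
        key = (nxt[0], frozenset(nxt[1].items()))
        if key not in seen:
            seen.add(key)
            print(f"t={nxt[0]:4d}  {label}")
            stack.append(nxt)
```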

In his PhD thesis, Rajeev Alur made a good case for dense-time models. For open systems with components that have different clocks, dense time is conceptually the right thing to do. But in practice one often gets away with discrete-time models.

Industry has no preference between dense-time and discrete-time: any approach that solves its problems is fine.


# How does one involve small- and medium-size companies in collaborations with concurrency theoreticians/practitioners? Does "company size" matter?

The maturity of model checkers has increased enormously over the last few years, and as a result it has become much easier to apply them, even for non-experts. I give two examples:

- Allen B. Downey has written "The Little Book of Semaphores", a nice book in which he presents solutions to dozens of concurrency problems, ranging from classical ones (like the barbershop) to tricky and obscure ones (like the sushi bar). This year I asked a class of first-year computer science students with just a few hours of model checking experience to pick a problem from Downey's book (a different problem for each pair of students) and to model and analyze Downey's solution for it using Uppaal. As a result, my students spotted several mistakes in Downey's book, mistakes which have been confirmed by the author. (A toy sketch of the kind of check involved appears after these two examples.)

- Matthijs Mekking, a student interested in implementing protocols and with no particular expertise in formal methods, just completed an MSc thesis project at NLnet Labs on the SHIM6 protocol, a proposed IETF standard for multi-homing. At some point he started to model the protocol using Uppaal and became enthusiastic. He managed to find several nontrivial mistakes in the standard and greatly impressed the protocol designers. His results directly influenced the new version of the standard. (See http://www.ita.cs.ru.nl/publications/papers/fvaan/SHIM6/)
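
To give a flavour of such an exercise, here is a toy sketch in Python rather than Uppaal (my own example, deliberately broken, and not one of Downey's solutions): an exhaustive exploration of a test-then-set "lock" for two processes, which immediately turns up a mutual-exclusion violation. In Uppaal the same check would be a reachability query over a network of automata, but the principle is identical.

```python
# Toy illustration: exhaustively explore a broken test-then-set "lock"
# and check mutual exclusion the way a model checker would.
# pc = 0: idle, 1: saw the flag free (but not yet set!), 2: in the CS.

def steps(state):
    flag, pcs = state
    for i in (0, 1):
        if pcs[i] == 0 and flag == 0:      # test the flag
            yield (flag, pcs[:i] + (1,) + pcs[i + 1:])
        elif pcs[i] == 1:                  # set the flag, enter the CS
            yield (1, pcs[:i] + (2,) + pcs[i + 1:])
        elif pcs[i] == 2:                  # leave the CS, release the flag
            yield (0, pcs[:i] + (0,) + pcs[i + 1:])

init = (0, (0, 0))
stack, seen = [init], {init}
while stack:
    state = stack.pop()
    if state[1] == (2, 2):                 # both processes in the CS
        print("mutual exclusion violated:", state)
        break
    for nxt in steps(state):
        if nxt not in seen:
            seen.add(nxt)
            stack.append(nxt)
```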

SMEs usually don't have verification specialists in-house, so they need some advice on modelling and analysis. But with today's technology it is possible to get results rather quickly, and they can do part of the work themselves. The situation will further improve when MSc and PhD graduates with a background in model checking get jobs at these companies. My group is collaborating with several SMEs, and the verification problems they provide us with are often interesting from an academic perspective.

Unlike SMEs, large companies are able to invest in long-term research.

# Is there any need for stochastic and probabilistic modelling in applications? More pointedly, have you met an example that you could not model because your tool does not support stochastic or probabilistic phenomena?

Yes!! There is a great need for stochastic and probabilistic modelling and analysis techniques, and I would, for instance, welcome a tool that combines the functionality of Uppaal and PRISM.

The Zeroconf protocol, for instance, is full of probabilistic features that we could not model using Uppaal. If we really want to make an impact in the area of wireless sensor networks, mobile ad-hoc networks, and P2P networks, we need to extend model checkers with probabilities, since the design of these networks can only be properly understood if we take probabilities into account.
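
As a back-of-the-envelope illustration of why the probabilities matter (all parameters below are of my own choosing, not constants from the Zeroconf analysis), consider the chance that a freshly picked link-local address is already in use and every conflict probe gets lost:

```python
# Illustrative numbers only; a real analysis explores a full
# probabilistic model in a tool such as PRISM, not a one-liner.
m = 32              # hosts already on the network (assumed)
addresses = 65024   # usable addresses in the 169.254/16 link-local range
p_loss = 0.1        # probability that a single probe is lost (assumed)
n_probes = 4        # number of conflict probes sent (assumed)

p_collision = m / addresses                    # picked an occupied address
p_undetected = p_collision * p_loss ** n_probes
print(f"P(address conflict goes undetected) ~ {p_undetected:.2e}")
```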

# How can we, as a community, foster the building of industrial-strength tools based on sound theories?

Theory for the sake of theory is good and important, but IMHO the concurrency community has too much of it. In order to really push model checking technology into industry, the performance and functionality of these tools must be further improved by several orders of magnitude. This can only be achieved by a combined and focused effort of a large team of researchers, and it is also a major academic challenge.

In order to test new ideas one needs prototype tools to play with. However, I believe it is not healthy that almost every research group on model checking has its own tool. Only a few groups manage to keep their model checking tool alive for more than a decade. Developing an industrial-strength model checking tool requires a huge effort. I think academic groups have to join forces if they want to build (and maintain!) industrial-strength tools. Uppaal has been highly successful in part due to the continued efforts of teams in Aalborg, Uppsala, Nijmegen, Twente and elsewhere. So why don't the CADP and muCRL teams join forces? Why isn't it possible to establish stronger ties between Uppaal and IF? Why are there so many probabilistic model checkers?

By combining efforts we can be much more effective.

# What has concurrency theory offered industry so far? What are the next steps that the concurrency community should take in order to increase the impact of its research in an industrial setting? And what are future promising application areas for concurrency research?

The main contributions are very powerful modelling languages and concepts, and of course model checking.

A major challenge ahead of us is to combine features. In model checking, people have proposed symmetry reduction, partial-order reduction, CEGAR, etc. What about the combination of all these techniques? How can we prove it is sound? Can we combine probabilistic choice, hybrid aspects, real-time, and hierarchy as in statecharts? I believe it will be possible to define such combinations, but I expect we will need theorem provers and proof assistants to help us manage the resulting complexity.
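
As a minimal illustration of just one ingredient (my own sketch, assuming fully interchangeable processes): symmetry reduction stores every state in a canonical form, so permutations of identical processes are explored only once. Each such technique comes with its own soundness proof; the open challenge above is proving soundness for their combination.

```python
# Minimal sketch of symmetry reduction, assuming the processes are
# fully interchangeable: a state (globals, locals) is canonicalized by
# sorting the per-process components, so permuted states hash alike.

def canonical(state):
    globs, locs = state
    return (globs, tuple(sorted(locs)))

def explore(init, successors):
    seen = {canonical(init)}
    stack = [init]
    while stack:
        s = stack.pop()
        for t in successors(s):
            c = canonical(t)
            if c not in seen:       # permuted variants collide here
                seen.add(c)
                stack.append(t)
    return len(seen)

def succ(state):
    """Three identical processes, each stepping 0 -> 1 -> 2."""
    globs, locs = state
    for i in range(len(locs)):
        if locs[i] < 2:
            yield (globs, locs[:i] + (locs[i] + 1,) + locs[i + 1:])

print(explore((0, (0, 0, 0)), succ))  # 10 canonical states instead of 27
```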

I am aware of applications in other areas, but I believe computer system engineering will remain the most important application area for concurrency theory in the coming decade.
