Correctness-by-construction
Joseph Kiniry, Professor at the Technical University of Denmark, Copenhagen, replied as follows in a lively discussion topic on "GNATprove, integrating theorem provers with software development" in the LinkedIn discussion group "Formal Methods: Specification, Verification, TCG":
"We find Altreonic's work tremendous. Their pragmatic use of formal methods toward high-end business needs is really a case study in how to do things right and communicate ones results to industry and the academic community. We, too, believe that there is too much emphasis on post-design validation (and, rarely, verification) and believe that a pragmatic correctness-by-construction approach that appreciates and leverages existing quality development practices is the way forward." (quoted with permission).
This statement was made in the context of using formal methods. Altreonic applied them successfully to the development of a network-centric RTOS and published a book about it with Springer. The strength of Altreonic's approach lies not only in the application of formal methods, but in the global, formalised way of working we have adopted. Rather than being hyper-specialised, we tackle the challenging domain of systems and software engineering with a pragmatic yet formal(ised) approach. The two key phrases we use are "Unified Semantics" and "Interacting Entities". If this all sounds heavy, keep in mind that "pragmatic" for us does not mean "anything goes" but adopting a "lean engineering" approach. Let us explain.
Engineering, while seen as a technical activity, is still an activity primarily driven and executed by humans. More than ever, it is a collaborative effort by people coming from different backgrounds. The natural language(s) they use often reflect a deep historical or domain-specific context, whereby words receive their meaning from a given context. Yet these people must work together to achieve the engineering goals. Unfortunately, the same concept is often designated with different terms, making communication difficult.
But it goes further. Terms have associated semantics. When used in a context, they express behaviour. Often terms are used in a context without stating the assumptions that define that context. For example, the term "scheduling" means different things to a production engineer, a real-time software engineer and a hardware designer. These people even have different notions of time. The production engineer will think in terms of seconds, perhaps even minutes; the real-time software engineer will assume micro- or milliseconds; and the hardware engineer will assume nano- or picoseconds.
While such concepts can still be clearly expressed, e.g. using mathematical constructs, how do we deal with concepts from the intentional domain? What do we mean by a requirement statement? What do we mean by a requirement specification? When is it a "High Level Requirement" and when is it a "Low Level Requirement"? Is the latter a design decision? If you think it doesn't matter, just browse the questions asked on LinkedIn about the interpretation of these terms. People even routinely mix up the term used as a verb (the action) with the term used as a noun (the object). What we witness here is a common conflict between the flexibility of natural language (often context-driven) and the rigour and predictability needed for trustworthy engineering.
How to tackle this dilemma?
One could argue that engineers should rely on mathematics. Indeed, engineering without mathematics would only be a skilful craft. However, mathematics is not a natural language. It is the language of the logical mind and is to be used when accuracy and precision are of the utmost importance. This is not effortless. Without mathematics, no proof. Without logic, no evidence. The challenge is to make the transition from the natural language domain to the domain of mathematics.
The key to getting there is not to make a near-impossible jump in one go, but to use the intermediate stepping stone of formalisation. It is a necessary step towards going fully formal, and often it will be sufficient on its own to address key issues and contribute to a reduction of the problem space. The step towards formal techniques is then easier because the formalisation will have done the groundwork by removing most of the superfluous information. It will have cleared up the ambiguity of natural language by orthogonalisation, often reducing the state space to a much smaller one or a decomposed set of smaller ones. Going formal is then often a confirmation step that increases the confidence or provides more rigorous proof when that is needed.
Altreonic's methodology
What does this mean in terms of the methodology and tools Altreonic develops and supports? We take the two most important ones: OpenComRTOS Designer and GoedelWorks.
The first step in any engineering project is to define what the system is supposed to deliver to its users. We call this the intentional domain; this is often called the requirements capturing phase. Collecting requirements is an open process with potentially many stakeholders. Half of the requirements are likely never to make it into the system. They will be eliminated during the requirements analysis, a process whereby the requirements are dissected and refined until they are concrete enough to define the system and its components. At that point they become testable, and therefore we call them specifications. Specifications are in principle independent of a given implementation. They express what we expect from the system in quantitative and qualitative terms.
Now comes a crucial phase: how to transition from the intentional to the implementation domain? Specifications are mapped onto entities and their interactions, which are the seat of fulfilment of the specifications. This mapping is facilitated by separating the different concerns. We call this orthogonalisation. It applies to the specifications as well as to the implementation, and it can never be total. Specifications can be classified in different categories. Entities and their interactions will be grouped in architectural clusters; the latter are often called models. Some models will be virtual prototypes that simulate the real system, allowing us to reason about its behaviour and properties in a general sense, often verifying the feasibility of requirements and specifications. Other models will be formal, e.g. mathematical or logical, allowing us to verify critical properties; these are often safety- or security-related. In principle, we would need formal models for all properties, but that is not yet within reach. Finally, we can build implementation models. We also call these models because multiple models can meet the same specifications, even if they are vastly different.
A small example to illustrate the flow
Let's take a small but not entirely trivial example that we all know: a car brake.
Requirement: it must be possible to stop the car.
Specification: it must be able to stop a car weighing 2000 kg and moving at a maximum speed of 200 km/h with a deceleration of at least 0.5 g.
Physical model: based on Newtonian mechanics, we can develop an equation that gives us the required braking force as a function of time, mass and speed.
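A minimal worked version of that equation, using only the numbers from the specification above (the arithmetic is added here for illustration):

```latex
% Required braking force from Newton's second law, F = m a,
% with a = 0.5 g = 4.905 m/s^2 and m = 2000 kg:
F \ge m\,a = 2000 \times 4.905 \approx 9810\ \mathrm{N}
% Stopping from v = 200 km/h \approx 55.6 m/s at constant deceleration:
t = \frac{v}{a} \approx 11.3\ \mathrm{s}, \qquad
d = \frac{v^2}{2a} \approx 315\ \mathrm{m}
```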
Simulation model: we develop a simple visualisation program that shows how a car with a given weight and speed brakes when a braking force is applied. The simulator internally uses the physical model to display the moving car and its braking. In a first iteration, the simulation shows that we forgot to take into account the wheel-to-road friction. Once we have updated the physical model, we discover that we need to model how the braking force varies over time. Given this input, the updated simulation model then shows that we can brake too hard, so that the wheels start slipping. Hence, we need a sensor and a controller that applies a time-varying brake force. The more the simulation model is updated in this iterative cycle, the closer it will mimic reality and the closer we come to an implementation model.
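To make this concrete, here is a minimal, self-contained sketch of such a braking simulation. The mass and initial speed come from the specification; the friction coefficient, requested force and time step are illustrative assumptions, not values from the article:

```c
#include <stdio.h>

int main(void)
{
    const double m  = 2000.0;   /* vehicle mass [kg], from the specification */
    const double g  = 9.81;     /* gravitational acceleration [m/s^2] */
    const double mu = 0.7;      /* assumed tyre-to-road friction coefficient */
    const double dt = 0.001;    /* integration step [s], assumed */
    double v = 200.0 / 3.6;     /* initial speed: 200 km/h in m/s */
    double F = 15000.0;         /* requested brake force [N], assumed */
    double t = 0.0;

    /* The tyres transmit at most mu*m*g; requesting more means slipping,
       which is exactly what the first simulation iterations revealed. */
    const double F_max = mu * m * g;
    if (F > F_max) {
        printf("Requested %.0f N exceeds friction limit %.0f N: wheels slip.\n",
               F, F_max);
        F = F_max;              /* a real controller would modulate the force */
    }

    while (v > 0.0) {           /* integrate dv/dt = -F/m until standstill */
        v -= (F / m) * dt;
        t += dt;
    }
    printf("Stopped after %.1f s at %.2f g (specification: >= 0.5 g).\n",
           t, F / (m * g));
    return 0;
}
```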
From the simulation model we have, in fact, also derived an implementation model. For example, we selected a digital controller and not an analogue amplifier with a feedback loop or a straight hydraulic line. If we use the same programming language for the simulator and the implementation, we can very likely reuse the software in the real system.
Formal model: given that we selected a digital controller, we need to be sure that it will always apply the correct braking force. Ignoring the physical sensor and actuator, we formally model the software-implemented control loop. This formal model verifies, for example, that the control loop is stable, i.e. that the braking force will not oscillate independently of the speed and the force exerted by the driver. For example, when the brake pedal is suddenly released, the brake force should drop to zero immediately as well. Moreover, we also need to be sure that when erroneous states occur, the controller will generate a force that does not result in hazardous situations. In other words, the controller must be safe and can be trusted.
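The kind of property such a formal model establishes can be sketched as runnable assertions. A real formal model proves these properties over all reachable states; the fragment below merely spot-checks them over sampled inputs, with hypothetical names and bounds:

```c
#include <assert.h>
#include <stdio.h>

#define F_MAX 13000.0  /* assumed upper bound on commanded brake force [N] */

/* Controller step: pedal position in [0,1] -> commanded brake force. */
static double brake_controller(double pedal)
{
    if (pedal <= 0.0)
        return 0.0;            /* pedal released: force must drop to zero */
    if (pedal > 1.0)
        pedal = 1.0;           /* clamp erroneous input: fail safe */
    return pedal * F_MAX;
}

int main(void)
{
    /* Property 1: a released pedal always yields zero force. */
    assert(brake_controller(0.0) == 0.0);
    assert(brake_controller(-0.3) == 0.0);   /* erroneous negative reading */

    /* Property 2: the commanded force stays within physical bounds,
       even for out-of-range (erroneous) sensor readings. */
    for (double p = -1.0; p <= 2.0; p += 0.01) {
        double f = brake_controller(p);
        assert(f >= 0.0 && f <= F_MAX);
    }
    printf("All sampled safety properties hold.\n");
    return 0;
}
```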
Implementation model: finally, all the modelling done before will have resulted in certain implementation choices. What are the components needed to meet the specifications? What are their specifications? How do they interact? The physical connections as well as the logical connections (e.g. how the data is exchanged) determine the architecture of the system.
The attentive reader will have noticed that developing the models also requires feedback loops in the process. Models, once proven to be correctly implemented (i.e. verified), will be exercised (i.e. tested) against specifications and the observed deviations will result in modifications, e.g. adding sensors, selecting a different control loop, etc. When all this has been done, the implementation model will be frozen. It can then be integrated and validated in a real system.
It is clear that all these transitions from one domain to another, such as the transformation of a simulation model into an implementation model, can only be made with great confidence and accuracy if the semantics as well as the architecture match. And here we find again the "Unified Semantics" and "Interacting Entities" paradigms that steer our methodology. It is clear as well that this goal can only be reached if we stay away from the details of a selected implementation. The Systems Engineering process is itself first modelled as a more abstract meta-model, even a meta-meta-model.
How do we support this today?
GoedelWorks implements a meta-meta-model that allows describing any systems engineering process, from early requirements capturing to final implementation. It has 16 conceptual and orthogonal entities that are linked through associative and structural links. This can be customised for a specific domain or organisation, mostly by subtyping. A specific engineering project is then to be seen as a dedicated instance of a system, determined by executing a (development) project using a specific (engineering) process. In GoedelWorks we mostly work in the intentional domain while defining the entities to be developed in the implementation domain.
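In the same spirit, though with made-up entity and link kinds rather than GoedelWorks' actual 16 entities, a typed entity-and-link meta-model can be sketched as follows:

```c
#include <stdio.h>

/* Illustrative entity and link kinds; a real meta-meta-model would be
   richer and customisable by subtyping, as described above. */
typedef enum { REQUIREMENT, SPECIFICATION, MODEL } EntityKind;
typedef enum { ASSOCIATIVE, STRUCTURAL } LinkKind;

typedef struct { int id; EntityKind kind; const char *name; } Entity;
typedef struct { LinkKind kind; int from; int to; } Link;

int main(void)
{
    /* A requirement refined into a specification, realised by a model. */
    Entity entities[] = {
        {0, REQUIREMENT,   "stop the car"},
        {1, SPECIFICATION, "0.5 g from 200 km/h at 2000 kg"},
        {2, MODEL,         "brake control loop"},
    };
    Link links[] = {
        {STRUCTURAL,  0, 1},   /* requirement is refined by specification */
        {ASSOCIATIVE, 1, 2},   /* specification is verified against model */
    };
    for (unsigned i = 0; i < sizeof links / sizeof links[0]; ++i)
        printf("%s -[%s]-> %s\n",
               entities[links[i].from].name,
               links[i].kind == STRUCTURAL ? "structural" : "associative",
               entities[links[i].to].name);
    return 0;
}
```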
OpenComRTOS Designer is the key tool for the implementation domain. At its core is a formally developed concurrent programming environment. It allows modelling and simulating a specific system as a set of concurrent real-time entities (called tasks) that create a system through their interactions. This defines the logical or functional topology of the system. Independently, it also allows defining a hardware topology. Using code generators and the RTOS runtime support, the logical topology can be mapped onto the physical one, essentially in an independent way. Thanks to this approach we have reached the initial goal of separation of concerns and achieved a higher degree of modularity and reuse.
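The separation can be pictured with a small sketch: a logical topology (tasks) and a physical topology (nodes) defined independently, plus a mapping table that binds them. The structures and names are illustrative, not OpenComRTOS Designer's actual format:

```c
#include <stdio.h>

typedef struct { const char *name; } Task;   /* logical topology element */
typedef struct { const char *name; } Node;   /* physical topology element */
typedef struct { int task; int node; } Mapping; /* task index -> node index */

int main(void)
{
    Task tasks[] = { {"pedal_sensor"}, {"brake_controller"}, {"actuator_driver"} };
    Node nodes[] = { {"cabin_mcu"}, {"wheel_mcu"} };

    /* The same logical topology can be remapped to other hardware
       without touching the tasks themselves. */
    Mapping map[] = { {0, 0}, {1, 0}, {2, 1} };

    for (unsigned i = 0; i < sizeof map / sizeof map[0]; ++i)
        printf("task %-16s runs on node %s\n",
               tasks[map[i].task].name, nodes[map[i].node].name);
    return 0;
}
```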
The development of OpenComRTOS itself has also shown that the real power of formalisation, and in particular of using formal techniques up front during the design phase and not only during the verification phase, pays off in correctness as well as in efficiency. It has also resulted in a cleaner and leaner architecture, shrinking the code size by a factor of 10. This has a direct impact on other specified and desirable properties such as energy and resource consumption, but also on scalability and reuse. It also facilitates the creation of correct applications with minimal software writing effort, because the software runs in a standardised concurrent environment and hence the bulk of the framework in which the software executes is generated automatically. The only coding left to do is to write the sequential state machines inside each concurrent task.
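As an illustration of what that remaining coding looks like, here is a generic sketch of a sequential state machine inside one concurrent task, reusing the brake example. The event source and actuator call are stand-ins for the generated interaction code, not OpenComRTOS API calls:

```c
#include <stdio.h>

typedef enum { IDLE, BRAKING, FAULT } State;
typedef enum { EV_PEDAL_DOWN, EV_PEDAL_UP, EV_SENSOR_ERROR, EV_NONE } Event;

/* Stand-in for the generated inter-task interaction: replays a fixed
   scenario so the sketch is self-contained and runnable. */
static Event wait_for_event(void)
{
    static const Event scenario[] = { EV_PEDAL_DOWN, EV_PEDAL_UP, EV_SENSOR_ERROR };
    static unsigned i = 0;
    return i < 3 ? scenario[i++] : EV_NONE;
}

static void apply_force(double f)    /* stand-in for actuator interaction */
{
    printf("brake force -> %.0f N\n", f);
}

int main(void)                       /* would be a task entry point in the RTOS */
{
    State s = IDLE;
    Event e;
    while ((e = wait_for_event()) != EV_NONE) {
        switch (s) {
        case IDLE:
            if (e == EV_PEDAL_DOWN)        { apply_force(9810.0); s = BRAKING; }
            else if (e == EV_SENSOR_ERROR) { apply_force(0.0);    s = FAULT;   }
            break;
        case BRAKING:
            if (e == EV_PEDAL_UP)          { apply_force(0.0);    s = IDLE;    }
            else if (e == EV_SENSOR_ERROR) { apply_force(0.0);    s = FAULT;   }
            break;
        case FAULT:                  /* fail-safe: keep the force released */
            apply_force(0.0);
            break;
        }
    }
    return 0;
}
```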