Substitutions
$$\phi[x:=v](y) = \begin{cases} v & \text{if } x = y \\ \phi(y) & \text{otherwise} \end{cases}$$
slide: Substitution
A substitution $\phi[x:=v](y)$
states that, after modifying $\phi$ by
assigning the value v to the variable x,
the resulting state will,
for a variable y, deliver v whenever y is identical to x
and $\phi(y)$ otherwise.
When we have, for example, an assignment $i = 5$,
then we have as the corresponding transition
$\phi \xrightarrow{i=5} \phi'$
where $\phi' = \phi[i:=5]$,
that is, $\phi'$ is like $\phi$ except
for the variable i, for which the value 5
will now be delivered.
Whenever we have a sequence of actions $a_1,\ldots,a_n$,
then, starting from a state $\phi_0$, we have corresponding
state transformations resulting in the states
$\phi_1,\ldots,\phi_{n-1}$ as intermediary states and
$\phi_n$ as the final state.
Often the states $\phi_0$ and $\phi_n$ are referred to
as, respectively, the input and output state,
and the program that results in the actions $a_1,\ldots,a_n$
as the state transformer modifying $\phi_0$ into $\phi_n$.
Program state -- $\phi$
- $\phi \in \Sigma : Var \rightarrow Value$
State transformations -- operations $a_1, a_2, \ldots, a_n$
- $\phi_0 \xrightarrow{a_1} \phi_1 \ldots \xrightarrow{a_n} \phi_n$
Correctness formulae -- Hoare logic
- $\{ P \}\ S\ \{ Q \}$
Verification
- $\phi_i \xrightarrow{a} \phi_j \wedge \phi_i \models P \Rightarrow \phi_j \models Q$
slide: The verification of state transformations
To characterize the actions that result from
executing a program, we need an operational semantics
that relates the programming constructs to
the dynamic behavior of a program.
We will study such a semantics in section \ref{10:behavior}.
The requirements a program (fragment)
has to meet may be expressed by using predicates
characterizing certain properties
of a program state.
Then, all we need to do is check whether the final state
of a computation satisfies these requirements.
Predicates characterizing the properties of a state
before and after executing a program (fragment)
may be conveniently stated by
correctness formulae of the form
$\{ P \}\ S\ \{ Q \}$
where S denotes a program (fragment)
and P and Q respectively the pre-condition
and post-condition associated with S.
A formula of the form $\{ P \}\ S\ \{ Q \}$ is true if, for every
initial state $\phi$ that satisfies P and for which
the computation characterized by S terminates,
the final state $\phi'$ satisfies Q.
This interpretation of $\{ P \}\ S\ \{ Q \}$
characterizes partial correctness: partial, since
the truth of the formula holds only under the assumption
that S terminates (which may, for example for a while statement,
not always be guaranteed).
When termination is required as well, we
may use the stronger notion of total correctness,
which makes the truth of $\{ P \}\ S\ \{ Q \}$
no longer conditional on the termination of S but demands it.
Pre- and post-conditions may also be used to check
invariance properties.
As an example, consider the following
correctness formula:
$\{ s = i*(i+1)/2 \}\ i = i + 1;\ s = s + i;\ \{ s = i*(i+1)/2 \}$
It states that the condition $s = i*(i+1)/2$ is invariant
under the computation characterized by $i = i + 1;\ s = s + i$.
As an exercise, try to establish the correctness of
this formula!
To verify whether for a particular program fragment
S and (initial) state $\phi_i$ satisfying P
the correctness formula $\{ P \}\ S\ \{ Q \}$
holds,
we need to compute the (final) state $\phi_j$
and check that Q is true for $\phi_j$.
In general, for example in the case of non-deterministic
programs, there may be multiple (final) states
resulting from the execution of S.
For each of these states we have to establish that
it satisfies (the post-condition) Q.
We call the collection of possible
computation sequences of a program fragment S
the traces of S.
Traces characterize the (operational) behavior
of a program.
Assertion logic
Reasoning about program states based on the traces
of a program may be quite cumbersome.
Moreover, a disadvantage is that it relies
to a great extent on our operational intuition
of the effect of a program
on a state.
Instead, [Hoare69] has proposed using an axiomatic
characterization of the correctness properties of programming
constructs.
An axiomatic definition
allows us to prove the correctness of a program
with respect to the conditions stating its
requirements by applying the appropriate inference rules.
Axioms
- assignment -- $\{ Q[x:=e] \}\ x = e\ \{ Q \}$
- composition -- $\{ P \}\ S_1\ \{ R \} \wedge \{ R \}\ S_2\ \{ Q \} \Rightarrow \{ P \}\ S_1;S_2\ \{ Q \}$
- conditional -- $\{ P \wedge b \}\ S\ \{ Q \} \Rightarrow \{ P \}\ \mathit{if}\ (b)\ S\ \{ Q \}$
- iteration -- $\{ I \wedge b \}\ S\ \{ I \} \Rightarrow \{ I \}\ \mathit{while}\ (b)\ S\ \{ I \wedge \neg b \}$
Consequence rules
- $P \rightarrow R \wedge \{ R \}\ S\ \{ Q \} \Rightarrow \{ P \}\ S\ \{ Q \}$
- $R \rightarrow Q \wedge \{ P \}\ S\ \{ R \} \Rightarrow \{ P \}\ S\ \{ Q \}$
Procedural abstraction
- $m(x) \mapsto S(x) \wedge \{ P \}\ S(e)\ \{ Q \} \Rightarrow \{ P \}\ m(e)\ \{ Q \}$
slide: The correctness calculus
In slide [10-correctness] correctness axioms have been given for
assignment, sequential composition, conditional statements
and iteration.
These axioms rely on the side-effect free nature
of expressions in the programming language.
Also, they assume convertibility between
programming language expressions and the expressions
used in the assertion language.
The assignment axiom states that for any post-condition Q we
can derive the (weakest) pre-condition by substituting the value e
assigned to the variable x for x in Q.
This axiom is related to the weakest pre-condition calculus
introduced by [Dijkstra76].
It is perhaps the most basic axiom in the correctness calculus
for imperative programs.
As an example, consider the assignment $x = 3$ and
the requirement $\{ P \}\ x = 3\ \{ y = x \}$.
Applying the assignment axiom, we have $\{ y = 3 \}\ x = 3\ \{ y = x \}$.
Consequently, when we are able to prove that P implies $y = 3$,
we have, by virtue of the first consequence rule, proved that
$\{ P \}\ x = 3\ \{ y = x \}$.
The next rule, for sequential composition, allows
us to break a program (fragment) into parts.
For convenience, the correctness formulae for multiple
program fragments that are composed in sequential order are
often organized as a so-called proof outline of the form
$\{ P \}\ S_1\ \{ R \}\ S_2\ \{ Q \}$.
When sufficiently detailed, proof outlines may be regarded
as a proof.
For example, the proof outline
$\{ s = i*(i+1)/2 \}\ i = i + 1;\ \{ s + i = i*(i+1)/2 \}\ s = s + i;\ \{ s = i*(i+1)/2 \}$
constitutes a proof for the invariance property discussed earlier.
Clearly, the correctness formula for the two individual components
can be proved by applying the assignment axiom.
Using the sequential composition rule, these components
can now be easily glued together.
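Spelled out, the two applications of the assignment axiom proceed backwards from the post-condition:

```latex
% Assignment axiom: { Q[x:=e] } x = e { Q }.
% Applied to s = s + i with post-condition s = i*(i+1)/2:
\{\ s + i = i*(i+1)/2\ \}\ s = s + i;\ \{\ s = i*(i+1)/2\ \}
% Applied to i = i + 1 with post-condition s + i = i*(i+1)/2:
\{\ s + (i+1) = (i+1)*(i+2)/2\ \}\ i = i + 1;\ \{\ s + i = i*(i+1)/2\ \}
% Since s + (i+1) = (i+1)*(i+2)/2 simplifies to s = i*(i+1)/2,
% the consequence rule and the composition rule together
% yield the invariance formula.
```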
As a third rule, we have a rule for conditional statements of the form
$\mathit{if}\ (b)\ S$.
As an example, consider the correctness formula
$\{ true \}\ \mathit{if}\ (x > y)\ z = x;\ \{ z > y \}$.
All we need to prove, by virtue of the inference rule
for conditional statements, is that $\{ x > y \}\ z = x\ \{ z > y \}$,
which (again) immediately follows from the assignment axiom.
As the last rule for proving correctness, we present here
the inference rule for iterative (while) statements.
The rule states that whenever we can prove that a certain
invariant I is maintained when executing the body of
the while statement (provided that the condition b is satisfied)
then, when terminating the loop,
we know that both I and $\neg b$ hold.
As an example, the formula
$\{ true \}\ \mathit{while}\ (i > 0)\ i{-}{-};\ \{ i \leq 0 \}$
trivially follows from the while rule by taking I to be true.
Actually, the while rule plays a crucial role in constructing
verifiable algorithms in a structured way.
The central idea, advocated among others by [Gries], is
to develop the algorithm around a well-chosen invariant.
Several heuristics may be applied to find the proper invariant
starting from the requirements expressed in the (output) predicate
stating the post-condition.
In addition to the assignment axiom and the basic inference rules
related to the major constructs of imperative programming languages,
we may use so-called structural rules to facilitate
the actual proof of a correctness formula.
The first structural (consequence) rule states that
we may replace a particular pre-condition for which we
can prove a correctness formula (pertaining to a program fragment S)
by any pre-condition of which the original pre-condition
is a consequence, in other words which is stronger than the original
pre-condition.
Similarly, we may replace a post-condition for which we
know a correctness formula to hold by any post-condition
that is weaker than the original post-condition.
As an example, suppose that we have proved the formula
$\{ x \geq 0 \}\ S\ \{ x < 0 \}$
then we may, by simultaneously applying the two consequence rules,
derive the formula
$\{ x > 0 \}\ S\ \{ x \leq 0 \}$
which amounts to strengthening the pre-condition
and weakening the post-condition.
The intuition justifying this derivation
is that we can safely promise less and require more,
as it were.
Finally, the rule most important to us in the present context
is the inference rule characterizing correctness
under procedural abstraction.
Assume that we have a function m with formal parameter x
(for convenience we assume a single parameter, but this
can easily be generalized to multiple parameters), of which
the (function) body consists of $S(x)$.
Moreover, assume that we can prove, for an arbitrary expression
e, the correctness formula $\{ P \}\ S(e)\ \{ Q \}$,
with e substituted for the formal parameter x in both
the conditions and the function body.
Then we also have
that $\{ P \}\ m(e)\ \{ Q \}$, provided
that P and Q do not contain references to local variables
of the function m.
In other words, we may abstract from a complex program fragment
by defining a function or procedure
and use the original (local) correctness proof
by properly substituting actual parameters for formal parameters.
The procedural abstraction rule, which allows us
to employ functions to perform correct operations,
may be regarded as the basic construct needed to verify
that an object embodies a (client/server) contract.
On the notion of behavior
The assertion logic presented in the
previous section allows us to reason
about the behavior of a system without
explicitly generating the possible sequences
of states resulting from the execution of the program.
However, underlying the inference rules
of our assertion logic we need a mathematical
model for the operational behavior of a system.
An operational model is needed to prove
the soundness of the inference rules.
Further, an operational model may aid in understanding the
meaning of particular language constructs
and their associated correctness rules.
In the following we will sketch the construction of
a transition system modeling the behavior
of an object-based program.
Studying the formal semantics is relevant to
understanding object orientation
only in so far as it provides a means with which to characterize
the desired behavior of object creation and
message passing in an unambiguous manner.
Transition system
A transition system for a program is a collection
of rules that collectively describe the effect of executing
the statements of the program.
A labeled transition system is one
that enables us to label the transition from one state
to another by some label indicating the observable
behavior of a program step.
In the transition system defined below, we will
employ states $\phi$, which may be decorated by object identifiers
$\alpha$, as in $\phi^{\alpha}$.
Object identifiers are created when creating a new
instance of an object type $\tau$.
We assume newly created object identifiers to be unique.
We assume that each object type $\tau$ has a
constructor (which is a, possibly empty,
statement that we write as $S_{\tau}$)
and an arbitrary number of methods m.
Each method m is assumed to be defined by
some statement, which we write as $S_m(e)$,
for method calls of the form $m(e)$.
Also, we allow an object $\alpha$ of type $\tau$
to have attributes
or instance variables v that may be accessed
(read-only) as $\alpha.v$ for an object identifier $\alpha$,
or as $x.v$ for an object variable x (which must have
$\alpha$ as its value).
To determine the visible behavior of a program,
we will employ labels of the form $\alpha$
(to denote the creation of an object $\alpha$)
and $m_{\alpha}$ (to indicate the invocation of
a method m for object $\alpha$).
We allow transitions to be labeled by sequences of
labels, which we write as $\lambda$ and which are concatenated
in the usual way.
We will define a transition system for a simple
language of which the syntax is defined in slide [10-syntax].
Expressions
syntax
- $e ::= v \mid x.v$
Elementary statements
- $s ::= v = e \mid x = \mathit{new}\ \tau \mid x.m(e)$
Compound statement
- $S ::= \epsilon \mid s \mid S_1;S_2 \mid \mathit{if}\ (b)\ S \mid \mathit{while}\ (b)\ S$
slide: The syntax of a simple OO language
Expressions are either local variables v
or object instance variables that we write
as $x.v$, where x is an object variable.
As elementary statements we have $v = e$
(indicating the assignment of (the value of) an expression
e to a local variable v),
$x = \mathit{new}\ \tau$ (which stands for the creation of a new
object of type $\tau$),
and $x.m(e)$
(which calls a method m with arguments e for object x).
The object variable x is associated with an object identifier
$\alpha$ by the state $\phi$ in which the statement containing
x is executed.
As compound statements we have an empty statement
$\epsilon$ (which is needed for technical reasons),
an elementary statement s (as defined above),
a sequential composition statement,
a conditional statement and an iteration statement,
similar to those in the previous section.
The transition rules for elementary statements
are given in slide [10-rules].
Assignment
rules
- $\langle v = e,\ \phi^{\alpha} \rangle \rightarrow \langle \epsilon,\ \phi[\alpha.v := e_{\phi}] \rangle$
Object creation
- $\frac{\langle S_{\tau},\ \phi^{\alpha} \rangle \xrightarrow{\lambda} \langle \epsilon,\ \phi' \rangle}{\langle x = \mathit{new}\ \tau,\ \phi \rangle \xrightarrow{\alpha \cdot \lambda} \langle \epsilon,\ \phi'[x := \alpha] \rangle}$
Method call
- $\frac{\langle S_m(e),\ \phi^{\alpha} \rangle \xrightarrow{\lambda} \langle \epsilon,\ \phi' \rangle}{\langle x.m(e),\ \phi \rangle \xrightarrow{m_{\alpha} \cdot \lambda} \langle \epsilon,\ \phi' \rangle}$
slide: Transition system -- rules
The assignment rule states that the assignment
of (the value of) an expression e to a (local)
variable v, in a state $\phi$ decorated by an object
identifier $\alpha$, results in the empty statement
and the state $\phi$ modified by assigning
the value of e in $\phi$ (which is written as $e_{\phi}$)
to the instance variable v of object $\alpha$.
Hence, decorations allow us to work in the
local environment of the object indicated by the
decoration.
The object creation
rule states that if we assume
a transition $\langle S_{\tau},\ \phi^{\alpha} \rangle \xrightarrow{\lambda} \langle \epsilon,\ \phi' \rangle$
(which states that the constructor for type $\tau$,
executed in state $\phi$ decorated by $\alpha$, results
in the state $\phi'$ with behavior $\lambda$),
then we may interpret the creation of a new $\tau$
object as resulting in behavior $\alpha \cdot \lambda$
(where $\alpha$ is the newly created object identifier)
and the state $\phi'$ in which the object variable x
has the value $\alpha$.
Finally, in a similar way,
the method call rule states that if we
assume
a transition $\langle S_m(e),\ \phi^{\alpha} \rangle \xrightarrow{\lambda} \langle \epsilon,\ \phi' \rangle$
(which states that executing
the statement $S_m(e)$,
that is, the code associated with method m
and arguments e, for object $\alpha$ in state $\phi$,
results in behavior $\lambda$ and state $\phi'$),
then we may interpret the
method call $x.m(e)$ in $\phi$
as a transition to state $\phi'$ displaying behavior
$m_{\alpha} \cdot \lambda$.
The rules for object creation and method call already
indicate that transition rules may be used
to construct a complex transition from
elementary steps.
In other words, a transition system defines
a collection of proof rules that allow us to
derive (state) transitions and to characterize
the behavior that may be observed.
To obtain a full derivation, we need in addition to
the rules for elementary statements
the rules for compound statements listed in slide [10-compound].
Composition
compound
- $\frac{\langle S_1,\ \phi \rangle \xrightarrow{\lambda_1} \langle \epsilon,\ \phi' \rangle \quad \langle S_2,\ \phi' \rangle \xrightarrow{\lambda_2} \langle \epsilon,\ \phi'' \rangle}{\langle S_1;S_2,\ \phi \rangle \xrightarrow{\lambda_1 \cdot \lambda_2} \langle \epsilon,\ \phi'' \rangle}$
Conditional
- $\langle \mathit{if}\ (b)\ S,\ \phi \rangle \rightarrow \langle \epsilon,\ \phi \rangle$ if $\phi(b) = false$
- $\frac{\langle S,\ \phi \rangle \xrightarrow{\lambda} \langle \epsilon,\ \phi' \rangle}{\langle \mathit{if}\ (b)\ S,\ \phi \rangle \xrightarrow{\lambda} \langle \epsilon,\ \phi' \rangle}$ if $\phi(b) = true$
Iteration
- $\langle \mathit{while}\ (b)\ S,\ \phi \rangle \rightarrow \langle \epsilon,\ \phi \rangle$ if $\phi(b) = false$
- $\frac{\langle S,\ \phi \rangle \xrightarrow{\lambda} \langle \epsilon,\ \phi' \rangle}{\langle \mathit{while}\ (b)\ S,\ \phi \rangle \xrightarrow{\lambda} \langle \mathit{while}\ (b)\ S,\ \phi' \rangle}$ if $\phi(b) = true$
slide: Transition system -- compound statement
The composition rule states that
if a statement $S_1$ transforms $\phi$ into $\phi'$
with behavior $\lambda_1$, and $S_2$ transforms $\phi'$ into
$\phi''$ with behavior $\lambda_2$,
then the compound statement $S_1;S_2$ transforms
$\phi$ into $\phi''$ with behavior $\lambda_1 \cdot \lambda_2$.
The conditional rules state that, depending
on the value of the boolean b, the statement
$\mathit{if}\ (b)\ S$ either has no effect
or results in a state $\phi'$, assuming that
S transforms $\phi$ into $\phi'$ with behavior $\lambda$.
The iteration rules state that, depending
on the value of the boolean b, the statement
$\mathit{while}\ (b)\ S$ either has no effect
or results in a state $\phi'$, assuming that
S transforms $\phi$ into $\phi'$ with behavior $\lambda$.
In contrast to the conditional, an iteration
statement is repeated when b is true,
in accordance with our operational
understanding of iteration.
Example
In our rules we have made a distinction between
unadorned states $\phi$ and states $\phi^{\alpha}$
decorated with an object identifier $\alpha$.
This reflects the distinction between the execution
of a program fragment in a global
context and in a local context, within the confines
of a particular object $\alpha$.
Assume, for example, that we have defined a counter type
ctr with a method inc that adds one to an instance variable n.
In slide [10-t-example], a derivation is given of the behavior
resulting from a program fragment consisting of
the creation of an instance of ctr,
a method call to inc
and the assignment of the value of the attribute n
of the counter to a variable v.
Program
- $x = \mathit{new}\ ctr;\ x.inc();\ v = x.n$
Transitions
- $\langle x = \mathit{new}\ ctr,\ \phi_1 \rangle \xrightarrow{ctr_1} \langle \epsilon,\ \phi_1[x := ctr_1] \rangle$ $[1]$
- $\langle n = n + 1,\ \phi_2^{\alpha} \rangle \rightarrow \langle \epsilon,\ \phi_2[\alpha.n := \alpha.n + 1] \rangle$ $[2]$
- $\langle x.inc(),\ \phi_2 \rangle \xrightarrow{inc_{\alpha}} \langle \epsilon,\ \phi_2[\phi_2(x).n := \phi_2(x).n + 1] \rangle$ $[2']$
- $\langle v = x.n,\ \phi_3 \rangle \rightarrow \langle \epsilon,\ \phi_3[v := \phi_3(x).n] \rangle$ $[3]$
Trace
- $\phi \xrightarrow{\lambda} \phi'$ with $\phi = \phi_1$, $\phi' = \phi_3$ and $\lambda = ctr_1 \cdot inc_{\alpha}$
slide: Transitions -- example
To derive the transition $\phi \xrightarrow{\lambda} \phi'$
corresponding with the program fragment
$x = \mathit{new}\ ctr;\ x.inc();\ v = x.n$,
we must dissect the fragment and construct
transitions for each of its (elementary)
statements, as shown in $[1]$, $[2]$ and $[3]$.
The second statement, the method call $x.inc()$,
needs two transitions $[2]$ and $[2\text{'}]$, of which the
first represents the execution of the body of inc
in the local context of the object created in $[1]$
and the second represents the effect of the method
call from a global perspective.
For the first statement, we have assumed that
the constructor for ctr is empty
and may hence be omitted.
Notice that the object identifier $\alpha$
(introduced in $[1]$) is assumed in $[2]$ to effect
the appropriate local changes to n.
After constructing the transitions for
the individual statements, we may compose these
transitions by applying the composition rule
and, in this case, identifying
$\phi_2$ with $\phi_1[x := \alpha]$
and
$\phi_3$ with $\phi_2[\alpha.n := \alpha.n + 1]$.
As observable behavior we obtain $ctr_1 \cdot inc_{\alpha}$
(where $ctr_1 = \alpha$),
which represents the creation of a counter
and its subsequent modification by inc.
Discussion
Transition systems, such as the one given above,
were originally introduced
as a means to model the behavior of CSP.
They have been extensively used to model
the operational semantics of programming languages,
including concurrent and object-oriented languages.
See, for example, [ABKR89] and [Eliens92].
In [AptO], transition systems have been used as
a model to prove the soundness and completeness
of correctness rules for concurrent programming
constructs.
Also in [AmBo93], transition systems are used
to demonstrate the validity of a proof system
for a parallel object-oriented programming language.
The interested reader is invited to explore
the sources mentioned for further study.
Objects as behavioral types
A syntax-directed correctness calculus as presented in
section \ref{10:verification}
provides, in principle, excellent support for a problem-oriented
approach to program development,
provided that the requirements a program has to meet can be made explicit in a
mathematical, logical framework.
When specifying requirements, we are primarily interested
in the abstract properties of a program, as may be expressed in some
mathematical domain.
However, when actually implementing the program
(and verifying its correctness) we mostly need to take recourse
to details we do not wish to bother with when reasoning
on an abstract level.
In this section we will discuss how we may verify
that an abstract type is correctly implemented by
a behavioral (implementation) subtype, following
[Am90].
Also, we will define precise guidelines for determining
whether two (behavioral) types satisfy
the (behavioral) subtype relation, following [Liskov93].
Abstraction and representation
In [Am90] a proposal is sketched to define the functionality
of objects by means of behavioral types.
Behavioral types characterize the
behavioral properties of objects in terms
of (possible) modifications of an abstract state.
So as to be able to ignore the details of an
implementation when reasoning about the properties
of a particular program, we may employ a
representation abstraction function which maps
the concrete data structures and operations to their
counterparts in the abstract domain.
Abstract data types -- representation function
Representation function -- abstraction
- $\alpha(\chi)(\alpha(\phi)) = \alpha(\phi') \iff \chi(\phi) = \phi'$
slide: Abstraction and representation
The diagram in slide [10-ADT] pictures the reasoning involved in
proving that a particular implementation is correct
with respect to a specification in some abstract mathematical
domain.
Assume that we have, in the concrete domain, an action
a that corresponds with a state transformation function
$\chi$.
Now assume that we have a similar operation in the
abstract domain, which we write as $\alpha(a)$,
with a corresponding state transformation function $\alpha(\chi)$.
To prove that the concrete operation a correctly
implements the abstract operation $\alpha(a)$, we must prove
that the concrete state modification $\chi$ resulting from
a corresponds with the modification that occurs in the abstract domain.
Technically speaking, we must prove that the diagram above commutes,
that is, that $\chi(\phi) = \phi' \iff \alpha(\chi)(\alpha(\phi)) = \alpha(\phi')$ whenever
we have that $\phi \xrightarrow{a} \phi'$.
To prove that a particular implementation
a respects the abstract operation $\alpha(a)$,
for which we assume abstract pre- and
post-conditions $\alpha(P)$ and $\alpha(Q)$,
we must find a representation invariant I
and (concrete) pre- and post-conditions P
and Q for which we can prove that
$\alpha(P) \wedge I \Rightarrow P$ and that $\alpha(Q) \wedge I \Rightarrow Q$.
Furthermore, the representation invariant I
must hold before and after the concrete
operation a.
The proof strategy outlined above is of particular relevance for
object-oriented program development, since the behavior of objects
may, as we have already seen, be adequately captured by contracts.
As an additional advantage, however, the method outlined enables
us to specify the behavior of an object in a more abstract way than
allowed by contracts as supported by Eiffel.
Realization
As an example, consider the
specification of a generic stack as given
in slide [10-specification].
The specification of the stack is based on the (mathematically)
well-known notion of sequences.
We distinguish between empty sequences, which we write as $\langle \rangle$,
and non-empty (finite) sequences, which we write as $\langle x_1,\ldots,x_n \rangle$.
Further, we assume a concatenation operator,
for which we define $s \cdot \langle \rangle = \langle \rangle \cdot s = s$ and
$\langle x_1,\ldots,x_n \rangle \cdot \langle y_1,\ldots,y_m \rangle = \langle x_1,\ldots,x_n,y_1,\ldots,y_m \rangle$.
A sequence is employed to represent the state of the stack.
Sequences -- abstract domain
- empty sequence -- $\langle \rangle$
- concatenation -- $\langle x_1,\ldots,x_n \rangle \cdot \langle y_1,\ldots,y_m \rangle = \langle x_1,\ldots,x_n,y_1,\ldots,y_m \rangle$
Specification
type stack T {
$s : seq\ T$;
axioms:
$\{ true \}\ push(e:T)\ \{ s' = s \cdot \langle e \rangle \}$
$\{ s \neq \langle \rangle \}\ pop()\ \{ s = s' \cdot \langle result \rangle \}$
};
slide: The specification of a stack
The operations push and pop may conveniently
be defined with reference to the sequence
representing the (abstract) state of the stack.
We use s and $s\text{'}$ to represent the state respectively before
and after the operation.
The operations themselves are completely specified by their
respective pre- and post-conditions.
Pushing an element e results in concatenating the one-element
sequence $\langle e \rangle$ to the stack's state.
For the operation pop we require that the state of the stack must be
non-empty.
The post-condition specifies that the resulting state $s\text{'}$ is a prefix
of the original state, that is the original state with the last element
(which is returned as a result) removed.
To prove that a particular implementation of the stack
is conformant with the type definition given above
we must prove that
$\{ I \wedge pre(\alpha(m(e))) \}\ m(e)\ \{ I' \wedge post(\alpha(m(e))) \}$
for both methods push and pop.
These proofs involve both an abstraction function $\alpha$
and a representation invariant I, relating the abstract state of
the stack to the concrete state of the implementation.
Now consider an implementation of the generic stack in C++,
as given in slide [10-realization].
implementation
template< class T > class as {
int t;
T a[MAX];
public:
as() { t = 0; }
void push(T e) { require(t<MAX-1); a[t++] = e; }
T pop() { require(t>0); return a[--t]; }
invariant:
0 <= t && t < MAX;
};
slide: The realization of a stack
To prove that this implementation may be regarded as an element of
the (abstract) type stack, we must find a representation (abstraction)
function to map the concrete implementation to the abstract domain,
and further we must specify a representation invariant
that allows us to relate the abstract properties to the properties
of
the implementation.
For the implementation in slide [10-realization],
the abstraction function $\alpha$ simply
creates the sequence of length t,
with elements $a[0],\ldots,a[t-1]$.
The representation invariant, moreover,
gives an explicit definition of this relation.
See slide [10-absf].
Abstraction function
- $\alpha(a,t) = \langle a[0],\ldots,a[t-1] \rangle$
Representation invariant
- $I(a,t,s) \equiv t = length(s) \wedge t \geq 0 \wedge s = \alpha(a,t)$
slide: Abstraction function and representation invariant
In order to verify that our implementation
of the abstract data type stack is correct
(that is as long as the bound MAX is not exceeded),
we must show, given that the representation invariant holds,
that the pre-conditions of the
abstract operations imply the pre-conditions
of the corresponding concrete operations,
and, similarly, that the post-conditions
of the abstract operations imply the post-conditions
of the concrete operations.
First, we show that for the operation push
the post-condition of the
abstract type specification
is indeed stronger than the (implicit) post-condition
of the implementation.
This is expressed by the following formula.
$s' = s \cdot \langle e \rangle \wedge I(a',t',s') \Rightarrow t' = t + 1 \wedge a'[t] = e$
Since we know that $I(a',t',s')$, we may derive that
$t' = t + 1$ and $a'[t] = e$.
To establish the correctness of the operation pop,
we must prove that the pre-condition specified
for the abstract operation is indeed stronger
than the pre-condition specified for the concrete
operation, as expressed by the formula
$I(a,t,s) \wedge s \neq \langle \rangle \Rightarrow t > 0$
It is easy to see that $t\; >\; 0$ immediately follows
from the requirement that the sequence is non-empty.
Finally, to prove that the operation pop
leaves the stack in a correct state, we must prove that
$s = s' \cdot \langle result \rangle \wedge I(a',t',s') \Rightarrow result = a'[t'] \wedge t' = t - 1$
which is done in a similar manner as for push.
Behavioral refinement is not restricted
to the realization of abstract specifications.
We will now look at a definition of behavioral
refinement, following [Liskov93],
that may serve as a guideline for programmers
to define behavioral subtypes, both abstract and concrete,
including subtypes extending the behavioral
repertoire of their supertypes.
In [Liskov93] the relation between behavioral
types is explained by means of a so-called
correspondence mapping, that relates
a subtype to its (abstract) supertype.
Correspondence
$\langle \alpha, \rho, \xi \rangle$
- $\alpha$ abstraction -- maps $\sigma$ values to $\tau$ values
- $\rho$ renaming -- maps subtype to supertype methods
- $\xi$ extension -- explains the effects of extra methods
slide: The subtype correspondence mapping
A correspondence mapping is a triple
consisting of an abstraction function $\%a$
(that projects the values of the subtype
on the value domain of the supertype),
a renaming $\%r$
(that defines the relation between methods defined
in both types)
and an extension map $\%x$ (that defines the
meaning of additional methods).
See slide [10-correspondence].
Technically, the function $\%a$ must be onto, that is
each value of the supertype domain must be
representable by one or more values of the subtype domain.
Generally, when applying the abstraction function,
we lose information (which is irrelevant from
the perspective of the supertype), for
example the specific ordering of items in a container.
Signature
$\%s\; <=\; \%t$
- $dom(\; m\_\{\%t\}\; )\; <=\; dom(\; m\_\{\%s\}\; )$
- $ran(\; m\_\{\%s\}\; )\; <=\; ran(\; m\_\{\%t\}\; )$
Behavior
- $pre(\; m\_\{\%t\}\; )[\; x\_\{\%t\}\; :=\; \%a(\; x\_\{\%s\}\; )]\; =>\; pre(\; m\_\{\; \%s\; \}\; )$
- $post(\; m\_\{\; \%s\; \}\; )\; =>\; post(\; m\_\{\; \%t\; \}\; )[\; x\_\{\; \%t\; \}\; :=\; \%a(\; x\_\{\; \%s\; \}\; )\; ]$
- $invariant(\; \%s\; )$ $=>$
$invariant(\; \%t\; )[\; x\_\{\; \%t\; \}\; :=\; \%a(\; x\_\{\; \%s\; \}\; )\; ]$
Extension -- diamond rule
$\%x(x.m(a))\; =\; \%p\_\{m\}$
- $\%f -\{x.m(a)\}-> \%f' /\ \%f -\{\%p_{m}\}-> \%Q$
$=>$ $\%a( x_{\%f'} ) = \%a( x_{\%Q} )$
slide: Behavioral subtyping constraints
To determine whether a type $\%s$ is a
(behavioral) subtype of a type $\%t$, one
has to define a correspondence mapping
$<<\; \%a,\; \%r,\; \%x\; >>$
and check the issues listed in slide [10-subtyping].
First, syntactically, we must check that the
signature of $\%s$ and $\%t$ satisfy the (signature)
subtyping relation defined in the previous chapter.
In other words, for each method m associated
with the object type $\%t$ (which we call $m\_\{\%t\}$),
and corresponding method $m\_\{\%s\}$
(which is determined by applying the renaming $\%r$)
we must check the (contravariant) function subtyping
rule, that is
$dom(\; m\_\{\%t\}\; )\; <=\; dom(\; m\_\{\; \%s\; \}\; )$
and
$ran(\; m\_\{\%s\}\; )\; <=\; ran(\; m\_\{\; \%t\; \}\; )$,
where ran is the range or result type of m.
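C++ enforces only a fragment of this rule: a virtual override may covariantly specialize its result type (the $ran( m_{\%s} ) <= ran( m_{\%t} )$ half), whereas broadening a parameter type yields an overload rather than an override. A minimal sketch, with hypothetical classes of our own:

```cpp
#include <cassert>

struct Animal { virtual ~Animal() {} };
struct Cat : Animal {};

struct AnimalShelter {
    virtual ~AnimalShelter() {}
    // result type: ran( m_t ) = Animal*
    virtual Animal* adopt() { return new Animal(); }
};

struct CatShelter : AnimalShelter {
    // accepted as an override because Cat* is a subtype of Animal*:
    // ran( m_s ) <= ran( m_t ), the covariant half of the rule
    Cat* adopt() { return new Cat(); }
};
```

Calling adopt through an AnimalShelter pointer to a CatShelter then delivers a Cat, which is safe precisely because the result type was only narrowed.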
Secondly, we must check that the behavioral
properties of $\%s$ respect those of $\%t$.
In other words, for each method m
occurring in $\%t$ we must check
that $pre(\; m\_\{\%t\}\; )[\; x\_\{\%t\}\; :=\; \%a(\; x\_\{\%s\}\; )]\; =>\; pre(\; m\_\{\%s\}\; )$
and that
$post(\; m\_\{\; \%s\; \}\; )\; =>\; post(\; m\_\{\; \%t\; \}\; )[\; x\_\{\; \%t\; \}\; :=\; \%a(\; x\_\{\; \%s\; \}\; )\; ]$.
Moreover, the invariant characterizing $\%s$
must respect the invariant
characterizing $\%t$,
that is
$invariant(\; \%s\; )\; =>\; invariant(\; \%t\; )[\; x\_\{\; \%t\; \}\; :=\; \%a(\; x\_\{\; \%s\; \}\; )\; ]$.
The substitutions $[\; x\_\{\; \%t\; \}\; :=\; \%a(\; x\_\{\; \%s\; \}\; )]$
occurring in the behavioral rules are meant
to indicate that each variable of
type $\%t$ must be replaced by a corresponding variable
of type $\%s$
to which the abstraction function is applied
(in order to obtain a value in the (abstract) domain of $\%t$).
Thirdly, it must be shown that the extension map
$\%x$ is well-defined.
The extension map must be defined in such a way
that each method call for an object
x of type $\%s$,
which we write as $x.m(a)$ where a represents
the arguments to the call,
is mapped to a program $\%p\_\{m\}$ in
which only calls appear to methods shared by
$\%s$ and $\%t$ (modulo renaming)
or external function or method calls.
In addition the diamond rule
must be satisfied,
which means that the states $\%f'$ and $\%Q$ resulting from
applying respectively $x.m(a)$ and $\%p\_\{m\}$
in state $\%f$ must deliver identical
values for x from the perspective of $\%t$,
that is after applying the abstraction function.
In other words, extension maps allow us to understand
the effect of adding new methods and to establish
whether they endanger behavioral compatibility.
In [Liskov93] a distinction is made between
constructors (by which objects are created),
mutators
(that modify the state or value of an object)
and observers
(that leave the state of an object unaffected).
Extension maps are only needed for mutator methods.
Clearly, for observer methods the result
of $\%x$ is empty, and constructors
are taken care of by the abstraction function.
Behavioral subtypes
The behavioral subtyping rules defined above
are applicable to arbitrary (sub)types, and not only
to (sub)types defined by inheritance.
As an example, we will sketch (still
following [Liskov93]) that a stack may be defined as
a behavioral subtype of the type bag.
Recall that a bag is a set allowing duplicates.
See slide [10-ex-subtype].
Behavioral subtypes
$stack\; <=\; bag$
- bag -- put, get
- stack -- push, pop, settop
Representation
- bag -- $<<\; elems,\; bound\; >>$ \zline{multiset}
- stack -- $<<\; items,\; limit\; >>$ \zline{sequence}
Behavior -- put/push
$put(i):$
$require\; size(\; b.elems\; )\; <\; b.bound$
$promise\; b' = << b.elems \uplus \{ i \}, b.bound >>$
$push(i):$
$require\; length(\; s.items\; )\; <\; s.limit$
$promise\; s' = << s.items \. i, s.limit >>$
slide: Behavioral subtypes -- example
Let the type bag support the methods
$put(i:Int)$ and $get():Int$ and assume
that the type stack supports
the methods $push(i:Int)$, $pop():Int$ and
in addition a method $settop(i:Int)$
that replaces the top element of
the stack with its argument.
Now, assume that a bag is represented by
a pair $<< elems, bound >>$, where
elems is a multiset (a set
that may contain multiple occurrences of the same value)
and bound is an integer indicating the maximal
number of elements that may be in the bag.
Further, we assume that a stack is represented as
a pair $<<\; items,\; limit\; >>$,
where items is a sequence and limit is
the maximal length of the sequence.
For example $<<\; \{\; 1,2,7,1\; \},\; 12\; >>$
is a legal value of bag
and $<< 1 \. 2 \. 7 \. 1, 12 >>$
is a legal value of stack.
The behavioral constraints for respectively the
method put
for bag
and push
for stack
are given as pre- and post-conditions in slide [10-ex-subtype].
To apply put, we require that the size
of the multiset is strictly smaller than the bound
and we ensure that the element i is inserted
when that pre-condition is satisfied.
The multiset union operator $\uplus$
is employed to add the new element to
the bag.
Similarly, for push we require the length
of the sequence to be smaller than the limit
of the stack
and we then ensure that the element is appended
to the sequence.
As before, we use the primed variables $b'$ and $s'$
to denote the values of
the bag b and the stack s after
applying put and push, respectively.
Proceeding from the characterization of bag
and stack we may define the correspondence
mapping $<<\; \%a,\; \%r,\; \%x\; >>$ as in slide [10-ex-correspondence].
Correspondence
$stack\; ->\; bag$
- $\%a( << items, limit >> ) = << mk\_set( items ), limit >>$
where
$mk\_set( \%e ) = \emptyset$
$mk\_set( e \. s ) = mk\_set(s) \uplus \{ e \}$
- $\%r(\; push\; )\; =\; put$, $\%r(\; pop\; )\; =\; get$
- $\%x(s.settop(i))$ $=$ $s.pop();\; s.push(i);$
slide: Behavioral subtypes -- correspondence
To map the representation of a stack to
the bag representation we use the function
$mk\_set$ (which is inductively defined to map
the empty sequence to the empty set and to
transform a non-empty sequence into the union
of the one-element multiset of its first element
and the result of applying $mk\_set$ to the remaining
part).
The stack limit is left unchanged, since
it directly corresponds with the bound of the bag.
The renaming function $\%r$ maps push to put
and pop to get, straightforwardly.
And, the extension map describes the result of $settop(i)$
as the application of (subsequently) $pop()$ and $push(i)$.
Proof obligations -- push/put
- $size(\; \%a(s).\; elems\; )\; <\; \%a(s).bound$
$=>$ $length(\; s.items\; )\; <\; s.limit$
- $s' = << s.items \. i, s.limit >>$
$=>$ $\%a(s\text{'})$ $=$ $<<\; \%a(s).elems\; \backslash uplus\; \{\; i\; \},\; \%a(s).bound\; >>$
slide: Behavioral subtypes -- proof obligations
With respect to the behavioral definitions given for
push and put we have to verify that
$pre(\; put(i)\; )[\; b\; :=\; \%a(s)\; ]\; =>\; pre(\; push(i)\; )$
and that
$post(\; push(i)\; )\; =>\; post(\; put(i)\; )[\; b\; :=\; \%a(s)\; ]$.
These conditions, written out fully in slide [10-ex-proof],
are easy to verify.
Generally, a formal proof is not really necessary
to check that two types satisfy the behavioral
subtype relation.
As argued in [Liskov93],
the definition of the appropriate behavioral
constraints and the formulation of
a correspondence mapping is already a significant
step towards verifying that the types have the
desired behavioral properties.
Specifying behavioral compositions
The notion of behavioral types
may be regarded as the formal underpinning of
the notion of contracts specifying
the interaction between a client and server (object);
cf. [Meyer93].
Due to the limited power of the (boolean)
assertion language, contracts as supported by
Eiffel are more limited in what may be specified
than (a general notion of) behavioral types.
However,
some of the limitations are due
not to the assertion language itself,
but to the local nature of specifying object behavior by
means of contracts.
See also [Meyer93].
To conclude this chapter,
we will look at an example illustrating the need
to specify global invariants.
Further we will briefly look at alternative formalisms
for specifying the behavior of collections of objects,
and in particular we will
explore the interpretation of contracts as
behavioral compositions.
Global invariants
Invariants specify the constraints on the state
of a system that must be met for the system
to be consistent.
Clearly, as elementary logic teaches us,
an inconsistent system is totally unreliable.
Some inconsistencies cannot be detected locally,
within the scope of an object,
since they may be caused by actions that do not involve
the object directly.
An example of a situation in which an
externally caused inconsistent object state may occur
is given in slide 10-invariants.
(The example is taken from [Meyer93], but rephrased in C++.)
Problem -- dynamic aliasing
class B; // forward declaration

class A {
friend class B; // lets B inspect A::forward in its invariant
public:
  A() { forward = 0; }
  void attach(B* b); // needs the definition of B, see below
  bool invariant();
private:
  B* forward;
};

class B {
friend class A; // lets A inspect B::backward in its invariant
public:
  B() { backward = 0; }
  void attach(A* a) { backward = a; }
  bool invariant() { return !backward || backward->forward == this; }
private:
  A* backward;
};

void A::attach(B* b) { forward = b; b->attach(this); }
bool A::invariant() { return !forward || forward->backward == this; }
slide: Establishing global invariants
When creating an instance of A, the forward pointer
to an instance of B is still empty.
Hence, after creation, the invariant of
the object is satisfied.
Similarly, when an instance of B is created
and attached to the forward pointer,
the object itself is, as a consequence, attached to the
backward pointer of that instance of B.
After this, the
invariant is still satisfied.
However, when a second instance of A is created,
for which the same instance of B is attached
to the forward pointer,
the invariant for this object will hold,
but as a result the invariant of the first
instance of A is violated. See below.
A a1, a2; B b;
a1.attach(&b);
a2.attach(&b); // violates the invariant of a1
This violation cannot be detected by the object
itself, since it is not involved
in any activity.
Of course, it is possible to check externally
for the objects not directly involved
whether their invariants are still satisfied.
However, the cost of exhaustive checking will
in general be prohibitive.
Selective checking is feasible only
when guided by an adequate specification
of the possible interferences between object states.
Specifying interaction
Elementary logic and set-theory provide
a powerful vehicle for specifying
the behavior of a system,
including the interaction between its components.
However, taking into account that many
software developers prefer a more
operational mode of thinking
when dealing with the intricacies of
complex interactions, we will briefly look
at formalisms that allow a more explicit
specification of the operational aspects
of interaction and communication,
yet to some extent support reasoning about
such specifications.
See slide 10-interaction.
Contracts -- behavioral compositions
interaction
- specification, refinement, conformance declarations
Scripts -- cooperation by enrollment
- roles, initialization/termination protocols, critical role set
Multiparty interactions -- communication primitive
- frozen state, fault-tolerance, weakening synchrony
Joint action systems -- action-oriented
- state charts, refinement, superposition
slide: Specifying interactions
In [HHG90], a notion of behavioral contracts
is introduced that allows for characterizing
the behavior of compositions of objects.
Behavioral contracts fit quite naturally in the
object oriented paradigm, since they allow
both refinement and (type) conformance declarations.
See below.
It remains somewhat unclear, however, what specification
language the behavioral contracts formalism
is intended to support.
On the other hand, from an implementation perspective
the interactions captured by behavioral contracts
seem to be expressible also within the confines
of a class system supporting generic classes
and inheritance.
A similar criticism seems to be applicable to the formalism
of (role) scripts as proposed in [Francez].
Role scripts allow the developer to specify
the behavior of a system as a set of roles and the
interaction between objects as subscribing to a role.
In contrast to behavioral contracts, the
script formalism may also be applied to describe
the behavior of concurrently active objects.
In particular, the script formalism allows
for the specification of predefined initialization and
termination policies and for the designation of
a so-called critical role set,
specifying the number and kind of participants
minimally required for a successful computation.
Also directed towards the specification of concurrent
systems is the multi-party interactions formalism
proposed in [Evangelist],
which is centered around a (synchronous) communication
primitive allowing multiple objects to interact
simultaneously.
The notion of frozen state (which may be understood as
an invariance requirement that holds during the interaction)
may be useful in particular for the specification of
fault-tolerant systems.
An interesting research issue in this respect is to what extent
the assumption of synchrony may be weakened
in favor of efficiency.
A rather different orientation towards specifying
the interaction between collections of concurrently active
objects is embodied by the joint action systems
approach described in [Kurki].
Instead of relying on the direct communication
between objects, joint action systems proceed
from the assumption that there exists some global
decision procedure that decides which actions
(and interactions) are appropriate.
Joint action systems
action $service()$ by client c; server s is
when c.requesting && s.free do
<body>
slide: Specifying actions -- example
An example of an action specification is
given in slide [10-action].
Whether the service is performed depends
upon the state of both the client and the server object
selected by the action manager.
[Kurki] characterize their approach as
action-oriented to stress the importance
of specifying actions in an independent manner
(as entities separate from classes and objects).
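In a conventional language, the role of the action manager can be approximated by a guarded function that a (hypothetical) global scheduler invokes on selected client/server pairs; the sketch below, with names of our own choosing, only illustrates the guard-plus-body structure of the action in slide [10-action].

```cpp
#include <cassert>

struct Client { bool requesting; };
struct Server { bool free; };

// the action service(): guard "when c.requesting && s.free", then the
// body; a global action manager would select eligible participant
// pairs and fire the action whenever the guard holds
bool service(Client& c, Server& s) {
    if (c.requesting && s.free) {
        // <body> of the action: the request is served
        c.requesting = false;
        s.free = false;
        return true;  // the action fired
    }
    return false;     // guard not satisfied: nothing happens
}
```

Note that the action is defined outside both participating classes, which is exactly the action-oriented separation the approach advocates.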
An interesting feature of the joint action systems
approach is that the behavior of individual objects
is specified by means of state charts,
a visual specification formalism based on [Harel87].
The specification formalism adopted gives rise
to interesting variants on the object-oriented
repertoire,
such as inheritance and refinement by superposition.
From a pragmatic viewpoint, the assumption of
a global manager seems to impose high demands
on system resources.
Yet, as a specification technique, the concept
of actions may turn out to be surprisingly
powerful.
In summary, this brief survey of specification formalisms
demonstrates that there is a wide variety
of potentially useful constructs that all bear
some relevance to object-oriented modeling,
and as such may enrich the repertoire
of (object-oriented) system developers.
Contracts as protocols of interaction
Contracts as supported by Eiffel and
Annotated C++ are a very powerful means of
characterizing the interaction between a server object
and a client object.
However, with software becoming increasingly complex,
what we need is a mechanism to characterize
the behavior of collections or compositions of objects
as embodied in the notion of behavioral contracts
as introduced in [HHG90].
A contract (in the extended sense)
lists the objects that participate in the task
and characterizes the dependencies and constraints
imposed on their mutual interaction.
For example, the contract model-view, shown below
(in a slightly different notation than the original
presentation in [HHG90]), introduces
the object model and a collection of view objects.
Also, it characterizes the minimal assumptions with respect
to the functionality these objects must support
and it gives an abstract characterization of
the effect of each of the supported operations.
contract model-view< V > {
subject : model supports [
state : V;
value( val : V ) $|->$ [state = val]; notify();
notify() $|->$ $\forall$ v $\e$ views $\bl$ v.update();
attach( v : view ) $|->$ v $\e$ views;
detach( v : view ) $|->$ v $\not\e$ views;
]
views : set where view supports [
update() $|->$ [view reflects state];
subject( m : model ) $|->$ subject = m;
]
invariant:
$\forall$ v $\e$ views $\bl$ [v reflects subject.state]
instantiation:
$\forall$ v $\e$ views $\bl$ subject.attach(v) & v.subject(subject);
subject.notify();
}
slide: The Model-View contract
To indicate the type of variables, the notation
$v : type$ is used expressing that variable v
is typed as type.
The object subject of type model has an
instance variable state of type V that
represents (in an abstract fashion) the value
of the model object.
Methods are defined using the notation
$m( args )$ $|->$ $actions$.
The actions may consist either of other method calls
or of conditions that are considered to be satisfied
after calling the method.
Quantification as for example in
- $\forall$ v $\e$ views $\bl$ v.update()
is used to express that the method $update()$
is to be called for all elements in views.
The model-view contract specifies
in more formal terms the MV part of the MVC paradigm
discussed in section [MVC].
Recall, that the idea of a model-view pair
is to distinguish between the actual
information (which is contained in the model object)
and the presentation of that information, which is
taken care of by possibly multiple view objects.
The actual protocol of interaction between a model
and its view objects is quite straightforward.
Each view object may be considered as a handler
that must minimally have a method to install
a model and a method update which is invoked,
as the result of the model object calling notify,
whenever the information contained in the model
changes.
The effect of calling $notify()$ is abstractly
characterized as a universal quantification
over the collection of view objects.
Calling $notify()$ for subject results in calling
$update()$ for each view.
The meaning of $update()$ is abstractly represented as
- update() $|->$ [view reflects state];
which tells us that the state of the subject
is adequately reflected by the view object.
The invariant clause of the model-view
contract states that every change of the (state of the)
model will be reflected by each view.
The instantiation clause describes, in a rather operational way,
how to initialize each object participating in the contract.
In order to instantiate such a contract, we need to define
appropriate classes realizing the abstract entities
participating in the contract,
and further we need to define how these
classes are related to their abstract counterparts
in the contract by means of what we may call, following [HHG90],
conformance declarations.
Conformance declarations specify, in other words, how
concrete classes embody an abstract role,
in the same sense as in
the realization of a partial type
by means of inheritance.
eliens@cs.vu.nl
draft version 0.1 (15/7/2001)