C. VERHOEF
Eindhoven University of Technology
Department of Mathematics and Computing Science
P.O. Box 513, NL-5600 MB Eindhoven, The Netherlands
email: chrisv@win.tue.nl
NOTE: Since this document contains many formulas that could not be processed by the TeX-to-HTML facility, a PostScript file of this document is available on my SOS page.
We propose a syntactic format, the panth format, for structured operational semantics in which, besides ordinary transitions, also predicates, negated predicates, and negative transitions may occur, such that if the rules are stratifiable, strong bisimulation equivalence is a congruence for all the operators that can be defined within the panth format. To show that this format is useful we take some examples from the literature that satisfy the panth format but none of the formats proposed by others. The examples touch upon issues such as priorities, termination, convergence, discrete time, recursion, (infinitary) Hennessy-Milner logic, and universal quantification.
[IMAGE ]
Figure 1.1: The lattice of formats
In this way a whole lattice of formats came into being. We depict this lattice in figure 1.1. An arrow from one format to another indicates that all operators definable in the first format can also be defined in the second one. If there are no arrows connecting two formats, they are (syntactically) incomparable. The most basic format originates from [De Simone1985]. Yet it is already powerful enough to define all the usual operators of, for instance, CCS or ACP. The GSOS format of [Bloom et al.1988] allows negative premises but no look-ahead, and the tyft/tyxt format of [Groote and Vaandrager1992] allows look-ahead but no negative premises. They both generalize the format of De Simone. The positive GSOS format is, so to speak, the greatest common divisor of the GSOS and the tyft/tyxt format. The ntyft/ntyxt format of [Groote1990] is, using the same informal phrasing, the least common multiple of the tyft/tyxt format and the GSOS format: it allows both look-ahead and negative premises. The path format of [Baeten and Verhoef1993] generalizes the tyft/tyxt format with predicates; path stands for ``predicates and tyft/tyxt hybrid format''. In this paper we discuss the panth format, which stands for ``predicates and ntyft/ntyxt hybrid format''. The dashed arrows in figure 1.1 point to it. We will not give the definitions of all the formats in the lattice, only those of the four formats in the upper diamond.
The main result of this paper is a congruence theorem stating that if a so-called term deduction system satisfies the panth format and is stratifiable, then strong bisimulation is a congruence for all the operators that can be defined within the format. First, we briefly explain the italics. A term deduction system is a generalization of a transition system specification ([Groote and Vaandrager1992]): it allows not only transitions but also (unary) predicates on states. The panth format is a syntactic constraint on a term deduction system; it still allows the occurrence of both transitions and predicates, and their negations, in the premises. A term deduction system is stratifiable if the complexity of the conclusion of each rule is greater than the complexity of its premises. This notion is based on [Groote1990]. The notion of strong bisimulation originates from [Park1981], but we require in addition that bisimilar processes satisfy the same predicates; cf. [Baeten and Verhoef1993]. Now that we have an idea of the significant notions occurring in the main result, we briefly discuss its proof. [Baeten and Verhoef1993] already conjectured that this result could be proved in the same way as their congruence theorem for the path format. Indeed, this turns out to be the case: we code each predicate as a binary relation and we apply the congruence theorem of [Groote1990] to the coded system. This coding trick was first announced by [Groote and Vaandrager1992] and independently found by others, most notably [Baeten and Verhoef1993] and Wan Fokkink. As a consequence of this coding trick, all the operators that can be defined in the panth format can also be defined in Groote's ntyft/ntyxt format. This observation might give rise to the question whether there is a need for the panth format at all. Next, we will motivate the need for this new format.
An advantage of the panth format is that it simply provides more syntactic freedom than other formats for defining rules, since we can use transitions and predicates and both their negations, whereas in other formats we either have predicates but no negative premises, or negative premises but no predicates. This is not just an intrinsic advantage, since there are examples of operational semantics in the literature in which the combination of transitions and predicates with negative transitions and/or negated predicates occurs. We sketch this in the next paragraph.
In the literature we see more and more that operational rules in the style of Plotkin are decorated with extra predicates on states to express matters like (un)successful termination, convergence, divergence ([Aceto and Hennessy1992]), enabledness ([Bergstra et al.1994]), maximal delay, side conditions ([Moller and Tofts1990]), and so on. [Baeten and Verhoef1993] give many examples of this kind of decorated transition rules in their paper on the path format, thereby showing that there is a need for a general format describing such decorated rules. Another phenomenon that we see in the literature is the use of negative premises in rules defining an operational semantics. We find negative premises used to operationally describe deadlock detection ([Larsen1989]), sequencing ([Bloom et al.1988]), priorities ([Baeten and Bergstra1988]), probabilistic behaviour ([Larsen and Skou1992]), urgency ([Bolognesi and Lucidi1992]), and various real ([Klusener1991]) and discrete time ([Baeten and Bergstra1992]) settings. It will therefore not be very surprising that there are also hybrid rules using both decorations and negative premises (we will treat some of them in the applications). This is where the panth format comes into play, since these hybrid rules quite often turn out to satisfy the panth format and to be stratifiable. The advantage is that we then immediately have that strong bisimulation is a congruence for all the operators defined in this way; a property that we wish to hold in many cases.
The above advantage is not only of practical value but also of intuitive value, since encoding rules to fit one of the known formats in order to get congruence in return often works against the intuitive character of the original rules. Another disadvantage of such a coding trick is that there are now two transition systems that have to be shown equivalent. A quick solution to the latter problem is to throw away the original transition system, which is inadvisable in our opinion. In fact, many people prefer to use their own rules rather than encoded rules (that the reader has to decode) and choose to verify the congruence property without a general congruence theorem. We think that our panth format is very user-friendly in the sense that people can immediately apply our congruence result to their own rules instead of first having to become coding experts.
There are also theoretical advantages to adding predicates to known formats. For instance, [Baeten and Verhoef1993] observe that some negative premises can be expressed positively using predicates, and pose the question which negative premises can be written positively with predicates. Vaandrager gives a partial answer: for any GSOS system there exists an equivalent positive GSOS system over the same language, extended with positive predicates. Vaandrager and Verhoef proved on scratch paper that this result extends to the infinite case. However, in this paper we do not delve into these theoretical issues.
Now that we have given some motivation for this paper, we discuss its organization in the remainder of this section. The paper consists of two parts: a practical and a theoretical part. This is because we pursue two communicative targets. The first is to give rules of thumb, accompanied by instructive examples, for readers merely interested in applying our congruence theorem. The second is to formally treat our theory and prove the congruence theorem; this part is for readers more interested in the theoretical side of this paper. We did not choose a chronological ordering for our paper. In section 2 we start with the end: namely, the applications. At first sight this may seem a bit illogical, but there are good reasons for this ordering. An important reason is that (uninitiated) readers can see that it is not at all necessary to go through all the theory to be able to apply the congruence theorem, and that mostly a few simple rules of thumb will do. Another reason justifying this ordering is that the area of application is operational semantics. Operational rules are often easy to read and, moreover, they can be understood without the theoretical part. The last and maybe most important reason for this ordering is that readers can immediately see whether their operational semantics has a good chance of fitting our format. If this is the case, the time has come to read on and enter the theoretical part of this paper. An additional advantage is that those readers will already have a good impression of the notions that will be made precise in the second part. This part starts in section 3, where the notions stratifiable and term deduction system are made precise. Also in this section we do our very best not to lose the reader, by interspersing a running example among the abstract definitions.
Following [Groote1990], we show that stratifiability is a sufficient condition on a term deduction system to guarantee that there exists a transition relation that agrees with it. In section 4, we define the panth format and the notion of strong bisimulation in the presence of predicates on states. Then we state and prove our main result: the congruence theorem. The last section contains concluding remarks and discusses future work.
We use the first example to define the significant notions informally: the panth format and stratifiability.
The first example is an operational semantics of a basic process algebra with priorities, _[theta][IMAGE ], that originates from [Baeten and Bergstra1988]; it can also be found in [Baeten and Weijland1990]. In this language we have alternative and sequential composition and a priority operator (denoted +, [IMAGE ], and [IMAGE ] respectively) and a set A of atomic actions. There is also a partial ordering < on the set of atomic actions to express priorities. For instance, if a<b and b and c are not related, we have [IMAGE ] and [IMAGE ]. We list the operational semantics of _[theta][IMAGE ] in table i. This operational semantics is a small one; still, it contains, besides transitions [IMAGE ], also (postfix) predicates [IMAGE ]a[IMAGE ], for all [IMAGE ], and both their negations [IMAGE ] and [IMAGE ]a[IMAGE ]. So this example is particularly suitable to informally introduce our panth format. For completeness we recall that [IMAGE ] means that there is no x' such that [IMAGE ], and that [IMAGE ]a[IMAGE ] holds if we do not have [IMAGE ]a[IMAGE ]. Often, we will omit the centered dot: [IMAGE ].
[IMAGE ]
Table i: A transition system for _[theta][IMAGE ].
There are two conditions that must hold for a transition system before we can apply our congruence theorem. They are that the rules have to be in panth format and that the system has to be stratifiable. We first list the conditions for the panth format.
Check the following for each rule. All the transitions in the premises must end in distinct variables; denote this set of variables by Y. If the conclusion is a transition [IMAGE ], then either t=x for a variable [IMAGE ], or [IMAGE ] with [IMAGE ] distinct variables not occurring in Y. If the conclusion is of the form Pt, then we treat t as above (P is some unary predicate). Of course, f is an n-ary function symbol.
Now it is easy to verify that the rules of table i are in panth format, but it will be even easier if we also list the things that we do not have to worry about.
There is no restriction on the number of premises. There is also no restriction on terms occurring in predicates, negated predicates, and negated transitions in the premises. There is no restriction on a term occurring in the left-hand side of a transition in a premise or in the right-hand side of a conclusion.
As an example we treat the last but one rule of table i. There is just one positive transition, ending in a variable x'; for the negated predicates and negative transitions there is nothing to check, since there are no restrictions on their terms. The conclusion begins with a term of the form f(x) and [IMAGE ]. So this rule is in panth format. The other rules are treated in the same way, only more simply.
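The variable conditions just checked are purely syntactic, so they can be sketched as a small program. The following is a minimal sketch, not part of the paper's formalism, using a hypothetical term representation: a variable is a string, a composite term is a (function symbol, argument list) pair, and a positive transition premise is a (source, label, target) triple. All names are illustrative.

```python
def is_var(t):
    # In this hypothetical encoding, variables are plain strings.
    return isinstance(t, str)

def panth_variable_conditions(positive_premises, conclusion_source):
    """Check the two conditions sketched in the text:
    1. the targets of the positive transition premises are distinct variables;
    2. the source of the conclusion is a single variable, or f(x1,...,xn)
       with x1,...,xn distinct variables not among the premise targets."""
    targets = [tgt for (_, _, tgt) in positive_premises]
    if not all(is_var(t) for t in targets) or len(set(targets)) != len(targets):
        return False
    if is_var(conclusion_source):
        return True
    _, args = conclusion_source
    if not all(is_var(a) for a in args):
        return False
    return len(set(args)) == len(args) and set(args).isdisjoint(targets)

# The shape of the priority rule treated above: premise x -a-> x',
# conclusion starting from theta(x).
print(panth_variable_conditions([("x", "a", "x'")], ("theta", ["x"])))  # True
```

Note that, as in the text, nothing is checked for negated premises: their terms are unrestricted.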
Now we give the rules of thumb for stratifiability. This condition is a bit more involved: we have to define a map, called a stratification, for which two conditions must hold for each rule instantiated with closed terms. If a stratification exists for a set of rules, we call this set stratifiable. Roughly, a rule is stratifiable if the complexity of the conclusion is greater than the complexity of its premises. This complexity is measured with a stratification. The arguments that a stratification takes are positive transitions and predicates; we call them positive formulas. A stratification measures the complexity of its arguments in terms of numbers, so it ranges over numbers. The following two conditions on a stratification S, for every rule instantiated with closed terms, express that the complexity of the premises may not exceed the complexity of the conclusion. Let c be the conclusion of a closed instantiation of a rule and let h be any positive premise of it. Then we want that [IMAGE ]. Now we treat the negative premises. Since S is only defined on positive formulas, we have to turn the negative formulas into positive ones. There are two cases. First, let [IMAGE ] be a closed instantiation of a negative transition. Then we want that [IMAGE ] for all closed terms s. Secondly, let [IMAGE ] be a closed instantiation of a negated predicate P; then we want that S(Pt)<S(c). See definition 7 for a formal definition.
Next, we will give a recipe for finding a stratification. In most cases we can find a stratification (for which the two conditions hold) by measuring the complexity of a positive formula in terms of counting a particular symbol occurring in the conclusion of a rule with negative premises.
As an example we give a stratification for the rules in table i. The rules containing negative premises have a [IMAGE ] in their conclusion. We define a map that counts the number of [IMAGE ]'s as follows: let t be a closed term with n occurrences of [IMAGE ]'s; then [IMAGE ]a[IMAGE ]. Now we check the two conditions for the last but one rule. Replace each x and x' by closed terms t and t'. Since the number of [IMAGE ]'s occurring in [IMAGE ] is one greater than the number of [IMAGE ]'s occurring in t, we are done. The other rules are dealt with just as simply.
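The counting recipe can be sketched concretely. The following is an illustrative sketch (not taken from the paper): a stratification that measures a closed term by the number of occurrences of one designated function symbol, here named "theta" as a placeholder for the priority operator. Terms are encoded as before: a variable is a string, a composite term is a (function symbol, argument list) pair, and constants are symbols with an empty argument list.

```python
def count_symbol(term, symbol):
    """Count the occurrences of `symbol` in a term, where a term is either
    a variable (a string) or a (function_symbol, [arguments]) pair."""
    if isinstance(term, str):
        return 0
    f, args = term
    return (f == symbol) + sum(count_symbol(a, symbol) for a in args)

# theta(a + theta(b)) contains two occurrences of theta:
t = ("theta", [("+", [("a", []), ("theta", [("b", [])])])])
print(count_symbol(t, "theta"))  # 2
```

A stratification would then assign this count (of the term in a positive formula) as the formula's complexity; the point of the recipe is that each rule with negative premises adds one occurrence of the counted symbol in its conclusion.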
[IMAGE ]
Table ii: The rules of Aceto and Hennessy for a[IMAGE ], [IMAGE ], and their action relations.
It is easy to see that the operational semantics consisting of the rules in table ii satisfies the panth format. We will give a stratification. We already explained that the first thing to do is to look at the rules with negative premises. In this case there is just one such rule. In its conclusion we see the symbol [IMAGE ]. Define a map S that counts the number of [IMAGE ]'s occurring in a positive formula. It is easy to see that this map is a stratification. We check the two conditions for the negative rule. Replace each x and y by closed terms t and s respectively. Since [IMAGE ]a[IMAGE ], the negative condition holds. For the positive condition we have [IMAGE ]. The other rules are also very simple.
Aceto and Hennessy are interested in rooted weak bisimulation instead of strong bisimulation, so our theorem does not directly apply to their situation. However, [Baeten and Verhoef1993] show, for an operational semantics of [Glabbeek1987] for ACP with abstraction, that rooted weak bisimulation is a congruence with the aid of their congruence theorem for strong bisimulation. We leave it as an open problem whether a similar trick can also be applied to the CCS-like process algebra of Aceto and Hennessy.
[IMAGE ]
Table iii: BPA with discrete time.
The next example is an operational semantics of [Baeten and Bergstra1992] describing a basic process algebra with relative discrete time. In table iii we list their rules. The [IMAGE ] stands for the action a in the current time slice and [IMAGE ] stands for the discrete time unit delay. A transition [IMAGE ] denotes the passage to the next time slice. It is easy to see that the rules of table iii satisfy the panth format. Baeten and Bergstra apply a coding trick so that their rules satisfy the ntyft/ntyxt format of Groote, and give a stratification S with [IMAGE ] if the number of function symbols of t equals n. If we just add [IMAGE ]a[IMAGE ], with the number of function symbols of t equal to n (and do not encode the rules), we are done. Baeten and Bergstra still have to show that their system is equivalent to the encoded one. Note that according to our recipe it suffices to count only the number of + signs in t.
[IMAGE ]
Table iv: Recursion rules for BPA with discrete time.
We extend the last operational semantics with rules concerning recursion. The resulting example is particularly interesting, since it deepens our insight into the notion of a stratification. Before we continue, we briefly explain some recursion terminology. We extend the signature with a set of constants called process names, with typical elements [IMAGE ]. A recursive specification E over a set N of process names is a set of equations of the form X=t such that t is a closed term that may only contain guarded occurrences of process names in N, and [IMAGE ]. An occurrence of a process name in a term is guarded if it occurs in a subterm of the form [IMAGE ] or [IMAGE ]. The intention of a recursive specification is to (uniquely) specify the behaviour of infinite processes. The guardedness demand serves to exclude specifications that specify more than one process, such as X=X. Now [IMAGE ] denotes the X component of a solution of the recursive specification E. The expression [IMAGE ] is short-hand for the right-hand side of the equation belonging to X with every process name Y replaced by [IMAGE ]. So, for example, if [IMAGE ], the expression [IMAGE ] is short-hand for [IMAGE ]. It is easy to see with the rules of tables iii and iv that [IMAGE ] and [IMAGE ].
Our recipe for finding a stratification was to count a particular function symbol occurring in the conclusion of a rule with a negative premise. In this case it is the + symbol. Since there can be any finite number of + symbols in the premise of a recursion rule, whereas in its conclusion there is not a single + symbol, our approach no longer works; so, a fortiori, the stratification of Baeten and Bergstra will not work. We solve this by assigning to these dangerous conclusions the ordinal number [IMAGE ]. Define a stratification S as follows. Let n be the number of unguarded occurrences of process names in t and let m be the number of + symbols that occur in t. Let [IMAGE ]a[IMAGE ] with [IMAGE ] either a or [IMAGE ]. Now it is not hard to check that the two conditions hold for S. As an example, we check a recursion rule: [IMAGE ], since we have forbidden unguarded occurrences in right-hand sides. Now [IMAGE ]. The other rules are also simple.
The reader can easily see that S could be chosen smaller: it suffices to define S only on [IMAGE ]-transitions and let [IMAGE ]. However, the above stratification also works for all the extensions that Baeten and Bergstra discuss in their paper, whereas the minimal version only works for this example. So for all their extensions we also have that bisimulation equivalence is a congruence.
The above recursion problem also arises if we add recursion to our first example concerning priorities. Fortunately, it can be solved in the same way; for more information on the combination of priorities and recursion we refer to [Groote1990]. In fact, we did not have this problem with the example of Aceto and Hennessy, since there we counted the rare symbol [IMAGE ]. This illustrates that it is wise not to count too much. For, had we counted all function symbols, we would immediately have run into the recursion problem above.
The next example, concerning Hennessy-Milner logic [Hennessy and Milner1985], is due to [Vaandrager1993]. The set of Hennessy-Milner logic formulas over a given alphabet [IMAGE ] is given by the following grammar: [IMAGE ]. Suppose that we have a positive transition system specification, in, say, tyft/tyxt format (see definition 11), defining transition relations [IMAGE ] for each [IMAGE ]. With the four simple rules in table v we can define the satisfaction relation [IMAGE ] within the panth format by defining postfix predicates [IMAGE ] for all formulas [IMAGE ].
With the aid of the fundamental result of Hennessy and Milner, saying that two processes are bisimilar if and only if they satisfy the same Hennessy-Milner logic formulas, we also have that this extension does not change the definition of bisimulation in the presence of the satisfaction predicates.
[IMAGE ]
Table v: The satisfaction relation [IMAGE ] as postfix predicates [IMAGE ].
Let S be a stratification given by [IMAGE ] and [IMAGE ] ([IMAGE ] = closed terms) with n the number of [IMAGE ] symbols occurring in [IMAGE ]. It is easy to see that the rules of table v together with our positive operational semantics defining [IMAGE ] satisfy the panth format and that they are stratifiable.
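For readers who like to see the satisfaction relation computed, the following is an illustrative sketch over a finite labelled transition system, assuming the usual Hennessy-Milner grammar phi ::= T | phi and phi | not phi | <a>phi; the tuple encoding of formulas and the names used here are our own, not the paper's.

```python
def sat(lts, state, phi):
    """Satisfaction of an HML formula at a state of a finite LTS, where the
    LTS is a set of (source, label, target) triples and a formula is a tuple
    tagged "T", "and", "not", or "dia"."""
    kind = phi[0]
    if kind == "T":
        return True
    if kind == "and":
        return sat(lts, state, phi[1]) and sat(lts, state, phi[2])
    if kind == "not":
        return not sat(lts, state, phi[1])
    if kind == "dia":  # <a>phi: some a-successor satisfies phi
        _, a, sub = phi
        return any(sat(lts, t, sub) for (s, l, t) in lts if s == state and l == a)
    raise ValueError(f"unknown formula: {phi}")

lts = {("p", "a", "q"), ("q", "b", "r")}
print(sat(lts, "p", ("dia", "a", ("dia", "b", ("T",)))))  # True
print(sat(lts, "p", ("dia", "b", ("T",))))                # False
```

The rules of table v define exactly this relation, but as postfix predicates within the panth format rather than as a recursive function.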
We can do the same with infinitary Hennessy-Milner logic formulas. We restrict ourselves to index sets of a size bounded by some regular cardinal, since otherwise the hypotheses do not necessarily form a set (and thus would not obey the panth format). Only the third rule of table v changes, into the following rule (I an index set)
It is easy to see that this rule satisfies the panth format. We have to be careful with our choice of a stratification. The one above will no longer work, since addition of ordinals is not commutative. The stratification that we now need measures maximal nesting of [IMAGE ] symbols within a formula. For instance [IMAGE ]. We inductively define S on the postfix predicates [IMAGE ]:
Note that this stratification also works for the finite case.
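The nesting measure can be sketched as follows; this is an illustrative sketch in our own tuple encoding of formulas, where an "and" node may carry any (finite, in this sketch) number of conjuncts, and the measure takes the maximum rather than the sum over them, which is what makes it suitable for (bounded) infinitary conjunctions.

```python
def nesting(phi):
    """Maximal nesting of diamond modalities in an HML formula, encoded as
    a tuple tagged "T", "and" (any number of conjuncts), "not", or "dia"."""
    kind = phi[0]
    if kind == "T":
        return 0
    if kind == "and":  # maximum over all conjuncts, not their sum
        return max((nesting(sub) for sub in phi[1:]), default=0)
    if kind == "not":
        return nesting(phi[1])
    if kind == "dia":  # <a>phi adds one nesting level
        return 1 + nesting(phi[2])
    raise ValueError(f"unknown formula: {phi}")

phi = ("and", ("dia", "a", ("T",)), ("dia", "a", ("dia", "b", ("T",))))
print(nesting(phi))  # 2
```

Taking the maximum avoids the ordinal-addition problem mentioned above: the measure of a conjunction does not depend on the order of its conjuncts.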
Some predicates that we find in the literature are defined with a universal quantifier in their hypotheses. The purpose of this last example is to show that it is often possible to define such predicates within our format. We illustrate this with the weak termination predicate [IMAGE ]a[IMAGE ] of [Aceto and Hennessy1992]. A process p is weakly terminating [IMAGE ]a[IMAGE ] if, for all q that cannot perform any silent moves but are reachable from p with only silent steps (zero or more), we have that [IMAGE ]a[IMAGE ]; see table ii for the termination predicate [IMAGE ]a[IMAGE ]. Or, in a Plotkin style rule:
Clearly, this rule does not fit our format. This is because our format is of an existential nature. However, the combination of an existential quantifier and negation yields a universal quantifier. With this we can define the weak termination predicate [IMAGE ]a[IMAGE ] of Aceto and Hennessy within our format. We mention that [IMAGE ] means that p evolves into q by performing zero or more silent actions, which can easily be defined within our format. We need an auxiliary predicate to define the weak termination predicate. The first rule below defines this auxiliary predicate, which holds if the negation of the hypothesis of the above rule holds. The second rule defines the weak termination predicate by simply negating the auxiliary predicate.
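For intuition, the universally quantified definition can be computed directly on a finite transition system. The following is an illustrative sketch, not the paper's rule format: the LTS is a set of (source, label, target) triples with "tau" the silent action, and `terminated` is the set of states satisfying the strong termination predicate; all names are our own.

```python
def tau_closure(lts, p):
    """States reachable from p by zero or more tau steps."""
    seen, stack = {p}, [p]
    while stack:
        s = stack.pop()
        for (src, l, tgt) in lts:
            if src == s and l == "tau" and tgt not in seen:
                seen.add(tgt)
                stack.append(tgt)
    return seen

def weakly_terminates(lts, terminated, p):
    """p weakly terminates if every tau-stable state tau-reachable from p
    satisfies the strong termination predicate."""
    stable = [q for q in tau_closure(lts, p)
              if not any(src == q and l == "tau" for (src, l, _) in lts)]
    return all(q in terminated for q in stable)

lts = {("p", "tau", "q"), ("q", "tau", "r")}
print(weakly_terminates(lts, {"r"}, "p"))  # True: r is the only stable state
```

The two rules described above capture exactly this: the auxiliary predicate holds when some tau-reachable stable state fails the termination predicate, and weak termination is its negation.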
We can find a stratification with a cumulative application of our recipe: count the number of [IMAGE ] symbols plus two times the number of [IMAGE ]a[IMAGE ] symbols in a positive formula.
In this way we can also define Aceto and Hennessy's weak convergence predicate and its parameterized version (and the resulting operational semantics is stratifiable). We recall that Aceto and Hennessy are interested in rooted weak bisimulation instead of strong bisimulation. Moreover, we think it would be a better idea to study a format that allows universal quantification.
Before we continue with the definitions, we list some preliminaries for completeness' sake.
We assume that we have an infinite set V of variables with typical elements [IMAGE ]. A (single sorted) signature [IMAGE ] is a set of function symbols together with their arity. If the arity of a function symbol [IMAGE ] is zero we say that f is a constant symbol. We restrict ourselves to signatures that contain at least one constant symbol. The notion of a term (over [IMAGE ]) is defined as expected: [IMAGE ] is a term; if [IMAGE ] are terms and if [IMAGE ] is n-ary then [IMAGE ] is a term. A term is also called an open term; if it contains no variables we call it closed. We denote the set of closed terms by [IMAGE ] and the set of open terms by [IMAGE ] (note that a closed term is also open). We also want to speak about the variables occurring in terms: let [IMAGE ] then [IMAGE ] is the set of variables occurring in t.
A substitution [IMAGE ] is a map from the set of variables into the set of terms over a given signature. This map can easily be extended to the set of all terms by substituting for each variable occurring in an open term its [IMAGE ]-image.
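The term, variable, and substitution definitions above can be sketched in a few lines. This is an illustrative encoding of our own (a variable is a string; a composite term is a (function symbol, argument list) pair; a substitution is a dictionary from variables to terms), not notation from the paper.

```python
def variables(t):
    """The set vars(t) of variables occurring in a term t."""
    if isinstance(t, str):
        return {t}
    _, args = t
    return set().union(*[variables(a) for a in args])

def apply_subst(sigma, t):
    """Extend a substitution on variables to all terms, as in the text:
    replace each variable occurring in t by its sigma-image."""
    if isinstance(t, str):
        return sigma.get(t, t)
    f, args = t
    return (f, [apply_subst(sigma, a) for a in args])

t = ("+", ["x", ("a", [])])                 # the open term x + a
print(variables(t))                         # {'x'}
closed = apply_subst({"x": ("a", [])}, t)   # the closed term a + a
print(variables(closed))                    # set()
```

A term is closed precisely when `variables` returns the empty set, matching the definition above.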
The purpose of a term deduction system is to define a set of positive formulas that can be deduced using the deduction rules. For instance, if the term deduction system is a transition system specification then a transition relation is such a set. For term deduction systems without negative formulas this set comprises all the formulas that can be proved by a well-founded proof tree. If we allow negative formulas in the premises of a deduction rule it is no longer obvious which set of positive formulas can be deduced using the deduction rules. [Bloom et al.1988] formulate that a transition relation must agree with a transition system specification. We will use their notion; it is only adapted to the framework of this paper.
Not every term deduction system defines a set of positive formulas that agrees with it, and a term deduction system can define more than one set of positive formulas that agrees with it. We show this in the following two examples. [Groote1990] gives similar examples with relations instead of predicates.
[Groote1990] formulates a sufficient condition for the existence of a transition relation that agrees with a given transition system specification. We essentially follow Groote by formulating a similar condition: we incorporate predicates in his notion. Indeed, this condition is sufficient for the existence of a set of positive formulas for a given term deduction system. We obtain this result in a similar way as Groote by extending his notions with predicates and by proving his results for these extended notions.
[IMAGE ]
Figure 3.1: Three formula dependency graphs.
As is noted in [Groote1990], a dependency graph is often a useful means to show that a term deduction system is not consistent. A useful way to show that a term deduction system is consistent is based on so-called local stratifications that stem from logic programming. We refer to [Apt1990] and [Przymusinski1987] for more information. The definition of a stratification of a term deduction system is given below.
We need the next definition and lemma to construct a set of positive formulas that agrees with a given term deduction system. We define a mapping, called the degree, that assigns to a term deduction system an ordinal number with a property (called regularity) that is proved in the lemma. We assume the axiom of choice for this definition and for the lemma, since we assume that the only cardinal numbers that exist are the natural numbers and [IMAGE ] for all ordinal numbers [IMAGE ]. We recall that an ordinal number is a transitive set of transitive sets and that for all ordinal numbers [IMAGE ] the cardinality of the (initial) ordinal number [IMAGE ] equals [IMAGE ] (by definition). We also recall that [IMAGE ] and [IMAGE ] are regular for all [IMAGE ] (if we assume the axiom of choice).
The following lemma is included for readers who are not familiar with the notion of regularity of a cardinal number, and thus with the fact that [IMAGE ] and [IMAGE ] are regular for all [IMAGE ] (if we assume the axiom of choice).
Next, we define a set of positive formulas which we will show agrees with a given term deduction system.
Although the idea behind the proof of our congruence theorem is natural, it turns out that we need a lot of bookkeeping. Therefore, the proof may make for difficult reading. For convenience, we intersperse the formal proof with informal intuitions.
We need the technical notion of well-foundedness of a term deduction system, which will be used in the proof of the congruence theorem. The notion of well-foundedness is taken from [Groote and Vaandrager1992], where it is also used in the proof of their congruence theorem. The same phenomenon occurs in Groote's paper [Groote1990] and in [Baeten and Verhoef1993]. [Fokkink1994] shows that well-foundedness is not necessary for the congruence theorem of [Groote and Vaandrager1992] and that this result generalizes to our format. Therefore, we omitted the well-foundedness demand in the examples we discussed in section 2 (though all these examples are well-founded).
Next, we will define the notion of strong bisimulation, which is based on [Park1981].
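For a finite labelled transition system, the notion defined here can be computed as a greatest fixed point. The following is an illustrative sketch of that computation (naive refinement, not an efficient partition-refinement algorithm), extended as in this paper so that related states must satisfy the same unary predicates; the LTS encoding and all names are our own.

```python
def bisimilar(states, lts, preds):
    """Greatest strong bisimulation (with predicates) on a finite LTS.
    lts is a set of (source, label, target) triples; preds maps each
    state to the set of predicates it satisfies."""
    def succ(s, a):
        return {t for (src, l, t) in lts if src == s and l == a}
    labels = {l for (_, l, _) in lts}
    # Start from the pairs that agree on all predicates, then refine.
    rel = {(p, q) for p in states for q in states if preds[p] == preds[q]}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            forward = all(any((t, u) in rel for u in succ(q, a))
                          for a in labels for t in succ(p, a))
            back = all(any((t, u) in rel for t in succ(p, a))
                       for a in labels for u in succ(q, a))
            if not (forward and back):
                rel.discard((p, q))
                changed = True
    return rel

states = {"p", "q", "r"}
lts = {("p", "a", "r"), ("q", "a", "r")}
preds = {"p": set(), "q": set(), "r": {"done"}}
print(("p", "q") in bisimilar(states, lts, preds))  # True
```

The extra requirement of this paper shows up only in the initial relation: states with different predicate sets are separated before any transitions are examined.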
In this paper we presented a syntactic format, the panth format, for structured operational semantics with predicates and negative premises, such that if the rules are stratifiable, strong bisimulation is a congruence for all the operators that can be defined within this format. With operational semantics mostly taken from the literature we showed that our format is useful: the examples satisfy our format but none of the formats proposed by others. Moreover, with these examples we informally explained the notions necessary to use our result, thereby showing that it can easily be applied without scrutinizing the abstract definitions. The examples include issues such as priorities, termination, convergence, discrete time, recursion, (infinitary) Hennessy-Milner logic, and universal quantification (in particular, so-called weak predicates).
We briefly discuss future work. Since the stratification technique is not always satisfactory (cf. section 2), [Bol and Groote1991] proposed the more general reduction technique (for the less general ntyft/ntyxt format). A first possibility for future work is to use their methods to generalize our work. A second is to incorporate recursion within our framework, as has been done for the GSOS format [Bloom et al.1988] and De Simone's format [De Simone1985]. A third generalization would be to allow universal quantification in the hypotheses.
We conclude that the panth format is useful, and that our congruence theorem is practical.