John-Jules Meyer and Jan Treur
One of the recognized problems in AI is the gap between applications and formal foundations. This book (like the previous one in the DRUMS Handbook series) does not present the final solution to this problem, but at least attempts to narrow the gap by bringing together state-of-the-art material from both sides and by clarifying their mutual relation. The main theme of this book is agents and dynamics: the dynamics of reasoning processes, but also the dynamics of the external world. Agents often reason about both types of dynamics.
Agents are (hardware or software) entities that act on the basis of a ‘mental state’. They possess both informational and motivational attitudes: while performing their actions they are guided by their knowledge and beliefs as well as by their desires, intentions and goals, and, moreover, they are able to modify their knowledge, intentions, etc. in the process of acting. Clearly the description of agent behaviour involves reasoning about the dynamics of acting, and if agents are supposed to be reflective, they should themselves be able to do so as well. Furthermore, since the actions of agents may include not only actions that change the external world directly but also reasoning (for example, performing a belief-revision action or an action consisting of reasoning by default), it is clear that in the context of agent systems the dynamics of reasoning (as a special kind of mental action) and reasoning about dynamics go hand in hand.
The phenomena to be modelled in real-world applications show a very wide variety, so models covering quite different aspects are needed. One approach is to build a separate model for each phenomenon. Another is to define one ‘grand universal model’ that covers as many of the phenomena as possible. Both solutions have serious drawbacks. The first would result in a large, ad hoc collection of unrelated models. The second would impose very high requirements on the universal model to be developed; any proposed model would have strong limitations. Moreover, the result would be a very complex model of which, in a given application, only a small part is relevant.
The solution chosen in practice in a sense combines the two options. Generic models for different phenomena have indeed been developed, not in an ad hoc, unrelated manner, but in a form based on one unifying modelling framework, within which different models can be compared and, if needed, combined. In this part two such modelling frameworks are presented, as well as a generic model for reasoning tasks and agents.
First, two modelling frameworks are presented, both of which employ modularisation and control techniques. The first is the modelling framework underlying DESIRE, the compositional development method for multi-agent systems, based on the notions of component and compositionality. The second is based on MILORD II, an architecture for knowledge-based and multi-agent systems, which combines modularisation techniques with both implicit and explicit control mechanisms, and with approximate reasoning facilities based on multi-valued logics. Both modelling frameworks support reflective reasoning.
The third paper shows how the language Concurrent METATEM, which is based on an executable fragment of temporal logic, can be used as a coordination language, that is, as a high-level mechanism to control the coordination of processes that are themselves written in other (‘subject’) languages (and may deal with the internals of agents, such as BDI-like notions).
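To give a flavour of the executable-temporal-logic style, METATEM programs consist of rules of the general form ‘declarative past implies imperative future’; the following resource-controller rules are only a schematic illustration (the notation and predicate names are ours, not taken from the paper):

```latex
% Schematic Concurrent METATEM-style rules (illustrative, not verbatim):
% 'start' holds only at the first moment; \bullet reads 'at the previous
% moment'; \Diamond reads 'at some future moment'.
\begin{align*}
  \mathit{start} &\Rightarrow \neg\,\mathit{allocated} \\
  \bullet\,\mathit{request}(x) &\Rightarrow \Diamond\,\mathit{grant}(x)
\end{align*}
```

Execution proceeds step by step: at each moment the interpreter checks which past-time antecedents hold and works to make the corresponding present- and future-time consequents true.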
The fourth paper presents a generic agent model based on the weak notion of agency: an autonomous agent able to observe, communicate and act in the world and, in doing so, to behave in a reactive, pro-active or social manner. An agent based on this model is able to reason about the dynamics of the world, but also about the dynamics of its own reasoning and communication. This generic agent model has been applied in a large number of applications, such as information brokering, negotiation for load balancing of electricity use, and distributed Call Center support.
In this part of the book we consider various formal means regarding the topics of agents and dynamics. By formal means we mean formal logics and calculi as well as formal specification languages together with their formal semantics. We will see how, in particular, temporal and dynamic logic and their semantics are useful means for this formal analysis.
Formal foundations of models and modelling techniques are important for different reasons. First, by defining the semantics of a model or modelling technique formally, a precise and unambiguous meaning of the syntactic constructs is obtained, which may help designers. This requires that designers be familiar with the formal techniques used to define such semantics. Unfortunately, this requirement is often not fulfilled for developers in practice, and there is no reason to expect this to change in the short term. More realistically, however, those who develop a modelling technique often have more knowledge of formal methods. They can therefore benefit greatly from formal foundations during the development of their modelling technique, and use them as a basis for describing the semantics informally or semi-formally for others (users of the modelling technique) with a less formal background.
Second, formal foundations are especially important for the verification of a design or of requirements. Verification is usually a rather technical and tedious matter, feasible only for specialists (‘verification engineers’), who need to know the formal foundations, including formal semantics and proof systems.
The problem of how to cover a large variety of phenomena, as discussed above for modelling, also occurs in the formal analysis of such phenomena or models. Many contributions in the literature address different phenomena in an ad hoc and unrelated manner. For example, different theories have been proposed for diagnosis from first principles, nonmonotonic reasoning, BDI-agents, and many other types of agents. Relations between these foundational approaches are almost non-existent, and one ‘grand unifying theory’ is not in sight at all. Moreover, even if such a theory were constructed, many believe it would be so complex as to be hard to use, whereas in a given application only a small part would be relevant.
As far as the relation to applications is concerned, within these foundational approaches the dynamic aspects of the agent's internal (reasoning) processes in particular are often not, or at least not fully, covered. For example, theories of diagnosis from first principles address diagnosis from a static perspective, but do not cover process-oriented questions such as on which hypothesis the process should focus, and which observations should be made at which moment. Similarly, most contributions in the area of logics for nonmonotonic reasoning offer a formal description only of the possible conclusions of such a reasoning process, not of the dynamics of the reasoning process itself. Moreover, although BDI-logics capture some degree of dynamics, they lack a clear relationship to agent architectures in applications of BDI-agents. In such applications, for example, the rather complex issue of revision of beliefs, desires and/or intentions has to be addressed: when to revise only an intention and not the underlying desire, when to revise both, and when to revise a desire while still keeping a (committed) intention that was based on it?
At this point something can be learned from the way in which the problem of unifying a variety of different models was solved for modelling: the different models are specified within one unifying modelling framework. For the question of obtaining formal semantics, this implies two different types of semantics. The first type is the semantics of the modelling framework as a whole. The second type is the semantics of a specific model. Thus specific models can share the semantics they inherit from the modelling technique, but differ (and be incomparable) in their more specific semantics. The semantics of a modelling framework abstracts from the inherent meaning of the concepts specific to a particular model, but can still address how all concepts in a model behave dynamically, given the specification of the model. The inherent meaning of the concepts specific to a given model can be found, for example, for the BDI-agent model (or for nonmonotonic reasoning, as we have seen in the previous DRUMS volume). In this part both types of semantics are addressed. First, in Part IIIA semantics for modelling frameworks are presented, together with their use in verification. Next, in Part IIIB the semantics of an agent model is presented. In Part IIIC a number of approaches to reasoning about the dynamics of the world are collected.
We now first look at the analysis of systems, in particular agent systems and compositional systems, from a more general perspective. The aspects of dynamics and reasoning are of course omnipresent here; the focus covers both reasoning about dynamics and the dynamics of reasoning, and most papers in this subpart contain both. In this part semantics for modelling frameworks are presented, together with their use in verification. The first paper shows how semantics for multi-agent systems can be defined based on principles of locality and compatibility: for each process a local set of behaviours is defined, based on temporal models, and the behaviour of the whole system is obtained by selecting local behaviours that are mutually compatible with respect to the interactions. The second paper shows the use of Descriptive Dynamic Logic to describe the semantics of a modelling framework for multi-language reflective architectures for reasoning tasks. The third paper shows how temporal epistemic logic can be used to formalize behavioural properties of models specified in a compositional modelling framework, and how verification proofs for properties of such a model can be specified.
Several papers are devoted to the logical description of intelligent agents, in particular focusing on the various attitudes commonly ascribed to them, such as handling knowledge from various sources (observation, communication), pursuing goals and making commitments. The basis of the approach in the papers below is dynamic logic. This logic is also employed to describe and specify agents characterized in a multi-level, multi-language logical architecture, providing (strong) semantics of an agent model. In the first paper the basic framework is presented, which is based on modal logic, and in particular on a blend of dynamic and epistemic logic. This logical framework, called KARO, enables one to analyse the Knowledge, Abilities, Results and Opportunities of agents, and is intended as a basis for reasoning about the knowledge of agents and the results of their actions. In the second paper this basic framework is extended so that one can adequately analyse the agent's informational attitudes, such as coping with belief revision on the basis of observations, communication and a simple form of default reasoning. The third paper extends the KARO framework to capture the agent's motivational attitudes, so as to arrive at a full-scale BDI-like logic, but now grounded in dynamic logic (and thus in the performance of action!). The fourth paper, finally, extends this even further by adding a deontic layer (expressing obligations and permissions) in order to incorporate social attitudes in multi-agent systems, and speech acts in particular, resulting in a very expressive logical framework.
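To give a flavour of how the KARO operators combine (the exact notation in the papers may differ), a statement that an agent can achieve a result blends the ability, dynamic and epistemic modalities:

```latex
% Schematic KARO-style definitions (notation approximated):
% K_i = knowledge of agent i, A_i\alpha = ability to do \alpha,
% \langle do_i(\alpha)\rangle\varphi = there is an opportunity to do \alpha
% with result \varphi.
\begin{align*}
  \mathrm{PracPoss}_i(\alpha,\varphi) &\;\equiv\;
    A_i\alpha \wedge \langle \mathit{do}_i(\alpha)\rangle\varphi
    && \text{(practical possibility)} \\
  \mathrm{Can}_i(\alpha,\varphi) &\;\equiv\;
    K_i\,\mathrm{PracPoss}_i(\alpha,\varphi)
    && \text{(the agent knows it can)}
\end{align*}
```

Thus ‘can’ is not a primitive but a defined notion: the agent is able to perform the action, has the opportunity, the action yields the result, and the agent knows all of this.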
In this part we concentrate on the formal analysis of reasoning about dynamics, viz. the formal analysis of (reasoning about) actions that take place in a system (possibly initiated by an agent) and the changes these cause in the world. Here we touch upon a well-known area in knowledge representation and commonsense reasoning, with problems like the infamous frame problem: how to express the effects (and, perhaps more importantly, the non-effects) of the performance of actions (of agents) in an ‘economic’ way. In general, when describing the effects of actions it is assumed that by default things (or, perhaps more precisely, fluents) do not change (their truth value) by the execution of an action, unless stated otherwise explicitly. In this section we see approaches that employ Dijkstra's weakest precondition / strongest postcondition formalism and the dynamic logic formalism to describe the effects of actions, combined with some way of expressing that fluents are subject to a form of ‘epistemic inertia’: unless it is known to be otherwise (i.e., ‘normally’ or ‘by default’), things remain the same. For the latter we see the use of default logic, and also a mechanism for specifying in the action language which things are known to change and which are likely to stay the same. Furthermore, a detailed analysis is made of the dynamic properties required to guarantee that an agent's actions in a dynamic world take place in a co-ordinated manner. Finally, an application concerning reasoning about dynamics is discussed, viz. one for agents negotiating to balance the load of (sometimes strongly fluctuating) electricity use over time.
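The default-persistence idea behind such inertia mechanisms can be sketched in a few lines. The following is a minimal illustration with hypothetical fluent names, not code from any of the papers: an action lists only the fluents it changes, and every other fluent keeps its truth value.

```python
def apply_action(state, effects):
    """Apply an action's explicit effects to a state of boolean fluents.

    Fluents not mentioned in `effects` persist by default (inertia);
    only the explicitly listed fluents change their truth value.
    """
    new_state = dict(state)    # by default, every fluent keeps its value
    new_state.update(effects)  # override only the explicit effects
    return new_state


# Hypothetical fluents: opening the door says nothing about the light,
# so the light's truth value persists unchanged.
before = {"door_open": False, "light_on": True}
after = apply_action(before, {"door_open": True})
```

In the logical formalisms discussed here, frame axioms or default rules play the role of the copy-then-update step: they make this persistence assumption explicit, and defeasible where change cannot be ruled out.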