Reasoning about Safety


Acknowledgement

This research forms part of the RED project, Decision Support Systems, Safety and Liability, and is supported by the United Kingdom Engineering and Physical Sciences Research Council under grant number GR/H 12348.

Introduction

The objective of this research is the development of safety logics: formal logics which can be used for reasoning about safety, danger, and risk. These safety logics are expected to find applications in fields such as decision support systems and planning for rational agents, and in particular in safety-critical systems.

The task of planning for rational agents is to find plans which will achieve their goals, given an appropriate theory of action and change. In safety-critical systems, agents have to consider not only how they can achieve their goals but also how they can achieve them safely. One solution is constraint-based planning; however, the constraints have to be specified in advance, and there is no principled attempt to reason about them. As a result, the limitations of this approach will make themselves felt as the application domains increase in complexity. Developing formal logics with which agents can reason about safety is a promising and useful alternative.

However, developing safety logics is a complex task, as reasoning about safety may involve commonsense reasoning about actions and change, time, beliefs, goals, rationality, etc. We begin by abstracting away from many of these features and developing a formal theory of what we consider to be the essential features of safety: a formal theory of absolute safety.

We start with disasters and the idea of a disastrous state. A disastrous state is one in which some fact, which the agent considers to be disastrous, is true. It is assumed that such states are abhorrent to the agent and that the agent tries to avoid them if at all possible. A disastrous action can now be defined to be an action which always leads to a disastrous state. We then define a dangerous action to be an action which may lead to a disastrous state, and a dangerous state to be a state in which every action which the agent can perform (every action which is open to the agent) is dangerous.
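
Writing D for a fact which the agent considers to be disastrous, and anticipating the Dynamic Logic formalisation described below, these informal definitions can be sketched as follows. The notation is standard dynamic-logic notation; the symbol Open (the set of actions open to the agent in the current state) is our shorthand here, and the exact language and axioms are those given in the papers.

    \[
    \begin{aligned}
    \mathit{disastrous}(\alpha) &\;\equiv\; [\alpha]D
      && \text{every execution of } \alpha \text{ ends in a disastrous state}\\
    \mathit{dangerous}(\alpha)  &\;\equiv\; \langle\alpha\rangle D
      && \text{some execution of } \alpha \text{ ends in a disastrous state}\\
    \mathit{dangerous}          &\;\equiv\; \textstyle\bigwedge_{\alpha \in \mathit{Open}} \langle\alpha\rangle D
      && \text{every action open to the agent is dangerous}
    \end{aligned}
    \]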

Results.

We have developed a formal theory of absolute safety, called Safety Logic I, and a formal theory of normative safety, called Safety Logic II.

In Safety Logic I, we formalize absolute safety in Dynamic Logic, as this offers a powerful tool for formalizing actions using classical logic and Kripke semantics. In particular, states are represented as possible worlds, and actions are represented as accessibility relations between worlds. The crucial distinction between the necessary and the possible consequences of actions in our informal analysis is thus readily and naturally formalized.
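
To make this semantics concrete, here is a minimal illustrative sketch in Python of a Kripke-style model in which actions are accessibility relations between worlds, together with direct encodings of the disastrous/dangerous definitions above. The toy worlds, actions, and set of open actions are invented for illustration and are not taken from the papers.

    # A minimal Kripke-style model: a set of worlds and, for each action, an
    # accessibility relation giving its possible outcomes (illustrative only).

    WORLDS = {"w0", "w1", "w2"}
    DISASTER = {"w2"}          # worlds in which the disastrous fact D is true

    # action name -> set of (source world, outcome world) pairs
    ACTIONS = {
        "cross_carefully": {("w0", "w1")},              # always ends safely
        "cross_blindly": {("w0", "w1"), ("w0", "w2")},  # may end in disaster
        "jump": {("w0", "w2")},                         # always ends in disaster
    }

    def outcomes(action, world):
        """Worlds reachable from `world` by performing `action`."""
        return {t for (s, t) in ACTIONS[action] if s == world}

    def disastrous(action, world):
        """[action]D: every possible outcome is a disastrous state.
        (Vacuously true if the action has no outcome here, as in
        standard Dynamic Logic.)"""
        return outcomes(action, world) <= DISASTER

    def dangerous(action, world):
        """<action>D: some possible outcome is a disastrous state."""
        return bool(outcomes(action, world) & DISASTER)

    def dangerous_state(world, open_actions):
        """A state is dangerous if every open action is dangerous there."""
        return all(dangerous(a, world) for a in open_actions)

    print(disastrous("jump", "w0"))                            # True
    print(dangerous("cross_blindly", "w0"))                    # True
    print(dangerous_state("w0", ["cross_blindly", "jump"]))    # True
    print(dangerous_state("w0", ["cross_carefully", "jump"]))  # False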

We define a language of safety and give a formal semantics for it. We then turn to logics of safety, presenting our basic safety logic SL and discussing several of its extensions.

In Safety Logic II, we continue our analysis and formalisation of reasoning about safety, and develop a formal theory of normative safety. We introduce Defeasible Dynamic Logic in order to extend the possible-worlds theory to normative safety, and present a logic, NSL, of normative safety. We then define a preferential entailment relation, in order to be able to represent reasoning about the normal termination of actions. Finally, we discuss and give a formalisation of the relationship between safety, obligation, and rationality.
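
A preferential entailment relation of this kind usually takes the standard (Shoham-style) form sketched below; the preference order, written here as \(\sqsubseteq\), is assumed to rank models by how normally actions terminate, and its precise definition is the part specific to Safety Logic II and is given in the paper.

    \[
    \Gamma \mathrel{|\!\approx} \varphi
    \quad\text{iff}\quad
    M \models \varphi \ \text{for every model } M \text{ of } \Gamma
    \text{ that is minimal with respect to the preference order } \sqsubseteq.
    \]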

Key Papers.

  • Zhisheng Huang and John Bell, Safety Logic I: Absolute Safety, Proceedings of COMMON SENSE'96, 59-66, 1996.
  • John Bell and Zhisheng Huang, Safety Logic II: Normative Safety, Proceedings of ECAI-96, 1996.

  • Please mail huang@cs.vu.nl if you have any comments.