Most node types have at least one eventIn definition
and thus can receive events. Incoming events are data messages
sent by other nodes to change some state within the receiving node.
Some nodes also have eventOut definitions. These are used to send data
messages to destination nodes, notifying them that some state has changed
within the sending node. If an eventOut is read before it has sent any
events (e.g., get_foo_changed), the initial value
as specified in "Chapter 4, Field and Event Reference"
for each field/event type is returned.
Events are the most important new feature in
VRML 2.0. Events make the world move; the only way to change something
in a VRML world is to send an event to some node. They form the foundation
for all of the animation and interaction capabilities of VRML, and more
effort was put into the event model design than any of the other new
features in VRML 2.0. VRML's event model design is a result of collaboration
between the Silicon Graphics team, the Sony team, and Mitra.
The connection between the node generating the
event and the node receiving the event is called a route. Routes
are not nodes. The ROUTE statement is a construct for establishing event
paths between nodes. ROUTE statements may appear at the top level
of a VRML file, in a prototype definition, or inside a node wherever
fields may appear. Nodes referenced in a ROUTE statement shall
be defined before the ROUTE statement.
Note that the only way to refer to a node in
a ROUTE statement is by its name, which means that you must give a node
a name if you are establishing routes to or from it. See Section 2.3.2,
Instancing, for the recommended way of automatically generating unique
(but boring) names.
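For example, a route between two named nodes might look like this (CLICK and CLOCK are arbitrary DEF names chosen for this illustration; TouchSensor and TimeSensor are standard VRML 2.0 nodes):

DEF CLICK TouchSensor {}
DEF CLOCK TimeSensor {}

# Both nodes are named and defined before the ROUTE that refers to them
ROUTE CLICK.touchTime TO CLOCK.set_startTime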
The types of the eventIn and the eventOut shall
match exactly. For example, it is illegal to route from an SFFloat to
an SFInt32 or from an SFFloat to an MFFloat.
Automatic type conversion along routes would
often be convenient. So would simple arithmetic operations along SFFloat/SFInt32/SFVec*
routes, and simple logical operations for SFBool routes. However, one
of the most important design criteria for VRML 2.0 was to keep it as
simple as possible. Therefore, since the ROUTE mechanism is such a fundamental
aspect of the browser implementation and even simple type conversions
require significant amounts of code and complexity, it was decided not
to include any data modification along routes.
If type conversion is required, it is easy (although
tedious) to define a Script that does the appropriate conversion. Standard
prototypes for type conversion nodes have already been proposed to the
VRML community. If they are used often enough, browser implementors
may begin to provide built-in, optimized implementations of these prototypes,
which will be a clear signal that they should be added to a future version
of the VRML specification.
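One such conversion node might be sketched as a prototype like this (the name, interface, and script body are illustrative, not one of the actual proposed prototypes, and this assumes a browser that supports the JavaScript/VRMLScript binding):

PROTO FloatToInt32 [
  eventIn  SFFloat set_value
  eventOut SFInt32 value_changed
] {
  Script {
    eventIn  SFFloat set_value     IS set_value
    eventOut SFInt32 value_changed IS value_changed
    url "vrmlscript:
      function set_value(f, ts) { value_changed = Math.round(f); }"
  }
}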
Routes may be established only from eventOuts
to eventIns. For convenience, when routing to or from an eventIn or
eventOut (or the eventIn or eventOut part of an exposedField), the set_
or _changed part of the event's name is optional. If the browser
is trying to establish a ROUTE to an eventIn named zzz and an
eventIn of that name is not found, the browser shall then try to establish
the ROUTE to the eventIn named set_zzz. Similarly, if establishing
a ROUTE from an eventOut named zzz and an eventOut of that name
is not found, the browser shall try to establish the ROUTE from zzz_changed.
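For example, assuming MOVER is a PositionInterpolator and XFORM is a Transform, the following two statements establish the same route:

ROUTE MOVER.value_changed TO XFORM.set_translation
ROUTE MOVER.value TO XFORM.translation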
Redundant routing is ignored. If a file repeats
a routing path, the second and subsequent identical routes are ignored.
This also applies for routes created dynamically via a scripting language
supported by the browser.
Three different architectures for applying
changes to the scene graph were considered during the VRML 2.0 design
process. The key considerations were how much information the VRML
browser knows about the world, how little reinvention of existing
technology needed to be done, and how easy it would be for nonprogrammers
to create interactive worlds. The architecture chosen is a compromise
between these conflicting desires.
One extreme would be to keep all behaviors
out of VRML and perform all behaviors in an existing language such
as Java. In this model, a VRML file looks very much like a VRML 1.0
file, containing only static geometry, and instead of loading a .wrl
VRML file into your browser, you would load an applet that referenced
a VRML file and then proceed to modify the objects in the world over
time. This is similar to conventional programming; the program (applet)
loads the data file (VRML world) into memory and then proceeds to
make changes to it over time. The advantages of this approach are
that it would make the VRML file format simpler and it matches the
traditional way applications are created.
There are several disadvantages to this approach,
however. Tools meant to help with the creation of interactive worlds
would either have to be able to parse and understand
the code for an applet (since all of the interactive
code would be contained inside an applet) or would be forced to use
their own proprietary format for representing behaviors, which would
then be "published" into the required applet+VRML world form.
This would severely limit the interoperability between tools and would
make it very difficult for tools or world creators to update the geometry
of a VRML world without breaking the behaviors that affect the world.
In addition, it isn't clear that the scalability
and composability goals for VRML could be met if all behaviors were
performed outside the VRML world. Architectures for composing arbitrary
applets (such as Microsoft's ActiveX or Netscape's LiveConnect) have
only recently been defined and are designed for the case of a small
number of applets on a Web page. The vision for VRML is a potentially
infinite, continuous landscape containing an arbitrary number of interacting
entities, a very different environment from a Web page!
Another extreme would be to redefine VRML to
be a complete programming language, allowing any behavior to be expressed
completely in VRML. In this model, a VRML browser would act as a compiler
and runtime system, much like the Java runtime reads in Java byte
codes and runs them. This approach has all of the disadvantages just
described. Defining a specialized language just for VRML would make
it possible to do many VRML-specific optimizations, but the disadvantages
of defining Yet Another Programming Language probably outweigh the
advantages.
The architecture chosen treats behaviors as
"black boxes" (Script nodes) with well-defined interfaces
(routes and events). Treating behaviors as black boxes allows any
scripting language to be used without changing the fundamental architecture
of VRML. Implementing a browser is much easier because only the interface
between the scene and the scripting language needs to be implemented,
not the entire scripting language.
Expressing the interface to behaviors in the
VRML file allows an authoring system to deal intelligently with the
behaviors and allows most world creation tasks to be done with a graphical
interface. A programming editor need only appear when a sophisticated
user decides to create or modify a behavior, opening up the black
box. The authoring system can safely manipulate the scene hierarchy
(add geometry, delete geometry, rename objects, etc.) and still maintain
routes to behaviors, and yet the authoring system does not need to
be able to parse or understand what happens inside the behavior.
The VRML browser also does not need to know
what happens inside each behavior to optimize the execution and display
of the world. Since the possible effects of a Script are expressed
by the routes coming from it (and by the nodes it may directly modify,
which are also known), browsers can perform almost all of the optimizations
that would be possible if VRML were a specialized programming language.
Synchronization and scheduling can also be handled by the browser,
making it much easier for the world creator since they can express
their intent rather than worry about explicit synchronization between
independent applets. For example, giving a sound and an animation
the same starting time synchronizes them in VRML. Performing the equivalent
task with an architecture that exposes the implementation of sounds
and animations as asynchronous threads is more difficult.
Once a sensor or Script has generated an initial
event, the event is propagated from the eventOut producing the event
along any ROUTEs to other nodes. These other nodes may respond by generating
additional events, continuing until all routes have been honored. This
process is called an event cascade. All events generated during
a given event cascade are assigned the same timestamp as the initial
event, since all are considered to happen instantaneously.
Some sensors generate multiple events simultaneously.
In these cases, each event generated initiates a different event cascade
with identical timestamps.
Figure 2-6 provides a conceptual illustration
of the execution model. This figure is for illustration purposes only
and is not intended for literal implementation.
Figure 2-6: Conceptual Execution Model
The task of defining the execution model for
events is simplified by breaking it down into three subtasks:
- Defining what causes an initial event
- Defining an ordering for initial events
- Defining exactly what happens during an event cascade
The only nodes in the VRML 2.0 specification
that can generate initial events are the sensor nodes, the Collision
grouping node, and Script nodes. ExposedFields never generate initial events
(they are always part of the event cascade) and neither do the interpolator
nodes. So the first subtask, defining what causes an initial event,
is satisfied by precisely defining the conditions under which each
sensor or Script node will generate events. See Section 2.7, Scripting,
for a discussion of when Script nodes generate initial events, and
see the description for each sensor node for a discussion of when
they generate initial events.
The second subtask, defining an ordering
for initial events, is made easier by introducing the notion that
all events are given time stamps. We can then guarantee determinism
by requiring that an implementation produce results that are indistinguishable
from an implementation that processes events in time stamp order,
and by defining an order for events that have the same time stamp (or
declaring that the results are inherently indeterministic and telling
world creators, "Don't do that!"). Defining the execution
model becomes manageable only if each change can be considered in
isolation. Implementations may choose to process events out of order
(or in parallel, or may choose not to process some events at all!)
only if the results are the same as an implementation that completely
processes each event as it occurs. VRML 2.0 is carefully designed
so that implementations may reason about what effects a particular
event might possibly have, allowing sophisticated implementations
to be very efficient when processing events.
The third subtask, defining what happens
during an event cascade, is made easier by not considering all possible
route topologies at once. In particular, event cascades that contain
loops and fan-ins are difficult to define and are considered separately
(see Sections 2.4.4, Loops, and 2.4.5, Fan-in and Fan-out).
Processing an event cascade ideally takes
no time, which is why all events that are part of a given event
cascade are given the same time stamp. ROUTE statements set up explicit
dependencies between nodes, forcing implementations to process certain
events in an event cascade before others.
For example, given nodes A, B, and C in the
arrangement in Figure 2-7, where A is a TouchSensor detecting the
user touching some geometry in the world, B is a Script that outputs
TRUE and then FALSE every other time it receives input, and C is
a TimeSensor that starts an animation, the ROUTE statements would be:
ROUTE A.touchTime TO B.toggleNow
ROUTE A.touchTime TO C.set_startTime
ROUTE B.toggle_changed TO C.set_enabled
Figure 2-7: Routing Example
In this case, whether or not TimeSensor C will
start generating events when TouchSensor A is touched depends on whether
or not it is enabled, so an implementation must run Script B's script
before deciding which events C should generate. If B outputs TRUE
and C becomes active, then C should generate startTime_changed, enabled_changed,
isActive, fraction_changed, cycleTime, and time events. If B outputs
FALSE and C becomes inactive, then it should only generate startTime_changed,
enabled_changed, and isActive events.
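The book describes Script B's behavior but not its source; one sketch of it, assuming the VRMLScript binding (the eventIn and eventOut names match the routes above), is:

DEF B Script {
  eventIn  SFTime toggleNow
  field    SFBool state FALSE
  eventOut SFBool toggle_changed
  url "vrmlscript:
    function toggleNow(value, timestamp) {
      state = !state;            // flip on every incoming touchTime
      toggle_changed = state;
    }"
}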
Paradoxical dependencies (when, for example,
results of A depend on B and results of B depend on A) can be created,
and implementations are free to do whatever they wish with them; results
are undefined. See Section 2.4.5, Fan-in and Fan-out, for an explanation
of what happens when more than one event is sent to a single eventIn.
Event cascades may contain loops, where
an event E is routed to a node that generates an event that eventually
results in E being generated again. To break such loops, implementations
shall not generate two events from the same eventOut or to the same
eventIn that have identical timestamps. This rule shall also be used
to break loops created by cyclic dependencies between different sensor
nodes.
In general, it is best to avoid route loops.
There are some situations in which they're useful, however, and the
loop-breaking rule combined with the dependencies implied by the routes
are sufficient to make loops deterministic, except for some cases
of cyclic dependencies (which are inherently indeterministic and must
be avoided by world creators) and some cases of fan-in (which must
also be avoided and are discussed later).
One simple situation in which a route loop
might be useful is two exposedFields, A.foo and B.foo, with values
that you want to remain identical. You can route them to each other:
ROUTE A.foo_changed TO B.set_foo
ROUTE B.foo_changed TO A.set_foo
First, note that no events will be generated
unless either A or B is changed. There must be either another route
to A or B or a Script node that has access to and will change A or
B, or neither A nor B will ever change. A route is a conduit for events;
it does not establish equality between two fields. Or, in other words,
if A.foo and B.foo start out with different values, then establishing
a route between them will not make their values become equal. They
will not become equal until either A receives a set_foo event or B
receives a set_foo event. See Section 2.7, Scripting, for a description
of how to write a script that generates initial events after the world
has been loaded, if you want to guarantee equality between exposedFields.
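One way to do that, sketched here with illustrative names (and assuming the VRMLScript binding), is a Script whose initialize() function sends the desired starting value into the loop once the world has loaded:

DEF EQUALIZE Script {
  eventOut SFVec3f value_changed
  url "vrmlscript:
    function initialize() { value_changed = new SFVec3f(0, 0, 0); }"
}

# A.foo takes the initial value; the A/B loop then copies it to B.foo
ROUTE EQUALIZE.value_changed TO A.set_foo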
The loop-breaking rule prevents an infinite
sequence of events from being generated and results in "the right
thing" happening. If A receives a set_foo event from somewhere,
it sets its value and sends a set_foo event to B. B then sets its
value and sends A another set_foo event, which A ignores since it
has already received a set_foo event during this event cascade.
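The loop-breaking rule is easy to model outside VRML. The following Python sketch (a toy model, not part of any VRML API) delivers an event cascade over a route graph and refuses to deliver a second event with the same timestamp to the same eventIn:

```python
# Toy model of a VRML event cascade with the loop-breaking rule.
def cascade(routes, start_node, value, timestamp, values):
    """Deliver `value` from start_node along `routes`, a list of
    (source, destination) pairs.  Each destination accepts at most one
    event per timestamp, which is what breaks route loops."""
    delivered = {(start_node, timestamp)}
    pending = [start_node]
    while pending:
        node = pending.pop()
        for src, dst in routes:
            if src == node and (dst, timestamp) not in delivered:
                delivered.add((dst, timestamp))  # loop-breaking rule
                values[dst] = value              # the eventIn fires once
                pending.append(dst)
    return values

# Two exposedFields routed to each other, as in the A.foo/B.foo example:
routes = [("A", "B"), ("B", "A")]
values = cascade(routes, "A", 5, 0.0, {"A": 5, "B": 0})
# The cascade terminates: B receives 5 once, and A ignores the echo.
```

Despite the cycle in the route graph, the cascade halts after both nodes hold the same value, mirroring the behavior described above.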
Fan-in occurs when two or more routes write to the same eventIn. If two events with
different values but the same timestamp are received at an eventIn,
the results are indeterminate.
Fan-out occurs when one eventOut routes to two or more eventIns. This results
in sending any event generated by the eventOut to all of the eventIns.
Like loops, in general it is best to avoid
fanning into a single eventIn, since it is possible to create situations
that lead to undefined results. Fan-in can be useful if used properly,
though. For example, you might create several different animations
that can apply to a Transform node's translation field. If you know
that only one animation will ever be active at the same time and all
of the animations start with and leave the objects in the same position,
then routing all of the animations to the set_translation eventIn
is a safe and useful thing to do. However, if more than one animation
might be active at the same time, results will be undefined and you
will likely get different results in different browsers. In this case,
you should insert a Script that combines the results of the animations
in the appropriate way, perhaps by adding up the various translations
and outputting their sum. The Script must have a different eventIn
for each animation to avoid the problem of two events arriving at
the same eventIn at the same time.
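Such a combining Script might be sketched as follows (the node and eventIn names are illustrative; this assumes the VRMLScript binding, in which SFVec3f values support an add() method):

DEF COMBINE Script {
  eventIn  SFVec3f set_fromSway
  eventIn  SFVec3f set_fromBob
  field    SFVec3f sway 0 0 0
  field    SFVec3f bob  0 0 0
  eventOut SFVec3f sum_changed
  url "vrmlscript:
    function set_fromSway(v, ts) { sway = v; sum_changed = sway.add(bob); }
    function set_fromBob(v, ts)  { bob = v;  sum_changed = sway.add(bob); }"
}

ROUTE COMBINE.sum_changed TO XFORM.set_translation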
While designing VRML 2.0, various schemes for
getting rid of ambiguous fan-in were considered. The simplest would
be to declare all fan-in situations illegal, allowing only one route
to any eventIn. That solution was rejected because it makes some simple
things hard to do. Other possibilities that were considered and rejected
included determining a deterministic ordering for each connection
to an eventIn (rejected because determining an order is expensive
and difficult) and built-in rules to automatically combine the values
of each eventIn type, such as logical "OR" for SFBool events
(rejected because it would make implementations more complex and because
some event types [e.g., SFNode] don't have obvious combination rules).
World creators are given the power to create ambiguous situations
and are trusted with the responsibility to avoid such situations.
Fan-out is very useful and, by itself, can
never cause undefined results. It can also be implemented very efficiently,
because a node can't modify the events it receives. Only one event
needs to be created for any eventOut, even if there are multiple routes
leading from that eventOut.