This paper is designed to give an overview of the Living Worlds proposed standard.
This paper is intended as a tutorial on the technology of Living Worlds; it assumes familiarity with VRML2.0. Readers may find it useful to read the Living Worlds concepts paper first, which goes into more detail about what we are trying to achieve. For more technical detail, read the rest of the standard at http://www.livingworlds.com.
Since the standard is currently (Nov '96) an evolving document, if there are differences between this paper and the standard, the standard takes precedence. This tutorial was last updated 19th November '96 and reflects the current draft of the standard as of that date, plus the proposed extensions for managing ASAs.
Please send any comments or corrections to Mitra, mitra@mitra.biz. This paper reflects my own views, and not necessarily those of the other Living Worlds editors.
The standardization of VRML has been described as having three phases.
The Living Worlds proposal is intended to meet the third of these phases. The concepts paper covers in much more detail what we are trying to achieve, but in summary it provides the ability to open a world, have your avatar appear in that world and be seen by other users, interact with the world, and have those interactions appear in every other user's view of the world.
The design goals led us to the following technical requirements.
Persistent Objects | Objects had to be able to be introduced into a scene, and to remain even after the user who created them disconnected. |
MUtech independence | It had to be possible to build an object or avatar that did not depend on the particular multi-user technology being used. |
Personal persistent avatars | Avatars had to be able to belong to a user, and to be taken to any world supporting the Living Worlds spec. |
Object-Object communication | Objects had to have a means of sending events to each other. |
State sharing | It had to be possible to specify an arbitrary set of states for the MUtech to share. |
Control over what gets shared | It had to be possible to specify, at both a coarse and a fine grain, what gets shared and when, so that MUtechs can optimise for low bandwidth. |
VRML2.0 | The specification had to work entirely within VRML2.0, without requiring, at least in the first phase, any extensions to the browsers. |
To achieve this, we defined the following three nodes.
The Zone node usually represents an area of physical space (although it can be more abstract). The Zone is usually the unit of sharing, i.e. either everything in a Zone is shared or nothing is. The Zone contains SharedObjects, which may correspond to physical objects in the space, or to something more abstract. SharedObjects can have Pilot nodes; a Pilot node typically holds the code that changes the shared state of its SharedObject.
In order to separate out the information relevant only to specific networking technologies, we define a concept called a MUtech (pronounced mew-tek). The MUtech is the code which manages sharing the state of the world; its exact characteristics will vary from supplier to supplier. The choice of MUtech is made by the author of the world, who adds at least one Zone node and a MUtechZone to their world. The term "MUtechZone" is used to mean any one of the vendor-specific nodes for handling networking, for example a ParaGraphZone node. At run-time the MUtechZone will add corresponding MUtechSharedObject and MUtechPilot nodes to the various SharedObjects and Pilots in the Zone, generating a scene graph looking something like this:
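(A sketch of the run-time structure; indentation shows parent-child relationships, and ParaGraphZone stands in for any vendor's MUtechZone.)

  Zone
    MUtechZone (e.g. ParaGraphZone - chosen by the world author)
    SharedObject
      MUtechSharedObject (added at run-time by the MUtechZone)
      Pilot
        MUtechPilot (added at run-time by the MUtechZone)
    SharedObject
      ...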
The Zone node is the top node of any shared portion of the tree. It essentially acts as a Group that contains each of the SharedObject nodes; in fact it is PROTOtyped as a Group. Look at the spec for a detailed description of all its fields, but a few things deserve attention here.
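As a rough illustration of what "PROTOtyped as a Group" means, a minimal Zone might look something like the sketch below. The interface shown is deliberately stripped down to the Group-like parts; the spec defines the real field list.

PROTO Zone [
  eventIn MFNode addChildren     # SharedObjects are inserted here
  eventIn MFNode removeChildren
  field   MFNode children []     # the SharedObjects in this Zone
] {
  Group {
    addChildren    IS addChildren
    removeChildren IS removeChildren
    children       IS children
  }
}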
The SharedObject node, and its companion the Pilot node, are authored independently of the MUtech; that is, a SharedObject can be introduced as a child of a Zone in any world and still work. This is crucial for scenarios involving libraries of objects, or for avatars, which must be able to exist in any world. Look at the spec for a detailed description of all their fields, but a few things deserve attention here.
A significant part of the debate around Living Worlds centered on whether Avatars and Objects are the same thing or different. At first glance they appear different: Avatars are usually human-controlled, leave the scene when the User disconnects, are controlled from a single machine, and move around quickly. Objects are usually under program control, persist even after the User who created them leaves, can be controlled from multiple locations (e.g. a light-switch will always want to turn on the light without waiting for network latency), and are static.
However, all the "usually"s muck up this nice clean distinction: where, for example, do robots that walk around fit in? So instead the author is given access to a number of fields to tune the behavior of their object or avatar.
In order to make it easier to author these multi-user worlds, we defined a set of nodes as a library. Their definitions will be made publicly available, so that they can be used confidently by authors and by the Living Worlds prototypes.
Some are trivial - look at the spec if you think the object is useful to you.
Other nodes take more explanation.
The AssociativeStringArray is a critical component in constructing a SharedObject; it is used to package up an arbitrary collection of states for sharing across the network.
To work effectively it is usually used in combination with some related nodes that enable each of the VRML2.0 event types to be stored in, or extracted from, the array without complicated custom scripts. The example below illustrates this.
SharedObject {
  currentState DEF ASA1 AssociativeStringArray { }
  pilot DEF P Pilot {
    DEF S Script {
      eventOut SFColor newColor
      url "http://foo.com/spheres.wrl"
    }
    DEF C1 SFColorToASA { tag "myColor" }
    ROUTE S.newColor TO C1.in
    ROUTE C1.out TO ASA1.in
  }
  visibleDefinition Shape {
    geometry Sphere { radius 2.3 }
    appearance Appearance {
      material DEF M Material { } # Red
    }
    DEF C2 ASAToSFColor { tag "myColor" }
    ROUTE ASA1.out TO C2.in
    ROUTE C2.out TO M.emissiveColor
  }
}
In this example, the pilot's script "spheres.wrl" sets a new color, which is converted for storage in the array, and then read in the visibleDefinition and ROUTEd to the emissiveColor of the sphere.
The Selector and its SelectorItems can be used to create a menu; these menus are attached to each SharedObject's selector field. The intention is that browsers will eventually implement these nodes themselves, in order to display the menu in a user interface consistent with that of the browser. In the short run they will be supplied as a set of freely available definitions using Java menus or something similar. At run-time the MUtechSharedObject will add a TouchSensor so that when an object is clicked on, its menu is displayed. There is some discussion of generalising this concept to support other forms of UI, so watch the spec for changes.
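To make the shape of this concrete, a menu attachment might look something like the sketch below. Apart from the selector field named above, the field names here (items, label) are illustrative guesses rather than the spec's actual interface.

SharedObject {
  selector Selector {
    # one SelectorItem per menu entry - interface illustrative
    items [
      SelectorItem { label "Wave" }
      SelectorItem { label "Business card" }
    ]
  }
}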
The EventHandler is used to dispatch events sent to a SharedObject or Pilot. EventHandlers are typically used for Inter-Object communication (see below). The EventHandler contains an MFString listing event names, and an MFNode pointing at Script nodes defining the handling of the event. This may be generally useful in other applications.
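For example, an EventHandler pairing event names with handler scripts might look like this. The field names eventNames and handlers are assumptions for illustration (the spec only says the node contains an MFString of names and an MFNode of Scripts), and the .class urls are hypothetical behaviours.

DEF EH EventHandler {
  # the names of the events this object understands...
  eventNames [ "wave" "follow" ]
  # ...and one Script per name, defining how each is handled
  handlers [
    Script { url "wave.class" }
    Script { url "follow.class" }
  ]
}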
It is expected that a lot of experimentation will happen with the best way to predict motion, and some fields are provided in the SharedObject for that purpose. The SmoothMover node can be patched into those fields to implement trivial interpolation between a current position and a predicted one.
Living Worlds took the same approach as VRML2.0 of defining lots of small nodes that can be composed in different ways, rather than a few mega-nodes. This can make event flow difficult to follow, so here is an example of event flow for the common case of motion.
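Sketched roughly (the exact field names vary; see the spec), a position change flows like this:

  On the pilot's machine:
    user input --> Pilot's Script --> SharedObject position
               --> MUtechSharedObject --> network

  On every other user's machine:
    network --> MUtechSharedObject --> drone SharedObject position
            --> (optionally a SmoothMover, interpolating)
            --> Transform driving the visibleDefinition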
There are some other control flow examples in the spec.
The concept of Capabilities is another key concept of Living Worlds. The general idea is that an Avatar should be able to carry around some functionality which can be used to interact with other avatars; for example, if two avatars both have IBM Connection Phone, then they should be able to open a voice connection whether or not the world author has ever heard of that product.
The implementation of this has three parts, which are described in more detail in the spec.
Objects have a number of ways to communicate with each other. Critical to any inter-object communication is the notion of trust: an object will react differently to something that is part of itself, or written as part of the same world, than to events coming from some other arbitrary avatar that happens to have walked into the scene. VRML2.0 has a limited security model, based on total access to any exposedField of anything you have a pointer to, and no access to anything else. So to handle this, we define public and private event handlers for the SharedObject and Pilot nodes. The idea is that pointers to public event handlers can be freely passed around, while those to private event handlers are available only to the MUtechs.
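As an illustration (the field names publicHandler and privateHandler are guesses; the spec defines the actual interface):

SharedObject {
  # a pointer to this may be handed to any other object
  publicHandler  EventHandler { }
  # a pointer to this is held only by the MUtech
  privateHandler EventHandler { }
}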
In addition, each Living Worlds MUtech must supply three events: "eventToPilot", "eventToLocalPilot", and "replyToDrone". These can be used by drones to communicate with their pilots, and to communicate with the local avatar. Because VRML2.0 gives no indication of where an event comes from, we adopt the convention that the second field of the MFString that makes up the event identifies the originator in a MUtech-dependent way, and is set by the MUtech as it passes the event along. The replyToDrone event can then be used to respond to the originator.
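So, for instance, an eventToPilot value might look like the following. The event name and payload here are invented for illustration; only the position of the originator field is mandated by the convention.

[ "requestColorChange"                          # the event name
  "<originator id - filled in by the MUtech>"   # who sent it
  "1 0 0" ]                                     # any further arguments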
Several of the scenarios envisioned for Living Worlds include the concept of adding objects to a scene. Typically this occurs in one of two ways: "Drag-and-Drop" insertion by the user, or insertion initiated by a script. Both are discussed below.
In a single-user world, the implementation of this (for example in an editor) is outside the scope of standards. However in a multi-user world, it is required that the insertion of this object be communicated to other users viewing the scene. This is done by only attaching SharedObjects to Zones, not by adding arbitrary VRML at arbitrary points in the scene graph.
For "Drag-and-Drop" insertion, the browser should use the same rules as are used for determining when a TouchSensor is activated to determine a Zone which is the parent for the Geometry at the insertion point. It should then add the SharedObject to the Zone and adjust the position and orientation fields of the SharedObject to correctly position it relative to the Zone. There are several parts of this process which are UI-dependent, for example:
Script-initiated object insertion is harder in VRML2.0, since there is currently no way for a Script to convert a 3D point into a path in the scene graph. Workarounds are TBD.
In either case, the object is added by sending an addChildren event to the Zone. This event is intercepted by a script, which can check permissions etc., and can also be intercepted by the MUtechZone to share the addition across the network to other instances of the Zone.
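A minimal sketch of the script side of this, assuming a Zone DEFfed as Z and a hypothetical behaviour "insert.class" that produces the new SharedObject on an eventOut:

DEF Z Zone { }
DEF Inserter Script {
  eventOut MFNode newObject
  url "insert.class"   # sets newObject to the new SharedObject
}
# the Zone, being PROTOtyped as a Group, accepts the standard
# addChildren event, where the MUtechZone can intercept it
ROUTE Inserter.newObject TO Z.addChildren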
I hope this tutorial explains some of the complicated technology of Living Worlds. At some point I expect to put together another version aimed at content developers, who don't need the implementation details. For more information I recommend the following sources.