Real-Time 3D Alerting Systems
The principal advantage of a real-time 3D representation for alert systems is precisely that both are real-time systems.
A human has to master reading tables. As a representation of real-time events, a table without animation or the third dimension is difficult to read at the rate those events arrive. Staring at names with timestamps does not let the user quickly grasp the relationships among real-time objects: proximity, motion, and type. A real-time 3D space represents this easily and, given the geolocations of the sensors (any type/message/interface/object with a report/event), provides the broad relationship selectors. Space and time narrow queries. Objects in space and time route those relationships. It's just VRML. ;-)
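To make "space and time narrow queries" concrete, here is a minimal VRML97 sketch (node names and sizes are my own assumptions, not from any particular system): a ProximitySensor narrows by space, a TimeSensor narrows by time, and ROUTEs wire the relationship.

```
#VRML V2.0 utf8
# Hypothetical alert marker: the viewer entering a 50m zone around a
# geolocated sensor starts a freshness clock on its alert display.
DEF AlertZone ProximitySensor { size 50 50 50 }
DEF FreshTimer TimeSensor { cycleInterval 30 }
DEF AlertMarker Transform {
  children Shape {
    appearance Appearance { material Material { emissiveColor 1 0 0 } }
    geometry Sphere { radius 0.5 }
  }
}
# Space narrows: entering the zone starts the clock (both SFTime).
ROUTE AlertZone.enterTime TO FreshTimer.startTime
```

The narrowing is free: the browser only fires the sensor for viewers actually inside the region, so distant events never reach the route graph.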
The power of the real-time 3D interface is that the types in a MU (multi-user) system represent multiple web services and network sensor connections. In a single distributed space we have all of their first-level accessors synchronized in real time across multiple devices, with all users sharing a near-real-time view. Close is close enough. Pick your representation. Avatars with a big A or a little a: is avatarness human representation only, or any 3D object with clocks, engines, and routes?
(I won't even touch interoperable here. KitschBitch.)
If the topicalSpace (DEF names) is associated with the namespace/URI for classes, and that is combined with the cascading event system, one has a perfect switch-based model for representing real-time 3D alert systems.
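A sketch of that switch-based model in VRML97 (the class URIs, node names, and the two alert shapes are illustrative assumptions): a Script maps an incoming event's class URI onto a Switch choice, and the ROUTE cascade does the rest.

```
#VRML V2.0 utf8
# Hypothetical: one Switch per topicalSpace. DEF names stand in for
# class instances; the Script maps a class URI to a whichChoice index.
DEF AlertDisplay Switch {
  whichChoice -1   # nothing shown until an event arrives
  choice [
    DEF FireAlert  Shape { geometry Cone { } }
    DEF FloodAlert Shape { geometry Box { } }
  ]
}
DEF ClassRouter Script {
  eventIn  SFString classURI     # e.g. "urn:example:alerts#Fire" (assumed namespace)
  eventOut SFInt32  choiceIndex
  url "javascript:
    function classURI(value) {
      if (value == 'urn:example:alerts#Fire')       choiceIndex = 0;
      else if (value == 'urn:example:alerts#Flood') choiceIndex = 1;
      else                                          choiceIndex = -1;
    }"
}
ROUTE ClassRouter.choiceIndex TO AlertDisplay.whichChoice
```

Feed the Script from whatever bridges the web services into the scene, and the namespace-to-Switch binding gives each alert class its 3D representation for free.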
It's a beautiful thing.
UPDATE: See Planet 9's RayGun. It is a good start on this technology. I suppose iPhones might get this capability some day.