The undeveloped part is the avatar as the dominant object for discovering the environment. It is easier to create 2D buttons for basic property operations, so we do not use the gestural animation of avatars to convey information about the environment. We have a rudimentary agreement on the basic signs used in chat, but we haven't stepped to the next level, where the avatar can be selected to obtain information about the system.
Steve Guynup has worked this problem and provided some innovative designs.
(rendered in BS Contact/DirectX; ABNet link provided)
The blue ball does the lighting demo.
O In addition to H-Anim, you need a parallel activity to create gesture libraries.
O The gesture libraries should share properties with markup: not the syntax, but the "practice".
O The more market domains you mark up, the better: how to classify gesture sets emerges from that practice.
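The idea that gesture classification emerges from domain markup rather than being fixed in advance can be sketched in code. The following is a minimal, hypothetical illustration (the `GestureLibrary` class and its methods are invented for this example and come from no H-Anim specification): each market domain registers the gestures it uses, and gesture classes then fall out of which domains share which gestures.

```python
from collections import defaultdict

class GestureLibrary:
    """Hypothetical sketch of a gesture library that borrows markup's
    practice (named, attributed entries) rather than its syntax."""

    def __init__(self):
        # gesture name -> set of market domains that have marked it up
        self._domains = defaultdict(set)

    def register(self, gesture, domain):
        """Record that a market domain uses (marks up) this gesture."""
        self._domains[gesture].add(domain)

    def emergent_classes(self):
        """Group gestures by the exact set of domains that use them.

        The classification is not predefined: the more domains register
        gestures, the finer-grained these classes become.
        """
        classes = defaultdict(list)
        for gesture, domains in self._domains.items():
            classes[frozenset(domains)].append(gesture)
        return {tuple(sorted(k)): sorted(v) for k, v in classes.items()}

lib = GestureLibrary()
lib.register("wave", "chat")
lib.register("wave", "retail")
lib.register("nod", "chat")
lib.register("point", "retail")
print(lib.emergent_classes())
```

Here "wave" ends up in its own emergent class because both the chat and retail domains marked it up, while "nod" and "point" remain domain-specific, which is the sense in which more markup yields a better classification.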