The undeveloped part is the avatar as the dominant object for exploring the environment. It is easier to create 2D buttons for basic property operations, so we do not use the gestural animation of avatars to convey information about the environment. We have rudimentary agreement on the basic signs of chat, but we haven't taken the next step, where the avatar can be selected to get information about the system.
Steve Guynup has worked this problem and provided some innovative designs.
http://noel.pd.org/~thatguy/aio/
(BS Contact/DirectX as renderer; ABNet link provided)
The blue ball runs the lighting demo.
o In addition to H-Anim, you need a parallel activity to create gesture libraries.
o The gesture libraries should share properties with markup: not the syntax, but the "practice".
o The more market domains you mark up, the better; how to classify gesture sets emerges from that practice.
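One way to read the "emergent classification" point above is that gesture vocabularies are tagged per market domain, and categories fall out of the overlap in actual use rather than from a taxonomy fixed up front. A minimal sketch of that idea in Python; the domain names and gesture sets here are hypothetical examples, not part of H-Anim or any real gesture library:

```python
# Hypothetical gesture sets tagged by market domain.
# In practice these would accumulate from observed avatar use.
domain_gestures = {
    "retail":    {"wave", "point", "beckon"},
    "education": {"wave", "point", "raise_hand"},
    "gaming":    {"wave", "taunt", "salute"},
}

def classify_gestures(domains):
    """Split gestures into a shared core (used in every domain)
    and domain-specific extensions. The classification emerges
    from overlap in practice, not from a predefined hierarchy."""
    core = set.intersection(*domains.values())
    specific = {name: gestures - core for name, gestures in domains.items()}
    return core, specific

core, specific = classify_gestures(domain_gestures)
print(sorted(core))               # gestures shared by every domain
print(sorted(specific["gaming"])) # gestures unique to one domain
```

As more domains are marked up, the shared core shrinks toward a genuinely universal sign set, which is the practical payoff the bullet points are after.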
2 comments:
lol thanks Len,
To be honest, most folks won't get the point. It usually takes years of failure to accept that what they are trying is just foolish. Sadder still, the work you linked to is the dumbed-down version of my work, and even it isn't understood by most folks.
The only help I can offer is to add the work of Adam Nash, which pushes the same principles to a more abstract and musically charged extreme: http://yamanakanash.net/3dmusic/mprexp.html
And for the curious, I might be up for a live demo of my work. steve_guynup@hotmail.com
In the end, it matters little if passersby here or elsewhere get my work. I'm using it, and it works just like I said it would. For the past 4 months I've been using it to teach game design in my 100% online classes, and I am continuing to build content for more in-depth presentations.
Send me a better link, Steve. I'll be happy to post it.
I think you have some great ideas. Same as Adam. Happy to post any of those links.