
Tuesday, April 7, 2009

Emotional Autopilot for Generating Situation Behavior in Real Time Avatars and Machinima

Are emotions transitions? Given an editor whose output is a video segment, can the emotional vocabulary of emoticons be dragged and dropped onto characters in a scene the same way transitions, media generators, and effects are dropped onto video segments? Strictly as transitions, no; but emotions can be dragged and dropped onto objects in the scene just as any of these can. Because X3D/VRML uses type-compatible event routing, the final output of the emotion effect is mesh/audio; the semantics of the emotion must be determined from the set of choices valid for that emotion within the author's intent.
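The routing idea above can be sketched in a few lines. This is a minimal, hypothetical model (all names invented here, not from any real X3D library): a dropped "smile" emoticon expands into type-checked routes, in the spirit of X3D/VRML ROUTEs, whose terminal output is mesh data rather than some new "emotion" type.

```python
# Hedged sketch: an emotion dropped on a scene object expands into
# type-compatible routes whose terminal outputs are mesh/audio fields.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class EventField:
    """A typed event-out field on a node; listeners receive its events."""
    type_name: str
    listeners: list = field(default_factory=list)

    def send(self, value):
        for fn in self.listeners:
            fn(value)

def route(src: EventField, dst_setter: Callable, dst_type: str):
    """Connect two fields only if their types match (type-compatible routing)."""
    if src.type_name != dst_type:
        raise TypeError(f"cannot route {src.type_name} to {dst_type}")
    src.listeners.append(dst_setter)

# The "smile" emotion effect: a fraction event drives mouth-vertex
# offsets, so the effect bottoms out in ordinary mesh data.
smile_fraction = EventField("SFFloat")
mouth_points = []

def set_mouth(frac):
    # interpolate illustrative vertex offsets toward a smile pose
    mouth_points.clear()
    mouth_points.append((0.02 * frac, 0.01 * frac, 0.0))

route(smile_fraction, set_mouth, "SFFloat")
smile_fraction.send(1.0)   # dropping the emoticon fires the effect
print(mouth_points)        # → [(0.02, 0.01, 0.0)]
```

The point of the type check is the one made above: the editor never needs to understand "smile"; it only needs to verify that each route's source and destination field types agree, exactly as an X3D browser does.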

No higher level of AI is needed for a drag-and-drop stagecraft editor (e.g., machinima). The language of camera shots is what is generative here.

AI becomes useful when creating non-linear environments where stories are built inside stories and the presentation order is not determined a priori, but is instead navigated in response to, or in furtherance of, events made possible by the aggregate of all active events at some point in the timeline or story arc.

Non-linear storytelling can be thought of as a synchronized multi-arc story in which nodes along each arc pull and push one another, similar to flocking behavior but more complex, because emotion nodes are linked and delinked when proximity and type conflate to alter the *immediate* choices available. It is as if you hooked the Google sorting algorithm that presents YouTube alternatives up to the scene node, and thus to the character's environment.
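That selection behavior can be made concrete with a small, hypothetical ranker (all names and weights are invented for illustration, not taken from any real recommender): candidate next nodes on the story arcs are scored by proximity along the shared timeline and by emotion-type linkage, and only the positively scored ones remain as the immediate choices.

```python
# Hedged sketch: rank candidate next nodes on parallel story arcs;
# proximity and matching emotion type conflate to boost a candidate,
# like a recommender re-ranking the immediately available alternatives.
from dataclasses import dataclass

@dataclass
class StoryNode:
    name: str
    arc: str
    position: float   # location along the shared timeline, 0..1
    emotion: str      # dominant emotion tag on this node

def immediate_choices(current, candidates, proximity=0.25):
    """Return candidate names ordered by pull on the current node."""
    scored = []
    for node in candidates:
        score = 0.0
        if abs(node.position - current.position) <= proximity:
            score += 1.0                 # proximity pull (arbitrary weight)
        if node.emotion == current.emotion:
            score += 0.5                 # emotion-type linkage
        scored.append((score, node.name))
    scored.sort(reverse=True)
    return [name for score, name in scored if score > 0]

here = StoryNode("confrontation", "arc-A", 0.50, "anger")
pool = [
    StoryNode("flashback", "arc-B", 0.90, "grief"),  # too far, no match
    StoryNode("retort",    "arc-A", 0.55, "anger"),  # near + same emotion
    StoryNode("witness",   "arc-C", 0.60, "fear"),   # near only
]
print(immediate_choices(here, pool))   # → ['retort', 'witness']
```

As the timeline advances and emotions on the nodes change, the same scoring pass re-runs, so nodes are effectively linked and delinked on the fly; the "flocking" quality comes from every arc's nodes being ranked against every other arc's at once.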

It is the chooser of the character's choices: an autopilot in emotional domain space.

Systems such as Google's YouTube selector illustrate emergent intelligence that can be mapped into real-time 3D scene generation.
