People fail the Turing Test every day.
It isn't how smart you make the 'bot; it is how persuasive you make it. The early AI researchers made the common mistake of treating logical thinking (chess) as a sign of intelligence, when it is really a sign of intelligent search in a solution space. To fool a person, engage their emotions intelligently.
The keys to intelligent avatars are:
1. Emotive intelligence (see the HumanML work on a vector/scalar model for emotion engines), expressed as proximity-based acts.
2. Intelligent search built into the avatar for synthesizing new behaviors from existing templates.
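A minimal sketch of how those two pieces might fit together, assuming a toy vector/scalar emotion model (the axes, class names, and distance metric here are illustrative, not the actual HumanML model): the avatar's state is a vector of scalar intensities, a proximity-based act fires only when another agent comes within range, and "synthesis" is reduced to searching the existing behavior templates for the one nearest the current emotional state.

```python
import math
from dataclasses import dataclass, field

@dataclass
class EmotionVector:
    # Each axis is a scalar intensity in [0.0, 1.0]; axes are illustrative.
    fear: float = 0.0
    joy: float = 0.0
    anger: float = 0.0

    def distance(self, other: "EmotionVector") -> float:
        # Euclidean distance between two emotional states.
        return math.sqrt(
            (self.fear - other.fear) ** 2
            + (self.joy - other.joy) ** 2
            + (self.anger - other.anger) ** 2
        )

@dataclass
class BehaviorTemplate:
    name: str
    trigger_state: EmotionVector  # the emotional state this behavior expresses

@dataclass
class Avatar:
    position: tuple
    state: EmotionVector = field(default_factory=EmotionVector)
    templates: list = field(default_factory=list)

    def proximity_act(self, other_position: tuple, radius: float = 5.0):
        # Proximity-based act: react only when the other agent is in range,
        # then search the templates for the one nearest the current state.
        dx = self.position[0] - other_position[0]
        dy = self.position[1] - other_position[1]
        if math.hypot(dx, dy) > radius:
            return None
        return min(self.templates,
                   key=lambda t: self.state.distance(t.trigger_state))

# A frightened avatar, approached closely, picks the fearful behavior.
avatar = Avatar(position=(0.0, 0.0), state=EmotionVector(fear=0.8))
avatar.templates = [BehaviorTemplate("flee", EmotionVector(fear=1.0)),
                    BehaviorTemplate("greet", EmotionVector(joy=1.0))]
act = avatar.proximity_act((1.0, 1.0))
```

A real engine would blend templates rather than pick one, but the nearest-template search is the degenerate case of synthesizing behavior from what already exists.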
But first, implement the impersonate() function for identity management and privilege acquisition when avatars enter new non-local worlds (i.e., a server farm with a different 3D hosting framework than the one on which the avatar was developed).
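One way impersonate() could work, sketched under stated assumptions (the WorldHost class, grant_table, and field names are hypothetical, not any existing framework's API): the visiting avatar presents its home identity and requested privileges, and the foreign host grants only the intersection of that request with its own policy for the avatar's home world.

```python
from dataclasses import dataclass, field

@dataclass
class AvatarIdentity:
    home_world: str
    name: str
    requested_privileges: set

@dataclass
class WorldHost:
    name: str
    # Policy: privileges this host will grant to visitors, keyed by home world.
    grant_table: dict = field(default_factory=dict)

    def impersonate(self, identity: AvatarIdentity) -> dict:
        # Map the foreign identity onto a local name, and grant no more
        # than the intersection of the request and this host's policy.
        allowed = self.grant_table.get(identity.home_world, set())
        granted = identity.requested_privileges & allowed
        return {
            "local_name": f"{identity.name}@{identity.home_world}",
            "privileges": granted,
        }

# A visitor asks for more than the host's policy allows for its home world.
host = WorldHost("Farm", {"HomeGrid": {"speak", "teleport"}})
visitor = AvatarIdentity("HomeGrid", "ada",
                         {"speak", "teleport", "edit_geometry"})
session = host.impersonate(visitor)
```

The design choice worth noting is that privileges are acquired from the host's side, never carried over: an avatar from an unknown world gets an empty set, not a default.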
Then you can fool all the people all the time.