Wow, that's some rapid responses, thanks everyone :¬)
The system controls the movements, turning, direction and animations of the avatar, which allows us to use the same route over and over, giving me the time and ability to shoot with one camera and gain many perspectives. Not only can we control the speed at every point along the chosen layout, but the system also adjusts the walking animation to suit. This stops the avatar looking welded to the spot when movement is slowed down, or taking huge long strides when moving faster.
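To give a rough idea of the principle (the names and numbers below are purely illustrative, not our actual system), the walk cycle's playback rate is scaled against the avatar's current speed along the route, so the stride always matches the ground being covered:

    BASE_WALK_SPEED = 1.5          # metres/second the walk cycle was authored for
    MIN_RATE, MAX_RATE = 0.4, 2.0  # clamp so the cycle never looks absurd

    def walk_animation_rate(current_speed_mps):
        """Playback-rate multiplier that keeps the stride matched to the speed."""
        rate = current_speed_mps / BASE_WALK_SPEED
        return max(MIN_RATE, min(MAX_RATE, rate))

    # e.g. slowing the avatar to 0.75 m/s plays the cycle at 0.5x,
    # so the feet keep tracking the ground instead of sliding on the spot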
Perhaps I misused the term 'actors', but we pay them for being who they are. In the movies they don't just participate; they also bring their own talents, skills and opinions on what would work well to all our movies.
In many of our scenes, each of our members will be assigned a role, say a waitress (current project), and then they will improvise, or take direct direction if interacting with other actors involved in the storyline. Unless wardrobe/stylist are directly involved with that character, again they improvise from their inventory to fit the role.
Because the system gives us a predictable flow of movement, we can count on smoother, fully adjustable frame-rate tweening from our editing/rendering software. Our investment in this system, and the others we have developed, is simply to make our SL movies more acceptable as a direct alternative to RL.
The problem with followcam/lockcam alone is that it still only records the variable frame rate of the scenes and the potentially jerky movements of the avatar. Once the route is laid out with all the positions where animations are carried out, we can position the sets to become interactive: the actor passing the car and stroking the bonnet (hood) in exactly the right place. Without the system, we would have to shoot several takes just to get the hand placement right, along with the distance from the car. I don't know anyone in SL who can nail that absolutely perfectly, time after time. Same with the door scene.
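In rough sketch form (again, purely illustrative names, not the actual system or the SL API), the route is just a list of waypoints, each carrying a target speed and an optional animation cue, so the same gesture fires at exactly the same spot on every pass:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Waypoint:
        position: Tuple[float, float, float]  # region x, y, z
        speed: float                          # metres/second on approach
        animation: Optional[str] = None       # cue to play on arrival

    route = [
        Waypoint((120.0, 45.0, 22.0), speed=1.5),
        Waypoint((124.5, 45.0, 22.0), speed=0.8, animation="stroke_bonnet"),
        Waypoint((130.0, 47.5, 22.0), speed=1.2, animation="open_door"),
    ]

    def run_route(avatar, route):
        for wp in route:
            avatar.move_to(wp.position, speed=wp.speed)  # hypothetical call
            if wp.animation:
                avatar.play_animation(wp.animation)      # hypothetical call

Because the positions live in the data rather than in someone's hands, the hand lands on the bonnet in the same place take after take.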
The present issue with SL is the lack of true emotive mouth animations linked to actual words; it's all a bit random and 'almost like'. Our technique will be to stream the data from CrazyTalk's engine right into SL. We have been carrying out tests with some encouraging results, but we still have a way to go before it is good enough to put into our clients' commercials. It will transform our use of CrazyTalk/lip syncing from relatively static shots to full movement of the avatar during the shoot.
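Roughly speaking (the event format and every name below are made up for illustration, not CrazyTalk's actual output or the SL API), the bridge is a stream of timed mouth-shape events replayed as animation triggers on the avatar while it keeps moving:

    import time

    # (offset in seconds, mouth-shape id) pairs exported for one line of dialogue
    viseme_track = [
        (0.00, "rest"),
        (0.12, "AA"),
        (0.31, "M"),
        (0.48, "EE"),
        (0.75, "rest"),
    ]

    def replay_visemes(avatar, track):
        """Fire each mouth-shape cue at its recorded offset from the start."""
        start = time.monotonic()
        for offset, shape in track:
            delay = offset - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)            # wait until this cue is due
            avatar.play_mouth_shape(shape)   # hypothetical call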