Andrew, the author of the PuppetVision blog, wrote me to explain in more detail the Blender puppetry plug-in he’s helping to develop, called Panda Puppet. He said I could share his thoughts here…
Thanks for taking notice of my digital puppetry project! I had actually been
looking at a bunch of the tutorials on your site earlier today so it was neat
to see that.
Just to clarify, though: Panda Puppet isn’t really motion capture. Although it
could certainly work with motion capture, and it is possible to have direct
control over a character’s movement, Blender’s real-time engine is kind of
lousy, so the easiest way to control a character is to assign hand-keyed Blender
“Actions” (short animated movements like a hand gesture…I assume Maya has
something similar) to different controls. A large library of actions can be
developed, and then what a puppeteer is really doing is mixing different actions
together on the fly to create a performance.
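To make the idea concrete, here is a minimal sketch of that control-to-action mixing scheme in plain Python. This is not the actual Panda Puppet or Blender API; the `Action` and `ActionMixer` names are hypothetical, and poses are simplified to a single float per frame. It just shows the core loop: controls trigger pre-keyed actions, and each frame the active actions are blended into one output pose.

```python
# Hypothetical sketch of mixing hand-keyed "actions" in real time.
# Not the Panda Puppet / Blender API; names and types are illustrative.

from dataclasses import dataclass


@dataclass
class Action:
    """A short hand-keyed movement, stored as one pose value per frame."""
    name: str
    frames: list


class ActionMixer:
    """Maps puppeteer controls to actions and blends the active ones."""

    def __init__(self):
        self.bindings = {}  # control name -> Action
        self.active = {}    # control name -> current frame index

    def bind(self, control, action):
        """Assign a pre-keyed action to a control."""
        self.bindings[control] = action

    def press(self, control):
        """The puppeteer hits a control: start its action from frame 0."""
        if control in self.bindings:
            self.active[control] = 0

    def step(self):
        """Advance one frame; return the average of all active actions."""
        total, count = 0.0, 0
        finished = []
        for control, frame in self.active.items():
            action = self.bindings[control]
            total += action.frames[frame]
            count += 1
            if frame + 1 < len(action.frames):
                self.active[control] = frame + 1
            else:
                finished.append(control)  # action has played out
        for control in finished:
            del self.active[control]
        return total / count if count else 0.0
```

In use, a puppeteer-facing layer would call `press()` on key or MIDI input while `step()` runs once per frame of the real-time engine; triggering two controls at once simply averages their actions for that frame, which is the "mixing on the fly" described above.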
The effectiveness of this type of system really depends on having a
skilled animator set up and rig the character (I am trying to recruit
some experienced Blender animators to help out with the project). The
rudimentary-looking tests right now are more a testament to my very limited
character animation skills than to any real limitation of the technique.
So there you have it; hopefully that helps fill in the blanks for everybody. I think it sounds like a pretty cool system in theory.