I have a confession to make: I am a closet rigging/workflow geek. I say rigging & workflow together because to a large degree how something is rigged in CG determines how you end up using it. So for me, I enjoy the problem-solving, inventive aspects of rigging. Of course this is driven by my animator side, the side that wants a good animation workflow, you see. I don’t like rigging for a living. I’ve been dragged into doing it at various studios and I haven’t liked the experience. Dull, dreary repetition of solutions others have figured out. And lots of skin weighting. I hate skin weighting. I can’t think of a more torturous activity in a studio. Wait. Texture UV editing. And toilet cleaning. Those two are about equal, and they’re worse. But not by much. Anyhow, I do like to explore and find new things in rigging to help me have more fun as an animator. It’s all about making the animation more enjoyable, faster, easier, less klunky.
I forget exactly when, but sometime a number of years back it became the “way,” when rigging characters, to create these proxy objects for selecting and controlling the puppet. It was a necessary convention as rigs became more complex and picking bones just wasn’t feasible anymore. (We won’t even dwell on the dark ages before bones. *shudder*) I’ve long yearned for the simplicity of stop motion in CG puppets. Wanna move a part? Well, grab and move that part. But that tended to be kinda messy. Rigs would get broken. Or you couldn’t see the bones inside the skin and so ended up picking the wrong one, or had to switch in and out of wireframe mode. Or the thing you wanted to control had nothing to grab hold of. It was more of an idea than an object. So the proxy control objects were made.
As time went by, graphical user interfaces were developed to help the animator pick different parts of the character and to keep the workspace cleaner. Some were more elaborate than others.
(simple)
(less so)
But the thing that always bugged me about these systems was that they tended to be a wee bit too left-brainy. With lots of buttons and widgets to work with, it felt like an airplane control console. Various little technical advances have been made and we got things like heads-up display GUIs for picking and such. This one emulates a system defined by Jason Osipa.
Still, here we are a good 8 or more years after the proxy controller approach kicked in in a widespread manner in CG, and we really haven’t moved away from it. The obvious argument is “Hey, it works, why mess with a good thing?” Well, yeah, it works, but it’s kinda klunky. Imagine if every time you wanted to turn the page of a book you had to pick up a pair of hot-dog tongs and turn the page with those instead of your fingers. Or imagine if you wanted to tear off a piece of bread you had to use two forks and couldn’t touch the bread with your hands. It works, it’s do-able, it’s not too awfully inconvenient. It’s just… clumsy. This one thing, coupled with the Spaghetti Box (ie: the f-curve editor), was the downfall of many hand-drawn animators’ careers as they tried to make the jump to the CG age. It’s just all too abstract and techy. Like operating machinery, not creating a performance. I mean, this is one feature film animator’s workspace….
Talk about artist friendly. Ahem.
When I’m posing my character I am in a very artsy place (it’s a neat place. I imagine there’s a lava lamp, a crushed velvet couch, a Sergio Leone movie poster on the wall, some jazz hybrid funk on the radio and fresh fruit in a bowl. But I digress with my little fantasies…). When I’m in that creative part of the work I want my focus to be on the performance, the quality of the pose, the communicative aspects of the work and the qualitative level of the resulting art. I want to stay in my artsy, Bob Ross “happy trees” place. But in CG I always have to stop every few seconds and go find the trigger or control or proxy off someplace to manipulate the next thing. Right brain, right brain, right brain… screeech! Left brain. OK, got it. Right brain, right brain, etc. I’d like to find a clever, uncluttered way to get around attribute sliders as well. Those can be real flow killers. I have to stop, read some name in a list, find the thing I think might do the trick, then click and slide off in space and watch the results on the character. But doing away with them entirely ends up cluttering the viewport a bit too much. It’s hard to find that perfect balance.
It’s been a mini crusade of mine over the last few years to try and get back to the simple ways of dealing with a character. Here are my latest results. First, a screen grab of the thing in beta form. The first bit shows the ’standard’ wire curve proxies, then I turn them off and turn on the body trigger thingies. I have a toggle for body and face since having them both up at once is a pain. Anyhow…
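(For the technically curious: the toggle itself is nothing fancy. Something along these lines would do it in Maya Python; the group names here are just placeholders for illustration, not the actual nodes in my rig.)

import maya.cmds as cmds

def toggle_triggers(body_grp='body_triggers_grp', face_grp='face_triggers_grp'):
    # Show one set of trigger objects and hide the other,
    # so only the body OR the face triggers are up at any one time.
    body_vis = cmds.getAttr(body_grp + '.visibility')
    cmds.setAttr(body_grp + '.visibility', not body_vis)
    cmds.setAttr(face_grp + '.visibility', body_vis)

# Hang this on a hotkey or a shelf button and flip between the two sets.
toggle_triggers()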
This is a video of me working directly on the character with my hands on a Cintiq.
The rig in this video is still in development and hasn’t been optimized very much (ie: not at all).
So it runs a little pokey on my laptop. The screen grab software running at the same time probably contributes to the chunkiness a bit, I’m sure. Once all the bugs are worked out the thing will get optimized and tuned for better speed. I did a lot more reaching than I usually do because to shoot the video I kinda had to make some room for the tripod in my small office, so I didn’t have a place to set my keyboard and use it like I normally do. Anyhow, as you can see it’s not perfect. I’m gonna put a trigger right on each elbow to make that more direct and less attribute-slider-esque as well. And I’ve added stuff that seemed cool but ended up not working very well in practice, so I ripped it out. But the idea is to keep refining it so that (as much as possible) the controls for a body part are right there on that body part. It has room to improve, but so far in tests I’ve really enjoyed the way it feels when working. More on how I did it in a little bit.
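If you want to noodle with the general idea yourself while you wait, here’s a bare-bones sketch of one way to fake an on-body trigger in Maya Python. To be clear, this is not what’s under the hood in my rig, and the node names are made up, but it shows the flavor: a little shape that rides along on a body part and hands the selection off to the real control when you pick it.

import maya.cmds as cmds

def make_trigger(body_joint, real_control, name='elbow_trigger'):
    # A small sphere that sits on the body part and acts as the thing you grab.
    trigger = cmds.polySphere(name=name, radius=0.5)[0]
    # Keep it glued to the joint so it always rides along with the body part.
    cmds.parentConstraint(body_joint, trigger, maintainOffset=False)

    # When the trigger gets picked, quietly swap the selection to the real control.
    def swap_selection():
        sel = cmds.ls(selection=True) or []
        if trigger in sel:
            cmds.select(real_control, replace=True)

    cmds.scriptJob(event=['SelectionChanged', swap_selection])
    return trigger

# Hypothetical node names, purely for illustration.
make_trigger('L_elbow_jnt', 'L_arm_poleVector_ctrl')

In a real rig you’d also want to keep the trigger shape out of the render and lock it down so it can’t be keyed by accident, but that’s the gist of it.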