I’m new to 3D art and currently using iClone 8 and Character Creator 4 (CC4) for a project centered on animating seated character interviews.
My primary challenge is establishing a sustainable workflow to achieve spontaneous, character-driven conversational gestures for interviews involving multiple distinct personalities. I need a reliable way to generate groups of differentiated motions to distinguish each character’s style (e.g., slow/polite vs. fast/exaggerated).
To animate a typical interview, I use a “listen” loop motion to keep the characters alive throughout the scene, and then insert emphasis gestures from time to time according to the dialog.
I have issues with the existing tools:
- Premade ActorCore clips: I spent a lot of time searching for clips online, and there is no practical way to group motions by personality.
- Motion Director: a nice idea, but nearly impossible to apply correctly to seated characters.
- Motion Puppet: its speed/exaggeration sliders are the closest to what I’m looking for, but the tool feels abandoned (few motions, hard to extend).
What strategy would you use to produce seated, conversational gestures efficiently and at scale?
For your case, you might try the new Video Mocap feature that’s now part of iClone. You could act out and record the dialog gestures yourself as video clips, exactly the way you want them, and then convert them to motions. Conversion costs 250 DA points ($2.50) per minute of video, so I think it would work out cheaper in the end.
You get one free conversion to get started so you can try it out.
Video Mocap is indeed my newest workflow. I would say it is nice because it lets you apply real acting. However, I have to admit that after all the adjustments the generated clips require, the final result is often quite far from the initial vision, even if it is still usable.
It is also quite slow, at least with the QuickMagic service tier used by iClone. The price point is good, though: I can pack several distinct gestures into a single 60-second video, which generally makes it more cost-effective than purchasing multiple short motions from ActorCore.
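To make the cost comparison concrete, here is a minimal sketch of the arithmetic. The $2.50-per-minute conversion price is from this thread; the per-clip ActorCore price is an assumed placeholder for illustration (actual store prices vary):

```python
# Rough cost comparison: one Video Mocap conversion vs. buying each
# gesture as a separate ActorCore clip.

VIDEO_MOCAP_PER_MINUTE = 2.50  # 250 DA points per 60-second video (from the thread)
ACTORCORE_PER_CLIP = 3.00      # assumed average price of one short motion clip

def mocap_cost(videos: int = 1) -> float:
    """Cost of converting `videos` one-minute clips, regardless of how
    many distinct gestures are packed into each of them."""
    return videos * VIDEO_MOCAP_PER_MINUTE

def actorcore_cost(gestures: int) -> float:
    """Cost of buying every gesture as its own store clip."""
    return gestures * ACTORCORE_PER_CLIP

# Packing 5 gestures into a single 60-second video:
print(mocap_cost(1))      # 2.5
print(actorcore_cost(5))  # 15.0
```

Under these assumptions, the conversion route stays cheaper as soon as you can fit two or more usable gestures into one video.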
I initially didn’t mention Video Mocap in my first post because I felt it was overkill just for generating simple, spontaneous gestures. I feel like Motion Puppet could be the ultimate “gesture generator,” but that’s just a newbie’s opinion.
I can easily imagine a “Motion Puppet 2” where a user could add custom motions and tune the speed, exaggeration, and posture settings to quickly shape a character’s personality, all at zero cost. Adding new clips is currently possible in Motion Puppet, but I believe it is limited to the .iMotion format, and the experience of adding them is not good.
Simple everyday gestures are often the hardest. The problem is that motion clips for them exist, but making those clips actually fit what you want the character to do takes quite a lot of work.
I did one test with Video Mocap. I can’t dance flamenco (or dance at all), so I created a short AI-generated video of it and ran that through Video Mocap.
The hands needed editing but overall it worked pretty well.
Oddly enough, it appears that capturing motions like dance, skating, or martial arts gives good results with Video Mocap.
In contrast, trying to capture simple conversational gestures while the character is seated is quite problematic. If you use Video Mocap on actions like crossing the arms or legs, the generated motion often comes out with highly distorted poses. The system seems to struggle with constrained motion: I frequently see crazy leg positions, or arms placed unnaturally far forward and distant from the torso.
Furthermore, fingers are a mandatory, time-consuming editing task on nearly all of my Video Mocap clips. I wonder if this finger problem could be mitigated while recording the video. For instance, taping each finger black or white might help the system recognize the individual joints, since it seems to get… creative whenever the source video is not visually clear.
This excessive cleanup is why I was looking for a simpler solution for dialogue; for the moment, however, Video Mocap remains my most viable workflow.