Lip Syncing - What do you use?

bp72 polycounter lvl 4
I know there are many different ways to lip sync depending on what software you have to work with: full face bone rigs, spline systems, morph targets, and I am sure more.

Please post your current workflow for lip syncing, or even your ideal workflow for lip syncing.

OR

If you happen to have more experience than the average bear, answer one (or both, if you would like) of the situations below.


#1. In a perfect world, let's say you have voice recordings, ZBrush, unlimited funding, and whatever software you want. What would you use and why?


#2. The more likely situation: if you were a one-man show with very limited funding, what would your limited-budget solution/workflow be?

Are any of the open source solutions worth learning, like Blender with MakeTalk?

Are there any other open source or affordable methods worth investing the time to learn that you want to mention?


When I look online I see the results others get with much of the same software (often less), and I either can't believe they used what they say they used or wonder what I am missing.

Replies

  • Mark Dygert
    1) Facial motion capture.

    2) I do a lot of facial animation by hand using a mix of bones and morph targets wired to a control board. For speech I use 9 morphs (Ah, E, Fv, L, MBP, Oo, S, Th, UWQ). Once you get the hang of it, you can knock in lip sync really fast.

    I set zero keys on every other frame for all 9 morphs, then go frame by frame, listening to the sound and dialing in the shapes.
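
    If it helps to see the idea concretely, here is a rough sketch of that zero-key pass in Blender's Python (I work in Max's Morpher, so this is just a free-tool translation of the same idea; the object and shape-key names are made up):

        import bpy

        # The 9 speech morphs from above, assumed to exist as shape keys.
        VISEMES = ["Ah", "E", "Fv", "L", "MBP", "Oo", "S", "Th", "UWQ"]

        head = bpy.data.objects["Head"]            # hypothetical face mesh
        keys = head.data.shape_keys.key_blocks

        start, end = 1, 120                        # frame range of the dialogue clip
        for frame in range(start, end + 1, 2):     # zero key on every other frame
            for name in VISEMES:
                kb = keys[name]
                kb.value = 0.0
                kb.keyframe_insert(data_path="value", frame=frame)

        # From here you scrub frame by frame with the audio and dial in the
        # shape that matches the sound, overwriting the zero keys as needed.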

    I've used automatic speech recognition tools to do lip sync before, but they often fall apart and require a lot of cleanup, almost so much cleanup that you might as well do it from scratch. Most of them have a lot of problems with accents or foreign languages.
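
    For what it's worth, the guts of those automatic tools boil down to collapsing recognized phonemes onto a small viseme set, which is also where they go wrong on accents: a misheard phoneme picks the wrong mouth shape. A toy illustration in Python (the mapping below is my guess using ARPAbet symbols and my 9 shapes, not any tool's actual table):

        # Collapse phonemes down to a handful of visemes; this is the step
        # that falls apart when the recognizer mishears accented speech.
        PHONEME_TO_VISEME = {
            "AA": "Ah", "AE": "Ah", "AH": "Ah", "AY": "Ah",
            "EH": "E",  "IY": "E",  "IH": "E",  "EY": "E",
            "F": "Fv",  "V": "Fv",
            "L": "L",
            "M": "MBP", "B": "MBP", "P": "MBP",
            "AO": "Oo", "OW": "Oo",
            "S": "S",   "Z": "S",   "CH": "S",  "JH": "S",
            "TH": "Th", "DH": "Th",
            "UW": "UWQ", "W": "UWQ", "UH": "UWQ",
        }

        def visemes_for(phonemes):
            """Map a phoneme sequence to viseme names; unknowns fall back to rest."""
            return [PHONEME_TO_VISEME.get(p, "rest") for p in phonemes]

        print(visemes_for(["HH", "EH", "L", "OW"]))  # ['rest', 'E', 'L', 'Oo']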
  • bp72
    Now that is the type of information I am looking for.

    My current situation: after I modified all the morphs as best I could to clean them up with Max 2012 and ZBrush, I made a facial bone rig using BonyFace, thinking I could use the bone rig to make the rest of the morph targets for Morpher instead of doing it all manually. With Morph-O-Matic and Voice-O-Matic there just are not many tutorials out there, so I am sure it is my lack of knowledge, but for every one morph I cleaned up, two new adjustments needed to be made.

    The targets captured were just the face mesh, lacking eyes, upper teeth, lower teeth, and tongue.

    So, if you don't mind me asking: since you are using bones and morphs, are you using compound morph targets to get a morph from the bone rig, or a different method?
  • Mark Dygert
    I mostly use morphs for speech. I do skin up the face before making morphs, so I do use the bones to roughly deform the mesh, but I do a CRAP-LOAD of vert tugging to make the morphs.
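
    If you want the bone pose as a starting point for a morph, the trick is just to snapshot the deformed verts into a new target and vert-tug from there. A hedged sketch of that in Blender's Python (it assumes the armature is the only deformer, so vertex counts match; the names are placeholders):

        import bpy

        obj = bpy.data.objects["Head"]             # hypothetical face mesh

        # Evaluate the mesh with the current armature pose applied.
        depsgraph = bpy.context.evaluated_depsgraph_get()
        eval_obj = obj.evaluated_get(depsgraph)
        eval_mesh = eval_obj.to_mesh()

        # Make sure a basis key exists, then snapshot the pose into a new key.
        if obj.data.shape_keys is None:
            obj.shape_key_add(name="Basis", from_mix=False)
        key = obj.shape_key_add(name="FromBonePose", from_mix=False)

        for i, v in enumerate(eval_mesh.vertices):
            key.data[i].co = v.co.copy()           # copy deformed positions

        eval_obj.to_mesh_clear()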

    Depending on the character I might use specific morphs around the eyes or brows to get specific effects like a brow crinkle or scrunchy eyes. These are typically tied to specific wrinkle maps that kick on when the morphs do.
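
    The hookup for that is just a one-way wire from the morph value into the wrinkle map's blend amount. In Max that's wire parameters; the same idea as a Blender driver, sketched below (the node and shape-key names are invented for the example):

        import bpy

        obj = bpy.data.objects["Head"]
        mat = obj.active_material
        mix = mat.node_tree.nodes["WrinkleMix"]    # assumed mix node blending normal maps

        # Drive the mix factor from the BrowCrinkle shape key value.
        drv = mix.inputs["Fac"].driver_add("default_value").driver
        drv.type = 'AVERAGE'

        var = drv.variables.new()
        var.name = "brow"
        var.type = 'SINGLE_PROP'
        var.targets[0].id_type = 'KEY'
        var.targets[0].id = obj.data.shape_keys
        var.targets[0].data_path = 'key_blocks["BrowCrinkle"].value'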

    The jaw bone is mostly for an offset to the jaw; if someone gets punched in the face I can slip and rotate their jaw to distort their face. I don't touch the jaw bone for speech, unless it's to pull off a specific effect, like the person talking through clenched teeth while the morphs keep opening the jaw, but that's rare.

    I use bones in the brows, the eyes, around the nose and cheeks, and 8 offset bones around the lips. I don't use the lip bones for speech, but I use them to carry disgust or to lower/raise the lips, smile and frown, that kind of stuff. So there are times when the lip bones and morphs are going at the same time, but for the most part it's one or the other driving.

    It would take a lot longer to do speech with just bones unless I had a quick way to capture certain shapes and dial them in quickly. I'm not sure I could get the same range of motion out of an all bone rig that I can get out of morphs.

    I used Voice-O-Matic; I kind of liked it, but mostly fought with it. It's not bad, but it doesn't process a lot of separate files very well. It processes best at 30fps and we animate at 15, so I had some extra steps in there. It has trouble unless people talk slowly and clearly at audiobook pace and don't have an accent, and most of our games have people with accents. It's best if you use the text type-in, which helps it figure out what shapes to make, but that's tedious.
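
    The 30fps-to-15fps step, for what it's worth, is just a 2:1 resample of the generated keys. A minimal sketch of the idea on plain (frame, value) pairs; a real pass would run over the fcurves in Max or Blender directly:

        def resample_keys(keys, src_fps=30, dst_fps=15):
            """keys: list of (frame, value) at src_fps -> sorted list at dst_fps."""
            scale = dst_fps / src_fps
            out = {}
            for frame, value in keys:
                out[round(frame * scale)] = value  # later keys win if two collapse
            return sorted(out.items())

        print(resample_keys([(0, 0.0), (2, 0.6), (4, 1.0), (6, 0.2)]))
        # [(0, 0.0), (1, 0.6), (2, 1.0), (3, 0.2)]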
  • bp72
    Really helpful information, and I sincerely appreciate it. I never thought about the fps (someone else renders), but I will check into it. If I remember correctly, past renders were at 15fps and another was at 20fps, skipping every 5th frame, and I am sure that won't do the smoothness of the animation or the timing any favors.

    So what I am taking from this is that it might be good for me to experiment with only a partial facial bone rig to add emotion on top of morph shapes, rather than a full facial bone rig (BonyFace) trying to drive all the morph shapes.

    What I thought was a good idea, given I had to clean up so many morph targets (female voice, southern accent, and inconsistent pitch), was to make a full face bone rig and, from the bone rig, create all the primary morph shapes for lip sync along with any alternative or additional morphs that might be needed.

    The reality has proven that adjusting the bone rig takes about twice as long and produces stiffer mouth shapes compared to adjusting a morph with just a few brush strokes in ZBrush.

    I am more than willing to admit my lack of knowledge is not helping either.
  • bp72
    Anyone else have any suggestions or tips?