Hi everyone,
I’m building an interactive AI avatar system in Unity using a Character Creator 5 (CC5) character exported via CC AutoSetup for Unity.
The pipeline I’m targeting is:
LLM generates text → TTS converts to audio → Unity plays audio in real-time → CC5 character performs lip sync
The TTS audio is generated dynamically at runtime (not pre-baked), so I need a lip sync solution that can run in real time inside Unity against a CC5 rig.
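For context on what I mean by "real-time": the simplest fallback I've considered is amplitude-driven lip sync, where each incoming TTS audio chunk's loudness drives a single jaw/mouth-open blendshape. Here's a rough sketch of that idea (in Python for readability; in Unity this logic would live in C#, reading samples from the playing `AudioSource` and feeding the result to `SkinnedMeshRenderer.SetBlendShapeWeight` each frame — the function names and parameter values below are just illustrative, not from any SDK):

```python
import math

def rms(samples):
    """Root-mean-square amplitude of one audio chunk (floats in [-1, 1])."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mouth_open(samples, previous=0.0, gain=8.0, smoothing=0.5):
    """Map chunk loudness to a 0..1 'mouth open' blendshape weight.

    `gain` scales quiet TTS audio up to a usable range, and `smoothing`
    lerps against the previous frame's value to avoid jittery flapping.
    Both constants are guesses you would tune per voice.
    """
    target = min(1.0, rms(samples) * gain)
    return previous + (target - previous) * smoothing
```

This obviously isn't viseme-accurate (every sound just opens the mouth), which is why I'm hoping for a proper phoneme/viseme-based runtime solution.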
My questions:
- Does Reallusion offer any native Unity runtime lip sync SDK or plugin for CC5 characters? I couldn’t find one in the store, but wanted to confirm.
- Is AccuLips usable at Unity runtime, or is it strictly an iClone authoring-time tool?
- What is the officially recommended approach for real-time lip sync with CC5 characters in Unity when the audio source is dynamic (e.g., streamed TTS)?
- Has anyone successfully integrated SALSA LipSync Suite (CrazyMinnow) or a similar third-party plugin with a CC5 character rig in Unity? Any tips on blend shape mapping or setup?
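On the blend shape mapping question specifically, my working assumption is a table from detected phonemes to weighted combinations of the character's viseme blendshapes, roughly like this (sketched in Python; the blendshape names follow the `V_Open` / `V_Wide` convention I've seen on CC3+/CC4 exports and are an assumption — they'd need to be checked against the actual blendshape list on the CC5 mesh in Unity):

```python
# Hypothetical phoneme -> viseme-blendshape weights (0..100, Unity's blendshape scale).
# Names use the CC3+/CC4 "V_*" convention; verify against your CC5 export.
VISEME_MAP = {
    "AA":  {"V_Open": 80.0},
    "EE":  {"V_Wide": 70.0},
    "OH":  {"V_Tight_O": 75.0, "V_Open": 25.0},
    "FF":  {"V_Dental_Lip": 85.0},
    "MM":  {"V_Explosive": 90.0},
    "sil": {},  # silence: relax all visemes
}

def blend_weights(phoneme):
    """Weights to apply for one phoneme frame; unknown phonemes fall back to silence."""
    return VISEME_MAP.get(phoneme, {})
```

Is this the kind of mapping SALSA expects, or does it handle CC-style viseme sets out of the box?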
Any guidance or product recommendations from the Reallusion team or community would be greatly appreciated!
Thanks in advance.