[Official] 🎬 Consistent Characters and Precise Control for AI Films and Commercial Production

In AI films and commercials, character consistency is essential — when faces or poses shift, the story loses impact.

Character Creator and iClone already offer a powerful environment for building original IP characters and developing stories around them. Whether for games, animation, or merchandise, they give creators full control over design, rigging, and performance.

Now, with AI Render, your characters can go even further — into AI-generated movies, commercials, and stylized storytelling, with full consistency across frames. By combining 3D animation data, LoRA, and ControlNet inputs, AI Render lets you build a creative universe where your character looks and moves like a real person — across any medium.


:dna: Keep Characters Consistent with LoRAs You Own

LoRAs (Low-Rank Adaptation models) are essential for keeping your character’s identity consistent across scenes, angles, and styles. While LoRA training is done externally, creating a good training dataset is often the hardest part — traditionally requiring hours of manual posing, camera setup, consistent lighting, and organizing reference images.

iClone solves this with speed, structure, and automation:

  • Batch-render hundreds of images across varied angles, motions, and expressions
  • Use pose libraries, motion clips, and automated cameras for broad scene coverage
  • Control shot diversity (close-up, medium, wide) to enrich your dataset
  • Maintain consistent lighting and facial detail with intuitive scene tools

Whether you’re building one character or an entire cast, iClone gives you a fast, repeatable way to create high-quality LoRA datasets.

:small_red_triangle:This video walks you through the LoRA training workflow — from generating a dataset in iClone to training the LoRA externally on RunPod.

:page_facing_up: To make it even easier, we provide a template project that generates a full LoRA training dataset — producing 46 images automatically, with no manual setup required. Simply replace the sample character (Kevin) with your own, then go to Frame 0 and adjust your character’s height to match the original camera position. No changes to the render settings are needed — just start the render sequence.
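Once the images are rendered, most external trainers expect them in a specific folder layout. Here is a minimal sketch of organizing the output, assuming the kohya_ss-style `<repeats>_<name>` folder convention (the folder naming, repeat count, and paths are assumptions, not Reallusion requirements; check your trainer's documentation):

```python
import shutil
from pathlib import Path

def build_lora_dataset(render_dir: str, out_dir: str,
                       character: str, repeats: int = 10) -> Path:
    """Copy rendered PNG frames into a kohya_ss-style training folder.

    kohya_ss expects subfolders named '<repeats>_<name>'; adjust the
    convention to whatever your chosen trainer requires.
    """
    target = Path(out_dir) / f"{repeats}_{character}"
    target.mkdir(parents=True, exist_ok=True)
    for i, img in enumerate(sorted(Path(render_dir).glob("*.png"))):
        # Rename to a predictable sequence so captions can be paired later
        shutil.copy(img, target / f"{character}_{i:04d}.png")
    return target
```

Run it once on the folder produced by the template project's render sequence, then point your trainer at the output directory.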


:brain: Apply and Combine LoRAs Inside ComfyUI

Once trained, apply your LoRAs directly in AI Render to:

  • Maintain character identity across shots and styles

  • Combine multiple LoRAs to swap costumes, moods, or design variations

  • Layer LoRAs with other AI controls for expressive, style-consistent animation
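For readers curious what stacking looks like at the node level, here is a minimal ComfyUI prompt-graph fragment built in Python (the node IDs, file names, and strengths are placeholders; the `LoraLoader` inputs follow ComfyUI's standard API format, not a Reallusion-specific one):

```python
import json

# Each LoraLoader takes the model/clip outputs of the previous node,
# so LoRAs stack in series: identity first, then costume/style variants.
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15_base.safetensors"}},
    "2": {"class_type": "LoraLoader",  # character identity LoRA
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "my_character.safetensors",
                     "strength_model": 0.9, "strength_clip": 0.9}},
    "3": {"class_type": "LoraLoader",  # stacked costume/style LoRA
          "inputs": {"model": ["2", 0], "clip": ["2", 1],
                     "lora_name": "costume_variant.safetensors",
                     "strength_model": 0.6, "strength_clip": 0.6}},
}
payload = json.dumps({"prompt": graph})  # body for ComfyUI's /prompt endpoint
```

Lowering the second LoRA's strengths keeps the identity LoRA dominant while still applying the variation.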


:jigsaw: Beyond Prompts: Precise 3D Inputs and Camera Control

While LoRAs handle identity, ControlNet handles structure. And unlike 2D-extracted inputs, which are prone to flickering and failure, AI Render delivers precise ControlNet maps from real 3D data.


With over 13,000 ready-to-use 3D assets — including characters, motions, cameras, and lighting — all easily searchable with the built-in Smart Search, you can drag and drop everything you need in seconds. Instead of building inputs manually, you can pose, animate, and light full 3D scenes in minutes, creating a solid, production-ready foundation for generating precise and stable 3D ControlNet inputs.

Here’s what sets each mode apart:

:mag: Depth — Shot-Adaptive Depth for Expressions and Framing

  • Generated from the actual 3D mesh — not guessed from pixels

  • Lets you switch between wide shots and close-ups with full spatial accuracy

  • Crucial for capturing subtle facial expressions, delivering precise results when paired with iClone’s facial editing


:small_red_triangle:The depth map can be adjusted based on shot type (portrait, mid-shot, or wide scene)

:small_red_triangle:Facial expressions applied using iClone’s Facial Expression Panel can be precisely rendered

:man_standing: Pose — Full Skeletal Tracking, Even with Occlusions

  • Maintains joint tracking even if faces, hands, or limbs are covered, turned away, or off-frame

  • Prevents skeleton drift in fast motion, group shots, or partial camera views

  • Especially critical in multi-character scenes like fight choreography, emotional staging, or crowd interactions

:small_red_triangle:In this case, Reallusion’s 3D pose accurately captured the hand gesture, even though the original input had the hand partially out of frame

:small_red_triangle:Ensures high accuracy of character interactions in motion sequences

:bulb: Normal — Clean Lighting and Form

  • Extracted from the true 3D surface

  • Delivers stable lighting and shading across frames

  • Enhances clarity and visual depth — especially when combined with iClone’s real-time light controls

:small_red_triangle:The effect of iClone’s lighting on characters is accurately reflected in the AI-generated results.

:pencil2: Edge (Canny) — Stable Outlines for Stylized Shots

  • Retains consistent outlines across frames

  • Ideal for anime or hand-drawn effects

  • Works best when paired with Depth for layered, line-based rendering


:small_red_triangle:Edge and Depth clearly define structure and spatial relationships, even in simple layouts with just basic shapes (Prompt: Oil Painting,Bearly,masterpiece, best quality, cyberpunk alley, glowing neon signs, wet floor, vending machines, old pipes, rusty walls, robot citizens, futuristic graffiti, orange and teal lighting, cinematic atmosphere, high detail)

:dart: Pro Tip:

You can combine multiple ControlNet inputs for stronger control over both character and scene. Each input can be fine-tuned with threshold sliders to match your desired influence and visual style — giving you true creative control over both structure and look.

Pair this with iClone’s powerful camera tools — including framing, lens control, and automated camera paths — to precisely shape composition, perspective, and cinematic storytelling in your AI generation.
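For those curious what combined inputs look like under the hood, here is a hypothetical prompt-graph fragment chaining depth and edge conditioning at different strengths (the node IDs, upstream references, and exact node names are illustrative assumptions; AI Render's generated workflows may differ):

```python
# Two ControlNetApplyAdvanced nodes in series: the second takes the
# first's conditioning outputs, so both controls influence the result.
controls = {
    "10": {"class_type": "ControlNetApplyAdvanced",
           "inputs": {"positive": ["5", 0], "negative": ["6", 0],
                      "control_net": ["7", 0], "image": ["8", 0],  # depth map
                      "strength": 0.8,
                      "start_percent": 0.0, "end_percent": 1.0}},
    "11": {"class_type": "ControlNetApplyAdvanced",
           "inputs": {"positive": ["10", 0], "negative": ["10", 1],
                      "control_net": ["9", 0], "image": ["12", 0],  # canny edges
                      "strength": 0.5,
                      "start_percent": 0.0, "end_percent": 0.8}},
}
```

The per-node `strength` values play the role of the threshold sliders mentioned above: a strong depth input fixes spatial layout, while a weaker edge input adds outline stability without overpowering the style.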




:small_red_triangle:This level of accuracy supports a wide range of styles, camera angles, and scene layouts

:small_red_triangle:Ensures frame-level consistency in video sequences, even with constantly moving cameras.


:gear: Custom Workflows, Custom Models — Full Creative Freedom

AI Render plugs directly into the ComfyUI ecosystem through Reallusion’s custom nodes — offering flexibility for both beginners and pros.

:white_check_mark: Build Custom Workflows

  • Thanks to the Reallusion nodes, you can freely modify your ComfyUI pipeline — add, remove, or rearrange nodes — and your changes are reflected inside iClone and Character Creator. This two-way connection gives you visual feedback and full node-level control — no extra setup needed.

:blue_book: A step-by-step guide is available, showing how the Reallusion nodes work with the UI in CC and iC, and how to save your custom workflows as AI Render presets — so you can reuse your favorite setups instantly. We also provide a Flux image-to-image sample workflow, which you can use as a reference or apply directly to get started with high-quality results right away.

:white_check_mark: Use High-End AI Models via Cloud

  • AI Render runs locally by default using Stable Diffusion 1.5, Wan2.1 Fun 1.3B Control, and Wan2.1 VACE 1.3B. For creators seeking greater power or cinematic detail, AI Render is fully compatible with advanced third-party models like Flux, HiDream, FusionX, and more. By pairing it with cloud-based GPU platforms like RunPod or RunComfy, you can break through local hardware limits — unlocking high-resolution rendering, batch generation, and full-scale commercial production, even on modest machines.

:loudspeaker: Please review the commercial usage terms of each AI model before using them in your projects.

:globe_with_meridians: The Full Pipeline — Built Around You

From rigged character creation to expressive animation, from LoRA dataset to AI video generation — Character Creator, iClone, and AI Render provide a unified pipeline you own.

Use it to:

  • Scale your own IP with creative control

  • Build cinematic-quality characters and scenes

  • Bring high-end AI video to your studio workflow


:art: Just Getting Started or Focused on Styling?

Head over to [Fully Customizable 2D, 3D, and Photorealistic Rendering Styles]

Explore professionally tuned presets, easy prompt customization, and intuitive style tools — no node editing required. It’s the perfect place to begin if you want fast, creative results with your characters.


I do not get the main settings window when I click on AI Render in both CC4 and iClone 8.

OK, this is exactly what I was looking for!

So we can use SDXL?

Hi, joelraechamp85
We don’t provide built-in SDXL workflows, but you can use your own SDXL workflow through the Reallusion Custom Node and create your own custom AI Render Tool.

For tips on setting up a custom workflow, please check this guide:
:link: Official: Maximize AI Render Performance – Custom Workflow Setup Tips

Hope it works for you.

Hi,
I am having problems with LoRA training for iClone 8 AI.
I don’t have much experience with AI.
When I want to render in iClone AI to train it,
I am asked to create 60 frames,
but the project seems to be locked and only has 45 frames.
Is there a tutorial covering training just with iClone 8 AI rendering,
without using any other external AI program?
Kind regards Robert.

Hey rosuckmedia,

If only it were that simple - train in iClone with AI :slight_smile:

RL included that project only to gather images for external LoRA training.
You have to render them normally (no AI) as image sequence from iClone and then use them for LoRA training one way or another.

I actually tried LoRA training with one of the recommended methods in the other thread, which was to use ComfyUI locally, but I just wasted time and failed miserably. After following all the steps as directed, I started getting errors when I ran it, and there was no way to troubleshoot. I suspect it was possibly because we have a custom ComfyUI from RL (or I might’ve been doing something wrong due to lack of experience).

I am not wasting any more time on this until we get an officially recommended method and a respective full-featured tutorial from RL.

But maybe you will get more luck than I had :wink:


@4u2ges
Thank you very much :+1: , I understand now.
Perhaps there will be tutorials on this soon.
Greetings, Robert

Hello, rosuckmedia,

Regarding the issue where the export locks at frame 45, please check the FPS and End Frame settings in the Project Panel of iClone or Character Creator (go to Edit → Project Settings).

AI Render requires at least one second of video to export. So, if your project FPS is set to 60, make sure the End Frame in both the Project Panel and the AI Render Panel is set to greater than 60.
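The rule can be sketched as a quick check (the function names here are illustrative only):

```python
def min_end_frame(fps: int, seconds: float = 1.0) -> int:
    """AI Render needs at least one second of video, so the End Frame
    must be at least fps * seconds."""
    return int(fps * seconds)

def export_ok(fps: int, end_frame: int) -> bool:
    # True when the project is long enough for AI Render to export
    return end_frame >= min_end_frame(fps)
```

At 60 FPS, a project ending at frame 45 fails the check, which matches the locked export you saw; raising the End Frame past 60 resolves it.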

If it still doesn’t work for you, please feel free to reach out.

@Maelyn_RL

Thank you very much, I will try it out. :+1:
Best regards, Robert

Hello, rosuckmedia

I believe there was some misunderstanding in my previous reply.
The information I shared was related to the general rules for rendering videos in AI Render.

However, since your project is intended for training a LoRA model, you should use the Render Panel in iClone or Character Creator to render the training images instead.

To do this, go to:
Render menu → Render Image → Sequence
This will allow you to export a sequence of images for training.

Apologies for the confusion.

Can this work if you have an existing installation of comfy?

Hello, contact_329118
Do you plan to connect your own workflow through iClone / Character Creator, or would you like to try our built-in features using your existing ComfyUI setup?

Here are two solutions for different purposes.
1. Connecting to Your Own ComfyUI Workflow
If you want to connect iClone or Character Creator to your own ComfyUI setup, you can use the LAN Server option in the Settings Panel.

You can refer to this guide to create your own AI Tool preset:
:link: [Official]📶Maximize AI Render Performance: Custom Workflow Setup & Tips - #2 by fareast1

Here is our Reallusion Custom Node:
:link: GitHub - reallusion/ComfyUI-Reallusion
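Before pointing iClone or Character Creator at a LAN server, it can help to confirm the ComfyUI instance is reachable. A small probe, assuming ComfyUI's default port 8188 and its `/system_stats` endpoint (adjust both to match your actual server settings):

```python
import urllib.request
import urllib.error

def comfyui_url(host: str, port: int = 8188,
                path: str = "/system_stats") -> str:
    # 8188 is ComfyUI's default port; change it if your server differs
    return f"http://{host}:{port}{path}"

def is_reachable(host: str, port: int = 8188,
                 timeout: float = 3.0) -> bool:
    """Return True if a ComfyUI instance answers on the given host/port."""
    try:
        with urllib.request.urlopen(comfyui_url(host, port),
                                    timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If the probe fails, check firewalls and whether ComfyUI was launched with `--listen` so it accepts LAN connections.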

2. Trying Our Built-In Features
If you want to try our built-in AI Render features, go to the RL_ComfyUI folder and copy the following files into your ComfyUI directory:

  • run_comfyui.vbs

  • check_ap_status.ps1

  • stop_comfyui.bat

After that, you can select the Local option in the Settings Panel to run ComfyUI directly.

If any embedded presets cause errors in your ComfyUI, check your history folder and review the Workflow_Pre.json or Workflow_Gen.json files to ensure all required custom nodes are installed.

We will soon provide all the necessary custom nodes for our embedded presets, along with an official workflow guide for connecting to your own ComfyUI.


Yes, but also keep in mind that when you install the plugin you are getting another copy of ComfyUI. The new RLComfyUI copy can be launched in standalone mode to reduce memory usage from CC4 or iClone.
Using it in this way you can also add custom nodes and models to the RLComfy.
I have added a workflow that uses screen capture to provide source images or video for Comfy to process.


Is there a link to a recommended Reallusion LoRA trainer (ComfyUI) workflow (*.json)?

I just wanted to know how to use RunPod. I have my ComfyUI running on RunPod. How can I use it with Reallusion AI Render?

Hi all,

Registration is now open for our first AI Render webinar, happening Aug 22, 2025 (PST/PDT)!
In this session, we’ll show you how to set up Flux 1 Dev on cloud GPU and connect it with iClone, unlocking production-level AI image generation with precise 3D guidance.

Webinar Outline:

  1. Launching Cloud GPU & Connecting to iClone
  2. Adding a Custom Flux 1 Dev Workflow to iClone
  3. Flux 1 Dev Overview
  4. Image Workflow Breakdown – LoRA, ControlNet, and recommended render settings for character consistency
  5. Q&A Session

This is the first of three webinars in our AI Render series, and all three are open for registration now. Each session explores a different advanced AI workflow, so be sure to join them all.

:link: Learn more and register here: Reallusion Courses - Free Online Tutorials for 2D & 3D Animations


So where do I put these screenshots for Lora training? The video just runs over things so quickly.


Can someone take us through the lipsync workflow. Getting great results apart from matching the lip movements, teeth and mouth movements with the character.


Hi all,

We now have cloud-GPU templates for AI Render on RunComfy & RunPod — making it easy to skip setup and run workflows directly in the cloud. They fully support our preset workflows, plus the Flux image and WAN video workflows coming in the webinars.

:bulb: Thanks to RunComfy, there are also special offers for Reallusion users (credits & subscription discounts).

:point_right: More details here: [Cloud-GPU Templates for AI Render — Powered by RunComfy & RunPod]