[Official] 🎉 Welcome to AI Render Open Beta – Start Here!

RTX Pro 6000 (96 GB VRAM)

3 Likes

I suppose one could always render at minimal resolution and then upscale using Topaz. With my 2060S and paltry 8 GB of VRAM I'll have no other choice! Or there's RunPod?

Tested the AI Render Beta feature. I think it’s pretty interesting. I hope it improves a lot, but it’s a good start!

Well, that's just plain unfair… :joy:

3 Likes

After using AI Render for a while, I found the issues below.

  1. If the character holds a sword, the sword's direction is not reflected correctly in the AI-rendered video. This may be because the ControlNet pose and canny maps can only control the character, not the prop (see the sketch after this list). The other issue is that the character often looks like it is holding a gun rather than a sword.

  2. The initial image for video creation is made with the leosamsFilmgirlUltra_ultraBaseModel model. It is not selectable, or maybe I just don't know how to select it. As a result, the initial image for video generation is not good enough. If this model could be changed to Flux or another better model, the words shown in the video would be correct, and the overall character appearance and scene would be much better and more accurate, even if it sacrificed some speed.

  3. Although the normal, pose, and depth maps are output correctly, the AI render cannot follow the character's motion when the character jumps up. The AI changes it to a different motion.

  4. The Wan model for video generation is very good. Even when the character's hair did not move in the original non-AI render, it moves and sways lively in the AI-rendered video.
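For anyone experimenting with the same idea outside the plugin, here is a minimal, hypothetical sketch (not the plugin's actual pipeline) of how pose and canny ControlNets are typically stacked in a diffusers-style setup. The base model, file names, and weights are placeholders; the point is that the pose map only constrains body joints, so a prop like a sword is only constrained through the canny (edge) map, and raising its weight is one knob to try.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Hypothetical file names; in practice these are the maps exported from the 3D scene.
pose_map = Image.open("openpose_frame.png")
canny_map = Image.open("canny_frame.png")

pose_cn = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16)
canny_cn = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # stand-in base model, not the plugin's checkpoint
    controlnet=[pose_cn, canny_cn],
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a warrior holding a long sword raised overhead, cinematic lighting",
    image=[pose_map, canny_map],                # one conditioning image per ControlNet
    controlnet_conditioning_scale=[1.0, 0.9],   # weighting canny higher helps props follow edges
    num_inference_steps=20,
).images[0]
image.save("ai_frame.png")
```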

Insane ❣️
I want that too…

2 Likes

That is true :crazy_face: But I did all the videos above with a 4090 and quantized models (and I OOMed a lot; OOM = out of memory).
Now I can use the full models, and I am not using it for a hobby :sunglasses:

BTW, SuperRob is currently on vacation :grin:

Rob_vacation

3 Likes

Testing the Wan2.1 Fantasy Portrait model :joy:

4 Likes

Since we're comparing AI renders, what about this one created in Sora based on the same image I rendered in iClone? It looks pretty decent, except for the guitar. The woman in the picture is playing a left-handed Fender Strat, but Sora rendered the guitar with the headstock up instead of down.

I play the guitar left-handed, and the digital character I made from Genesis using FaceGen is my “digital avatar” that I sometimes use for official video clips for my band, Cynthia & the Digital Bunnies.

I love the concept of rendering AI images directly from iClone, which is my one-stop tool for making animations. However, to get the results I need, some work is required.

I’m currently rendering an AI video from an iClone scene and will share it with you when it’s ready.

LOL. You’re getting better results than I am. Check out the result of my attempt with a custom-made character (Genesis 9).


The shirt should read: Cynthia & The Digital Bunnies, but that’s all messed up.

The second picture I tried was the same digital woman (I call her Sybil), along with another custom-made character who plays guitar.

As you can see, it was a total mess. Here’s a comparison of the original image rendered directly in iClone:

Hi guys, as always, you’re pushing the industry forward! I’ve been actively building my pipelines on Comfy, integrating with your software—and boom, you went ahead and made the integration yourselves.

Thanks a lot.

The only current inconvenience with your installer is that when installing the plugin, there’s no checkbox saying “I already have ComfyUI.” I had to install your version of Comfy alongside the one I already had on my system. :slight_smile:

1 Like

You can try playing with an option called AI Creativity and turning it down. I had to do that with one of my images that I wasn't happy with.

I’m referring to my post here:

Exactly what @animagic says. Turn it down and find a balance.
If there are no LoRAs, it's extremely hard to find a balance where you get perfect fingers (and tuning machines, for that matter) and at the same time create a distinct new look for the characters. Also, add lights to the fingers. Camera angle adds a lot as well. In the end, you may have to spend hours and still have to Photoshop the details to get good results.

This is an unedited AI result from your image. I added a spotlight to your iClone image in Photoshop, then made it 4K in Topaz. (A rough code equivalent of the settings below is sketched after the images.)
In iClone I added it as an Image Layer, Cinema mode, Canny (Edge) set to balanced.
Positive: two girls in shiny leather cloth standing together in an abandoned alleyway during sunset.
Negative: gloves, glasses

12 samples
Prompt: 3
Denoise: 0.5

Render: 2560x1440

iClone - image layer

AI
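For reference, here is roughly what those settings map to in a generic diffusers img2img + Canny ControlNet pass. This is a hypothetical sketch, not what iClone runs internally: the checkpoint and file names are placeholders, and I am assuming the Prompt value behaves like guidance scale.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16)

pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # stand-in for whatever checkpoint the plugin uses
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("iclone_image_layer.png")   # the iClone render used as the image layer
canny = Image.open("canny_edges.png")         # Canny (edge) map extracted from the same frame

result = pipe(
    prompt="two girls in shiny leather cloth standing together in an abandoned alleyway during sunset",
    negative_prompt="gloves, glasses",
    image=init,
    control_image=canny,
    strength=0.5,             # "Denoise: 0.5" (img2img actually runs steps * strength denoising steps)
    num_inference_steps=12,   # "12 samples"
    guidance_scale=3.0,       # assuming "Prompt: 3" acts like CFG / prompt weight
).images[0]
result.save("ai_pass.png")
```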

Now here is a very cool video on enhancing the workflow, along with some key information on what models and LoRAs to use based on your VRAM.

Wan 2.2 ComfyUI Guide: The ULTIMATE Speed vs Quality Tutorial

LookingGlassGraphics has discovered something interesting: [Official] 🪄 AI Makeover Community Challenge – Free CC5 Awaits! - #55 by LookingGlassGraphics

That’s amazing, thanks!
And I was wondering why it sometimes renders super fast and other times very slowly.

Basically, when you render AI with the focus on the iClone window, it runs almost three times slower than when you keep the focus away from iClone.

Here I made a quick demo. This is something RL should make a note of.

2 Likes

Nice! Thanks for the share. Try it with iteration detail scaled to 100% as well. I haven't tried it with video rendering yet, but I'm about to.

1 Like

Thanks again :slight_smile:
About steps: the default Steps value is enough for all of the models (12 for Photo/Cinema/Film and 30 for the rest, I believe). Test it yourself: set a fixed Seed and render with 12 steps and then with 100. You will barely see any difference in detail. Moreover, for video you want to keep steps at the bare minimum, or else it will take hours to render a 10-second clip.
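If you want to run that seed test outside iClone, here is a minimal diffusers sketch (the model, prompt, and seed are arbitrary placeholders): fix the generator seed, vary only the step count, and compare the two outputs side by side.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a woman in an abandoned alleyway at sunset"

for steps in (12, 100):
    # Re-seed before each run so both renders start from the same noise.
    gen = torch.Generator("cuda").manual_seed(1234)
    img = pipe(prompt, num_inference_steps=steps, generator=gen).images[0]
    img.save(f"steps_{steps}.png")
```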

Thanks for the heads up. I’m going to give it a shot now. There may be a few more tricks and tips to do with this new rendering system.

Angry Lego. The image of a human face in the corner is a must; otherwise everything comes out distorted.

iClone shot

AI shot

2 Likes

Simply put, to maintain on-screen consistency you need one or more frames before and one or more frames after the shot you are running through the AI image re-render. The more frames to approximate from, the better (especially for a dynamic scene).

Basically, just like in Unreal with DLSS on: it takes a few frames, scales them to 720p, overlays them to approximate the final current image, and this new frame gets upscaled to 4K or whatever the screen resolution is.
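Here is a toy numpy sketch of that accumulation idea, not how DLSS or the plugin actually works: each new frame is blended with the running history so per-frame AI variation averages out; a real temporal pass would also motion-warp the history using velocity vectors before blending.

```python
import numpy as np

def accumulate(current, history, alpha=0.15):
    """Blend the current frame into the running history (simple EMA, no motion warp)."""
    return current if history is None else alpha * current + (1.0 - alpha) * history

# Demo with random "frames"; in practice these would be the re-rendered AI frames, in order.
frames = [np.random.rand(720, 1280, 3).astype(np.float32) for _ in range(8)]

history = None
for frame in frames:
    history = accumulate(frame, history)

# `history` is now the temporally smoothed frame you would hand off to the upscaler
# (a real DLSS-style pass would also reproject it with per-pixel motion vectors).
```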

1 Like