Rendering in Cinema 4D with Redshift

To avoid adding too many posts to the “iClone 8 Render Engine” thread, I’m starting this new topic in the hope that C4D- and Redshift-related posts can be better gathered here.

In view of the discussion about various global illumination (GI) modes in Redshift, I have run a few renders with different settings. The scene is an interior view of a diner, with the following features:

  • 2+ million polys without instancing, 6+ million with
  • There are a number of (practical) lights in the scene, plus a sun-and-sky rig for the exterior
  • The exterior visible through the windows is mesh-based (not an HDRi)
  • For the following renders, the same output size (3840x1644) and render settings were used; the only change was the GI mode for the primary and secondary methods
  • The render settings include post-render denoising with OptiX (built into RS), plus a light application of RS post effects; no further post work outside of C4D
  • All render times are in seconds on a single RTX4090

Image #1, No GI, 84 seconds

Image #2, Brute Force for primary and secondary method, 149 seconds

Image #3, Brute Force for primary, Irradiance Point Cloud for secondary, 242 seconds

Image #4, Irradiance Cache for primary, Irradiance Point Cloud for secondary, 296 seconds (i.e., almost 5 minutes)

Render settings:

[screenshot: RS_SS_02]

Conclusions:

  • No GI is obviously the fastest, but also looks the “worst” (relatively speaking)
  • Brute Force for primary and secondary GI beats any other combination of GI methods for THIS scene hands down (relative numbers in the quick sketch below)
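To put those timings into relative terms, here is a trivial Python sketch using nothing but the numbers above (same resolution, same RTX4090, denoising on in all four cases):

```python
# Render times from the four test renders above, in seconds.
times = {
    "No GI": 84,
    "Brute Force / Brute Force": 149,
    "Brute Force / Irradiance Point Cloud": 242,
    "Irradiance Cache / Irradiance Point Cloud": 296,
}

baseline = times["No GI"]
for mode, seconds in times.items():
    print(f"{mode:45s} {seconds:4d} s   {seconds / baseline:.1f}x the No-GI render")
```

Run that and the full Brute Force combination comes out at roughly 1.8x the No-GI time, while the cached combinations land at roughly 2.9x and 3.5x, which is why Brute Force/Brute Force wins for this scene.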

My understanding, however, is that I “fix” a range of samples and, based on the (quality) Threshold parameter I set, RS then decides how many samples are actually used within that range. If I want a quick preview, I can increase the threshold (higher values mean lower quality/more noise in the image, and thus faster renders); for final renders I decrease the value and/or use denoising.
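For anyone who wants to see what that threshold-driven sampling boils down to, here is a minimal, purely conceptual Python sketch (not Redshift’s actual implementation, just the general idea of adaptive sampling within a min/max range):

```python
import random

def render_pixel(sample_fn, min_samples=16, max_samples=512, threshold=0.01):
    """Toy adaptive sampler: keep averaging samples until the noise estimate
    (standard error of the mean) drops below the threshold, or the cap is hit."""
    values = [sample_fn() for _ in range(min_samples)]
    while len(values) < max_samples:
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
        if (var / len(values)) ** 0.5 < threshold:
            break                       # pixel is clean enough, stop early
        values.append(sample_fn())      # still noisy, spend another sample
    return sum(values) / len(values), len(values)

noisy_pixel = lambda: random.gauss(0.5, 0.2)       # stand-in for a noisy shading result
print(render_pixel(noisy_pixel, threshold=0.01))   # "final" quality: many samples
print(render_pixel(noisy_pixel, threshold=0.05))   # "preview" quality: far fewer samples
```

A lower threshold keeps the loop running longer per pixel (cleaner, slower); a higher threshold lets it bail out earlier (noisier, faster), which is exactly the preview-vs-final trade-off described above.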

I have no personal experience with Arnold, so I can only speak in general terms:

  • I’d rather have a render engine that I can tweak according to my or the scene’s requirements, rather than one that goes the whole hog all the time (if that is what Arnold does).
  • Without having done any comparisons myself, I would still think that Redshift’s render settings could also be tweaked to a “Hollywood film” level (at the cost of render speed, of course). However, if I’m planning to publish my content on platforms like YouTube or Vimeo, the question is whether such quality is actually necessary. (In my experience, YouTube cannot even give me the same quality I see in my original, non-Hollywood-quality renders once the video is uploaded and re-compressed by YT; I haven’t uploaded anything to Vimeo in ages.) So, yeah, as soon as Hollywood starts beating a path to my door, I’ll consider engines like Arnold. :wink:

I have no personal experience with Arnold, so I can only speak in general terms:

I’d rather have a render engine that I can tweak according to my or the scene’s requirements, rather than one that goes the whole hog all the time (if that is what Arnold does).

Arnold was literally built for use on massive render farms, but while it is unbiased by default, we single users have a wide variety of settings we can adjust based on the type of scene we are rendering.

My understanding, however, is that I “fix” a range of samples and, based on the (quality) Threshold parameter I set, RS then decides how many samples are actually used within that range. If I want a quick preview, I can increase the threshold (higher values mean lower quality/more noise in the image, and thus faster renders); for final renders I decrease the value and/or use denoising.

That is what I love about Blender Cycles: not only can we set the number of samples independently for diffuse, specular, reflections, and volumes (like with Arnold), but we also have two really good denoising options for even more speed, as well as the option to literally set the seconds/minutes to “cook” per frame, making it easier to predict how long our machine will be tied up with a particular render.
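For reference, the two features mentioned here (the per-frame time budget and the choice of denoiser) can be set from Python in a recent Blender (3.x/4.x); this is just a minimal sketch with made-up values:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Cap the time Cycles may spend "cooking" each frame (seconds); 0 disables the limit.
scene.cycles.time_limit = 120
scene.cycles.samples = 1024            # upper bound; the time limit usually cuts it short

# Pick one of the two built-in denoisers for the final render.
scene.cycles.use_denoising = True
scene.cycles.denoiser = 'OPTIX'        # or 'OPENIMAGEDENOISE'
```

With a fixed time limit per frame, the total render time for an animation is at worst frames x limit, which is what makes the schedule predictable.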

But personally I am fine with EEVEE for my personal 3D animation work for YouTube etc., and these days I am mostly doing super-fast NPR 3D and/or outright 2D for comics and graphic novels.

I don’t have a render farm… But if you can adjust the settings to your needs as a single user, all is good.

So does RS. On the Sampling tab, under Overrides, you can either define a multiplier (e.g. 4x the standard sample rate) or set the samples in absolute terms (for SSS, I usually go to 4096 or 8192, while the regular sample rate tops out at 512).

OK, the time to “cook” per frame cannot be set in RS (don’t think I’d be using that one if it were available).

RS has 3+ denoisers: OIDN, OptiX, Altus (as single and dual). Personally, I only ever use OptiX or OIDN depending on the scene.
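If anyone prefers to script those overrides rather than click through the Sampling tab, the usual pattern in C4D’s Python is to grab the Redshift video post from the render settings and set its parameters there. The sketch below only uses generic C4D calls; the actual Redshift description IDs vary by version, so none are hard-coded here, and you should drag the parameter from the UI into the Python console to get the real ID for your installation:

```python
import c4d

def get_redshift_videopost(doc):
    """Walk the render settings' video posts and return the Redshift one (matched by name)."""
    vp = doc.GetActiveRenderData().GetFirstVideoPost()
    while vp is not None:
        if "redshift" in vp.GetName().lower():
            return vp
        vp = vp.GetNext()
    return None

doc = c4d.documents.GetActiveDocument()
rs = get_redshift_videopost(doc)
if rs is not None:
    # The sampling overrides (e.g. the SSS override) and the denoising engine live
    # as parameters on this video post. Their numeric IDs differ between Redshift
    # versions, so grab the ID by dragging the parameter into the console, then:
    #     rs[<that_id>] = 4096
    print("Found Redshift video post:", rs.GetName())
    c4d.EventAdd()
```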

Just speaking from an animator’s perspective, I don’t understand why this scene needs to be 2+ million polys.

I see a lot of fairly simple geometry and no objects with highly “sculpted” detail, with the possible exception of the vinyl booth seat cushioning, and that could easily have been accomplished with normal maps.

Also, I don’t see the point of having detailed geometry outside the windows unless there is some vital story/narrative reason.

Also, even though I have multiple machines and could theoretically commit to “days-long” render times, all of the render times posted here would still be unacceptably long for my personal preference, on a minutes-per-frame scale, in an animation project.

Well, I guess Evermotion (the company that made this asset/scene) figured that everything should also look good up close. It works for me; I absolutely hate the low-poly look I find in so many iClone renders. And I appreciate being able to get close to things if I want to, without them “breaking apart” visually. This is an asset for “photorealistic” archviz, not a stylized, low-poly set for iClone.
For example, the bottle lamps have 47K polys each (instanced, of course) and are refractive (even the filaments are modeled); the napkins on the table are 17K each, and so on.

And, no, you couldn’t get the same look for the upholstery with normal maps, at least not up close, unless those maps were used to actually deform the geometry at render time (i.e., as displacement via hardware tessellation).

Although the stuff outside the windows is not all that detailed, I’m happy Evermotion did it with meshes. If they had used an HDRi or some other kind of “flat” backdrop out there, parallax would not work properly once the camera is in motion. Also, let’s say I had people walking in the street outside the windows: their reflections (e.g. in the windows on the other side of the street) and shadows wouldn’t work properly either if the backdrop were a “flat” plane.
Also, if you want to use DOF, the background elements cannot all be at the “same” distance from the camera for the DOF to look realistic.

So, yeah, except for skies (devoid of any buildings, mountains, trees, etc.), I always use mesh-based backgrounds and not HDRis.
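The parallax point is easy to put numbers on. A back-of-the-envelope sketch with made-up distances: slide the camera one metre sideways and look at how far objects at different depths appear to shift; with a mesh exterior the shift falls off with distance (which reads as depth), while a flat backdrop puts everything at a single distance, so everything shifts by the same amount and the view looks like a postcard the moment the camera moves:

```python
import math

def parallax_shift_deg(camera_move_m, distance_m):
    """Angular shift of a static object when the camera translates sideways."""
    return math.degrees(math.atan2(camera_move_m, distance_m))

move = 1.0  # camera slides 1 m sideways (illustrative value)
for label, dist in [("lamppost across the street", 10.0),
                    ("building facade", 40.0),
                    ("distant tower", 200.0)]:
    print(f"{label:28s} at {dist:5.0f} m shifts by {parallax_shift_deg(move, dist):5.2f} deg")

# Mesh exterior: ~5.7 deg for the near object, ~0.3 deg for the far one -> visible parallax.
# Flat backdrop: all three sit at the same distance, so all three shift identically.
```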

BTW: I did not put this scene together; I just converted the materials for use in Redshift and did a little optimization with regard to render times. Other than that, the scene is pretty much the way it came out of the “box”.

Unless I have an absolutely trivial scene at 1080x1920 (for YT shorts), I don’t see 1-minute-per-frame render times (not even on the RTX4090). I have learned to accept that, because what you call NPR renders are not really an option for me.

Of course, anything imported from iClone (including the characters, to be honest) is never going to look truly “realistic” in a pro render engine like Redshift, V-Ray, Arnold, Octane, or Unreal Engine.

Even the higher-quality Daz Genesis figures fail the “realism” test despite being brute-force path traced in the old non-RTX version of Iray that Daz users cling to.

I was not even aware that Evermotion was still in business, but I can see why their archviz products are so heavy, as they are not really made for animation/games.

Render times are relative depending on your objective.

But I can see why Aurora Trek’s latest stylized production is not
being rendered with Redshift.

I don’t get that. OK, I suppose for games, low-poly assets may be of some benefit to keep things at 150+ FPS, but for non-real-time animations? Who cares if a render takes a few seconds longer; I’d much rather have the detail that a higher poly count gives me. So, I’m happy that Evermotion has not felt the need to produce low-poly junk (at least at the time when they created this diner scene a few years ago). There are enough low-poly assets out there (not just on the RL platforms but also on other 3D platforms).

The poly count in this scene is IMO not the decisive factor with regard to render times; I have had higher-poly scenes that rendered faster. What matters here is stuff like refractions (including lights in a refractive “container”), the number of individual lights, and, of course, the quality settings, not the resolution of the meshes. Also, interiors with GI almost always render slower than exterior shots (with a similar poly count and render settings).

Image #1, same settings, refractions off, 32 seconds

Image #2, same settings, reflections off, 121 seconds (i.e. 28 seconds faster than with reflections)

Image #3, same settings as Image #1 in the first, original post of this thread, but with the threshold way higher (i.e., fewer samples, lower quality), 37 seconds

The poly count is unchanged in all three examples, and while the 37-second render is 4x as fast as the “standard” version, the quality is not what I would consider really acceptable. (It does not show too much in the images here, but that is because (a) these images are just 25% of the original resolution (downscaled by the forum software) and (b) the denoiser got rid of a lot of noise in the original render.)
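To make at least the reflections number concrete (that test has an explicit baseline, since 121 seconds is stated to be 28 seconds faster than the render with reflections, i.e. the 149-second Brute Force frame), a trivial bit of arithmetic:

```python
baseline = 149         # Brute Force GI render from the first post, in seconds
reflections_off = 121  # same frame with reflections disabled

saved = baseline - reflections_off
print(f"Reflections cost {saved} s, i.e. {saved / baseline:.0%} of the frame time.")
# -> Reflections cost 28 s, i.e. 19% of the frame time.
```

The refraction and threshold tests would plug into the same calculation once you fix which render they are measured against.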

Unless you have a monster CPU (say 128 cores, 256 threads) and a low-end GPU, Redshift is probably going to run circles around the Standard and Physical engines in C4D.
I think @Auroratrek is just getting his feet wet in Redshift and that is why he hasn’t switched, not because Redshift is slower than Standard or Physical or less suitable for a “stylized production”. I’d be cautious, too, if I were contemplating switching render engines in mid-production. We’ll see.

Edit: Since Redshift can also run on the CPU, I re-did Image #3 using the 16-core CPU only: It took 12 times as long as the RTX4090 (445 vs. 37 seconds).