CG Rendering and ACES

Steve Agland, Animal Logic - September 2014

Introduction

We've been looking at implementing an ACES-based color pipeline, for both VFX and CG animation.

ACES is intended to be a universal storage encoding because it can cover the gamuts of all existing (and probable future) cameras and display devices. For the sake of simplicity we decided to try rendering in the ACES color space, since inputs and outputs would be in ACES in the new pipeline.

But it seems there are some difficult problems with rendering CG elements in ACES space. I'll try to summarise those here.

Let's have a look at the size of some relevant gamuts:

  • sRGB: the standard gamut for most monitors and TVs.
  • DCI-P3: the standard gamut for digital cinema, and for our look/lighting/comp artists' monitors.
  • Rec. 2020: recently standardised for upcoming Ultra HD devices.
  • ACES: the Academy Color Encoding System's storage gamut, whose primaries enclose the entire spectral locus.
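To put rough numbers on "size", we can compare the areas of the gamut triangles in the CIE xy plane. This is only a back-of-the-envelope sketch (xy area is not perceptually uniform): the coordinates below are the published xy chromaticities of each space's primaries, and "ACES" here means the AP0 primaries.

```python
# Rough comparison of gamut triangle areas in the CIE xy plane,
# using published chromaticity coordinates for each space's primaries.
# (Shoelace formula; xy area is illustrative, not perceptually uniform.)

def triangle_area(points):
    """Area of a triangle given three (x, y) chromaticity points."""
    (x1, y1), (x2, y2), (x3, y3) = points
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

primaries = {
    'sRGB':      [(0.6400, 0.3300), (0.3000, 0.6000), (0.1500, 0.0600)],
    'DCI-P3':    [(0.6800, 0.3200), (0.2650, 0.6900), (0.1500, 0.0600)],
    'Rec. 2020': [(0.7080, 0.2920), (0.1700, 0.7970), (0.1310, 0.0460)],
    'ACES':      [(0.7347, 0.2653), (0.0000, 1.0000), (0.0001, -0.0770)],
}

areas = {name: triangle_area(p) for name, p in primaries.items()}
for name, area in sorted(areas.items(), key=lambda kv: kv[1]):
    print('%-10s %.3f' % (name, area))
```

By this crude measure ACES (AP0) covers well over three times the xy area of sRGB.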
In [151]:
%matplotlib inline
from colour.plotting import *  # Thanks to http://colour-science.org/
colourspaces_CIE_1931_chromaticity_diagram_plot(
  ['sRGB', 'DCI-P3', 'Rec. 2020', 'ACES RGB', 'Pointer Gamut'])
Out[151]:
True

Color Interaction in CG Rendering

Switching to an ACES rendering space was initially quite confusing for artists for a number of reasons, many of which could be resolved once we adapted various parts of the workflow and color pipeline.

But one objection was difficult to counter: indirect lighting on brightly-colored objects was not behaving as desired (especially on tinted metallic surfaces like gold or copper).

We realised that, depending on which space you're in, the rendering process generates differently-perceived colors from the same inputs. In particular, a major part of the rendering equation involves multiplying light (emitted) colors by surface (reflectance) colors. This operation can change the saturation of colors, and the result might have a different chromaticity depending on which RGB space you consider the colors to be in.

A particularly problematic case of this is a brightly colored object casting indirect light onto itself, or onto a nearby object of a similar color. Assuming white light and a simple shading model, the resulting radiance is roughly equivalent to the surface color multiplied by itself. This self-multiplication results in darker, but more saturated, colors.
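To make the "darker but more saturated" effect concrete, here's a tiny sketch in plain Python. The surface color is an arbitrary illustrative value, the luminance weights are Rec. 709's, and saturation is measured HSV-style:

```python
# Squaring an RGB color (white light bouncing off a surface of the same
# color) lowers every channel (darker) while spreading the channels
# further apart in relative terms (more saturated).

def luminance(rgb):
    """Approximate relative luminance using Rec. 709 weights (illustrative)."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def saturation(rgb):
    """HSV-style saturation: (max - min) / max."""
    return (max(rgb) - min(rgb)) / max(rgb)

surface = [0.8, 0.35, 0.2]           # a warm, fairly saturated color
bounce = [c * c for c in surface]    # one self-multiplication

print('luminance:  %.3f -> %.3f' % (luminance(surface), luminance(bounce)))
print('saturation: %.3f -> %.3f' % (saturation(surface), saturation(bounce)))
```

Luminance drops by roughly half while saturation climbs from 0.75 to about 0.94.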

This leads to the question:

If you start with color in your display gamut, but multiply it by itself in the ACES color space, what happens?

The results are below. In the diagram you can see the behaviour of repeatedly multiplying a color by itself in ACES space vs. DCI-P3 space.

In [141]:
import pylab, random, numpy as np
from colour.models import XYZ_to_xyY

# Draw the DCI-P3 and ACES gamut boundaries.
CIE_1931_chromaticity_diagram_plot(standalone=False)
aces  = get_RGB_colourspace('ACES')
dcip3 = get_RGB_colourspace('DCI-P3') 
pylab.fill(aces.primaries[:,0], aces.primaries[:,1],
           color='red', label='ACES Gamut', fill=False)
pylab.fill(dcip3.primaries[:,0], dcip3.primaries[:,1],
           color='green', label='DCI-P3 Gamut', fill=False)

# Pick some reasonably saturated sample
# colors in the P3 color-space.
samples = [[0.2, 0.4, 0.7], [0.6, 0.9, 0.4], [0.7, 0.2, 0.6]]

for i, p3Col in enumerate(samples):
    # Convert to ACES without chromatic adaptation, for clarity.
    acesCol  = np.dot(np.dot(aces.to_RGB, dcip3.to_XYZ), p3Col)
    
    # Repeatedly multiply the color in each space by itself.
    p3Cols   = [[pow(c, e) for c in p3Col] for e in range(1,5)]
    acesCols = [[pow(c, e) for c in acesCol] for e in range(1,5)]
    
    # Convert to xyY and plot the results
    resultsP3   = np.array([XYZ_to_xyY(np.dot(dcip3.to_XYZ, c)) for c in p3Cols])
    resultsACES = np.array([XYZ_to_xyY(np.dot(aces.to_XYZ,  c)) for c in acesCols])
    pylab.plot(resultsP3[:,0], resultsP3[:,1],
               'o-', color='green', linewidth=2,
               label='' if i else "DCI-P3 Self-multiplication")
    pylab.plot(resultsACES[:,0], resultsACES[:,1],
               'o-', color='red', linewidth=2,
               label='' if i else "ACES Self-multiplication")
    
settings = {}
settings.update({'legend': True, 'standalone': True,
                 'x_tighten': True, 'y_tighten': True,
                 'margins': [-0.1, 0.05, -0.15, 0.05]})
bounding_box(**settings); aspect(**settings); display(**settings)
Out[141]:
True

These results were a cause for concern. They show that depending on which color space you choose to render in, you'll get different visual results. Generally, ACES renders may be more saturated, and often shifted in hue too. Now, maybe that means ACES is a preferable space, since it allows a wider variety of colors to be produced.

But this is not necessarily a good thing. Large chunks of the ACES space are outside of the spectral locus. But more importantly (in my opinion), it's difficult not to generate colors outside of your display gamut, even if all your inputs are inside that gamut. This makes look development and lighting tricky because colors outside of your display gamut behave less intuitively, and it can result in a loss of detail.
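Here's a numeric sketch of that last point, using the published RGB→XYZ matrices for DCI-P3 (SMPTE RP 431-2) and ACES AP0, with no chromatic adaptation between white points (as in the notebook cell above). The sample color is an arbitrary in-gamut blue:

```python
import numpy as np

# Published RGB -> XYZ matrices (no chromatic adaptation between whites).
P3_TO_XYZ = np.array([[0.4451698, 0.2771344, 0.1722827],
                      [0.2094917, 0.7215952, 0.0689131],
                      [0.0000000, 0.0470606, 0.9073554]])
ACES_TO_XYZ = np.array([[0.9525524, 0.0000000, 0.0000937],
                        [0.3439664, 0.7281661, -0.0721325],
                        [0.0000000, 0.0000000, 1.0088252]])

# A saturated blue that is comfortably inside the DCI-P3 gamut.
p3 = np.array([0.1, 0.2, 1.0])

# Express the same color in ACES, square it there (one indirect bounce),
# then bring the result back to P3.
aces = np.linalg.inv(ACES_TO_XYZ) @ P3_TO_XYZ @ p3
bounced = np.linalg.inv(P3_TO_XYZ) @ ACES_TO_XYZ @ (aces * aces)

print(bounced)  # the red channel goes negative: outside the P3 gamut
```

A single self-multiplication in ACES pushes this in-gamut input to a color with a clearly negative P3 red channel.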

Note: this all naturally leads into a discussion of the shortcomings of the RGB lighting color model and the benefits of spectral rendering. I'll leave that out in the interests of brevity, and because we don't have such a renderer available for production purposes.

Visual Consequences of Choosing a Rendering Color Space

Okay, so how does this actually affect our renders? The initial expectation was that rendering in ACES would produce more super-saturated colors and that, if anything, the problem would be grappling with out-of-gamut indirect light.

In practice it seems that the opposite problem arises. This is because in order to render in the ACES color space, you need to start with relatively desaturated colors (with respect to the ACES primaries). If you're using whitish light, then this desaturation needs to be applied to the reflectance textures of your surfaces.

In other words, in order to achieve the same look for your direct lighting on brightly colored objects, your reflectance parameters may need to be significantly numerically lower if you're specifying them in ACES, compared to (say) DCI-P3. This means that your objects are now reflecting significantly less light. (More on this below.)

The upshot is that when rendering in ACES, your colorful objects can appear to generate less indirect lighting, at least for some colors.
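We can quantify this with the same published matrices as before (SMPTE RP 431-2 for DCI-P3, AP0 for ACES, no chromatic adaptation). A pure P3 red is unchanged by squaring in P3, but its ACES encoding is numerically desaturated, so squaring there loses far more energy:

```python
import numpy as np

# Published RGB -> XYZ matrices (no chromatic adaptation between whites).
P3_TO_XYZ = np.array([[0.4451698, 0.2771344, 0.1722827],
                      [0.2094917, 0.7215952, 0.0689131],
                      [0.0000000, 0.0470606, 0.9073554]])
ACES_TO_XYZ = np.array([[0.9525524, 0.0000000, 0.0000937],
                        [0.3439664, 0.7281661, -0.0721325],
                        [0.0000000, 0.0000000, 1.0088252]])

# Pure DCI-P3 red: squaring it in P3 leaves it unchanged, so a white-lit
# red surface bounces red light at full strength.
p3_red = np.array([1.0, 0.0, 0.0])
Y_p3_bounce = (P3_TO_XYZ @ (p3_red * p3_red))[1]

# The same red expressed in ACES has lower, more spread-out channel
# values, so squaring it there loses much more luminance.
aces_red = np.linalg.inv(ACES_TO_XYZ) @ P3_TO_XYZ @ p3_red
Y_aces_bounce = (ACES_TO_XYZ @ (aces_red * aces_red))[1]

print('bounce luminance: P3 %.3f vs ACES %.3f' % (Y_p3_bounce, Y_aces_bounce))
```

For this particular red, the ACES "bounce" carries less than half the luminance of the P3 one.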

Here are some example renders.

The image below was rendered in the DCI-P3 color space and is annotated with the key points.

Note: "Well behaved blues" is a relative value judgement (and a bad idea for a country song). It's clear that there are problems with the blues in all these renders, because there's an apparent tendency to "saturate towards" the nearest primary, resulting in hue shifts (you can see this tendency in the diagram above). This is a physically inaccurate artifact of having only 3 primaries. In a spectral renderer, they would presumably saturate towards the nearest point on the spectral locus and maintain their hues. Perhaps there's a way to improve this behaviour even in a non-spectral renderer, but that's going a little off-topic.
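The "saturate towards the nearest primary" tendency can be shown directly. In this sketch (published SMPTE RP 431-2 matrix, arbitrary blue-dominant sample color), squaring a blue in P3 pulls its xy chromaticity markedly closer to the P3 blue primary:

```python
import numpy as np

# Published DCI-P3 RGB -> XYZ matrix (SMPTE RP 431-2 primaries).
P3_TO_XYZ = np.array([[0.4451698, 0.2771344, 0.1722827],
                      [0.2094917, 0.7215952, 0.0689131],
                      [0.0000000, 0.0470606, 0.9073554]])

def xy(XYZ):
    """Project XYZ to CIE xy chromaticity."""
    return XYZ[:2] / XYZ.sum()

blue = np.array([0.1, 0.2, 1.0])            # a blue-dominant P3 color
blue_primary = np.array([0.150, 0.060])     # P3 blue primary chromaticity

before = xy(P3_TO_XYZ @ blue)
after = xy(P3_TO_XYZ @ (blue * blue))

# Squaring pulls the chromaticity toward the blue primary; since the
# start point is not on the white-to-blue line, the hue shifts as well.
print('distance to blue primary: %.3f -> %.3f'
      % (np.linalg.norm(before - blue_primary),
         np.linalg.norm(after - blue_primary)))
```

One self-multiplication moves this color most of the way to the primary's corner of the gamut triangle.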

In [131]:
from IPython.display import Image
Image('../nuke/p3_vs_aces/p3_render_p3_texture_annotated.png')
Out[131]: