LUT, ASC CDL, IIF/ACES and other abbreviations

Color Management is one of those post-production disciplines or techniques that nobody values until the well-known “messes” arrive, with their great dramas and heroic solutions.

Color Management is nothing more than getting the final result to look more or less as expected. The first premise we have to accept is that the process will never be one hundred percent reliable: highly reliable approximations are possible, but total color fidelity is not.

Why is this so? Because any production process mixes devices, each working in a different color space, or even in small variations of the same color space. On top of that, the final result will be seen on another medium with its own colorimetric characteristics, and in environments that in many cases (TV, DVD, BluRay, etc…) are beyond our control.

Video cameras, film cameras and DSLRs each work with their own, very different sensors, and with codecs that impose their own restrictions on color reproduction, etc… In traditional digital video environments these cameras usually work in YUV, in the Rec709 color space for HD or 601 for standard definition, while most “postpro” software works in RGB in one of its variants (mainly sRGB). In the case of digital cinema cameras and more modern video cameras, things get further complicated by such delights as proprietary gamma curves, logarithmic curves, de-Bayering and company.

So, in the acquisition phase alone we are already “falsifying”, or at least translating, the color information. To deal with this, different approaches have been used over the last few years.

 

LUT: LUTs (Look-Up Tables) are nothing more than a handy trick to facilitate color conversions, or any other operation on a stream of data. A LUT is simply a table of values establishing correlations between an input value and an output value; i.e., this value in the input image will become that value in the output image once the LUT is applied.

In this context, the simplest LUTs are just ASCII text files describing a color correction. Logically, the number of values and the quantities they indicate depend on the parameters of the image, such as color depth, number of channels, etc…
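To make the idea concrete, here is a minimal sketch in Python (with made-up values) of what a 1D LUT boils down to: a table of output values sampled at regular input positions, interpolating between entries.

import numpy as np

# A hypothetical 1D LUT with 5 entries; the input positions are implicit
# (0.0, 0.25, 0.5, 0.75, 1.0) and the table stores the output values.
# This particular table lifts the midtones, gamma-style.
lut_1d = np.array([0.0, 0.35, 0.58, 0.79, 1.0])

def apply_lut_1d(values, lut):
    """Look each value up in the table, interpolating between entries."""
    positions = np.linspace(0.0, 1.0, len(lut))  # the implicit input axis
    return np.interp(values, positions, lut)

pixels = np.array([0.1, 0.5, 0.9])
print(apply_lut_1d(pixels, lut_1d))  # -> [0.14  0.58  0.916]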

LUTs have been and still are used for all kinds of color operations in post production, such as color space conversions, monitor and projector calibration, etc… LUTs can be applied at different points of the production process, on a permanent or temporary basis: they can be used to convert the material, or just to preview how it will look in another color space or medium. The most common example is a LUT for previewing LOG (logarithmic) material on a LIN (linear) device, such as a computer monitor, to see how the footage will really look and thus bring some reliability to color correction.

There are basically two types of LUTs, 1-dimensional LUTs (1D LUTs) and 3-dimensional LUTs (3D LUTs). The difference lies basically in the accuracy of the color transformation and in the number of colors each can describe: a 1D LUT maps each channel independently, while a three-dimensional LUT describes a cube with each of the primary colors on one axis and all possible variations as points within that cube. 3D LUTs are the most complex and the most widely used today.
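As a rough sketch of how a 3D LUT is sampled (nearest-neighbour here for brevity; real implementations use trilinear or tetrahedral interpolation), assuming a tiny hypothetical cube:

import numpy as np

# A hypothetical, tiny 3D LUT: a 2x2x2 cube of RGB output triplets,
# indexed by (r, g, b) input. Real LUTs use 17, 33 or 65 points per axis.
size = 2
cube = np.zeros((size, size, size, 3))
for r in range(size):
    for g in range(size):
        for b in range(size):
            # Identity cube: output equals the input corner position.
            cube[r, g, b] = [r / (size - 1), g / (size - 1), b / (size - 1)]

def apply_lut_3d(rgb, cube):
    """Nearest-neighbour lookup: snap each channel to the closest grid point."""
    n = cube.shape[0]
    idx = np.clip(np.round(np.asarray(rgb) * (n - 1)).astype(int), 0, n - 1)
    return cube[idx[0], idx[1], idx[2]]

print(apply_lut_3d([0.2, 0.7, 0.9], cube))  # -> [0. 1. 1.]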

Where and how are LUTs applied? Well, this is a question with almost infinite answers, because there are many possible variations on the theme depending on each workflow. The most common approach is to apply them in editing, color, compositing or even 3D software, many of which support loading LUTs. Some allow a LUT to be applied as a permanent transformation and/or only in preview, leaving the original material untouched.

Another option is to use hardware devices that can load LUTs and apply them to a video signal, such as the Blackmagic HDLink (http://www.blackmagic-design.com/products/hdlink/), Cinetal’s Davio (http://www.cine-tal.com/) or TCube’s Fig (http://www.tcube.tv/products/fig), among others.

LUTs are the basis of CMS (Color Management Systems), which let us measure the colorimetric response of all or almost all the devices involved in our workflow and generate LUTs or ICC profiles that are applied to calibrate them, compensate for color space changes, or simply mimic the appearance of the color of the final master on photochemical film, video, DCP digital cinema, etc…

As we said before, the simplest LUTs are text files, but each manufacturer has implemented its own way of describing these tables, so there are multiple LUT formats, both 1D and 3D. Converting between formats is in many cases not very complex, and can be done manually with an old acquaintance, Microsoft Excel, and a little bit of skill.
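As an illustration of how simple these text formats tend to be, here is a minimal sketch of a reader for the common .cube 3D LUT format (assuming a well-formed file; real parsers also handle TITLE, DOMAIN_MIN/MAX and other keywords):

import numpy as np

def load_cube(path):
    """Minimal .cube reader: a LUT_3D_SIZE line plus N**3 'r g b' rows."""
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line[0].isdigit() or line[0] == "-":
                rows.append([float(v) for v in line.split()])
    # .cube files list entries with the red index varying fastest,
    # so a row-major reshape yields cube[b][g][r] -> (r, g, b) triplet.
    return np.array(rows).reshape(size, size, size, 3)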

However, some manufacturers generate binary and even encrypted LUTs, so you may run into problems when trying to open them. A well-calculated LUT is an important asset for a company whose business depends on color, such as a color correction studio or a laboratory, so it is normal for them to be guarded like treasure, with access to the information they contain kept very limited.

There is a lot of documentation on the subject, but you can start here:

http://en.wikipedia.org/wiki/3D_LUT

This lack of standards, the limitations of LUTs and the problems that all this generates have led to the creation of two different initiatives:

 

ASC CDL: This is not a color management system “per se”, but it is closely related, since it is a standard that allows primary color correction data to be carried from the shoot through all stages of the post-production process.

The name CDL stands for Color Decision List. For those of you who come from editing this will sound familiar: it’s something similar to an EDL, but carrying color correction information. It was created by the ASC (American Society of Cinematographers) in an attempt to unify the way color correction information is passed between systems from different manufacturers, and between shooting and post.

CDL uses 3 basic parameters, Slope, Offset and Power, loosely comparable to the classic controls for shadows, midtones and highlights, for each of the three primary colors, giving a total of 9 values to describe the correction. A later revision, 1.2, added a tenth value: the combined Saturation of the three primary colors.
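The math behind those ten values is straightforward; here is a sketch of the per-channel Slope/Offset/Power operation followed by the combined Saturation (the spec applies saturation using Rec709 luma weights; the grade values below are made up):

import numpy as np

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """ASC CDL: out = (in * slope + offset) ** power, then saturation."""
    rgb = np.asarray(rgb, dtype=float)
    sop = np.clip(rgb * slope + offset, 0.0, None) ** power  # per channel
    luma = np.dot(sop, [0.2126, 0.7152, 0.0722])  # Rec709 luma weights
    return luma + saturation * (sop - luma)  # blend towards/away from luma

# A hypothetical grade: slightly warm slope, tiny offset, mild desaturation.
print(apply_cdl([0.18, 0.18, 0.18],
                slope=[1.10, 1.00, 0.95],
                offset=[0.01, 0.00, -0.01],
                power=[1.00, 1.00, 1.00],
                saturation=0.9))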

All this is saved as an XML file that also contains information about the images the correction applies to, the type of signal, the display devices, etc… In theory, all this information is easily translatable into the parameters of each color correction, editing or CMS package.
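For reference, a minimal example of what one of these XML files looks like (the element names follow the ASC CDL v1.2 schema; the id and the values here are hypothetical):

<ColorDecisionList xmlns="urn:ASC:CDL:v1.2">
  <ColorDecision>
    <ColorCorrection id="shot_042">
      <SOPNode>
        <Slope>1.10 1.00 0.95</Slope>
        <Offset>0.01 0.00 -0.01</Offset>
        <Power>1.00 1.00 1.00</Power>
      </SOPNode>
      <SatNode>
        <Saturation>0.9</Saturation>
      </SatNode>
    </ColorCorrection>
  </ColorDecision>
</ColorDecisionList>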

It is currently supported by several Autodesk applications, as well as Avid, Nucoda, Mistika, DaVinci, FilmLight, etc…

For more information you can check the Wikipedia entry or consult the ASC website (if you are lucky and their presentations work :)).

http://en.wikipedia.org/wiki/Color_Decision_List

http://www.theasc.com/clubhouse/ASC_CDL_Flash_Presentation.html

 

This brings us to the newborn, the IIF ACES:

IIF ACES: This is an attempt to unify and standardize all the color spaces used in the industry into a single one, from acquisition to final viewing, through all the intermediate post-production processes. In other words, it is an attempt to use the same color space from the camera all the way to projection or the generation of the master.

Who is behind this standard? The Academy of Motion Picture Arts and Sciences (AMPAS), i.e. the same Academy that hands out the Oscars; it is supposed to have some decision-making power in the industry :). As its name suggests (Arts and Sciences), the Academy has a scientific branch, the Science and Technology Council, which is the body pushing this new color encoding standard and workflow to preserve the maximum color information available on set.

Workflow model according to the IIF ACES specification

If you look closely at the acronyms, the name has two parts. The first is IIF, or Image Interchange Format. IIF is nothing more than the file format proposed to carry image information throughout the production and post-production process; it is intended to be the replacement for DPX.

The IIF format is based on ILM’s OpenEXR, an open source file format that can carry high dynamic range color information (16-bit Half-Float per channel in this case) and multiple channels of information in a single file.

OpenEXR (or EXR for friends) is a “de facto” standard in digital visual effects and 3D compositing and, best of all, it is an open source format, very extensible and customizable, so the Academy opted for a restricted version of EXR to contain the information specified by ACES. Thus IIF will carry the image information itself, encoded according to ACES, plus metadata. Another advantage of using EXR is that IIF becomes very easy to implement in software and systems that already support EXR.

 

For more information about OpenEXR you can check this:

 

http://en.wikipedia.org/wiki/OpenEXR

 

I also intend to do an article on OpenEXR and other very specialized industry Open Source initiatives in the near future.

 

The other acronym, ACES, stands for Academy Color Encoding Specification. IIF files contain information in the Academy’s own color space, called ACES, which holds a much larger amount of color information than the color spaces currently in use (mainly sRGB with Linear or Logarithmic encoding, and Rec709 HD video).

What is this color space like? First and foremost, it is Linear; according to the specification it is Scene Linear, i.e. pixel values are assumed to have a linear correlation with the light reaching the sensor. This is unlike the logarithmic encodings used by many cameras, which are nothing more than a way to squeeze the maximum color information out of a minimum bit depth, at the cost of altering the colors in the image in some way. It is also the opposite of the current model, in which material is converted to a color space fixed by the viewing device (such as Rec709 for viewing on HD monitors), whose limitations then dictate how far we can push the image.
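To see the difference in practice, here is a purely illustrative log encoding in Python (not any real camera’s curve): it compresses scene-linear exposure around 18% grey into a 0..1 code range, and has to be undone before linear-light math behaves correctly.

import numpy as np

def lin_to_log(x, grey=0.18, stops=8.0):
    """Illustrative log encoding: exposure around 18% grey -> 0..1 codes.
    NOT a real camera curve, just the general shape of the idea."""
    return np.clip((np.log2(np.maximum(x, 1e-6) / grey) + stops / 2) / stops,
                   0.0, 1.0)

def log_to_lin(y, grey=0.18, stops=8.0):
    """Inverse of the illustrative curve above, back to scene-linear."""
    return grey * 2.0 ** (y * stops - stops / 2)

print(lin_to_log(0.18))              # 0.5: mid-grey lands mid-range
print(log_to_lin(lin_to_log(0.72)))  # ~0.72: round-trips to linear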

The ACES color space compared with the color spaces of monitors and projectors

As mentioned before, the information is stored at 16 bits per channel in Half-Float, i.e. the possible values for each channel range between -65,504 and +65,504. All this adds up to a huge color space with a tremendously large dynamic range, much larger than current color spaces allow; the documentation speaks of a range of more than 25 stops, capable of covering all perceivable colors.

The ACES color space compared with other common color spaces
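Those numbers come straight from the 16-bit half-float format itself, as a quick check with NumPy confirms:

import numpy as np

info = np.finfo(np.float16)  # the IEEE half-float used by OpenEXR/ACES
print(float(info.max))       # 65504.0, the largest finite half-float
print(float(info.tiny))      # ~6.1e-05, the smallest normal positive value
# Largest to smallest normal value spans roughly 30 stops:
print(np.log2(float(info.max) / float(info.tiny)))  # ~30.0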

What is it not? An image encoded according to ACES is not a RAW image; to encode it in ACES, the RAW image must be “de-Bayered” (what a word) and converted to the ACES color space. The advantage is that this color space is practically infinite (in terms of color), so we will still have headroom to manipulate the image without problems or artifacts appearing.

How does this apply to a real workflow? Let’s start from a common workflow nowadays: capture on a digital camera, offline edit, conform, effects integration and color correction, all digital, with a view to generating a final master for both digital and photochemical exhibition.

In a case like this, a transformation called an IDT (Input Device Transform) would be applied to the recorded material. The IDT is nothing more than a color space transformation, specific to each camera and similar to a LUT, that converts the camera material into an image in the ACES color space. That is, it creates an image that is supposed to be consistent with any other image taken by any other camera under the same conditions.

This means that many of the current problems and headaches of working with different cameras on the same project can be eliminated, since images in ACES are encoded taking the characteristics of each camera into account: a specific curve is applied per camera, resulting in an image in a standardized color space.
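Conceptually an IDT is little more than undoing the camera’s transfer curve and then moving the linearized values into ACES primaries with a 3×3 matrix. A sketch with invented numbers (real IDTs are published by each manufacturer for each camera):

import numpy as np

# Hypothetical 3x3 matrix from a camera's native primaries to ACES;
# each row sums to 1 so neutral grey stays neutral.
CAMERA_TO_ACES = np.array([
    [0.75, 0.15, 0.10],
    [0.05, 0.90, 0.05],
    [0.02, 0.08, 0.90],
])

def idt(code_values):
    """Input Device Transform sketch: decode the camera's (here invented)
    log curve to linear light, then rotate into ACES primaries."""
    linear = 0.18 * 2.0 ** (np.asarray(code_values, dtype=float) * 8.0 - 4.0)
    return linear @ CAMERA_TO_ACES.T

print(idt([0.5, 0.5, 0.5]))  # camera mid-grey -> ~[0.18 0.18 0.18] in ACES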

In addition, computer-generated material (CGI, mainly 3D) can easily be rendered directly in this color space for later integration with the footage.

Diagram of the IIF ACES workflow for digital cameras

 

The other advantage is that we work in a virtually unlimited color space, making the correction processes much less susceptible to degradation or limitations caused by the color encoding.

The next part of the process is the visualization and generation of final masters, using two new transformations, the RRT and the ODT.

The RRT or Reference Rendering Transform is simply another transformation that converts the image into an ideal image, which serves as the basis for all the images we will see on the different display media.

This RRT image is then processed by a further transformation specific to each viewing or mastering medium, the ODT or Output Device Transform. That is, analogously to what was done on the camera side, we now apply a transformation for each monitor, projector, etc… that ensures strong color consistency across the different media used to view the material.
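In code terms the viewing pipeline is just a composition of transforms: one fixed RRT for everything, then one ODT per display type. A toy sketch (the curves are stand-ins, not the Academy’s actual transforms):

import numpy as np

def rrt(aces_rgb):
    """Stand-in for the Reference Rendering Transform: one fixed
    rendering applied to all material (a toy tone compression here)."""
    x = np.asarray(aces_rgb, dtype=float)
    return x / (x + 0.6)

def odt_rec709(rendered):
    """Stand-in ODT for a Rec709 monitor: clamp to the display range
    and apply an approximate gamma encoding."""
    return np.clip(rendered, 0.0, 1.0) ** (1.0 / 2.4)

def odt_cinema(rendered):
    """Stand-in ODT for a digital cinema projector: different gamma."""
    return np.clip(rendered, 0.0, 1.0) ** (1.0 / 2.6)

grey = np.array([0.18, 0.18, 0.18])  # ACES scene-linear mid-grey
print(odt_rec709(rrt(grey)))         # what the HD monitor shows
print(odt_cinema(rrt(grey)))         # what the projector shows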

Another transformation, similar to the ODT but specific to Digital Cinema projectors, is also defined: the RDT or Reference Device Transform.

For photochemical material, the standard also proposes a process analogous to the IDT to bring the density information from the negative into a digital file in the ACES color space: the ADX or Academy Density Exchange, which would be specific to each scanner, among those supporting the APD calibration standard.

But is this real, or just another American flight of fancy? It is real enough that all or almost all manufacturers of cameras, color software and the rest have already rushed to release systems or revisions that are “ACES compliant”, i.e. that follow or allow a production process using the ACES encoding and color space. For example, the new Sony F65 already supports the standard, and several applications like Nuke or RedCine-X, as well as recording systems like Codex or Cinedeck, already support or have announced support for IIF ACES.

You can see more here:

http://pro.sony.com/bbsccms/assets/files/show/highend/includes/F65_Camera_CinemaPDF.pdf

http://www.codexdigital.com/news/docs/Codex%20Digital%20to%20Demonstrate%20IIF,%20ACES%20Workflow%20at%20NAB%202011.pdf

http://www.cinedeck.com/#!subpage/the-ghost-of-goodnight-lane

Moreover, it is already being put into practice in some American series, such as the FX series “Justified” (thoroughly recommended, by the way), whose second season was already produced following the IIF ACES specifications. More info here:

http://www.studiodaily.com/2011/02/is-justifieds-new-workflow-the-future-of-cinematography/

 

Another implication of this standard is the adoption of a standardized file format for audiovisual material, which is already being discussed with SMPTE. It is worth remembering, though, that the standard is still under development, and its implementation will require agreement between manufacturers of cameras, software, recording systems, archiving, etc…

Two very good articles on IIF ACES:

http://mikemost.com/?p=235

http://www.fxguide.com/featured/the-art-of-digital-color/

 

And more info and a PDF from AMPAS:

http://www.oscars.org/science-technology/council/projects/iif.html

http://www.oscars.org/science-technology/council/projects/pdf/iif.pdf
