Custom ACES IDT for Resolve

Note that DCTL scripts require the studio version of Resolve.

I was recently involved in a discussion about creating IDTs for color grading in ACEScct in Resolve 14 when there is no built-in IDT, or when the existing IDT isn't ideal.

Here's the process that I worked out: 

When no IDT is specified for the project or the clip, Resolve expects the data to be in linear gamma and the AP0 gamut. An input LUT can be applied before Resolve translates the clip into the ACEScct working space. Thus a LUT that translates the camera data into linear / AP0 is a proper replacement for an IDT.

This can easily be achieved in LUTCalc. To test this theory I took a clip, set the default SLog3/S-Gamut3.Cine IDT, and took a screen grab for reference. I then created a custom LUT with the corresponding settings in LUTCalc, set the clip's IDT to None, and added the new LUT as the input LUT for the clip.
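For reference, the same transform can also be sketched outside LUTCalc. Here is a minimal Python generator for such a .cube input LUT, using Sony's published SLog3 formula and the S-Gamut3.Cine-to-AP0 matrix; the file name and grid size are illustrative, and LUTCalc was what I actually used:

```python
# Sketch: generate a 33-point .cube input LUT mapping SLog3/S-Gamut3.Cine
# camera data to linear / ACES AP0 -- the transform that replaces the IDT.

# S-Gamut3.Cine -> ACES AP0 (row-major 3x3)
M = [ 0.6387886672,  0.2723514337,  0.0888598991,
     -0.0039159060,  1.0880732309, -0.0841573249,
     -0.0299072021, -0.0264325799,  1.0563397820]

def slog3_to_linear(v):
    """Sony SLog3 code value (0..1) -> scene linear, per Sony's formula."""
    if v >= 171.2102946929 / 1023.0:
        return 10.0 ** ((v * 1023.0 - 420.0) / 261.5) * (0.18 + 0.01) - 0.01
    return (v * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

def write_cube(path, size=33):
    with open(path, "w") as f:
        f.write('TITLE "SLog3 S-Gamut3.Cine to linear AP0"\n')
        f.write("LUT_3D_SIZE %d\n" % size)
        for b in range(size):          # .cube order: red fastest, blue slowest
            for g in range(size):
                for r in range(size):
                    lin = [slog3_to_linear(c / (size - 1)) for c in (r, g, b)]
                    out = [M[i*3]*lin[0] + M[i*3+1]*lin[1] + M[i*3+2]*lin[2]
                           for i in range(3)]
                    f.write("%.10f %.10f %.10f\n" % tuple(out))

write_cube("slog3_sgamut3cine_to_ap0.cube")
```

A LUT generated this way can then be assigned as the clip's input LUT with the IDT set to None, just like the LUTCalc output.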

As seen below, the resulting image matches the reference image from the original IDT, suggesting that the two operations are equivalent.

This now opens the opportunity to customize this input LUT to taste, or to create a LUT that matches the specifics of unique cameras.

Based on this article, the other option is to create a DCTL script. A DCTL script has the advantage of using precise math rather than an interpolated lookup table; the code in a DCTL script matches the math precision of the built-in IDTs.

Converting the Sony SLog3 IDT into a DCTL file, placing it into the LUT folder, and using it instead of the IDT or input LUT also creates an equivalent image. In fact it creates an exact match when using a reference wipe, whereas the input LUT yields minor variations, presumably due to the less precise interpolation or LUTCalc using slightly different input values.
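The precision difference is easy to demonstrate. Here is a quick Python sketch that samples the exact SLog3 curve on a coarse grid, as a 1D LUT would store it, and compares linear interpolation between grid points against the exact math; grid size and sample count are illustrative:

```python
# Sketch: why a DCTL beats a LUT on precision -- interpolation between
# grid points deviates from the exact transfer function.

def slog3_to_linear(v):
    """Sony SLog3 code value (0..1) -> scene linear, per Sony's formula."""
    if v >= 171.2102946929 / 1023.0:
        return 10.0 ** ((v * 1023.0 - 420.0) / 261.5) * (0.18 + 0.01) - 0.01
    return (v * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

SIZE = 33                                             # a typical LUT grid
table = [slog3_to_linear(i / (SIZE - 1)) for i in range(SIZE)]

def lut_lookup(v):
    """Linearly interpolate in the table, as a LUT node would."""
    x = v * (SIZE - 1)
    i = min(int(x), SIZE - 2)
    frac = x - i
    return table[i] * (1.0 - frac) + table[i + 1] * frac

# Worst-case deviation over a dense sweep of input code values
worst = max(abs(lut_lookup(v / 4096.0) - slog3_to_linear(v / 4096.0))
            for v in range(4097))
print(worst)  # nonzero: the interpolated curve drifts between grid points
```

The error is exactly zero at the grid points and nonzero between them, which is consistent with the reference wipe showing an exact match for the DCTL but minor variations for the LUT.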


// SLog3 / S-Gamut3 DCTL for ACES IDT replacement

__CONSTANT__ float color_xform[9] = {
   0.6387886672f,  0.2723514337f,  0.0888598991f,
  -0.0039159060f,  1.0880732309f, -0.0841573249f,
  -0.0299072021f, -0.0264325799f,  1.0563397820f
};

__DEVICE__ float slog3_to_linear(float v) {
  float result;

  // SLog3 is piecewise: a log segment above the breakpoint, linear below
  if (v >= 171.2102946929f / 1023.0f)
    result = _powf(10.0f, (v * 1023.0f - 420.0f) / 261.5f) * (0.18f + 0.01f) - 0.01f;
  else
    result = (v * 1023.0f - 95.0f) * 0.01125000f / (171.2102946929f - 95.0f);

  return result;
}

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
  // Convert from SLog3 to linear
  float3 linear;

  linear.x = slog3_to_linear(p_R);
  linear.y = slog3_to_linear(p_G);
  linear.z = slog3_to_linear(p_B);

  // Convert from S-Gamut3 to AP0
  float3 aces;

  aces.x = color_xform[0]*linear.x + color_xform[1]*linear.y + color_xform[2]*linear.z;
  aces.y = color_xform[3]*linear.x + color_xform[4]*linear.y + color_xform[5]*linear.z;
  aces.z = color_xform[6]*linear.x + color_xform[7]*linear.y + color_xform[8]*linear.z;

  return aces;
}
Here is the same frame with all three different methods: the built-in IDT, the input LUT, and the DCTL script.

Importing Odyssey 7Q+ Metadata into Avid Media Composer

I've been using the Odyssey 7Q+ as an external recorder for quite some time and with much success. The reasons for using it are a separate conversation for another time.

One of the things that has always bothered me, though, is that the 7Q+ has menu options to record a variety of metadata for each take, such as Reel #, Scene, good/bad take, etc., but none of this data gets stored in the ProRes files the recorder produces. Instead it gets saved into an FCP 7 XML file on a per-clip basis in a subfolder on the SSD. That's all good if you use an NLE that can import these XML files. I don't have FCP itself. I work with Resolve, Avid, and Premiere, in that order. The only one of these that can interpret the XML files from the recorder is Premiere. Resolve complains that the files do not contain a timeline (as they're per-clip files). And Avid doesn't read them at all, since Avid has standardized around the ALE file format.

After a bit of coding I came up with a Python script that can ingest a folder of Odyssey 7Q+ XML files and translate them into a single ALE file that Avid consumes happily.

I've successfully used the script with the latest version of the 7Q+ firmware, the 3.0 version of the transfer utility, and MC 8.9.x. I've only tested this on a small set of files, so further refinement for special cases and debugging may be in order. But for anyone willing to give it a try and report back any successes or issues, there's a link to the Python script at the end of the article. It is written for Python 3, which is readily available for Mac or Windows.

To use the script, run it and point it at the FCP 7 XML folder after the media has been transferred with the CD utility:

cd /Volumes/Jobs/TestJob/Camera\ Files/Card\ 001
/usr/local/bin/python3 /Volumes/Workspace/ALE/ -d fcp\ 7\ xml/

This processes all the XML files for each clip and then produces one CD.ALE file. The syntax for ALE files is described in the MC documentation.

It will translate the following fields:

  • In and Out mark if set in the Odyssey Play mode for the clip
  • Good/Bad flag for the clip
  • Description (derived from the Project field)
  • Camera
  • Reel #
  • Scene #
  • Take #
  • Shoot Day
  • LUT Name
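For anyone curious how the translation works, here is a stripped-down sketch. The XML element names (clip/name, logginginfo/scene, shottake, good) are assumptions based on FCP 7 XML conventions, and the actual script maps the full field list above:

```python
# Sketch: fold a folder of per-clip FCP 7 XML files into one Avid ALE.
import glob
import xml.etree.ElementTree as ET

COLUMNS = ["Name", "Tracks", "Scene", "Take", "Good"]

def clip_row(xml_path):
    root = ET.parse(xml_path).getroot()
    def text(tag):
        node = root.find(".//" + tag)
        return node.text.strip() if node is not None and node.text else ""
    return [text("clip/name") or text("name"), "VA1A2",
            text("logginginfo/scene"), text("logginginfo/shottake"),
            text("logginginfo/good")]

def write_ale(xml_dir, ale_path, fps="23.98"):
    rows = [clip_row(p) for p in sorted(glob.glob(xml_dir + "/*.xml"))]
    with open(ale_path, "w") as f:
        # ALE is three tab-delimited sections: Heading, Column, Data
        f.write("Heading\nFIELD_DELIM\tTABS\nFPS\t%s\n\n" % fps)
        f.write("Column\n" + "\t".join(COLUMNS) + "\n\n")
        f.write("Data\n")
        for row in rows:
            f.write("\t".join(row) + "\n")
```

The Heading / Column / Data sections and tab delimiters follow the ALE syntax described in the MC documentation.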

Then on the Avid side, first import all the .mov files through the usual means (linking them to a bin in the source browser).

In a second step, select all the clips in the bin, go back to the source browser, select 'import' rather than 'link', and in the import options go to the 'Shot Log' tab and select 'Merge events with known master clips'. Then select the newly created ALE file and import it. This will read the ALE file and merge the new metadata with the clips that were just imported.

Here is my test ALE file:

Heading
FIELD_DELIM	TABS
FPS	23.98

Column
Name	Tracks	Start	End	Tape ID	Source File	Source Path	Description	Comments	Camera	Reel	Scene	Take	Shoot Day	LUT	Mark IN	Mark OUT	Good

Data

CLIP0000001	VA1A2	00:00:00:01	00:00:07:23	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	008	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000002	VA1A2	00:00:07:19	00:00:31:16	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	009	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000003	VA1A2	00:00:31:12	00:04:43:09	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	010	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000004	VA1A2	00:04:43:05	00:09:43:22	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	011	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000005	VA1A2	00:09:43:18	00:10:07:03	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	012	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000006	VA1A2	00:10:06:23	00:11:08:11	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	013	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000007	VA1A2	00:11:08:07	00:12:10:00	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	014	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000008	VA1A2	00:12:09:21	00:12:33:13	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	015	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000009	VA1A2	00:12:33:09	00:13:28:02	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	016	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000010	VA1A2	00:13:27:22	00:14:18:06	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	017	001	View-SONY_EE_SL3C_L709A-1	00:13:43:00	00:13:54:02	
CLIP0000011	VA1A2	00:14:18:03	00:15:09:14	/Volumes/Workspace/ALE/Test Data	1674    	FEATURE	A	R001	S1      	018	001	View-SONY_EE_SL3C_L709A-1			yes
CLIP0000012	VA1A2	00:15:09:10	00:17:15:06	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	019	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000013	VA1A2	00:17:15:02	00:18:10:16	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	020	001	View-SONY_EE_SL3C_L709A-1			

And this is how the bin looks after a successful metadata import of this file:

Link to the Python script:

Sony FS7 RAW DR Test

There was a good debate on the SIC FB group about Sony FS7 RAW, recording with the Odyssey 7Q+, the issue of getting 14 stops of dynamic range into 12-bit linear RAW output, and banding in the shadows. As I recently upgraded from the Sony F3 to the FS7 and use it with the XDCA RAW extension and the Odyssey 7Q+ in RAW mode most of the time, I thought it would be good to get to the bottom of it and make sure I use the ideal settings.

This video has a series of clips shot with different codecs and settings to compare the results. The camera was set up on a tripod with the Odyssey 7Q+ and XDCA extension. It was configured in CineEI mode, S-Gamut3.Cine/SLog3, with noise suppression enabled at mid-level, running firmware 4.0. The Odyssey 7Q+ has firmware 16.10. A dual color target was set up near the camera, lit with an Arri L5C, first in daylight, later in tungsten. A second target was set up about 10 feet further back, originally unlit, later ever so slightly lit with a small tungsten LED light. The goal was to keep the exposure difference between the two targets at 14 stops, as measured on a light meter in EV mode / spot meter. On the back target the EV was measured on the 2nd darkest grayscale chip (bottom row, 2nd from right).

This was not done on a single target with a 14-stop range, but in a real-life scene where a front element was lit by a key light and an object further back in the room was supposed to be lifted out of the black with at least minimal separation. I believe that is a more realistic way of judging DR for everyday use, though it may not be as scientific as a single target.

The ProRes and XAVC-I clips were brought into Resolve and the standard Sony SLog3SGamut3.Cine.ToLC709 LUT was applied at clip level. For the DNG clips, the instructions from this article by Convergent Design were followed: first applying the compensation LUT to bring the DNG into SLog3, then applying the same SLog3-to-LC709 LUT. The timeline was in LC709 color space.

There are 11 clips with different settings (title at the bottom). The clips are repeated a second time with the upper half of the screen having an offset of 256 applied (as read on the scope) to reveal deep shadows the screen cannot display.
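The offset trick itself is trivial to reproduce; here is a sketch in Python, assuming 10-bit code values and a frame stored as rows of integers (both assumptions for illustration):

```python
# Sketch of the scope-offset trick: add a flat offset to the code values in
# the upper half of the frame so that near-black differences become visible
# on a display, clipping at 10-bit full scale.
def lift_upper_half(frame, offset=256, max_cv=1023):
    """frame: rows of 10-bit code values; lift only the top half."""
    half = len(frame) // 2
    return [[min(cv + offset, max_cv) for cv in row] if y < half else row
            for y, row in enumerate(frame)]
```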

The most telling clips are #8 (internal XAVC-I recording at 11.5 stops DR) and #11 (Odyssey 7Q+ DNG recording at 13 stops DR). In both clips the non-adjusted version has ever so little headroom left in the highlights (picker values can be found at 253/254 8-bit), while the raised version allows ever so little detail in the back target to be made out (background noise is at 64, the target starts showing a few 65s).

Thus the conclusion of this test is that with the current firmware and these settings, shadows and blacks are quite clean. However, the effective dynamic range is more in the 12 to 13 stop range, with no significant difference between internal and external recording, giving the Odyssey RAW option the upper hand because of its much more flexible codec choices and many other useful features.

Screenshot: Scope and Color Grab showing ever so slight head room, and very faint back target at 13 stop DR:


Color Grab showing back target slightly better at 11.5 DR:

If anyone has suggestions on how to improve this test for additional conclusions or verification, feel free to comment or reach out at


Resolve and Color Checker Video Passport

A recent conversation on inspired me to work out a better technique for calibrating footage with the Color Checker Video Passport. I previously hadn't taken the time to fully understand the arrangement of the individual color chips until Adam Wilt's explanation made it click.

Here's a quick and dirty clip recorded on my Sony F3 in s-log in mixed lighting conditions:

The top has a shiny black, 40% IRE, and bright white target. On the bottom, the top row aligns with the vector scope (the big aha moment) and the second row is different skin color targets.

This is how the clip looks in Resolve when imported as is:

For this experiment I set up a few quick nodes: a couple of garbage mattes that allow us to isolate individual aspects of the target on the scopes for an easier workflow, and a last node with all the adjustments:


For step 1 we adjust the curves to offset the S-Log, bring the white and black into their legal ranges, and set middle gray around 40% IRE. Once the ranges are sitting properly, the curves are decoupled to dial in the white balance on the RGB parade:


For step 2, on to the color calibration. Changing the garbage matte to the top row of chroma chips brings up the star pattern nicely. On the right is the RGB parade, which is all but impossible to interpret for this task...

Because the white balance was already dialed in with the curves, the color vectors are almost spot on. A small hue rotation of 3 degrees and some extra saturation refine the settings:


Lastly, switching to the last garbage matte highlighting just the skin color chips and turning on the skin color indicator on the vector scope confirms that the skin color is sitting perfectly:


Here is the final color checker with all adjustments:

From this clip we could now export a 3D LUT to be applied to the project or to select clips, or the correction could be copied onto a group pre-clip node to apply to all clips shot under the same lighting conditions / camera settings.


A Well Formatted End Crawl

A basic end crawl can be done with built-in title generators in Resolve or Premiere. 

But formatting a complex and good looking end crawl can be an exercise in frustration. After several different attempts I settled on designing it in Illustrator and animating it in Fusion.

Using Fusion gives more control over the timing and animation. Yet the text controls in Fusion are also limited. Nothing really comes close to a real design application like Illustrator when you need font and placement control.

So it starts with a vertically oversized artboard with a transparent background. A layer of black can be added for ease of formatting and then disabled prior to export. For this end crawl the text object was about 8,000px tall:

That is then exported as a transparent PNG image and imported into a Fusion comp via a Loader node:


The trick to a good render of an end crawl is to animate it at an even multiple of pixels per frame, so a bit of math is required. In this case we wanted the end crawl to finish in just under one minute. At a 23.976 frame rate and an animation height of 8,043px, the closest multiple that stays within one minute was 1/7th, which comes to 1,149 frames on the comp timeline. This was rounded up to 1,170 to let it run a few extra frames, allowing the last line to end in the middle of the screen rather than at the bottom.

The animation then happens by adding a transform tool and setting the Y center as an expression of frame number and 1/x of the frame count:  Point(0.5,time * 0.0008547008547 - 0.5).

That advances the animation 7 pixels every frame. It's actually quite fast, but fitting such a long end crawl into less than a minute makes for a fast crawl.
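The arithmetic above can be summarized in a few lines (the numbers are taken from this example; the expression constant is simply 1 divided by the comp frame count):

```python
# Sketch of the end-crawl arithmetic: pick an even pixel step, derive the
# frame count, and compute the per-frame step used in the Fusion expression.
HEIGHT = 8043        # crawl image height in px
PX_PER_FRAME = 7     # even pixel step for clean motion
frames_needed = HEIGHT // PX_PER_FRAME   # frames to traverse the full image
FRAMES = 1170        # padded so the last line parks mid-screen
STEP = 1.0 / FRAMES  # the 0.0008547008547 constant in the expression

def y_center(frame):
    """Transform Y center at a given frame: sweeps from -0.5 to +0.5."""
    return frame * STEP - 0.5
```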

Render out and then bring into the NLE for final assembly.


The other challenge is finding fonts that render well on different screens and resolutions. This end crawl uses a pretty thin font, which leads to uneven anti-aliasing on smaller screens. Word is that at times different end crawl fonts have to be rendered for different screens. Which is why people build an entire business around this:


Recreating Sky

On a recent grade I was faced with a sizeable number of clips that had a blown-out sky and needed to be made to look good. If the sky is just peeking through in a few places, bringing down the exposure and adding some color may be enough. But if the sky is prominent in the shot, the lack of any texture will be glaring.

For one clip I went down a more complicated path, and it was worth it because it was the one clip the client called out as being beautiful upon review.

This is the final clip, nicely highlighting the parrot in full color:

This is what the original footage looked like:


This type of work is beyond what can be easily done with Resolve and effects. So I used Fusion Connect to bring this clip into VFX software where it's easier to layer different parts together. The first step was to put a luma keyer on it to isolate the blown out sky:
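The principle of the luma key can be sketched in a few lines of Python; the thresholds here are illustrative, not the values from the actual comp:

```python
# Sketch of a luma key isolating a blown-out sky: pixels above `high` are
# fully in the matte, below `low` fully out, with a soft ramp in between.
def rec709_luma(r, g, b):
    """Rec.709 luma from normalized RGB."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def sky_matte(pixel, low=0.85, high=0.95):
    y = rec709_luma(*pixel)
    if y >= high:
        return 1.0
    if y <= low:
        return 0.0
    return (y - low) / (high - low)   # soft edge between the thresholds
```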


Then I used the DaySky tool, which can create a natural looking sky by date and latitude/longitude. But it's a plain blue sky with horizon color gradations. For a bit more realism I threw in some fast noise to create moving clouds, did some color tweaking, and merged it with the keyed clip:


A little color and exposure matching in Resolve, a tracked vignette on the main bird, and things look a lot better...