Between Grade and VFX

I recently worked on the color grade for the feature film The Runaways. There were a few areas where changes discussed with the client went beyond typical color grading tasks and fell more into the realm of visual effects. I had the opportunity to work with advanced VFX software to work out fixes for three of these.

Aging an Actor

The most fascinating of these was the need to age one of the actors in several shots, as they were filmed several years before the rest of the coverage and this age gap wasn't part of the story. I did some research online on how the human face changes with age and experimented with those changes.

 

The changes were made with SilhouetteFX and Resolve. In Silhouette a pin-warp node helped elongate the face, pinch the nose, and open the eyes. The face was tracked with a planar tracker linked to the pin warp to make sure the warp effect honored the face's movements.

 

(pin warp and overall nodegraph in SilhouetteFX)

(node graph in Resolve putting all the external mattes together)

In addition, the SilhouetteFX roto tools were used to create external mattes for all the key sections of the face, including lips, eye shadow, cheeks, and forehead, so they could be re-colored separately and the make-up adjusted to be more in line with the rest of the shots, specifically in the eye shadow and the lips.

And finally, the SilhouetteFX paint node, along with a Mocha tracker and auto-paint, was used to remove all the acne on the face, to avoid having to soften the image too much to smooth the skin for an older appearance.

In the process I got to learn the SilhouetteFX software, which has some really amazing features. The paint node is the only clone brush I have found that allows for complex inter-frame painting: cloning from neighboring frames while also grading and warping the clone source from a neighboring frame so everything matches.

Complex Dust Busting

Another set of clips had some lens dust on them. Generally that's not a huge problem, and Resolve has tools to deal with it. Except in this case the dust was smack in the actor's face and was moving around with her movements, traversing her neckline and passing over her mouth and nose, delicate borders where the usual clone & blend just creates a mess without more individualized control.

 

(left actor; in this frame the dust spot is at the center of the cheek)

I had tried numerous tools, from Resolve's Patch tool to other automated tools, to painting frame-by-frame in Photoshop. Nothing was really satisfactory once the spot moved over the boundary areas, or the clean-up became too obvious.

So it was fortunate that I listened to the webinar on the new release of Mocha Pro 2019 and its remove tool. I was familiar with the Mocha tracker but not the remove tool. It was clear that this cleanup would require significant clean plates. And while Silhouette can handle clean plates and cloning from a single clean plate to multiple frames to avoid the boiling of frame-by-frame cloning, it requires a very complex node tree that is very manual. Mocha Pro's remove tool makes it very easy to work with numerous clean plates and then blend the frames in between. Out of the 200-odd frames, I think I ended up using 40 clean plates and the rest was blended together and automated.

Recoloring Hair

The last of the problems involved some hair coloring. There were several clips where the hair color was off and too dull, mostly due to lighting on set. But in one scene the actress' hair color is part of the dialog, so it was important to get it closer. Attempts to key the hair were unsatisfactory because there were too many similar tones in the frame. And constraining the key with a tracked power window in Resolve still wasn't getting me quite there.

Since I had just worked with Mocha Pro 2019 on the remove tool, I created an external matte for the hair that was much more accurate. By using multiple tracked rotos for the hair and bouncing locks, while also excluding the hair band, and making full use of the planar nature of Mocha, I got a detailed matte within a reasonable amount of time.

 

Screenshot of Mocha Pro drawing the different roto masks.

In the end it takes a lot of tools working together to make this all happen. A fun but very time-consuming endeavour. And a reminder that while 'fix it in post' is often possible, a few minutes on set to fix something can spare hours in post working around it. Fixing it on set is always the preferred option, unless it's literally impossible or it means not getting the shot at all due to other constraints.

Joining Clips from Cameras & Recorders

Some cameras and external recorders use older filesystems with file size limitations that are a bit out of date for today's content. To work around that, they split a clip into what is sometimes a huge number of subclips. One combination I regularly use is the Odyssey 7Q+ with the Sony FS7 camera in 4K mode. At that resolution it can only store about 40 seconds of footage per file before it has to create a new subclip.
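
As a rough sanity check (the 4 GB per-file cap and the data rate below are my assumptions, not published specs for this recorder), that 40-second figure lines up with what an older filesystem's file size limit would allow at 4K ProRes HQ rates:

# Back-of-the-envelope check; cap and bitrate are assumptions, not recorder specs
file_cap_gb = 4.0        # assumed per-file limit of an older filesystem
bitrate_mbps = 750.0     # rough 4K ProRes 422 HQ rate at ~24fps (assumption)

seconds_per_file = (file_cap_gb * 8 * 1000**3) / (bitrate_mbps * 1000**2)
print(round(seconds_per_file, 1))   # ~42.7 seconds, close to the observed ~40s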

The app that comes from Convergent Design has the ability to re-join these clips into a single file during copy. But today's workflows often rely on other data copy apps like ShotPut Pro, Hedge, and others that make exact replicas of the camera cards and create transfer logs and checksums.

Importing these subclips into the NLE is cumbersome unless the NLE recognizes subclips and virtually joins them. Resolve does not. It can manually create a compound clip, but then the source timecode is lost, which creates a new set of problems.

For a while I had been using EditReady to join files, among other metadata operations. But the latest version has a small race condition that can crash the software when it's run in pass-through mode.

When dealing with another 1.5TB of footage from yesterday's shoot I was determined to find a more efficient solution, so I figured out the right settings for ffmpeg. ffmpeg is quite fast because it supports passthrough (i.e. not re-encoding the video, just rewrapping it in a new container).

The result is the following short Python script. It's run in the folder where the clip exists, creates a subclip list, and then runs ffmpeg to create a new joined clip at the path indicated by the first argument. On my older MacPro it runs at about 3x speed, or about 300MB/s, which is decent enough.

Not a super polished script: it still has hardcoded paths and doesn't check the command-line arguments for errors. But you get the gist :-)

Enjoy...

import os
import glob
import sys

print("Joining clips in current folder")

# Build the concat list ffmpeg expects: one "file 'name'" line per subclip
filelist = open("files.txt", "w")
clips = sorted(glob.glob(os.path.join('.', '*.mov')))  # sorted so subclips join in order
for clip in clips:
  filelist.write("file '%s'\n" % os.path.basename(clip))
filelist.close()

# Rewrap (no re-encode) the subclips into the output path given as the first argument
cmd = "/Users/janklier/Desktop/AllKlier\\ Data/Tools/ffmpeg -f concat -i files.txt -c copy " + sys.argv[1]
os.system(cmd)
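
Hypothetical usage (script name and paths are placeholders; the ffmpeg location is hardcoded in the script as noted above):

cd /Volumes/Jobs/TestJob/Camera\ Files/Card\ 001/CLIP0000001
/usr/local/bin/python3 join_clips.py /Volumes/Jobs/TestJob/Joined/CLIP0000001.mov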

 

Custom ACES IDT for Resolve


I was just involved in a discussion on creating IDTs for color grading in ACEScct in Resolve 14 when there is no built-in IDT, or when the existing IDT isn't ideal.

Here's the process that I worked out: 

When no IDT is specified for either the project or the clip, Resolve expects the data to be in linear gamma and the AP0 gamut. An input LUT can be applied before Resolve translates the clip to the ACEScct color space. Thus a LUT that translates the camera data into linear / AP0 is a proper replacement for an IDT.

This can easily be achieved in LUTCalc (https://cameramanben.github.io/LUTCalc/). To test this theory I took a clip, set the default SLog3/S-Gamut3.Cine IDT, and took a screen grab for reference. I then created a custom LUT with these settings in LUTCalc, set the clip's IDT to None, and added the new LUT as the input LUT for the clip.
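
LUTCalc does this interactively, but the same idea can be sketched in a few lines of Python: sample the SLog3-to-linear transfer function and the gamut matrix (the same math as the DCTL further down) into a .cube file that Resolve can load as an input LUT. This is a minimal sketch under those assumptions, not a production LUT generator; the cube size and file name are arbitrary, and a grid-sampled 3D LUT will always be slightly less precise than the exact math:

# Minimal sketch: generate a .cube input LUT mapping SLog3 / S-Gamut3.Cine to linear / AP0,
# using the same transfer function and matrix as the DCTL shown further down.
N = 33  # arbitrary cube size for this sketch

# Same 3x3 matrix as color_xform in the DCTL below
M = [ 0.6387886672,  0.2723514337,  0.0888598991,
     -0.0039159060,  1.0880732309, -0.0841573249,
     -0.0299072021, -0.0264325799,  1.0563397820]

def slog3_to_linear(v):
    # Same piecewise SLog3-to-linear function as in the DCTL
    if v >= 171.2102946929 / 1023.0:
        return 10.0 ** ((v * 1023.0 - 420.0) / 261.5) * (0.18 + 0.01) - 0.01
    return (v * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

with open("slog3_sgamut3cine_to_ap0.cube", "w") as f:
    f.write("LUT_3D_SIZE %d\n" % N)
    # .cube convention: the red index varies fastest, then green, then blue
    for b in range(N):
        for g in range(N):
            for r in range(N):
                lin = [slog3_to_linear(r / (N - 1.0)),
                       slog3_to_linear(g / (N - 1.0)),
                       slog3_to_linear(b / (N - 1.0))]
                out = [M[0]*lin[0] + M[1]*lin[1] + M[2]*lin[2],
                       M[3]*lin[0] + M[4]*lin[1] + M[5]*lin[2],
                       M[6]*lin[0] + M[7]*lin[1] + M[8]*lin[2]]
                f.write("%.10f %.10f %.10f\n" % tuple(out))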

As seen below, the resulting image matches the reference images from the original IDT, suggesting that the two operations are equivalent.

This now opens the opportunity to make additional customizations to this input LUT to taste, or to create a LUT that matches the specifics of unique cameras.

Based on this article, the other option is to create a DCTL script: http://acescentral.com/t/adding-idts-to-resolve/161/2. A DCTL script has the advantage that it's precise math rather than an interpolated lookup table. The code in a DCTL script matches the math precision the built-in IDTs use.

Taking the Sony SLog3 IDT and converting it to a DCTL file, which is then placed into the LUT folder and used instead of the IDT or input LUT, also creates an equivalent image. In fact it creates an exact match when using a reference wipe, whereas the input LUT yields minor variations, presumably due to the less precise math or LUTCalc using slightly different input values.

Note that DCTL scripts require the studio version of Resolve.

// SLog3 / S-Gamut3.Cine DCTL for ACES IDT replacement
__CONSTANT__ float color_xform[9] =
{
   0.6387886672f,  0.2723514337f,  0.0888598991f,
  -0.0039159060f,  1.0880732309f, -0.0841573249f,
  -0.0299072021f, -0.0264325799f,  1.0563397820f
};

__DEVICE__ float slog3_to_linear(float v) {
  float result;

  if(v >= 171.2102946929f / 1023.0f)
  {
    result = _powf(10.0f,(v*1023.0f-420.0f)/261.5f)*(0.18f+0.01f)-0.01f;
  }
  else
  {
    result = (v*1023.0f-95.0f)*0.01125000f/(171.2102946929f-95.0f);
  }

  return result;
}

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
  // Convert from SLog3 to Linear
  float3 linear;

  linear.x = slog3_to_linear(p_R);
  linear.y = slog3_to_linear(p_G);
  linear.z = slog3_to_linear(p_B);

  // Convert from S-Gamut3.Cine to AP0
  float3 aces;

  aces.x = color_xform[0]*linear.x + color_xform[1]*linear.y + color_xform[2]*linear.z;
  aces.y = color_xform[3]*linear.x + color_xform[4]*linear.y + color_xform[5]*linear.z;
  aces.z = color_xform[6]*linear.x + color_xform[7]*linear.y + color_xform[8]*linear.z;

  return aces;
}

Here is the same frame with all three different methods: the built-in IDT, the input LUT, and the DCTL script.

Importing Odyssey 7Q+ Metadata into Avid Media Composer

I've been using the Odyssey 7Q+ as an external recorder for quite some time and with much success. The reasons for using it are a separate conversation for another time.

One of the things that has always bothered me, though, is that the 7Q+ has menu options to record a variety of metadata for each take, such as reel #, scene, good/bad take, etc., but none of this metadata gets stored in the ProRes files the recorder produces. Instead it gets saved into an FCP 7 XML file on a per-clip basis in a subfolder on the SSD. That's all good if you use an NLE that can import these XML files. I don't have FCP itself. I work with Resolve, Avid, and Premiere, in that order. The only one of these that can interpret the XML files from the recorder is Premiere. Resolve complains that the files do not contain a timeline (as they're per-clip files). And Avid doesn't read them at all, since Avid has standardized on the ALE file format.

After a bit of coding I came up with a Python script that can ingest a folder of Odyssey 7Q+ XML files and translate them into a single ALE file that Avid consumes happily.

I've successfully used the script with the latest version of the 7Q+ firmware, the 3.0 version of the transfer utility, and MC 8.9.x. I've only tested it on a small set of files, so further refinement for special cases and debugging may be in order. But for anyone willing to give it a try and report back any success or issues, there's a link to the Python script at the end of the article. It is written for Python 3, which is readily available for Mac or Windows.

To use the script, point it at the FCP 7 XML folder after the media has been transferred with the CD utility:

cd /Volumes/Jobs/TestJob/Camera\ Files/Card\ 001
/usr/local/bin/python3 /Volumes/Workspace/ALE/fcp_to_ale.py -d fcp\ 7\ xml/

This should process all the XML files for each clip and produce one CD.ALE file. The syntax for ALE files is described in the MC documentation.

It will translate the following fields (a minimal sketch of the translation is shown after the list):

  • In and Out mark if set in the Odyssey Play mode for the clip
  • Good/Bad flag for the clip
  • Description (derived from the Project field)
  • Camera
  • Reel #
  • Scene #
  • Take #
  • Shoot Day
  • LUT Name
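
For anyone curious what the translation boils down to, here is a stripped-down sketch. The FCP 7 XML element names used here (clip, name, scene, shottake, good) are assumptions based on typical FCP 7 logging fields rather than the exact layout the 7Q+ writes, and the linked script handles more fields and edge cases:

# Stripped-down sketch of the FCP 7 XML -> ALE translation (assumed element names)
import glob
import xml.etree.ElementTree as ET

HEADER = ("Heading\nFIELD_DELIM\tTABS\nVIDEO_FORMAT\t1080\nAUDIO_FORMAT\t48khz\n"
          "FPS\t23.98\n\nColumn\nName\tSource File\tScene\tTake\tGood\n\nData\n")

rows = []
for path in sorted(glob.glob("fcp 7 xml/*.xml")):
    clip = ET.parse(path).getroot().find(".//clip")
    name = clip.findtext("name", "")
    # Logging fields; element names assumed, adjust to the recorder's actual XML
    scene = clip.findtext(".//scene", "")
    take = clip.findtext(".//shottake", "")
    good = "yes" if clip.findtext(".//good", "").upper() == "TRUE" else ""
    rows.append("\t".join([name, name + ".mov", scene, take, good]))

with open("CD.ALE", "w") as ale:
    ale.write(HEADER + "\n".join(rows) + "\n")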

Then on the Avid side, first import all the .mov files through the usual means (linking them to a bin in the source browser).

In a second step, select all the clips in the bin, go back to the source browser, select 'import' rather than 'link', and in the import options go to the 'Shot Log' tab and select 'Merge events with known master clips'. Then select the newly created ALE file and import. This will read the ALE file and merge any new metadata with the selected clips that were just imported.

Here is my test ALE file:

Heading
FIELD_DELIM	TABS
VIDEO_FORMAT	1080
AUDIO_FORMAT	48khz
FPS	23.98

Column
Name	Tracks	Start	End	Tape ID	Source File	Source Path	Description	Comments	Camera	Reel	Scene	Take	Shoot Day	LUT	Mark IN	Mark OUT	Good

Data
CLIP0000001	VA1A2	00:00:00:01	00:00:07:23		CLIP0000001.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	008	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000002	VA1A2	00:00:07:19	00:00:31:16		CLIP0000002.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	009	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000003	VA1A2	00:00:31:12	00:04:43:09		CLIP0000003.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	010	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000004	VA1A2	00:04:43:05	00:09:43:22		CLIP0000004.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	011	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000005	VA1A2	00:09:43:18	00:10:07:03		CLIP0000005.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	012	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000006	VA1A2	00:10:06:23	00:11:08:11		CLIP0000006.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	013	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000007	VA1A2	00:11:08:07	00:12:10:00		CLIP0000007.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	014	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000008	VA1A2	00:12:09:21	00:12:33:13		CLIP0000008.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	015	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000009	VA1A2	00:12:33:09	00:13:28:02		CLIP0000009.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	016	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000010	VA1A2	00:13:27:22	00:14:18:06		CLIP0000010.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	017	001	View-SONY_EE_SL3C_L709A-1	00:13:43:00	00:13:54:02	
CLIP0000011	VA1A2	00:14:18:03	00:15:09:14		CLIP0000011.mov	/Volumes/Workspace/ALE/Test Data	1674    	FEATURE	A	R001	S1      	018	001	View-SONY_EE_SL3C_L709A-1			yes
CLIP0000012	VA1A2	00:15:09:10	00:17:15:06		CLIP0000012.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	019	001	View-SONY_EE_SL3C_L709A-1			
CLIP0000013	VA1A2	00:17:15:02	00:18:10:16		CLIP0000013.mov	/Volumes/Workspace/ALE/Test Data	1674    		A	R001	S1      	020	001	View-SONY_EE_SL3C_L709A-1			

And this is how the bin looks after a successful metadata import of this file:

Link to the Python script: https://drive.google.com/open?id=1h95nnBZ_RquNODVa54o8u_z8TZjtpG-Z

Sony FS7 RAW DR Test

There was a good debate on the SIC FB group about Sony FS7 RAW, recording with the Odyssey 7Q+, the issue of getting 14 stops of dynamic range into 12-bit linear RAW output, and banding in the shadows. As I recently upgraded from the Sony F3 to the FS7 and use it with the XDCA RAW extension and the Odyssey 7Q+ in RAW mode most of the time, I thought it would be good to get to the bottom of it and make sure I use the ideal settings.

This video has a series of clips shot with different codecs and settings to see the results. The camera was set up on a tripod with the Odyssey 7Q+ and XDCA extension. It was configured in CineEI mode, S-Gamut3.Cine/SLog3, with noise suppression enabled at mid level. The camera has firmware 4.0; the Odyssey 7Q+ has firmware 16.10. A dual color target was set up near the camera, lit with an Arri L5-C, first in daylight, later in tungsten. A second target was set up about 10 feet further back, originally unlit, later ever so slightly lit with a small tungsten LED light. The goal was to keep the exposure difference between the two targets at 14 stops, as measured on a light meter in EV / spot meter mode. On the back target the EV was measured on the 2nd darkest grayscale chip (bottom row, 2nd from right).

This was not done on a single target with a 14-stop range, but in a real-life scene where a front element was lit by a key light and an object further back in the room was supposed to be lifted out of the black with at least minimal separation. I believe that is a more realistic way of judging DR for everyday use, though it may not be as scientific as a single target.

The ProRes and XAVC-I clips were brought into Resolve and the standard Sony SLog3SGamut3.Cine.ToLC709 LUT was applied at clip level. For the DNG clips, the instructions from this article by Convergent Design were followed: first the compensation LUT was applied to bring the DNG into SLog3, then the same SLog3-to-LC709 LUT was applied. The timeline was in LC709 color space.

There are 11 clips with different settings (titled at the bottom). The clips are repeated a second time with an offset of 256 on the scope applied to the upper half of the screen, to reveal deep shadows the screen cannot display.

The most telling clips are #8 (internal XAVC-I recording at 11.5 stops DR) and #11 (Odyssey 7Q+ DNG recording at 13 stops DR). In both clips the non-adjusted version shows just a sliver of headroom in the highlights (picker values sit at 253/254 in 8-bit), while the raised version lets just a hint of detail in the back target be made out (background noise sits at 64, the target starts showing a few values of 65).

Thus the conclusion of this test is that with current firmware and these settings, shadows and blacks are quite clean. However, the effective dynamic range is more in the 12 to 13 stop range, with no significant difference between internal and external recording, giving the Odyssey RAW option the upper hand because of its much more flexible codec choices and many other useful features.

Screenshot: scope and color grab showing ever so slight headroom, and a very faint back target at 13 stops DR:

 

Color grab showing the back target slightly better at 11.5 stops DR:

If anyone has suggestions on how to improve this test for additional conclusions or verification, feel free to comment or reach out at jan@janklier.com.


Resolve and Color Checker Video Passport

A recent conversation on cinematography.net inspired me to work out a better technique for calibrating footage with the Color Checker Video Passport. I previously hadn't taken the time to fully understand the arrangement of the individual color chips until Adam Wilt's explanation made it click.

Here's a quick and dirty clip recorded on my Sony F3 in s-log in mixed lighting conditions:

The top has shiny black, 40% IRE, and bright white targets. On the bottom, the top row aligns with the vectorscope (the big aha moment) and the second row holds different skin tone targets.

This is how the clip looks in Resolve when imported as is:

For this experiment I set up a couple of quick nodes: a few garbage mattes that let us isolate individual aspects of the target on the scopes for an easier workflow, and a last node with all the adjustments:

 

Step 1 is to adjust the curve to offset the S-Log, bring white and black into their legal ranges, and set middle gray around 40% IRE, using a curves adjustment. Once the ranges are sitting properly, the curves are decoupled to dial in the white balance on the RGB parade:

 

Step 2 moves on to the color calibration. Changing the garbage matte to the top-row chroma chips brings up the star pattern nicely. On the right is the RGB parade, which is impossible to interpret for this purpose...

Because the white balance was already dialed in with the curves, the color vectors are almost spot on. A small hue rotation of 3 degrees and some extra saturation refine the settings:

 

Lastly, switching to the last garbage matte, which highlights just the skin color chips, and turning on the skin tone indicator on the vectorscope confirms that the skin color is sitting perfectly:

 

Here is the final color checker with all adjustments:

From this clip we could now export a 3D LUT to be applied to the project or to select clips, or the correction could be copied onto a group pre-clip node to apply to all clips that were shot under the same lighting conditions / camera settings.


A Well Formatted End Crawl

A basic end crawl can be done with built-in title generators in Resolve or Premiere. 

But formatting a complex and good looking end crawl can be an exercise in frustration. After several different attempts I settled on designing it in Illustrator and animating it in Fusion.

Using Fusion gives more control over the timing and animation. Yet the text controls in Fusion are also limited. Nothing really comes close to a real design application like Illustrator when you need font and placement control.

So it starts with a vertically oversized artboard with a transparent background. A layer of black can be added for ease of formatting and then disabled prior to export. For this end crawl the text object was about 8,000px tall:

That is then exported as a transparent PNG image and imported into a Fusion comp via loader:

 

The trick to a good render of an end crawl is to animate it at a whole number of pixels per frame, so a bit of math is required. In this case we wanted the end crawl to finish in just under one minute. At a 23.976 frame rate and an animation height of 8,043px, the closest step that stays within one minute was 7 pixels per frame, which comes to 1,149 frames on the comp timeline. That was rounded up to 1,170 to let it run a few extra frames, so the last line ends in the middle of the screen rather than at the bottom.

The animation then happens by adding a Transform tool and setting the Y center as an expression of the frame number and 1/x of the frame count: Point(0.5, time * 0.0008547008547 - 0.5).

That advances the animation about 7 pixels every frame. It's actually quite fast, but getting such a long end crawl into less than a minute makes for a fast crawl.
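
The arithmetic is easy to sanity-check; the numbers below are the ones from this project (crawl height, frame rate, and the chosen 7 px/frame step):

# Quick check of the end crawl math described above
height_px = 8043          # height of the rendered text image
fps = 23.976
px_per_frame = 7          # chosen whole-pixel step per frame

frames = height_px / px_per_frame      # ~1149 frames
seconds = frames / fps                 # ~47.9s, just under one minute
comp_frames = 1170                     # rounded up so the last line ends mid-screen
y_step = 1.0 / comp_frames             # ~0.0008547, the factor used in the expression
print(round(frames), round(seconds, 1), round(y_step, 13))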

Render out and then bring into the NLE for final assembly.

 

The other challenge is finding fonts that render well on different screens and resolutions. This end crawl uses a pretty thin font, which leads to uneven anti-aliasing on smaller screens. Word is that at times different end crawl fonts have to be rendered for different screens. Which is why people build an entire business around this: https://endcrawl.com/.


Recreating Sky

On a recent grade I was faced with a sizeable number of clips that had a blown-out sky and needed to be made to look good. If the sky is just peeking through in a few places, bringing down exposure and adding some color may be enough. But if the sky is prominent in the shot, the lack of any texture will be glaring.

For one clip I went down a more complicated path, and it was worth it, because it was the one clip the client called out as beautiful upon review.

This is the final clip, nicely highlighting the parrot in full color:

This is what the original footage looked like:

 

This type of work is beyond what can easily be done with Resolve and its effects. So I used Fusion Connect to bring this clip into VFX software where it's easier to layer different parts together. The first step was to put a luma keyer on it to isolate the blown-out sky:

 

Then I used the DaySky tool, which can create a natural-looking sky by date and latitude/longitude. But it's a plain blue sky with horizon color distortions. For a bit more realism I threw in some fast noise to create moving clouds, did some color tweaking, and merged it with the keyed clip:

 

A little color and exposure matching in Resolve, a tracked vignette on the main bird, and things look a lot better...