Sony FS7 RAW DR Test

There was a good debate on the SIC FB group about Sony FS7 RAW, recording with the Odyssey 7Q+, the issue of getting 14 stops of dynamic range into 12-bit linear RAW output, and banding in the shadows. As I recently upgraded from the Sony F3 to the FS7 and am using it with the XDCA RAW extension and the Odyssey 7Q+ in RAW mode most of the time, I thought it would be good to get to the bottom of it and make sure I use the ideal settings.
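The shadow-banding worry comes straight from the arithmetic of linear encoding: each stop down from clipping gets half the code values of the stop above it. A quick sketch (idealized, ignoring black-level offset and sensor noise) shows how little is left for the bottom stops of a 14-stop range in 12-bit linear:

```python
# Idealized sketch: code values available per stop in 12-bit linear RAW.
# Each stop below clipping spans half the values of the stop above it;
# real cameras add a black-level offset and noise, so treat this as a model.
MAX_CODE = 4096  # 12-bit

for stop in range(1, 15):
    span = MAX_CODE / 2 ** stop  # values covering this stop below clip
    print(f"stop {stop:2d} below clip: ~{span:g} code values")
```

By stop 12 a whole stop is down to a single code value, and stops 13 and 14 share fractions of one, which is where shadow banding would originate.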

This video has a series of clips shot with different codecs and settings to see the results. The camera was set up on a tripod with the Odyssey 7Q+ and XDCA extension, configured to CineEI mode, S-Gamut3.Cine/S-Log3. Noise suppression was enabled at mid level. The camera has firmware 4.0; the Odyssey 7Q+ has firmware 16.10. A dual color target was set up near the camera, lit with an Arri L5-C, first in daylight, later in tungsten. A second target was set up about 10 feet further back, originally unlit, later ever so slightly lit with a small tungsten LED light. The goal was to keep the exposure difference between the two targets at 14 stops, as measured with a light meter in EV mode / spot meter. On the back target the EV was measured on the 2nd darkest grayscale chip (bottom row, 2nd from right).

This was not done on a single target with a 14-stop range, but in a real-life scene where a front element was lit by a key light and, further back in the room, an object was supposed to be lifted out of black with at least minimal separation. I believe that is a more realistic way of judging DR for everyday use, though it may not be as scientific as a single target.

The ProRes and XAVC-I clips were brought into Resolve and the standard Sony SLog3SGamut3.Cine.ToLC709 LUT was applied at clip level. For the DNG clips, the instructions from this article by Convergent Design were followed: first applying the compensation LUT to bring the DNG into S-Log3, then applying the same S-Log3 to LC709 LUT. The timeline was in LC709 color space.

There are 11 clips with different settings (title at bottom). The clips are then repeated with an offset of 256 applied to the upper half of the screen (visible on the scope) to reveal deep shadows the screen cannot otherwise display.

The most telling clips are #8 (internal XAVC-I recording at 11.5 stops DR) and #11 (Odyssey 7Q+ DNG recording at 13 stops DR). In both clips the non-adjusted version shows just a sliver of highlight headroom (picker values reach 253/254 in 8-bit), while the raised version reveals a hint of detail in the back target (background noise sits at 64; the target starts showing a few 65s).

Thus the conclusion of this test is that with current firmware and these settings, shadows and blacks are quite clean. However, the effective dynamic range is more in the 12 to 13 stop range, with no significant difference between internal and external recording, giving the Odyssey RAW option the upper hand because of its much more flexible codec choices and many other useful features.

Screenshot: Scope and color grab showing ever so slight headroom, and a very faint back target at 13-stop DR:


Color grab showing the back target slightly better at 11.5-stop DR:

If anyone has suggestions on how to improve this test for additional conclusions or verification, feel free to comment or reach out.


Resolve and Color Checker Video Passport

A recent conversation inspired me to work out a better technique for calibrating footage with the Color Checker Video Passport. I previously hadn't taken the time to fully understand the arrangement of the individual color chips until Adam Wilt's explanation made it click.

Here's a quick-and-dirty clip recorded on my Sony F3 in S-Log in mixed lighting conditions:

The top has a shiny black, a 40% IRE, and a bright white target. On the bottom, the top row aligns with the vectorscope (the big aha moment) and the second row holds different skin color targets.

This is how the clip looks in Resolve when imported as is:

For this experiment, a few quick nodes: a couple of garbage mattes that allow us to isolate individual parts of the target on the scopes for an easier workflow, and a last node holding all the adjustments:


For step 1 we use a curves adjustment to offset the S-Log, bring the white and black into their legal ranges, and set middle gray around 40% IRE. Once the ranges are sitting properly, the curves are decoupled to dial in the white balance on the RGB parade:
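That ~40% IRE target for middle gray lines up with Sony's published S-Log3 encoding curve. As a quick sanity check (this is only the published formula, not anything Resolve does internally), we can compute where 18% gray should land:

```python
import math

# Sony's published S-Log3 encoding (linear reflectance -> code value 0..1),
# used here only to check where middle gray should sit on the waveform.
def slog3(x):
    if x >= 0.01125000:
        return (420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    return (x * (171.2102946929 - 95.0) / 0.01125000 + 95.0) / 1023.0

print(f"middle gray (18%): {slog3(0.18) * 100:.1f}% IRE")
```

This lands at about 41%, close enough to the 40% eyeballed on the scope.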


For step 2, on to the color calibration. Changing the garbage matte to the top row of chroma chips brings up the star pattern nicely. On the right is the RGB parade, which is impossible to interpret for this task...

Because the white balance was already dialed in with the curves, the color vectors are almost spot on. A small hue rotation of 3 degrees and some extra saturation refine the settings:


Lastly, switching to the last garbage matte highlighting just the skin color chips and turning on the skin color indicator on the vector scope confirms that the skin color is sitting perfectly:


Here is the final color checker with all adjustments:

From this clip we could now export a 3D LUT to apply to the project or to select clips, or copy the correction onto a group pre-clip node to apply to all clips shot under the same lighting conditions / camera settings.
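For anyone curious what the exported LUT actually contains: Resolve writes the plain-text .cube format. As a hypothetical illustration (this is hand-rolled, not Resolve's exporter), here is an identity 3D LUT written out by hand, with the red axis varying fastest as the format requires:

```python
# Sketch of the .cube text format: an identity 3D LUT, written by hand.
# Data lines are "R G B" triplets; red varies fastest, then green, then blue.
SIZE = 17  # a common grid size for exported LUTs

lines = [f"LUT_3D_SIZE {SIZE}"]
for b in range(SIZE):
    for g in range(SIZE):
        for r in range(SIZE):
            lines.append(f"{r/(SIZE-1):.6f} {g/(SIZE-1):.6f} {b/(SIZE-1):.6f}")

with open("identity.cube", "w") as f:
    f.write("\n".join(lines) + "\n")
```

A real calibration LUT differs only in that each triplet holds the corrected color for that grid point instead of the identity value.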


A Well Formatted End Crawl

A basic end crawl can be done with built-in title generators in Resolve or Premiere. 

But formatting a complex, good-looking end crawl can be an exercise in frustration. After several different attempts I settled on designing it in Illustrator and animating it in Fusion.

Using Fusion gives more control over the timing and animation. Yet the text controls in Fusion are also limited. Nothing really comes close to a real design application like Illustrator when you need font and placement control.

So it starts with a vertically oversized artboard with a transparent background. A layer of black can be added for ease of formatting and then disabled prior to export. For this end crawl the text object was about 8,000 px tall:

That is then exported as a transparent PNG and imported into a Fusion comp via a Loader node:


The trick to a good render of an end crawl is to animate it at an even multiple of pixels per frame, so a bit of math is required. In this case we wanted the end crawl to finish in just under one minute. At a 23.976 frame rate and an animation height of 8,043 px, the closest even multiple that stays within one minute was 1/7th, which comes to 1,149 frames on the comp timeline. That was rounded up to 1,170 to let it run a few extra frames, allowing the last line to end in the middle of the screen rather than at the bottom.

The animation then happens by adding a Transform tool and setting the Y center as an expression of the frame number times 1/frame count: Point(0.5, time * 0.0008547008547 - 0.5).
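The arithmetic above can be sketched as a small script: find the smallest whole-pixel step per frame that divides the artwork height evenly and still finishes inside a minute (the constants are this project's numbers):

```python
FPS = 24000 / 1001        # 23.976 fps
HEIGHT = 8043             # px height of the exported artwork
max_frames = 60 * FPS     # ~1438.6 frames in one minute

# Smallest per-frame pixel step that divides the height evenly
# and keeps the total frame count under one minute.
step = next(s for s in range(1, HEIGHT + 1)
            if HEIGHT % s == 0 and HEIGHT // s <= max_frames)
frames = HEIGHT // step
print(f"{step} px/frame -> {frames} frames ({frames / FPS:.1f} s)")
```

This reproduces the 7 px/frame and 1,149-frame figures; the extra frames to reach 1,170 are then padded on by hand.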

That advances the animation 7 pixels every frame. It's actually quite fast, but getting such a long end crawl into less than a minute makes for a fast crawl.

Render out and then bring into the NLE for final assembly.


The other challenge: finding fonts that render well on different screens and resolutions. This end crawl uses a pretty thin font, which leads to uneven anti-aliasing on smaller screens. Word is that at times different end crawl renders have to be produced for different screens, which is why people build an entire business around this:


Recreating Sky

On a recent grade I was faced with a sizeable number of clips that had blown-out skies and needed to be made to look good. If the sky is just peeking through in a few places, bringing down exposure and adding some color may be enough. But if the sky is prominent in the shot, the lack of any texture will be glaring.

For one clip I went down a more complicated path, and it was worth it: on review, it was the one clip the client called out as being beautiful.

This is the final clip, nicely highlighting the parrot in full color:

This is what the original footage looked like:


This type of work is beyond what can easily be done with Resolve and its effects, so I used Fusion Connect to bring the clip into VFX software where it's easier to layer different parts together. The first step was a luma keyer to isolate the blown-out sky:
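A luma key is conceptually simple: anything brighter than a high threshold goes white in the matte, with a soft rolloff below it. A toy version for a single pixel (Rec.709 luma weights; the thresholds here are made-up values, not the ones used in the actual comp):

```python
# Toy luma keyer for one RGB pixel (values 0..1), not Fusion's implementation.
# Pixels near clipping map to 1.0 in the matte; darker foreground holds at 0.
def luma_matte(rgb, low=0.85, high=0.95):
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec.709 luma weights
    t = (luma - low) / (high - low)              # soft edge between low/high
    return min(1.0, max(0.0, t))

print(luma_matte((1.0, 1.0, 1.0)))   # blown-out sky: fully in the matte
print(luma_matte((0.3, 0.4, 0.2)))   # foreground foliage: held out
```

The sky replacement is then composited through this matte, so only the clipped region gets the new sky.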


Then I used the DaySky tool, which can create a natural-looking sky by date and latitude/longitude. But it's a plain blue sky with a horizon color gradient. For a bit more realism I threw in some fast noise to create moving clouds, did some color tweaking, and merged it with the keyed clip:


A little color and exposure matching in Resolve, a tracked vignette on the main bird, and things look a lot better...