Swapping EXIF information

A lot of my images are panoramic stitches, and increasingly I’ve been trying to use Lightroom for them so that I can stay in a full-DNG workflow as long as possible. I’ve found that Lightroom is great when it works, but not so great when it doesn’t.

One of the things that Lightroom simply refuses to do (and which isn’t a problem in dedicated stitching packages) is combine images taken at “different” focal lengths. I say “different” because occasionally, if you’re shooting with a zoom lens, the zoom can be set in just such a way that it doesn’t consistently report the same focal length to the camera’s EXIF data.

This happened in a nasty way last Saturday. I lucked out and had a great sunset on the Blue Ridge Parkway, but came home to find that about 2/5 of my images were “shot” at 48mm, and about 3/5 of them “shot” at 50mm. 

If this happens to you, do not despair – Phil Harvey has written a program called exiftool that will save the day. The command you need is:

exiftool -n -EXIF:FocalLength=50 -EXIF:FocalLengthIn35mmFormat=50 <<Folder with images>>

Obviously, replace the 50 with whatever focal length you shot at. Exiftool will write new files with the modified EXIF information and save the old files for you as well. You’ll then have to either re-import the files into Lightroom, or force Lightroom to re-read the metadata for the affected images via “Metadata > Read Metadata from Files” (note: this will destroy any keywords you’ve given to the images).
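If you need to do this across several shoots, it’s easy enough to script. Here’s a minimal Python sketch that just wraps the same exiftool invocation – it assumes exiftool is installed and on your PATH, and the folder and focal length are whatever applies to your shoot:

    import subprocess
    import sys

    # Minimal wrapper around the exiftool command above. Assumes exiftool
    # is on your PATH; folder and focal length come from the command line,
    # e.g.:  python fix_focal.py "D:\photos\parkway" 50
    def set_focal_length(folder: str, mm: int) -> None:
        subprocess.run(
            ["exiftool", "-n",
             f"-EXIF:FocalLength={mm}",
             f"-EXIF:FocalLengthIn35mmFormat={mm}",
             folder],
            check=True,  # raise if exiftool reports an error
        )

    if __name__ == "__main__":
        set_focal_length(sys.argv[1], int(sys.argv[2]))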

From there, you’re good to move on and process the file:

Thankfully this wasn’t ruined, though it took many processing shenanigans to salvage it.

Processing wide dynamic range images in Lightroom, Luminar, and ON1 Photo RAW: Adobe is still the king

It’s that time of the year again – Skylum and ON1 are spinning up their marketing machines to convince you that this year, really, their software is going to take things to the next level. Seriously. Trust us. A revolution is on the way. Again.

I’ve been pretty critical of Skylum and ON1 in the past for their business model, failure to get promised features into their software, their tendency to miss release dates, and their focus on gimmicks over core functionality. A lot of that could be forgiven, of course, if the result coming out of the product was superior to the alternatives. Unfortunately, in my experience, I haven’t found that to be the case.

Over the coming weeks, I want to put a bit more flesh on that argument by looking at a variety of images I’ve shot in each of the three packages. I need to issue my standard caveats here: 1) I’m a lot more familiar with Lightroom than I am with either Luminar or Photo RAW, so on some level I should get better results with Lightroom; 2) I’m not a paid spokesman for any of these companies – I bought my own copies of this software, just like any member of the public; and 3) your results might differ from mine. I’m not suggesting that any of these programs is incapable of producing decent images, but this comparison may highlight some areas where each raw engine is likely to fail.

OK. So here’s the scene for today. It’s a sunset image of Prague Castle with Charles Bridge in the foreground, shot on my D750 in 2016. This is a single raw file, shot at ISO400 and properly exposed to the right (+2EV) to capture as much shadow detail as possible.  No analog filters were used. I’m posting a relatively quick edit of the file – my “main” version of this scene is a stitched panorama, rather than the single frame image.

Full scene, quickly processed in Lightroom

As with many sunsets, this is a scene that’s got quite a bit of dynamic range. But it’s also a scene that the 14-bit converters in most modern cameras – including the D750 – shouldn’t have a problem capturing.

So that’s the final product. Let’s start by loading the image into each of the three packages to see what we get.

First, two notes for procedural purposes:

  • Where possible, I’ve exported the images directly from the programs and uploaded them. There are a few places where I needed to take screenshots (for example, to show highlight clipping, or to show 1:1 crops), and I’ll note that in the caption.
  • WordPress does some silly stuff with resizing, so you may end up with a blurry-ish image. I’m saving the individual images, and I’ll put a link to all the jpgs at the bottom of the post.
Lightroom CC Classic 7.5
Luminar 1.3.1
ON1 Photo RAW 2018.5

This all looks pretty reasonable, and similar between the three packages, at least on a surface level. When we dig in, though, we find something disturbing in Luminar (screenshot):

Luminar with highlight clipping on

When we turn highlight clipping on, we see that Luminar’s default import settings are clipping large portions of the sky. Photo RAW has a tiny amount clipped, but not really enough to bother with or complain about. Of the three packages, Lightroom is the only one whose default demosaic leaves plenty of headroom, and clearly shows that the file is not clipped. Analysis of the actual raw file itself shows that none of the values are clipped: D750 raw files have four channels (RGB+G). The maximum values for those four channels on this image are [12,123; 13,777; 6,918; 13,854], with a clip value of 15,520. In other words, this isn’t a problem with the underlying file. Adobe’s got it right in this case. Luminar (and to a lesser extent Photo RAW) are either demosaicing or applying default settings in such a way that they are clipping the highlights in the default rendering.
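(If you want to run this kind of check on your own files, the channel maxima are easy to pull out with Python’s rawpy library. A quick sketch – the filename is a placeholder, and note that rawpy’s white_level is libraw’s idea of the clip point, which may differ slightly from what other tools report:)

    import rawpy

    # Report each raw channel's maximum value against the clip point.
    with rawpy.imread("prague.nef") as raw:          # placeholder filename
        data = raw.raw_image_visible
        colors = raw.raw_colors_visible              # per-pixel channel index, 0-3
        for ch, name in enumerate(raw.color_desc.decode()):  # e.g. "RGBG"
            ch_max = int(data[colors == ch].max())
            clipped = ch_max >= raw.white_level
            print(f"{name}{ch}: max={ch_max:,}  clip={raw.white_level:,}  clipped={clipped}")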

Next, let’s see how all three packages do when we bring down the global exposure of the image. Because the three programs apply local adjustments quite differently, I want to spend the first part of the post focusing on global adjustments. I’m not trying to use the fancy bells and whistles in each package – rather, I’m trying to get a baseline for how the raw engines themselves perform, because if the converter can’t get the basics right, all of the filtering applied after the fact will be less effective. For the next set of images, all I’m doing is taking the exposure slider in each package and moving it to -3EV. Remember: this photo is exposed to the right by about 2EV, so this should identify any areas where the raw converter clipped our channels when it wasn’t supposed to.
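(For reference, exposure moves are just powers of two in linear space, which makes the arithmetic easy to sanity-check:)

    # Exposure compensation in stops is a power-of-two scale in linear space:
    # +2EV multiplies linear values by 4; -3EV divides them by 8. So a file
    # shot +2EV to the right still has ~1 stop of headroom after a -3EV pull.
    for ev in (+2, -3, +2 - 3):
        print(f"{ev:+d}EV -> linear scale x{2.0**ev:g}")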

Lightroom (-3EV)
Luminar (-3EV)
Photo RAW (-3EV)

Again, things look somewhat similar on the surface, but there’s an interesting twist when looking at the histograms:

Lightroom and Photo RAW both have a similar-looking red channel histogram, but Luminar does not. Specifically, Luminar’s has a spike on the right side – an indication that the red channel was clipped and subsequently scaled down. This appears to be associated with the camera profile selection, as you can see in the following image.

What this basically says is that certain camera profiles – including the Luminar default profile – may clip channels that are exposed to the right when they shouldn’t and may not deal properly with those channels when the exposure values are reduced. This is basically the same result as overexposing the shot in camera. The peak on the right is an indication that detail in the red channel highlights has been lost permanently. It can be “fixed” by switching to another camera profile, but this may indicate a deeper problem with Luminar’s raw processing path. (Note: I loaded the same image into Skylum’s other flagship product, Aurora HDR 2018. Aurora does enough processing to the image that it’s hard to say whether the channel is clipping after the tone map. It’s possible that Luminar and Aurora use a different raw engine, or it’s possible that Aurora uses a default profile that didn’t clip. In any case, results from Aurora were inconclusive.)
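A toy example makes the “clipped, then scaled down” failure easy to see. To be clear, this is a simulation of the two possible orderings, not Luminar’s actual pipeline:

    import numpy as np

    # Two pipelines for the same over-range scene data: one keeps the raw
    # headroom and then applies -3EV; the other clips to "white" at import
    # and scales afterward. The second produces the telltale histogram spike.
    scene = np.linspace(0.0, 1.2, 100_000)            # linear values, 20% over white
    kept_headroom = np.clip(scene * 2**-3, 0.0, 1.0)  # -3EV on unclipped data
    clipped_first = np.clip(scene, 0.0, 1.0) * 2**-3  # clip first, then -3EV

    hist_a, _ = np.histogram(kept_headroom, bins=256, range=(0.0, 0.15))
    hist_b, _ = np.histogram(clipped_first, bins=256, range=(0.0, 0.15))
    print("largest bin, headroom kept:", hist_a.max())  # flat-ish distribution
    print("largest bin, clipped first:", hist_b.max())  # huge spike at 1.0 * 2**-3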

To be fair, the effect of this is not extremely obvious in this particular image, but it’s still concerning. It might be especially concerning if, for example, you had a camera that didn’t have great profile support in Luminar. This particular image doesn’t have a ton of highlights, especially for a sunset image. While I haven’t processed a lot of other images in Luminar with this level of scrutiny, I suspect there are other images for which this would matter more. 

Now let’s go in the other direction and see how the three packages do in the shadows. For this, I’m still using only the exposure slider, dialed up to +1.5EV in each program. Here’s what we get:

Lightroom (+1.5EV)
Luminar (+1.5EV)
Photo RAW (+1.5EV)

Now we can start to see some differences. First, while the sky looks bright in all three, Lightroom retains significantly more detail in the brightest parts of the image:

Lightroom (+1.5EV, clipping enabled)
Luminar (+1.5EV, clipping enabled)
Photo RAW (+1.5EV, clipping enabled)

In fact, you have to push the exposure slider in Lightroom to almost 3EV before you get a similar level of clipping in the sky. There are a couple of reasons this might be the case. Luminar’s and Photo RAW’s exposure sliders might simply be more sensitive than Lightroom’s, such that a setting of 1.5EV in Luminar or Photo RAW translates to something closer to 3EV in Lightroom. But having compared luminance values in the darker portions of the images at similar exposure settings, I don’t think that’s the case. There is some difference, but it’s relatively small – probably on the order of 1/3–1/2EV, not 1.5EV. Instead, I think Lightroom’s raw engine is simply more sophisticated in how it processes files and preserves detail in the highlights compared to either Luminar or Photo RAW.
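(The check itself is simple, if you want to run it: sample the same dark, unclipped patch from each program’s export, linearize it, and convert the brightness ratio to stops. A sketch, assuming the patches have already had their gamma removed:)

    import numpy as np

    # EV difference between two converters' renderings of the same patch.
    # patch_a and patch_b must already be linear-light (gamma removed);
    # 1 EV is a factor of two in linear luminance.
    def ev_difference(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
        return float(np.log2(patch_a.mean() / patch_b.mean()))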

In examining the 1:1 crops (screenshots), we can also see that Lightroom gives superior results in shadow detail:

Lightroom, +1.5 EV, 1:1
Luminar, +1.5 EV, 1:1
Photo RAW, +1.5 EV, 1:1

It’s easier still to see this when taking a 3:1 (300%) zoom on the cathedral:

Lightroom, +1.5 EV, 3:1
Luminar, +1.5 EV, 3:1
Photo RAW, +1.5 EV, 3:1

ON1 clearly does the worst here, but it’s also easy to see that Luminar is not as good as Lightroom. 

When processing the images with more “advanced” sliders – even something as simple as contrast – getting an apples-to-apples comparison becomes more difficult. Moreover, processing images like this one generally requires lots of local adjustments, and each of these programs has a very different approach and philosophy for those adjustments. Additionally, I simply wasn’t willing to spend the time in any of these packages to get a gallery-level result for the purposes of this post. I did my best to get the three somewhat similar, but didn’t obsess over every little detail. So here’s where I ended up:

Lightroom
Luminar
Photo RAW

All in all, the results aren’t bad for any of the packages, but in my personal opinion Lightroom has by far the easiest workflow for local adjustments, which allowed me to get my LR result in about 2 minutes, while the others took considerably longer. I’m probably least happy with the Photo RAW output, particularly in the clouds, though I suspect with some time I might be able to get a better output.

Looking at each of these files at 1:1, we see a pretty big difference in color rendering and shadow detail between Lightroom and the other two. Given what we saw in the 1:1 crops above, this probably shouldn’t surprise us, but the effect seems even more pronounced after local adjustments are applied.

Lightroom
Luminar
Photo RAW

One important note about Photo RAW that I’ll try to research more in the coming weeks: I’ve always had a suspicion that they have random “gates” in their workflow that reduce the image (or parts of the image) to 8 bits, even though Photo RAW claims a 16-bit workflow. Obviously, from the images above, there’s plenty of detail in both the shadows and the highlights of this file, and when adjusting only the exposure slider, Photo RAW has no problem reaching the extremes of both ranges – it avoids clipping the highlights and renders the shadow detail without issue. But here’s what happens if you reduce the overall exposure in the general settings, then try to bring it up in localized areas using the Local Adjustments tab:

Washed out colors in Photo RAW when using local adjustments

Yikes. That’s ugly. Most of the color and detail in the shadow region is lost. And to be clear, the only difference between this 1:1 section and the one above is that I reduced the overall exposure in the general adjustments and increased it in the local adjustments. Again, this strongly suggests there’s some 8-bit process in the workflow, probably related to local adjustments or the handoff from global to local. Like I said, I’ll try to think of ways to poke around on this, but for now, beware: something looks rotten in the state of Prague.
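It’s easy to simulate what an 8-bit gate between the global and local stages would do to deep shadows. This is entirely hypothetical – I don’t know what ON1’s pipeline actually does – but the posterization pattern matches what I’m seeing:

    import numpy as np

    # Toy model of a suspected 8-bit "gate" between global and local stages:
    # darken globally, quantize to 8 bits, then push the shadows back up
    # with a local adjustment. The gate itself is purely hypothetical.
    shadows = np.linspace(0.0, 0.05, 2**16)          # a deep-shadow gradient
    darkened = shadows * 2**-2                       # global exposure -2EV
    gated = np.round(darkened * 255) / 255           # the hypothetical 8-bit step
    no_gate = np.clip(darkened * 2**4, 0.0, 1.0)     # local +4EV, full precision
    with_gate = np.clip(gated * 2**4, 0.0, 1.0)      # local +4EV after the gate
    print("distinct shadow levels, no gate:  ", np.unique(no_gate).size)    # tens of thousands
    print("distinct shadow levels, with gate:", np.unique(with_gate).size)  # a handful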

Everyone has different needs, but when I’m choosing a photo package, my sine qua non is overall image quality. I’m willing to pay more for a program that delivers consistently better results – and in my opinion, Lightroom fits that description at the moment. I have serious, quantifiable concerns with the raw processing pipeline for both Luminar (clipping issues) and Photo RAW (possible 8-bit steps in the workflow). I’m sure that Luminar and Photo RAW have some applications where they excel – Photo RAW seems like it’s geared more toward portrait photographers than landscape photographers, for example – but as a landscape photography tool, neither of these packages is on par with Lightroom today.

Link to images used in this post.

Dynamic range and bits of resolution: these are not the same thing

There’s a lot of information online about 12-bit files, 14-bit files, and how one is or isn’t better than the other. Most of these posts feature some sort of subjective discussion showing that under the specific test conditions the author created, 14-bit files either do or don’t offer some advantage – which, not surprisingly, often seems to confirm whatever position the author held before conducting the experiment. People throw around terms like “smooth gradients” and “dynamic range” seemingly without understanding how the underlying factors (sensor, exposure, bit depth) relate to one another.

Here’s the thing: bit depth *does* matter, but it doesn’t always matter in the way people think it does. And it doesn’t always matter in the same way. Sometimes you need those bits. Other times they’re just storing noise. Sometimes they’re not storing anything at all. Anybody telling you that you should definitely, absolutely shoot in 14 bits – or, alternatively, that shooting in 14 bits gives you no real advantage – probably doesn’t understand the full picture. Let’s cook up a quick experiment to illustrate.

First, a refresher: extra bits help us store information. Most people are probably aware that digital images are, at their core, numbers. Vastly oversimplified, image sensors count the number of photons (packets of light) that arrive at each pixel. If no photons arrive, the pixel is black. As more photons arrive, the pixel gets brighter and brighter. If photons keep arriving after the pixel is already fully “white,” those additional photons won’t be counted – the pixel will “clip,” and we’ll lose information in that portion of the image.

When we talk about the number of bits in a raw file, what we’re really talking about is the precision we have between those darkest black values and brightest white values. If we had a 1-bit sensor, we’d have only two options: black and white. If we have a 12-bit sensor, we have 4,096 possible values for each individual pixel. 14-bit sensors have 16,384, and 16-bit sensors 65,536. Each bit you add doubles the amount of precision you have, or the amount of information you can store. But it’s critical to remember that the way we store information is decoupled from what the sensor can actually detect. The deepest black and the brightest white the sensor can detect are whatever they happen to be; the bit depth is simply how many discrete steps we chop that range into.
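To put numbers on that:

    # Code values available at each bit depth. Each extra bit doubles the
    # count and halves the size of one quantization step at full scale.
    for bits in (1, 12, 14, 16):
        levels = 2**bits
        print(f"{bits:2d}-bit: {levels:6,d} levels, step = 1/{levels:,} of full scale")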

So here’s the setup: a couple of days ago I got a new toy – a DJI Mavic 2 Pro with a Hasselblad-branded (probably Sony-produced) 1″ sensor. There’s not a lot of information about this sensor online, and various people have been asking what bit depth its raw files have out of the camera. Fortunately, this is easy enough to figure out. Let’s download Rawshack – our trusty (free) raw analysis tool – and run a file through it:

               File: d:\phototemp\MavicTest\2018\2018-09-20\DJI_0167.DNG
             Camera: Hasselblad L1D-20c
    Exposure/Params: ISO 100 f/8 1/800s 10mm
   Image dimensions: 5472x3648 (pixel count = 19,961,856)
Analyzed image rect: 0000x0000 to 5472x3648 (pixel count = 19,961,856)
Clipping levels src: Channel Maximums
    Clipping levels: Red=61,440; Green=61,440; Blue=61,440; Green_2=61,440
        Black point: 0

Interesting. The DJI is giving us 16-bit raw files. That’s two whole bits more than my D850 or A7Rmk2! Four times the information and WAY MORE DYNAMIC RANGE! SCORE!

Not so fast. 

Just because our data is stored in a 16-bit-per-pixel file doesn’t mean we’ve actually got 16 bits of information. Data and information are different things, after all. So let’s dig in a little more deeply. Let’s take three cameras with three different sensor sizes, producing raw files at three different bit depths, and compare their outputs on a common scene:

  • The DJI Mavic 2 Pro – a 1″ (13.2×8.8mm) sensor producing 16-bit raw files
  • The Panasonic GH4 – a Micro Four-Thirds (17.3×13mm) sensor producing 12-bit raw files
  • The Nikon D850 – a full frame (36x24mm) sensor producing 14-bit raw files.

We don’t really have to run this test to know which one is going to win: it’s going to be the D850, and my guess is that it’s not even going to be close. For the other two, though, it’s harder to know which will perform better. The Panasonic has a larger sensor and a lower pixel count, which should give it some advantage. But it’s also a 5-year-old design, and we know it’s a 12-bit sensor. A decent 12-bit sensor, it must be said, but we’re not expecting D850 or Sony A7Rmk3 levels of performance. The DJI may have a smaller sensor, but it’s likely a more modern design, and we’re at least getting 16-bit values from the camera, even if the underlying information may be something less than that. Let’s dive deeper.

To do this analysis in full would take more time than I have on my hands, so I’m going to do it quickly and follow up later if necessary. I took all three cameras to the sunroom in my house and shot a few quick images of approximately the same scene at each camera’s base ISO (100 for the DJI, 200 for the Panasonic, and 64 for the D850). It turns out that it’s hard to position the Mavic precisely where I wanted it indoors, so the composition is a bit off, but it should be close enough. I bracketed shots on all cameras and tried to pick the ones with the most similar exposure. I then selected, for each camera, the shot with the brightest exposure that didn’t really clip the highlights (though the Mavic did have slightly clipped highlights – more on that below).

From there, I’m running all the files through Rawshack with the --blacksubtraction option. What that does is basically take the lowest value for each channel and set that to zero, then reference the other values to that. You want to do that because there’s always going to be a little bit of bias in the sensor (it will never truly output zero), and this helps you see the “true” distribution of values.
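(If you’d rather not take Rawshack’s word for it, the per-stop counts are easy to approximate with rawpy and numpy. This is a sketch in the spirit of Rawshack’s output, not its exact implementation:)

    import numpy as np
    import rawpy

    # Per-stop pixel counts after black subtraction, roughly what Rawshack
    # reports: stop 0 holds values 0-1, stop 1 holds 2-3, stop 2 holds 4-7,
    # and so on. The filename is a placeholder.
    with rawpy.imread("DJI_0167.DNG") as raw:
        data = raw.raw_image_visible.astype(np.int64)
        colors = raw.raw_colors_visible
        for ch, name in enumerate(raw.color_desc.decode()):
            vals = data[colors == ch]
            vals -= vals.min()                      # crude black subtraction
            stops = np.zeros_like(vals)
            nz = vals > 1
            stops[nz] = np.floor(np.log2(vals[nz])).astype(np.int64)
            print(name, ch, np.bincount(stops, minlength=16)[:16])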

Here’s what we get for each camera:

Nikon D850:

Pixel counts for each raw stop:
    Stop #/Range         Red        Green         Blue      Green_2
 ---------------  ----------   ----------   ----------   ----------
 0 [00000-00001]     140,756       13,885      138,964       13,810
 1 [00002-00003]     126,935       73,206      558,197       73,104
 2 [00004-00007]     544,177      353,258    1,240,469      352,003
 3 [00008-00015]   1,741,924    1,274,645    1,967,536    1,266,982
 4 [00016-00031]   2,752,687    2,104,326    1,793,851    2,102,555
 5 [00032-00063]   3,071,078    2,407,673    2,117,626    2,412,529
 6 [00064-00127]   1,014,943    2,224,467    1,270,294    2,227,906
 7 [00128-00255]     481,901    1,021,033      785,625    1,021,787
 8 [00256-00511]     607,242      560,244      580,620      560,625
 9 [00512-01023]     632,401      536,646      699,715      536,833
10 [01024-02047]     272,303      613,343      251,394      613,667
11 [02048-04095]      49,551      236,836       30,016      237,642
12 [04096-08191]       1,520       16,755        2,960       16,902
13 [08192-16383]          22        1,123          173        1,095
14 [16384-32767]           0            0            0            0
15 [32768-65535]           0            0            0            0

Panasonic GH4:

Pixel counts for each raw stop:
    Stop #/Range         Red        Green         Blue      Green_2
 ---------------  ----------   ----------   ----------   ----------
 0 [00000-00001]     287,124       59,435      664,336       61,725
 1 [00002-00003]     381,373      139,899      560,799      143,261
 2 [00004-00007]     747,955      451,142      565,381      452,576
 3 [00008-00015]   1,111,057      821,821      645,533      820,871
 4 [00016-00031]     721,705      812,997      606,981      809,753
 5 [00032-00063]     214,788      823,544      406,585      821,248
 6 [00064-00127]     160,735      333,134      189,947      332,785
 7 [00128-00255]     206,345      169,835      207,674      170,031
 8 [00256-00511]     131,895      181,100      128,517      181,018
 9 [00512-01023]      42,474      154,883       33,249      154,631
10 [01024-02047]       7,845       57,408        3,857       57,236
11 [02048-04095]         336        8,434          773        8,497
12 [04096-08191]           0            0            0            0
13 [08192-16383]           0            0            0            0
14 [16384-32767]           0            0            0            0
15 [32768-65535]           0            0            0            0

So the first two cameras give us basically what we’d expect, though it is impressive to see how much more detail the Nikon captures in the midtones. The Mavic, however, gives us a rather different picture:

DJI Mavic 2 Pro:

Pixel counts for each raw stop:
    Stop #/Range         Red        Green         Blue      Green_2
 ---------------  ----------   ----------   ----------   ----------
 0 [00000-00001]      90,499       13,282       20,690       13,193
 1 [00002-00003]         206           60       95,658           59
 2 [00004-00007]           0            0            0            0
 3 [00008-00015]           0            0            0            0
 4 [00016-00031]     148,885       36,053      226,561       35,664
 5 [00032-00063]     480,523      161,326      661,729      159,561
 6 [00064-00127]   1,015,016      519,604    1,012,456      518,136
 7 [00128-00255]   1,367,088      973,210      716,503      964,446
 8 [00256-00511]     940,418    1,097,044      881,097    1,105,995
 9 [00512-01023]     243,864      874,212      521,857      872,455
10 [01024-02047]     202,575      518,261      205,307      522,762
11 [02048-04095]     166,856      187,300      253,954      187,689
12 [04096-08191]     145,652      216,935      186,091      216,392
13 [08192-16383]     105,548      138,800      119,127      139,628
14 [16384-32767]      60,372      136,129       65,635      135,767
15 [32768-65535]      22,962      118,248       23,799      118,717

Ok, so what’s happening here?

The first thing to notice is that on the DJI, the black level isn’t *really* the black level. In other words, the smallest pixel values in the file sit 4–5 bits below where the “real” black point is. To be clear, I’ve now looked at several DJI raw files of high-dynamic-range scenes, and this pattern holds: in every case, there are some pixels in stops 0 and 1, none in stops 2 and 3, and then the rest of the file resumes in stops 4–15. This suggests that even though we have a 16-bit file (with real, 16-bit values), we don’t have a 16-bit signal path. Stops 2 and 3 are particularly telling here: we could remove them entirely and have zero loss of information.
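One more way to check this is to look at how much the low-order bits of the file actually vary. If the signal path is really ~12 bits stuffed into a 16-bit container, the bottom bits should be close to constant instead of behaving like noise:

    import numpy as np
    import rawpy

    # Occupancy of the low-order bits. Genuinely noisy LSBs should be set
    # about 50% of the time; "dead" bits sit near 0% or 100%. The filename
    # is a placeholder.
    with rawpy.imread("DJI_0167.DNG") as raw:
        vals = raw.raw_image_visible.ravel().astype(np.int64)
        for bit in range(6):
            ones = int(((vals >> bit) & 1).sum())
            print(f"bit {bit}: set in {ones / vals.size:.1%} of pixels")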

So how does this translate into actual images? Keep in mind that these shots were taken quickly – not synced up in either exposure or exact positioning – and the processing was equally quick:

D850 (ISO 64, f/4, 1/125 – pushed +3.3EV in post):

Panasonic GH4 (ISO 200, f/2.8, 1/400 – pushed +4.2EV in post):

DJI Mavic 2 Pro (ISO 100, f/2.8, 1/120 – pushed +2.4EV in post):

We can see a couple of things here. First, there’s really no comparison in terms of shadow detail – the D850 absolutely cleans up. Sure, it’s a bit out of focus, but when you start looking at the gradients on the walls on either side of the frame, there’s really no comparison – especially at 1:1. Part of this comes down to better dynamic range and part of it to a 45MP sensor, but the result is what matters: as we expected, the D850 wins, and it’s not close.

A little more surprising to me is that the Panasonic is pretty clearly better than the DJI, in spite of being a much older design. I’m not sure this will be universally true, and I’ll do more experimentation over the coming weeks, but the DJI definitely has significantly more noise in the shadows, even though it was pushed significantly less in post (+2.4EV vs +4.2EV) *and* has a base ISO that’s a stop better. In other words, the deck is actually stacked against the Panasonic here – the original DJI exposure was about 1EV brighter than the Panasonic’s – and yet the DJI’s performance in the shadows is visually worse. The GH4 doesn’t fly (unless it’s mounted on a *much* more expensive drone), but it does appear to have the better sensor, at least in my initial tests.

So what have we learned? 

For starters, just because you’ve got a 16-bit raw file doesn’t mean you’ve got 16 bits of information. The DJI absolutely has an “advantage” if you’re just looking at the bit depth of the raw file, but that doesn’t mean the underlying hardware or signal path is actually giving you 16 bits. DJI does offer more precision on paper, but that precision isn’t really being used – which becomes obvious when we subtract out the black level and see that the bottom four bits are essentially thrown away. We’re storing 16 bits, but only about 12 of them are being used in a meaningful way.

We can also see that sensor size has a pretty major impact on dynamic range and shadow detail – the GH4, with its 6-year-old sensor, is able to beat the DJI pretty easily. The D850, with its modern, full-frame, BSI sensor, gives an even more impressive performance.

The bottom line is this: the number of bits you’re getting out of the camera doesn’t necessarily tell you anything about how well it’s going to perform in difficult lighting. Having more bits means you can theoretically store additional information, but that’s only possible if the additional information is there to store in the first place – and that’s a question you won’t answer just by reading a spec sheet.