Luminar 3 – First Impressions

Skylum’s Luminar 3 was released today, and it’s supposed to be a serious contender to Lightroom. I’d like to give you some first impressions, but I opened the program eight hours ago and this is how far I’ve gotten:

Make it WOW! indeed…

True, I’ve got a somewhat large photo library, but this isn’t exactly an auspicious start.

Aerial sunset photography

Chasing sunsets is a hobby of mine, especially in the fall. This year, I brought along my new toy – a DJI Mavic 2 Pro – for the ride as I drove around east Tennessee and western North Carolina in hopes of capturing both fall color and a decent sunset in the same place on the same night. In doing so, I made plenty of mistakes (as usual), and thought I’d summarize some of what I learned and messed up so you hopefully don’t make the same mistakes I did.

  1. Scout, scout, scout. This is always important for landscape photography, but it’s especially important for drone photography. You need to know the rules and regs about where you can and can’t fly or launch from, and you’ll want a good idea of what you might be able to see from various vantage points. Google Maps (and specifically the 3D terrain feature) is a huge help here. Because you can fly, you don’t have to worry about whether the view from a particular overlook is so overgrown with trees that a photographer with a tripod couldn’t get a decent shot – you just fly out 50 meters and you’ve got the perfect shot. The 3D Google Maps view does a great job of letting you see what’s possible.
  2. Check the weather. Nobody wants to drive 2 hours each way only to end up on-site and have the sky be solid overcast. Sometimes you can’t fully predict how the clouds are going to look (especially around here), but weather reports should be able to give you at least some idea. Check them frequently.
  3. Make sure your gear is in order. Multiple times I showed up on site with the phone I normally use for flying completely dead. Another time I ended up with a Mavic battery only charged to 60% because it had been knocked off the charger the night before. These are totally my fault. Don’t do these things. Check and double check to make sure you’ve got everything, and that everything is fully charged.
  4. Show up early. Again, this is important in any kind of landscape photography, but especially so for aerial photography. I try (where possible) to show up 1-1.5 hours before sunset, which gives me plenty of time to adjust to conditions on the fly. 
  5. Take a test flight. No matter how well you’ve done your scouting, there’s a good chance things look different when you get there. I usually burn my first battery flying around the area, testing different compositions, looking for the shot I want. While in the air, I also look for other launch spots in the area that might be closer to the final photo location I choose, to minimize launch / recover time (see below).
  6. Have a backup plan. Even if you scouted well, there’s a good chance conditions won’t be perfect when you arrive. Maybe the clouds are further to the east or west than predicted. Before setting out, have some alternate spots you can reach (in time) if it looks like conditions will be better somewhere else (see also: show up early). 
  7. Bring a computer to do some initial processing. This is one I didn’t do this season, but wish I had. If you shoot panoramas a lot (which I do) it can be hard to know how the overall composition is going to turn out by looking at the individual images. Is that road going to be framed where I want it? Should I move the drone a few hundred meters left or right? Would a different altitude help the composition? These are questions that can be hard to answer unless you have access to a computer and some quick photo editing. It’s worth having that ability, even if you don’t use it every time.
  8. Use the map on the controller. Once you’ve found your “final” spot for the night, erase the tracking on your map and note your altitude so you know where to go back to after battery swaps. This way, you can keep a fairly consistent look to your images, even though you’ll have to change batteries at least once.
  9. Minimize launch / recover time. This seems obvious, but is easy to forget in the moment. The more time you spend flying to / from your final photo spot during launch / recover, the less time you’re actually taking pictures. If you can go straight up in the air, or just 25-30m out beyond your launch site, you can spend almost the entire 25 minutes of battery life focused on photography, not on battery management / recovery. Sure, you can fly half a mile to get to the perfect spot if you need to, but staying as close as possible helps tremendously with battery management (more on which below).
  10. Know your camera. Because the DJI’s sensor isn’t the most amazing in terms of dynamic range, exposing to the right is critical (even if you’re going to use HDR / exposure bracketing). Unfortunately, DJI seems quite conservative in how the app displays highlight clipping, which means you’re less likely to overexpose a shot, but more likely to underexpose one. This may be related to the 16-bit files the Mavic produces (even though I’m fairly sure it has only a 12-bit signal path), but highlight recovery is surprisingly good – good enough that I’m starting to push exposures 0.7-1.3 EV over what I normally would, whether for the center point of an AEB bracket or for a single automatic panorama. The histogram (instead of the zebras) can be helpful here, though not definitive. More research needed :).
  11. Plan your battery swaps ahead of time, and keep a close eye on the clock. The worst thing you can do is be on the ground when the clouds light up overhead. Actually, worse than that would be to run out of battery and start recovery on battery 2 just as the clouds light up overhead. Don’t do that. Most of the action is going to be concentrated in the 10 minutes either side of the published sunset time, so knowing what time the sun sets is crucial. From there, you can work backward to when you want to launch and recover. If, for example, sunset is at 1850, I would plan to launch my first flight sometime in the 1730-1745 range to fly the area and find my final photography spot for the evening. This would give me plenty of time to fly, shoot some test images, recover, analyze, and drive to another location if necessary. I don’t feel the need to use up my entire battery on this flight – in fact, if I don’t have to, I’d prefer to save it, just in case. But taking the test flight is usually a good idea. My second flight would probably launch at around 1815-1820. My goal for this flight would be to get on station, make any final adjustments, and get some initial shots of the sunset. This would depend somewhat on how the clouds are laid out – if it looked like a sunset that would have more action after the sun goes down, and if I knew my launch / recover time was very short (e.g. 1 minute), I might delay a bit and plan on a quick battery swap near sunset to give me more time between sunset and dusk. Usually, though, I’m timing this flight to be done about 10 minutes before sunset. I try to time my final flight of the evening to launch about 7-10 minutes before published sunset. This gives me a good 10-15 minutes after sunset in case the clouds light up, and leaves me landing basically in the dark.
  12. Don’t leave too early. The real show often starts after the sun goes down. In a lot of cases, sunsets get worse before they get better. Sometimes they don’t get better, but in many cases the richest and most vibrant colors happen after the sun goes down (and after the color has diminished somewhat), not before. With a little experience you’ll learn when this is likely to be the case, but always stay on station several minutes after the sun “officially” goes down – you’re likely to be rewarded.
  13. Shooting panoramas: I typically hedge my bets by shooting a couple of different ways in camera. It turns out you can get some pretty good results with a single-layer, properly exposed panorama on the Mavic 2 Pro. The key there is “properly exposed” – and by properly exposed, I mean exposed as far to the right as you can get it without blowing out the highlights. The easiest way to get a decently exposed shot is to use AEB with a slightly overexposed bias, as I mentioned above. My typical procedure is to take several AEB shots, interspersed with some shots taken using DJI’s automatic panorama mode (both horizontal and, more frequently, the 180-degree mode). For my own shots, I typically use gimbal angles of +17 and -15, starting with the upper left shot, then zig-zagging across the frame (upper left, lower left, lower center, upper center, upper right, lower right), resulting in 2 rows and 3 columns of images. Using a 5-shot AEB, this gives you 30 images in total (see the sketch after this list). DJI’s auto panorama function can be used to shoot either a 9-image (3×3) or a 21-image (3×7) 180-degree panorama. DJI uses +18, -2, and -22 for its gimbal angles, so you can get approximately the same coverage with two rows (there are advantages and disadvantages to doing this). In the end, there’s no “right” way here – both work, and both have different strengths and weaknesses when it comes to processing.
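
For the curious, here’s what that manual 2×3 sequence looks like as code. This is a minimal sketch of my own procedure, not anything DJI ships: the +17 / -15 gimbal angles, the zig-zag order, and the 5-shot bracket come from the list above, while the yaw angles are illustrative assumptions you’d adjust for your lens’s field of view and desired overlap.

AEB_SHOTS = 5                      # 5-shot exposure bracket per position
GIMBAL_PITCHES = [17, -15]         # upper row, lower row (degrees)
YAWS = [-60, 0, 60]                # left, center, right columns (assumed)

def pano_waypoints():
    """Yield (yaw, pitch) pairs in zig-zag order: upper left, lower left,
    lower center, upper center, upper right, lower right."""
    order = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
    for col, row in order:
        yield YAWS[col], GIMBAL_PITCHES[row]

positions = list(pano_waypoints())
print(positions)                   # 6 gimbal positions (2 rows x 3 columns)
print(len(positions) * AEB_SHOTS)  # 30 frames in total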

Fall Season 2018

Fall is always an exciting season in East Tennessee. We had quite a bit of late and muted color this year, and I’m not sure that it’s completely over, but given that a storm is blowing through as we speak, the majority of the season is probably done. 

The reason for my lack of posting has been, honestly, that I’ve been out and about trying to shoot most nights. I’ve not always been successful, but I’ve come away with a few images I think will be keepers.

The biggest change for me this year has been the addition of the Mavic 2 Pro to my gear bag. This isn’t a panacea, as you’re not technically allowed to fly it in Great Smoky Mountains National Park (which is why I haven’t been there except to ride bikes) or off the Blue Ridge Parkway (which is why I only have one “normal” set of pictures from there). That said, the M2P is game-changing for landscape photography. It allows you (in the areas where you can fly) to move to perspectives that are certainly not possible from the road, and in many cases aren’t possible even with a lengthy hike. Sure, the camera has shortcomings when compared with my D850, but its ability to move 30-40 feet beyond the overlook is, in itself, an amazing thing that completely transforms the kind of images you can capture. 

Along with that, this is the first season with my D850. Last year I walked away with a few strong images, but all were with my D750. I didn’t get the D850 until November, when the color was long gone in this area. As a result, at season’s start I had only 3 images in my main photo gallery taken on the D850 – exactly the same number taken with the D90 that I owned for a hot minute in 2009 as a backup camera (by far the lowest shutter count of any camera I’ve owned and seriously used), and fewer than I took with my OM-D E-M5 in the two years I owned it. All that to say: I was excited about the possibilities the D850 offered.

October, however, turned into a rather interesting animal.

The first important variable was our trip to Park City, UT, at the turn of the month for our friends’ wedding. The trip offered some spectacular scenery, but for weight reasons the D850 stayed home in favor of the M2P and A7RII. That’s not to say there weren’t some great shots, though.

DJI Mavic 2 Pro, Wasatch National Forest

On Sunday, September 30, we drove out to the Wasatch National Forest for some spectacular scenery, including an incredible display of turning aspen trees. The DJI performed great here, but not to be outdone, the A7RII and Voigtlander 15 put on a great show of their own (currently on display in our downstairs bedroom):

Aspens in Wasatch National Forest. Sony A7Rmk2, Voigtlander 15mm.

I’ve said several times that the A7R2 platform is worth owning just for the ability to use the Voigtlander 15. I stand by that statement.

Back in Tennessee, I still faced a pretty significant learning curve with the Mavic 2 Pro. One of my first attempts to capture a scene came just after we returned from Utah, and had almost nothing to do with nature’s beauty:

Traffic crosses under a bridge on Interstate 40 in Knoxville, TN.

After dinner with some friends, I tried a couple of shots I’d been wanting to capture on I-40, which is only a mile or so from our house. In the process – and completely unplanned – I ended up with the above, which will probably become one of my signature shots from the drone going forward.

As mentioned above, the legalities of drone flying present a challenge in my area. While there are many areas that are totally kosher, others – particularly Great Smoky Mountains National Park and the Blue Ridge Parkway – are off limits. As a result, I spent more of this season than usual on the Cherohala Skyway – an area that is drone friendly – which turned out to be a fortunate coincidence.

The main highway in our area that gets press is US-129 between Maryville, TN and Robbinsville, NC – often called the “Tail of the Dragon” because of its many curves. The better road – for both driving and scenery – is the Cherohala Skyway, which runs from Robbinsville to Tellico Plains, TN. While the Skyway doesn’t have the best overlooks – at least from the perspective of traditional photography – it does have some incredible vistas if you happen to have a drone. And because it’s not a National Park, you’re perfectly clear to launch there.

Sunset on the Cherohala Skyway
Color changes on the Cherohala Skyway

While there are plenty of spots on the Cherohala Skyway, exploring the Blue Ridge Parkway is an essential part of October in this area. Because launching drones is verboten on the parkway itself, some creativity is required. To that end, there are several perfectly legal launch sites on NC-215, both north and south of the parkway proper. With some scouting, I was able to identify one that turned out to have a great vantage point:

NC-215 cuts through the Western North Carolina wilderness as the sun sets over the Blue Ridge Mountains.

Clearly it would be better if just a bit more color were showing, but all in all, I’ll take it.

Not to be outdone, the D850 had a reasonable October too – not in terms of quantity, but rather quality. For a while, I’ve been toying with what I call the “Sunset Project” – a view of the same sunset through time on a single evening. Perhaps, at some point, I’ll get around to making good on the promise. With that in mind, my wife and I headed out to the Blue Ridge Parkway and captured a spectacular sunset that continued in its glorious hues for almost an hour on October 20:

Let me be totally transparent: I made a lot of mistakes (including some rookie™ mistakes) that night, but the D850 managed to salvage my incompetence. All in all it was a great sunset, made all the more great because I shared it with someone special.

The season may not be over, but it likely is. I’ll update this post in November if it turns out something interesting came up… Otherwise I’ll see you over the winter with (hopefully) some other interesting pictures.

How to learn editing photos

In my Facebook absence, I’ve started frequenting a couple of online forums, including one for people who own / fly the DJI Mavic series of drones. A sincere and interesting question was posted there a couple of days ago asking how to go about learning the process of photo editing. I spent a while typing out a response, some of which includes things I’ve thought about posting here. Here are my thoughts:

  1. Composition and good technique in camera are more important than editing, usually. I think this is even more true with a slightly less forgiving camera like the M2P (compared to modern SLR/ILCs). 
  2. Getting something perfect SOOC is nice, but it also probably means you’re leaving detail on the table, especially in the shadows. It’s always nice when a shot needs no more processing than your typical import preset, but if you want the absolute best results, some processing is probably necessary. (See also: ETTR).
  3. It’s not clear whether you shot RAW, since you posted a JPG here, but always shoot in RAW if you can. JPEGs are great for presentation purposes, but they also discard a significant amount of tonal information (even for a 12-bit signal path, the JPEG contains less than 10% of the tonal information that a RAW file does – for a 14-bit path it’s closer to 2%; the sketch after this list shows the rough math).
  4. At the end of the day, digital images are numbers and image processing is math. 
  5. Don’t skimp on software – it’s as important as your hardware. It blows my mind when photographers spend thousands (or tens of thousands) of dollars on cameras and lenses, then grouse about paying $10/month to Adobe, as if Photoshop and Lightroom don’t provide any value to the whole process. Here’s the truth: with a few exceptions, the best photo processing software costs money. My personal opinion would be to avoid doing things halfway – in other words, to avoid spending money on programs like (but not limited to) the products from ON1 and Skylum (i.e. Luminar/Aurora), which are cheaper, yes, but are also objectively not as good when it comes to the actual output they produce. If cost is really an issue, my suggestion would be to go with FLOSS packages (Darktable, RawTherapee, GIMP) that are capable of producing decent output and cost literally nothing. 
  6. Watch videos (YouTube) about some of the different tools in particular packages. This is where using a common package (Lightroom) really helps, because there are going to be a ton of tutorials on how to do pretty much anything you want. Some people on YouTube are great and provide a lot of insight, some are paid shills trying to sell you their presets and courses. All of them can teach you something.
  7. Along with 6: avoid presets and filters (though not profiles, if you’re using Lightroom, which are a different thing). The best way to get a distinctive look for your photos is not to use someone else’s presets, but to really learn how all the various tools work in a particular package. Start to play around with various sliders and see how they affect the image. See which ones have a bigger impact, and which ones are more subtle. 
  8. There are a lot of fads in image processing. Some of them are useful. Others less so. Almost every major package has added support for LUTs in the last year and a half, which certainly has uses – especially if you’re trying to get your still images to look like some video footage you shot. LUTs are really powerful for certain applications, but they’re not all that great as a general photo editing tool. When a new feature comes out, learn what it’s doing, learn how to use it, then evaluate whether it’s actually an improvement over your current process.
  9. Local adjustments are critical. This is a more recent thing for me, but I am increasingly likely to not touch the “main” sliders for my landscape images at all, instead using local adjustments in Lightroom to target specific areas of the image. Overall this is kind of a style / preference thing – I could also edit one area of the image (e.g. the sky) to where I wanted it using the global settings, then go back and edit the ground with local adjustments based on those settings. 
    • There are a lot of cool things you can do with advanced masking tools like luminance and color masking. Learn how to use them. 
  10. Don’t be afraid to experiment. Two great features in Lightroom: 1) the history, which shows you exactly what you’ve done to the image and lets you return there at any point, as well as save snapshots, and 2) the virtual copy feature, which lets you have a variety of different looks for the same image. Processing is free, and you’ll get better at it the more that you do it.
  11. Tying in with 7: try to understand what the sliders are actually doing, rather than just moving them around. So, for example, it’s important to know that vibrance and saturation will both affect the intensity of colors in your image, but they do so in different ways (vibrance is basically non-linear and affects the most muted colors first, where saturation is linear and affects everything evenly – see the sketch after this list). But it’s also important to know that you can change the saturation for individual channels of your images (if, say, you want to bring down the saturation of the yellows). Or you could add a bit of yellow in just the highlights by using the split toning tool. In other words, knowing what tools you have available to you and what they do is tremendously helpful in assessing what you can do creatively with your images.
  12. Reprocess your images frequently. New tools come out that have the potential to make work you shot years ago look better. Moreover, as you learn new techniques, you’ll be better at processing than you were when you first looked at the images. I typically go back once a year or so and reprocess some of the shots in my gallery that I took years before, applying new tools and techniques to those old images. Sometimes I like the result more, sometimes I don’t. But I always learn something about how to process going forward that feeds back into my workflow.
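
To make points 3 and 11 concrete, here’s a toy sketch. This is not any vendor’s actual algorithm – just a minimal model of why saturation (a linear scale) and vibrance (a boost weighted toward muted colors) feel different, with the RAW-vs-JPEG level math from point 3 in the comments.

import numpy as np

# Point 3: an 8-bit JPEG stores 256 levels per channel.
# 256/4096 (12-bit raw) ~ 6%; 256/16384 (14-bit raw) ~ 1.6%.

def apply_saturation(s, amount):
    """Linear: every pixel's saturation is scaled by the same factor."""
    return np.clip(s * (1.0 + amount), 0.0, 1.0)

def apply_vibrance(s, amount):
    """Non-linear: the relative boost is largest for muted (low s) colors."""
    return np.clip(s + amount * 2.0 * (1.0 - s) * s, 0.0, 1.0)

s = np.linspace(0.0, 1.0, 6)       # saturation values from gray to fully saturated
print(apply_saturation(s, 0.3))    # everything scaled evenly
print(apply_vibrance(s, 0.3))      # muted colors get the bigger relative push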

Swapping EXIF information

A lot of my images are panoramic stitches, and increasingly I’ve been trying to use Lightroom where possible to keep things in a full-DNG workflow as long as possible. I’ve found that Lightroom is great when it works, but not so great when it doesn’t.

One of the things that Lightroom simply refuses to do (which isn’t a problem in dedicated stitching packages) is combine images taken at “different” focal lengths. I say “different” here because occasionally, if you’re shooting with a zoom lens, you can have the zoom set in just such a way that it doesn’t consistently report the same focal length to the camera in the EXIF data.

This happened in a nasty way last Saturday. I lucked out and had a great sunset on the Blue Ridge Parkway, but came home to find that about 2/5 of my images were “shot” at 48mm, and about 3/5 of them “shot” at 50mm.

If this happens to you, do not despair – Phil Harvey has written a program called exiftool that will save the day. The command you need is:

exiftool -n -EXIF:FocalLength=50 -EXIF:FocalLengthIn35mmFormat=50 <<Folder with images>>

Obviously, replace the 50 with whatever focal length you shot at. Exiftool will rewrite the files with the modified EXIF information and keep the originals as backups (renamed with an _original suffix). You’ll have to either re-import the files into Lightroom, or you can force Lightroom to re-read the metadata for the affected images by going to “Metadata->Read Metadata From Files” (note: this will destroy any keywords you’ve given to the images).
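
And if you later decide the change was a mistake, exiftool can roll it back from those backups – worth knowing before you delete anything:

exiftool -restore_original <<Folder with images>>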

From there, you’re good to move on and process the file:

Thankfully this one wasn’t ruined, though it took many processing shenanigans to salvage it.

Processing wide dynamic range images in Lightroom, Luminar, and ON1 Photo RAW: Adobe is still the king

It’s that time of the year again – Skylum and ON1 are spinning up their marketing machines to convince you that this year, really, their software is going to take things to the next level. Seriously. Trust us. A revolution is on the way. Again.

I’ve been pretty critical of Skylum and ON1 in the past for their business model, failure to get promised features into their software, their tendency to miss release dates, and their focus on gimmicks over core functionality. A lot of that could be forgiven, of course, if the result coming out of the product was superior to the alternatives. Unfortunately, in my experience, I haven’t found that to be the case.

Over the coming weeks, I want to put a bit more flesh on that argument by looking at a variety of images I’ve shot in each of the three packages. I need to issue my standard caveats here: 1) I’m a lot more familiar with Lightroom than I am with either Luminar or Photo RAW, so on some level I should get better results with Lightroom; 2) I’m not a paid spokesman for any of these companies – I bought my own copies of this software, just like any member of the public; and 3) your results might be different than mine. I’m not suggesting that any of these programs is incapable of producing decent images, but this comparison may highlight some areas where each raw engine is likely to fail. 

OK. So here’s the scene for today. It’s a sunset image of Prague Castle with Charles Bridge in the foreground, shot on my D750 in 2016. This is a single raw file, shot at ISO400 and properly exposed to the right (+2EV) to capture as much shadow detail as possible.  No analog filters were used. I’m posting a relatively quick edit of the file – my “main” version of this scene is a stitched panorama, rather than the single frame image.

Full scene, quickly processed in Lightroom

As with many sunsets, this is a scene that’s got quite a bit of dynamic range. But it’s also a scene that the 14-bit converters in most modern cameras – including the D750 – shouldn’t have a problem capturing.

So that’s the final product. Let’s start by loading the image into each of the three packages to see what we get.

First, two notes for procedural purposes:

  • Where possible, I’ve exported the images directly from the programs and uploaded them. There are a few places where I needed to take screenshots (for example, to show highlight clipping, or to show 1:1 crops), and I’ll note that in the caption.
  • WordPress does some silly stuff with resizing, so you may end up with a blurry-ish image. I’m saving the individual images, and I’ll put a link to all the jpgs at the bottom of the post.
Lightroom Classic CC 7.5
Luminar 1.3.1
ON1 Photo RAW 2018.5

This all looks pretty reasonable, and similar between the three packages, at least on a surface level. When we dig in, though, we find something disturbing in Luminar (screenshot):

Luminar with highlight clipping on

When we turn highlight clipping on, we see that Luminar’s default import settings are clipping large portions of the sky. Photo RAW has a tiny amount clipped, but not really enough to bother with or complain about. Of the three packages, Lightroom is the only one whose default demosaic leaves plenty of headroom and clearly shows that the file is not clipped. Analysis of the actual raw file itself shows that none of the values are clipped: D750 raw files have four channels (RGB+G), and the maximum values for those four channels in this image are 12,123, 13,777, 6,918, and 13,854, against a clip value of 15,520. In other words, this isn’t a problem with the underlying file. Adobe’s got it right in this case. Luminar (and to a lesser extent Photo RAW) are either demosaicing or applying default settings in such a way that they clip the highlights in the default rendering.
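
If you want to run this check on your own files, here’s a minimal sketch of the idea using Python’s rawpy package (a LibRaw wrapper); the file name is illustrative:

import rawpy

# Compare each channel's maximum raw value to the clip (white) level.
with rawpy.imread("DSC_1234.NEF") as raw:
    data = raw.raw_image_visible       # Bayer mosaic values
    colors = raw.raw_colors_visible    # 0=R, 1=G, 2=B, 3=G2 per photosite
    for ch, name in enumerate(["Red", "Green", "Blue", "Green_2"]):
        print(f"{name}: max = {int(data[colors == ch].max()):,}")
    print(f"Clip value: {raw.white_level:,}")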

Next, let’s see how all three packages do when we bring down the global exposure of the image. Because the three programs apply local adjustments quite differently, I want to spend the first part of the post focusing on global adjustments. I’m not trying to use all of the fancy bells and whistles in each package – rather, I’m trying to get a baseline comparison for how the raw engines themselves perform before applying all the bells and whistles. If the converter can’t get the basics right, all of the filtering applied after the fact will be less effective. For the next set of images, all I’m doing is taking the exposure slider for each package and moving it to -3EV. Remember: this photo is exposed to the right by about 2EV, so this should identify any areas where the raw converter clipped our channels when it wasn’t supposed to.

Lightroom (-3EV)
Luminar (-3EV)
Photo RAW (-3EV)

Again, things look somewhat similar on the surface, but there’s an interesting twist when looking at the histograms:

Lightroom and Photo RAW both have a similar-looking red channel histogram, but Luminar does not. Specifically, it has a spike on the right side – an indication that the red channel was clipped and subsequently scaled down. This appears to be something associated with the camera profile selection, as you can see in the following image.

What this basically says is that certain camera profiles – including the Luminar default profile – may clip channels that are exposed to the right when they shouldn’t and may not deal properly with those channels when the exposure values are reduced. This is basically the same result as overexposing the shot in camera. The peak on the right is an indication that detail in the red channel highlights has been lost permanently. It can be “fixed” by switching to another camera profile, but this may indicate a deeper problem with Luminar’s raw processing path. (Note: I loaded the same image into Skylum’s other flagship product, Aurora HDR 2018. Aurora does enough processing to the image that it’s hard to say whether the channel is clipping after the tone map. It’s possible that Luminar and Aurora use a different raw engine, or it’s possible that Aurora uses a default profile that didn’t clip. In any case, results from Aurora were inconclusive.)

To be fair, the effect of this is not extremely obvious in this particular image, but it’s still concerning. It might be especially concerning if, for example, you had a camera that didn’t have great profile support in Luminar. This particular image doesn’t have a ton of highlights, especially for a sunset image. While I haven’t processed a lot of other images in Luminar with this level of scrutiny, I suspect there are other images for which this would matter more. 

Now let’s go the other direction and try to see how they do in the shadows. For this, I’m still using only the exposure slider, and I’ve dialed things up to 1.5EV in each program. Here’s what we get:

Lightroom (+1.5EV)
Luminar (+1.5EV)
Photo RAW (+1.5 EV)

Now we can start to see some differences. First, while the sky looks bright now in all three, Lightroom retains significantly more detail in the brightest parts of the images:

Lightroom (+1.5EV, clipping enabled)
Luminar (+1.5EV, clipping enabled)
Photo RAW (+1.5EV, clipping enabled)

In fact, you have to increase the exposure slider in Lightroom to almost 3EV before you get a similar level of clipping in the sky. There are a couple of reasons this might be the case. Luminar and Photo RAW’s exposure sliders might simply be more sensitive than Lightroom’s, such that a setting of 1.5EV in Luminar or Photo RAW translates to something closer to 3EV in Lightroom. But in comparing luminance values for the darker portions of the images at similar exposure settings, I don’t think that’s the case. I do think there’s some difference, but it’s relatively small – probably more on the order of 1/3 – 1/2 EV, not 1.5EV. Instead, I think Lightroom’s raw engine is just more sophisticated in how it processes files, and how it saves detail information in the highlights compared to either Luminar or Photo RAW.
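
If you’d like to check the slider-calibration theory yourself, here’s roughly how I’d estimate the effective EV offset between two exports – a sketch assuming you compare the same dark, unclipped patch in both images (file names and patch coordinates are illustrative):

import numpy as np
from PIL import Image

def mean_linear_luminance(path, box):
    # In linear light, a change of x EV multiplies values by 2**x,
    # so the log2 ratio of mean luminance approximates the EV offset.
    img = np.asarray(Image.open(path).crop(box), dtype=np.float64) / 255.0
    linear = img ** 2.2                # approximate sRGB -> linear
    r, g, b = linear[..., 0], linear[..., 1], linear[..., 2]
    return (0.2126 * r + 0.7152 * g + 0.0722 * b).mean()

patch = (100, 800, 400, 1000)          # a dark, unclipped region (x0, y0, x1, y1)
ev = np.log2(mean_linear_luminance("lightroom_plus15.jpg", patch) /
             mean_linear_luminance("luminar_plus15.jpg", patch))
print(f"Effective EV offset: {ev:+.2f}")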

In examining the 1:1 crops (screenshots) we can also see that Lightroom gives superior results in shadow detail:

Lightroom, +1.5 EV, 1:1
Luminar, +1.5 EV, 1:1
Photo RAW, +1.5 EV, 1:1

It’s easier still to see this when taking a 3:1 (300%) zoom on the cathedral:

Lightroom, +1.5 EV, 3:1
Luminar, +1.5 EV, 3:1
Photo RAW, +1.5EV, 3:1

ON1 clearly does the worst here, but it’s also easy to see that Luminar is not as good as Lightroom. 

When processing the images with more “advanced” sliders – even something as simple as contrast – getting an apples-to-apples comparison becomes more difficult. Moreover, processing images like this one generally requires lots of local adjustments, and each of these programs has a very different approach and philosophy to those adjustments. Additionally, I simply wasn’t willing to spend the time in any of these packages to get a gallery-level result for the purposes of this post. I did my best to get them somewhat similar, but didn’t obsess over every little detail. So here’s where I ended up:

Lightroom
Luminar
Photo RAW

All in all, the results aren’t bad for any of the packages, but in my personal opinion Lightroom has by far the easiest workflow for local adjustments, which allowed me to get my LR result in about 2 minutes, while the others took considerably longer. I’m probably least happy with the Photo RAW output, particularly in the clouds, though I suspect with some time I might be able to get a better output.

Looking at each of these files at 1:1, we see a pretty big difference in the color rendering and detail in the shadow sections between Lightroom and the other two. Given what we saw in the 1:1 crops above, this probably shouldn’t surprise us, but the effect seems even more pronounced after local adjustments are applied. 

Lightroom
Luminar
Photo RAW

One important note about Photo RAW that I’ll try to research more in the coming weeks: I’ve always had a suspicion that they have random “gates” in their workflow that reduce the image (or parts of the image) to 8-bits, even though Photo RAW claims a 16-bit workflow. Obviously from the images above, there’s plenty of detail in the shadows and the highlights in the file, and when adjusting only the exposure slider, Photo RAW seems to have no problem getting the extremes of both ranges – it’s able to avoid clipping highlights and there’s no problem rendering the shadow detail. But here’s what happens if you reduce the overall exposure in the general settings, then try to bring it up in localized areas using the Local Adjustments tab:

Washed out colors in Photo RAW when using local adjustments

Yikes. That’s ugly. Most of the color and detail in the shadow region is lost. And to be clear, the only difference between this 1:1 section and the one above is that I reduced the overall exposure in the general adjustments and increased the exposure in the local adjustments. This, again, makes me strongly suspect there’s some 8-bit process in the workflow, probably related to local adjustments or the handoff from global to local. Like I said, I’ll try to see if I can think of ways to poke around on this, but for now I would beware: something looks rotten in the state of Prague.
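
One way I might poke at this (a sketch I haven’t run, assuming the tifffile package and a 16-bit TIFF export): count the distinct values per channel. A true 16-bit pipeline should produce far more than 256 distinct levels; an 8-bit gate caps it at 256 (often spaced ~257 apart after rescaling to 16 bits).

import numpy as np
import tifffile

# File name is illustrative: a 16-bit TIFF exported after local adjustments.
img = tifffile.imread("photoraw_local_adjust.tif")
assert img.dtype == np.uint16, "export as 16-bit TIFF for this test"

for ch, name in enumerate(["Red", "Green", "Blue"]):
    levels = np.unique(img[..., ch]).size
    verdict = "suspiciously 8-bit-like" if levels <= 256 else "ok"
    print(f"{name}: {levels:,} distinct levels ({verdict})")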

Everyone has different needs, but when I’m choosing a photo package, my sine qua non is overall image quality. I’m willing to pay more for a program that delivers consistently better results – and in my opinion, Lightroom fits that description at the moment. I have serious, quantifiable concerns with the raw processing pipeline for both Luminar (clipping issues) and Photo RAW (possible 8-bit steps in the workflow). I’m sure that Luminar and Photo RAW have some applications where they excel – Photo RAW seems like it’s probably geared more toward portrait photographers than landscape photographers, for example – but as a landscape photography tool, neither of these packages is on par with Lightroom today.

Link to images used in this post.

Learning to shoot from the air

My Christmas present (yeah, yeah, I know it’s a long way off…) is a DJI Mavic 2 Pro. It’s been fun to play with so far, but the learning curve is steep. Here’s a quick shot from a sunset that fizzled last Friday night…

Middlebrook Sunset, ~300ft, 6-image stitch

Hopefully more to come as I learn more and have additional opportunities.

Dynamic range and bits of resolution: these are not the same thing

There’s a lot of information online talking about 12-bit files, 14-bit files, and how one is or isn’t better than the other. Most of these posts feature some sort of subjective discussion showing that under specific test conditions they created, 14-bit files either do or don’t offer some advantage, which, not surprisingly, often seems to confirm whatever position the author had before they conducted their experiment. People start using terms like “smooth gradients” or “dynamic range” seemingly without understanding how these factors (sensor, exposure, bit-depth) are related to each other.

Here’s the thing: bit-depth *does* matter, but it doesn’t always matter in the way people think it does. And it doesn’t always matter in the same way. Sometimes you need those bits. Other times they’re just storing noise. Sometimes they’re not storing anything at all. Anybody telling you that you should definitely, absolutely shoot in 14 bits – or, alternatively, that shooting in 14 bits gives you no real advantage – probably doesn’t understand the full picture. Let’s cook up a quick experiment to illustrate.

First, a refresher: extra bits help us store information. Most people are probably aware that digital images are, at their core, numbers. Vastly oversimplified, image sensors count the number of photons (packets of light) that arrive at each pixel. If no photons arrive, the pixel is black. As more photons arrive, the pixel gets brighter and brighter. If photons keep arriving after the pixel is already fully “white,” those additional photons won’t be counted – the pixel will “clip,” and we’ll lose information in that portion of the image.

When we talk about the number of bits in a raw file, what we’re really talking about is the precision we have between those darkest black values and white values. If we had a 1-bit sensor, we’d have only two options: black and white. If we have a 12-bit sensor, we have 4,096 possible values for each individual pixel. 14-bit sensors have 16,384, and 16-bit sensors 65,536. Each bit you add doubles the amount of precision you have, or the amount of information you can store. But it’s critical to remember here that the way we store information is decoupled from what the sensor can actually detect. The deepest black and the brightest white the sensor can detect are whatever they happen to be. The bit-depth is how many discrete steps we’re able to chop that range into.

So here’s the setup: a couple of days ago I got a new toy – a DJI Mavic 2 Pro with a Hasselblad-branded (probably Sony-produced) 1″ sensor. There’s not a lot of information about this sensor online, and various people have been asking about what bit-depth the RAW files are out of the camera. Fortunately, this is easy enough to figure out. Let’s just download Rawshack – our trusty (free) raw analysis tool – and run a file through it:

               File: d:\phototemp\MavicTest\2018\2018-09-20\DJI_0167.DNG
             Camera: Hasselblad L1D-20c
    Exposure/Params: ISO 100 f/8 1/800s 10mm
   Image dimensions: 5472x3648 (pixel count = 19,961,856)
Analyzed image rect: 0000x0000 to 5472x3648 (pixel count = 19,961,856)
Clipping levels src: Channel Maximums
    Clipping levels: Red=61,440; Green=61,440; Blue=61,440; Green_2=61,440
        Black point: 0

Interesting. The DJI is giving us 16-bit raw files. That’s two whole bits more than my D850 or A7Rmk2! Four times the information and WAY MORE DYNAMIC RANGE! SCORE!

Not so fast.

Just because our data is stored in a 16-bit-per-pixel file doesn’t mean we’ve actually got 16-bits of information. Data and information are different things after all. So let’s dig in a little more deeply. Let’s take three different cameras with three different sensor sizes that produce raw files with three different bit depths and compare their outputs on a common scene:

  • The DJI Mavic 2 Pro – a 1″ (13.2×8.8mm) sensor producing 16-bit raw files
  • The Panasonic GH4 – a Micro Four-Thirds (17.3×13mm) sensor producing 12-bit raw files
  • The Nikon D850 – a full frame (36x24mm) sensor producing 14-bit raw files.

We don’t really have to run this test to know which one is going to win: it’s going to be the D850, and my guess is that it’s not even going to be close. For the other two, though, it’s harder to know which will perform better. The Panasonic has a larger sensor and a lower pixel count, which should give it some advantage. But it’s also a 5-year-old design, and we know it’s a 12-bit sensor. A decent 12-bit sensor, it must be said, but we’re not expecting D850 or Sony A7Rmk3 levels of performance. The DJI may have a smaller sensor, but it’s likely to be a more modern design, and we’re at least getting 16-bit values from the camera, even if the underlying information may be something less than that. Let’s dive deeper.

To do this analysis in full would require a little more time than I have on my hands, so I’m going to do it quickly, then follow up later if necessary. I took all three of these cameras to the sunroom in my house and shot a few quick images of approximately the same scene at each camera’s base ISO (100 for the DJI, 200 for the Panasonic, and 64 for the D850). It turns out that it’s hard to precisely position the Mavic exactly where I wanted it to be indoors, so the composition is a bit off. But it should be close enough. I bracketed shots on all cameras and tried to pick the ones that had the most similar exposure. I then selected the shot for each camera with the brightest exposure that didn’t really clip the highlights (though the Mavic did have slightly clipped highlights – more on which below).

From there, I’m running all the files through Rawshack with the --blacksubtraction option. What that does is basically take the lowest value for each channel, set that to zero, and reference the other values to it. You want to do that because there’s always going to be a little bit of bias in the sensor (it will never truly output zero), and this helps you see the “true” distribution of values.
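
If you’d rather roll your own version of this, the binning is simple to sketch. Here’s a rough numpy equivalent (assuming the rawpy package; it won’t match Rawshack’s output byte for byte, but the idea is the same):

import rawpy
import numpy as np

with rawpy.imread("DJI_0167.DNG") as raw:
    data = raw.raw_image_visible.astype(np.int64)
    colors = raw.raw_colors_visible
    for ch, name in enumerate(["Red", "Green", "Blue", "Green_2"]):
        v = data[colors == ch]
        v = v - v.min()                  # black subtraction
        # stop 0 = values 0-1, stop 1 = 2-3, stop 2 = 4-7, and so on
        stops = np.floor(np.log2(np.maximum(v, 1))).astype(int)
        counts = np.bincount(np.clip(stops, 0, 15), minlength=16)
        print(name, [f"{int(c):,}" for c in counts])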

Here’s what we get for each camera:

Nikon D850:

Pixel counts for each raw stop:
    Stop #/Range         Red        Green         Blue      Green_2
 ---------------  ----------   ----------   ----------   ----------
 0 [00000-00001]     140,756       13,885      138,964       13,810
 1 [00002-00003]     126,935       73,206      558,197       73,104
 2 [00004-00007]     544,177      353,258    1,240,469      352,003
 3 [00008-00015]   1,741,924    1,274,645    1,967,536    1,266,982
 4 [00016-00031]   2,752,687    2,104,326    1,793,851    2,102,555
 5 [00032-00063]   3,071,078    2,407,673    2,117,626    2,412,529
 6 [00064-00127]   1,014,943    2,224,467    1,270,294    2,227,906
 7 [00128-00255]     481,901    1,021,033      785,625    1,021,787
 8 [00256-00511]     607,242      560,244      580,620      560,625
 9 [00512-01023]     632,401      536,646      699,715      536,833
10 [01024-02047]     272,303      613,343      251,394      613,667
11 [02048-04095]      49,551      236,836       30,016      237,642
12 [04096-08191]       1,520       16,755        2,960       16,902
13 [08192-16383]          22        1,123          173        1,095
14 [16384-32767]           0            0            0            0
15 [32768-65535]           0            0            0            0

Panasonic GH4:

Pixel counts for each raw stop:
    Stop #/Range         Red        Green         Blue      Green_2
 ---------------  ----------   ----------   ----------   ----------
 0 [00000-00001]     287,124       59,435      664,336       61,725
 1 [00002-00003]     381,373      139,899      560,799      143,261
 2 [00004-00007]     747,955      451,142      565,381      452,576
 3 [00008-00015]   1,111,057      821,821      645,533      820,871
 4 [00016-00031]     721,705      812,997      606,981      809,753
 5 [00032-00063]     214,788      823,544      406,585      821,248
 6 [00064-00127]     160,735      333,134      189,947      332,785
 7 [00128-00255]     206,345      169,835      207,674      170,031
 8 [00256-00511]     131,895      181,100      128,517      181,018
 9 [00512-01023]      42,474      154,883       33,249      154,631
10 [01024-02047]       7,845       57,408        3,857       57,236
11 [02048-04095]         336        8,434          773        8,497
12 [04096-08191]           0            0            0            0
13 [08192-16383]           0            0            0            0
14 [16384-32767]           0            0            0            0
15 [32768-65535]           0            0            0            0

So the first two cameras are basically what we would expect, though it is impressive to see how much more detail the Nikon captures in the mid tones. But the Mavic gives us a bit of a different picture:

DJI Mavic Pro 2:

    Stop #/Range         Red        Green         Blue      Green_2
 ---------------  ----------   ----------   ----------   ----------
 0 [00000-00001]      90,499       13,282       20,690       13,193
 1 [00002-00003]         206           60       95,658           59
 2 [00004-00007]           0            0            0            0
 3 [00008-00015]           0            0            0            0
 4 [00016-00031]     148,885       36,053      226,561       35,664
 5 [00032-00063]     480,523      161,326      661,729      159,561
 6 [00064-00127]   1,015,016      519,604    1,012,456      518,136
 7 [00128-00255]   1,367,088      973,210      716,503      964,446
 8 [00256-00511]     940,418    1,097,044      881,097    1,105,995
 9 [00512-01023]     243,864      874,212      521,857      872,455
10 [01024-02047]     202,575      518,261      205,307      522,762
11 [02048-04095]     166,856      187,300      253,954      187,689
12 [04096-08191]     145,652      216,935      186,091      216,392
13 [08192-16383]     105,548      138,800      119,127      139,628
14 [16384-32767]      60,372      136,129       65,635      135,767
15 [32768-65535]      22,962      118,248       23,799      118,717

Ok, so what’s happening here?

The first thing to notice is that on the DJI, the black level isn’t *really* the black level. In other words, the smallest pixel values in the file sit 4 to 5 bits below where the “real” data begins. To be clear, I’ve looked at several DJI raw files of high dynamic range scenes now and this pattern holds true. In all cases, there are some pixels at stops 0 and 1, none at 2 and 3, and then the rest of the file resumes at stops 4-15. This suggests that even though we may have a 16-bit file (that has real, 16-bit values), we don’t have a 16-bit signal path. In particular, stops 2 and 3 are telling here: we could remove them entirely and have zero loss of information.
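
A related sanity check you can sketch in a few lines (rawpy assumed again): if a camera pads, say, 12-bit data into a 16-bit container by shifting it left, the bottom bits will never be set, and OR-ing every value together reveals that instantly.

import rawpy
import numpy as np

with rawpy.imread("DJI_0167.DNG") as raw:
    v = raw.raw_image_visible.astype(np.int64)
    v = v - v.min()                        # crude global black subtraction
    used = np.bitwise_or.reduce(v.ravel()) # a bit is 1 if it is ever set
    print(f"bit usage mask: {int(used):016b}")  # trailing zeros = padded-out bits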

So how does this translate into actual images? Again, keeping in mind that these are shots that were taken quickly, not synced up in either exposure or exact positioning, and were all processed quickly:

D850 (ISO 64, f4, 1/125 – pushed +3.3EV in post):

Panasonic GH4 (ISO 200, f2.8, 1/400, pushed +4.2EV in post):

DJI Mavic 2 (ISO 100, f2.8, 1/120, pushed +2.4EV in post):

We can see a couple of things here. First, there’s really no comparison in terms of shadow detail – the D850 absolutely cleans up. Sure, it’s a bit OOF, but when you start looking at the gradients on the wall on either side of the frame, there’s really no comparison – especially at 1:1. Part of this comes down to better DR, and part of it comes down to a 45MP sensor, but the result is what’s important – as we expected, the D850 wins and it’s not close.

A little more surprising to me is the fact that the Panasonic is pretty clearly better than the DJI, in spite of being a much older design. I’m not sure this will be universally true, and I’ll have to do some more experimentation over the coming weeks, but the DJI definitely has significantly more noise in the shadows, even though it’s been pushed significantly less (1.8 stops less) in post, *and* it has a base ISO that’s a stop better. In other words, the deck is actually stacked against the Panasonic here – the original DJI exposure was 1EV greater than the Panasonic’s, and yet the DJI’s performance in the shadows is visually worse. The GH4 doesn’t fly (unless it’s mounted on a *much* more expensive drone), but it does appear to have the better sensor, at least in my initial tests.

So what have we learned?

For starters, just because you’ve got a 16-bit raw file doesn’t mean you’ve got 16 bits of information. The DJI absolutely has an “advantage” if you’re just looking at the bit-depth of the raw file, but that doesn’t mean the underlying hardware or signal path is actually giving you 16 bits. DJI does have more precision on paper, but that precision isn’t really being used. That becomes obvious when we subtract out the black level and see that the bottom four bits are essentially being thrown away. We’re storing 16 bits, but only 12 of them are being used in a meaningful way.

We can also see that sensor size has a pretty major impact on dynamic range and shadow detail – the GH4, with its 6-year-old sensor, is able to beat the DJI pretty easily. The D850, with its modern, full frame, BSI sensor, gives an even more impressive performance.

The bottom line is this: the number of bits you’re getting out of the camera doesn’t necessarily tell you anything about how well it’s going to perform in difficult lighting situations. Having more bits means you can theoretically store additional information, but obviously that’s only possible if the additional information is there to store. Whether there’s actually additional information is a lot more complicated than anything you’re going to learn from just looking at a spec sheet.

Sony 24-240: yikes. Avoid at all costs.

After Sony announced the A7III and A7RIII, prices on the A7RII dropped pretty dramatically – dramatically enough that I decided to pick one up as a full frame mirrorless option.

There were several reasons for this impulse buy. I had several international trips on my calendar (Thailand, Cambodia, New Zealand, Portugal), and I didn’t want to carry my full D850 kit with me, particularly as some of my destinations involved a bit of hiking. But I’d been spoiled by the ridiculous image quality of the D850, and wanted something a little more than my lightweight M43 setup could give me. The A7RII seemed like the obvious answer: a relatively small and light body with image quality that rivaled the D850. But as I’d pointed out to many people, the problem is always the lenses. Small and light full frame lenses aren’t really a thing.

I dropped by my local electronics store, which happened to have an A7 mounted with Sony’s 24-240 super zoom. Having used super zooms on other platforms in the past, I can assure you that my expectations for this lens were well under control. I was not expecting Zeiss levels of IQ. But my oh my were those expectations not met.

To be clear: I almost certainly got a bad copy of this lens. I’ve been pretty lucky with most of my lenses in the past, but this one was a disaster. It wasn’t bad at all focal lengths, and I was able to end up with some decent shots, but note, for example, this shot from Porto:

full shot
1:1 at center of the frame
1:1 at the upper right corner

Or perhaps this shot from Lisbon:

Full Shot
1:1 at the edge of the frame, showing clear transition in quality

The bigger issue for me, I think, is that it calls into question Sony’s entire QC procedure. Sure, this is one of the cheaper lenses in the FE lineup, but at a retail price of $1000, it’s not exactly inexpensive. Coupled with Roger Cicala’s statement that Sony couldn’t find a good copy of a lens to send to reviewers if their future depended on it… I’m not sure I’ll be buying their lenses moving forward.

My current plan, at least until Nikon’s new Z series is widely available and the price comes down a bit, involves getting older manual focus lenses (primarily Minolta Rokkors) to use on the Sony. It’s been a fun, interesting experiment, and I may post some of my thoughts / samples at some point in the future. 

The practical takeaway? Don’t buy this lens. Definitely don’t buy it used – there’s a good chance it’s on the second hand market for a reason.

Skylum, ON1, and half-baked products

For me, it started with ON1 Photo RAW. For those of you not familiar with ON1, it’s a company that sold a Photoshop competitor oriented toward photographers. It had a niche following, was reasonably priced and somewhat well regarded, and provided a significant amount of Photoshop’s functionality in an accessible workflow for a fraction of the cost. In the wake of Adobe’s move to subscription-only pricing for Lightroom, and with photographers increasingly shooting RAW, ON1 saw a market opportunity. For months, they talked up their new product – ON1 Photo RAW – which would combine the features of their previous product – Perfect Photo Suite – with a powerful, modern, built-from-scratch RAW engine that promised lightning-fast speed and editing without Lightroom’s bloat. They enlisted Matt Kloskowski – a YouTube personality who made his name selling Lightroom presets – to talk about how ON1 was so amazing that he would never use Lightroom again (spoiler alert: he’s still using Lightroom). So what could go wrong? Everything, as it turns out.

ON1 accepted orders for Photo RAW for several months, and after customers began to suspect it might be vaporware, the company set a release date of November 2016. When it became clear they weren’t going to meet that deadline for the final product, they issued a “prerelease” version on November 23, promising a “full” release in late December. Calling the November version of Photo RAW “prerelease” was, in my opinion, extremely generous. It had the feel of an early alpha build – an unfinished work product that no developer could have felt good about shipping. Not only did the prerelease version not deliver on its promise of a lightning-fast workflow, it crashed frequently and lacked core functionality (e.g. the crop tool was listed as “coming soon”). An early December build improved a lot of things, and ON1 promised to fix “all” the bugs in the two weeks before the final release (there were over 100 active bugs / issues when that build came out). When release day rolled around, it was still clear that Photo RAW wasn’t really there, and ON1 laid out a series of updates they would roll out over the following months to fix everything and add all of the features they’d promised. By the time Photo RAW 2017.6 was released in August, ON1 had mostly delivered what they’d promised, but by then a lot of photographers had moved on. And by that time, the hype train for Photo RAW 2018 was well underway, and it was clear the same thing would happen again – an early order cycle, an aggressive list of new features, and a series of “updates” to add core functionality well into summer.

ON1 isn’t the only company guilty of this, and actually I’m not sure they’re even the worst offender. Skylum – at the time Macphun – employed a similar strategy with their followup to Aurora HDR 2017. There were dozens of promises made: a Windows version, a new HDR engine, the ability to save 32-bit raw files, and more. Skylum acknowledged explicitly that the Windows version wouldn’t have feature parity with the Mac product at launch, but promised that in “early 2018” all the marketed features would be available in both versions and there would be complete feature parity. I’m writing this article in September of 2018, and it’s safe to say those promises have not been fulfilled. The latest update – version 1.2 as of this writing – is still not feature complete, and the versions do not have feature parity. Meanwhile, every month or two, Skylum sends out a new email about how they’re spinning up a new project – an AI partnership, a DAM for Luminar, Loupedeck integration! – essentially fundraising for and diverting resources to their next product without completing their last one. And today, I received my first announcement for Aurora HDR 2019, which may (hopefully) – or may not – actually have some of the features I was supposed to get in Aurora HDR 2018. 

Here is my problem with this business model: it’s basically Kickstarter for photo processing, and not in a good way. You know those companies that pop up every now and then, always founded by “[insert prestigious school here] engineers,” offering a breakthrough! product that is going to disrupt the whole industry? They’re able to generate a ton of support and startup funding, and inevitably find out that it’s a lot harder to produce a product than a prototype. Their project runs into issues, delays, and funding problems, and they end up six months to a year behind schedule, if they deliver at all. This, it seems to me, is the basic business model Skylum and ON1 are using, though they’ve managed to hang around a couple of cycles longer than I would have expected. They focus on gimmicks and flashy-sounding features while neglecting the core functionality of the product. Having an “AI powered filter” is more important than, say, building a decent demosaicing engine. 

Companies like Skylum and ON1 may say their products are “buy once, enjoy forever,” but the reality is that their business model relies on selling new versions of their software. I’m obviously not privy to the sales numbers at either of these companies, but my guess is that the majority of sales are people buying in on the pre-order (kickstart), and that the long tail is pretty low. Which means, really, that they have every incentive to talk about how awesome next year’s version is going to be, and not a lot of incentive to deliver on any missing features from this year’s version; after all, they’ve already got your money.

Here’s the deal: with both of these companies, and any others promising features “coming soon in a future update,” you should always evaluate the software as if that update will never come. Because there’s at least a chance that it won’t. You’re buying the product as-is, and the company is under no obligation to make good on their promises. Say what you will about Adobe – and there’s lots to say – but they aren’t in the habit of making wild promises about future versions of their software, or failing to deliver on those promises. 

To be clear, not all small software outfits producing photo software are like this. For example, I’d give a big shout-out to Serif, the developers of Affinity Photo. No software package or developer is perfect, but I’ve been consistently impressed with the quality and project management I’ve seen out of Serif. 

My approach to ON1 and Skylum: caveat emptor. I’ll never pre-order a package from either company again, and I wouldn’t recommend anyone else pre-order either. There’s no guarantee the “feature list” will be present at launch, or ever. Only hand over your credit card if there’s a real, shipping version of the software that does everything you want it to do. Anything else is funding a prototype that may never materialize, no matter how badly you want it to.