You don’t have to research HDR much to find the common questions about how best to process a single image into an HDR. Some people suggest taking the photo and running it straight through Photomatix, letting it do all the legwork. Others suggest taking the image into something like Lightroom and exporting several versions of it with the exposure set at different EV increments, to mimic an exposure-bracketed set of shots.
For a long, long time I was in the first camp. The thinking goes that by adjusting the exposure you’re not adding anything to the image: since all of the data is contained in the original file, exporting differently exposed versions of the same photo is just presenting that data in a different way. However, I have recently discovered that this assumption is wrong. Amazingly, many of my assumptions about what was happening to the data in my photographs turned out to be wrong, and overly generous. I put too much faith in the software, assuming it would handle my data in the most faithful way because that was the ideal way to handle it. Unfortunately, the truth is that Photoshop and Photomatix do terrible things to your photographic data, and I will show you that here.
A few months ago I was on a boat with my wife, and as is normal in San Francisco summers the fog was rolling in past the Golden Gate Bridge. There was a lot of movement on the boat so I couldn’t take a bracketed set. I didn’t care though. I assumed Photomatix would let me balance these extreme lighting conditions with no sweat. The photos I took on that boat were the beginning of my discovery that I was giving Photomatix too much credit.
Here is an original photo taken on the boat with a Canon T3i in 14-bit RAW:
It’s not immediately obvious that the Golden Gate Bridge is in the background, but you can barely see it on the left. This is exactly the type of situation where I was hoping Photomatix would solve my exposure balance problems. When I looked at the RAW file in Lightroom and dropped the exposure down, I could see the bridge. So I imported the single image into Photomatix Pro 4.1 and did a default tonemap, hoping to bring out the details that were lost in the highlights. Here’s what Photomatix produced:
That’s interesting, but it’s not exactly helpful. You still can’t see the bridge and the other features that exist in the RAW data; in their place is a smooth, unrealistic grey sludge. Clearly Photomatix has thrown away some important data that would have been very useful. At that moment I realized why the technique of exporting a single RAW image at different exposure settings really does let you create better HDR images: it was obvious to me that Photomatix was throwing away useful data when working from a single RAW file.
After doing an hour or so of research on bit depth, gamut, and image formats, trying to get a grasp on their differences, I thought, “my camera is only 14 bits per pixel, but PSD can be 16 bits per pixel, so surely it must be able to store all the extra information that Photomatix is throwing away!” I converted the DNG to a PSD, imported that into Photomatix, ran it through the default tone map settings, and came out with this:
Despite the fact that many of the important details are still missing, this image is better than what I got running the DNG file straight into Photomatix. This means that Photomatix is bad at handling what is purported to be THE raw format for digital photographs from all manufacturers, the Digital Negative. DNG supports data from more RAW image formats and more types of metadata than any other format. There is a free converter with both a graphical UI and a scriptable command-line interface, a freely available SDK, and it can even store the original RAW file inside the DNG in case you simply don’t trust what it’s doing with your RAW data. It’s remarkable that even with all of that available, Photomatix still can’t translate the 14-bit raw data into a 32-bit radiance file without losing significant amounts of data.
I also thought it was peculiar that Photoshop itself was unable to show the extra data: when I converted the DNG into a PSD, I got similar results to what I had with the DNG file. This led me to do another hour or so of research, and I eventually found out what was going on.
RAW data stores information about all of the light that came into the camera and hit the sensor. This includes parts of the image that are outside the perceptive range of your eyes at any, shall we say, iris setting. That is, as the iris of your eye opens and closes, it allows you to see brighter or dimmer things, but you cannot see the whole range of luminance at once. At any given moment your eyes are limited to a medium-to-low dynamic range, hence the need for HDR and tone mapping to display all of the information. RAW images store data beyond the limits of the whitest white and the blackest black that you can see in a properly exposed photograph. Photomatix supports several other formats that are supposed to preserve that extra range, and they may well do so, but that’s beyond the scope of this article. PSD does not store this extra data. TIFF doesn’t either. JPG certainly does not. These image formats store color values between white and black: white is the brightest point that your eyes can see and that can be displayed, and no information past that point is stored; black is the darkest area you can see, and no information beyond that point is stored. These formats are rasterized.
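To make the difference concrete, here is a tiny sketch in Python. The numbers are made up for illustration: a linear “scene” where 1.0 is display white, and the foggy sky runs three stops above it. The RAW copy keeps the over-white values; the rasterized copy clips them to 1.0, and no exposure adjustment afterward can bring the detail back.

```python
# A toy "scene" in linear radiance units: a dim deck (0.05) and a
# bright foggy sky running up to 8.0, i.e. three stops above white.
scene = [0.05, 0.2, 1.0, 4.0, 8.0]

# RAW keeps the sensor's full recorded range, so over-white values survive.
raw_data = list(scene)

# A rasterized file (PSD/TIFF/JPG) clips everything to [0, 1]:
# anything brighter than white is stored as exactly 1.0.
clipped = [min(max(v, 0.0), 1.0) for v in scene]

# "Drop the exposure" three stops (multiply by 2**-3) on both copies.
raw_reexposed = [v * 2**-3 for v in raw_data]
clipped_reexposed = [v * 2**-3 for v in clipped]

print(raw_reexposed[-2:])      # [0.5, 1.0] -> highlight detail returns
print(clipped_reexposed[-2:])  # [0.125, 0.125] -> flat grey, detail gone
```

That flat, identical grey in the clipped version is exactly the “grey sludge” where the bridge should be.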
On a side note, this explains why you wouldn’t gain anything even if you could convert a rasterized image like a PSD or JPG into a DNG file. Since wrapping a rasterized file in a RAW container adds no extra data, you might as well just use a TIFF or PSD, and in fact the DNG Converter will not let you create a DNG from anything but a RAW file format.
What’s interesting to me is that professional photo-editing resources gloss over this rather important detail all the time and put PSD and DNG on the same level. Matt Kloskowski from Adobe constantly gets hassled at Lightroom Killer Tips for going back and forth between Lightroom, an app with a purely RAW-based workflow, and Photoshop, which works with rasterized data and truncates lots of important information from both extremes of the radiance spectrum. Professionals like Matt are, in essence, teaching us techniques that are severely sub-optimal while disregarding this data-loss issue entirely.
Moreover, professional applications like Adobe Photoshop and Photomatix Pro are silently truncating important data at both ends of the radiance spectrum when they are supposed to be doing exactly the opposite. Photomatix has one primary purpose: to compress images with a wide radiance range down into images with a narrow one, so our eyes can see the features beyond our visual perception. Nobody who uses this app expects it to simply throw away the parts of the image that can’t be seen. If we wanted that, we’d shoot in JPG or TIFF and use Shadow/Highlight in Photoshop.
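Photomatix’s operators are proprietary, so as an illustration of what that compression looks like, here is the classic Reinhard global operator, L/(1+L), which is not Photomatix’s algorithm but does the same basic job: it squeezes radiance from deep shadow to far above display white smoothly into the displayable range [0, 1).

```python
# The Reinhard global tone-mapping operator: maps any non-negative
# scene luminance L into [0, 1) without ever clipping to pure white.
def reinhard(luminance):
    return luminance / (1.0 + luminance)

# Scene luminances spanning deep shadow to many stops above white:
for L in [0.05, 1.0, 8.0, 100.0]:
    print(L, "->", round(reinhard(L), 3))
# 0.05 -> 0.048, 1.0 -> 0.5, 8.0 -> 0.889, 100.0 -> 0.99
```

Notice that 8.0 and 100.0 map to different displayable values instead of both becoming white, which is precisely the behavior you lose when the input data was clipped before tone mapping ever ran.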
So, after all of this thinking and researching, I did a final experiment. I used Lightroom to create three versions of the same image and merged them in Photomatix Pro as if I had taken an exposure-bracketed set. The three images are -4EV, 0EV, and +4EV. Only the exposure was changed, and the watermark wasn’t included in the test.
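Conceptually, here is roughly what those three exports do (Lightroom’s actual pipeline is far more sophisticated; this is a bare sketch with invented pixel values): scale the linear sensor data by 2^EV, clip to the displayable range, and gamma-encode. Each export preserves a different slice of the scene’s full range, which is why merging them recovers what a single export loses.

```python
# Rough model of exporting one RAW file at several EV offsets.
GAMMA = 1.0 / 2.2  # simple gamma encoding, standing in for the real curve

def export_at_ev(linear_pixels, ev):
    exposed = [v * 2**ev for v in linear_pixels]            # shift exposure
    clipped = [min(max(v, 0.0), 1.0) for v in exposed]      # clip to display range
    return [v ** GAMMA for v in clipped]                    # gamma-encode

# One linear "RAW" scan line: shadows at 0.002, sky/bridge up to 12.0.
raw_line = [0.002, 0.05, 0.5, 3.0, 12.0]

under  = export_at_ev(raw_line, -4)  # keeps the bridge: 12.0 -> 0.75 linear
normal = export_at_ev(raw_line, 0)   # bridge clipped to pure white
over   = export_at_ev(raw_line, +4)  # keeps the shadow detail
```

In the 0EV export the two brightest pixels are both exactly white and indistinguishable, while the -4EV export still tells them apart; together the three exports hand Photomatix the full range the sensor captured.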
I merged these in Photomatix and used the same default tone-mapping settings as the previous photos.
In this final set you can see the extent of the information that was being thrown out by Photoshop and Photomatix. Specifically, the entire Golden Gate Bridge was captured by the camera, but was invisible to both applications using their standard import methods.
And with that, I am now on the side of the people who recommend exporting exposure-bracketed versions of a single exposure when using Photomatix to tonemap a single image. Science, logic, and truth prevail, revealing the Golden Gate Bridge to everybody.