Shooting and Assembling a High Dynamic Range Image (HDRI)

In this tutorial, my goal is to give you a quick rundown of why HDR imaging is good and how it can be handy. Then I will present all the information you need to go out and shoot your own image sequence, and then take those images and assemble them into a high dynamic range image.

What Is HDRI, and What's So Great About It?

Let's begin by thinking about a photograph. If you take a photograph outside on a sunny day, some areas will be white and some areas will be black, while others, the parts the camera metered for, will be exposed properly. Here is a little example, or rather a particularly exaggerated case, where a bright sky backlights the unlit sides of the trees.
[Image: trees backlit by a bright sky; the sky is blown out to white and the shadowed areas are nearly black]
As you can see in the above image, the sky is all white and there are areas that have gone, or almost gone, to total black (like the lower right corner). The camera has taken the light coming in through the lens and recorded it in a way that only has a range from black to white. If we took black to be zero and white to be one, the camera is giving us an image that fits between 0 and 1. The catch here is that, in reality, the world does not fit between 0 and 1. Also, on a more technical side, an 8-bit image file can only represent 16.7 million-ish colors, which, compared with reality, is really very limited. This means that instead of seeing a very dark brown in the dark areas, you might just get black.

So, what if you took multiple pictures, with different exposures? Let's take an image with a longer exposure for the dark areas, one with the exposure that works for the midtones, and one with a fast exposure that works for the bright sky.
[Image: three exposures of the same scene: long for the shadows, middle for the midtones, fast for the bright sky]
Now if only there were a way to get all that information into a single file....

Enter HDRI!

An HDR image is really just a fancy name for a 32-bit image. Generally the file format is .hdr, but you can get the same information into an .exr file, a floating-point .tiff, or a 32-bit Photoshop file. The trick here is that these files use what are called floating-point numbers to remember the colors.

Quick bit-depth info break:
An 8-bit image (a standard .jpg file, for example) remembers each color channel value for each pixel using an integer value from 0 to 255. This means that there are 256 possible levels each of red, green and blue.
A 16-bit image can go two ways. The more common method is the integer one, where each channel uses an integer between 0 and 65,535 to remember its color. This gives you something like 281 trillion possible colors. The other, and far less common, mode is to have a 16-bit half-float value for each channel. This means that each channel value is a floating-point number (3.729 is floating point, for example, because of the stuff after the decimal point), which lets it store values outside the 0-1 range and with a good deal more precision in the dark areas.
A 32-bit image is similar to the 16-bit half-float, in that it uses a floating-point number to keep track of the color channel information, but it's got a good deal more precision than the 16-bit half-float format.
How does this relate to you taking a bunch of photos and putting them all together? Well, the 32-bit image allows for a great deal more precision in the colors that are stored, so the darks in the image will actually contain all the subtle color changes. When you expose that image up in Photoshop, you can actually see all the detail there; you don't just get a few bands of color, like you would if you exposed up an 8-bit image. As you can see below, the 8-bit image, when exposed up a good ways, dissolves into digital noise, with many of the details being lost. It also begins to wash out, with many of the darks becoming lighter. On the other hand, the 32-bit image, when exposed up, contains more than enough data, maintains the detail in the leaves, and you can see where the darks are still being preserved.
[Image: the 8-bit image exposed up, showing noise and washed-out darks, next to the 32-bit image exposed up, with detail preserved]
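If you want to see this banding for yourself, here's a minimal Python sketch (my own illustration, not part of the original workflow) that stores a dark gradient once as float and once quantized to 8 bits, then "exposes up" both versions:

```python
import numpy as np

# A very dark gradient, stored as float ("32-bit style") and as 8-bit.
scene = np.linspace(0.0, 0.02, 11)
as_float = scene.astype(np.float32)
as_8bit = np.round(scene * 255).astype(np.uint8)  # quantized to 0..255

# "Expose up" by 5 stops (multiply by 2**5 = 32).
up_float = as_float * 32.0
up_8bit = (as_8bit.astype(np.float32) / 255.0) * 32.0

print(np.unique(np.round(up_float, 3)))  # all 11 distinct levels survive
print(np.unique(np.round(up_8bit, 3)))   # collapses to just a few bands
```

The float version keeps every subtle step of the gradient, while the 8-bit version has already thrown away most of the dark detail at save time, which is exactly the banding and washed-out look shown above.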
So that shows us how the HDR image can contain a great deal more information than your standard jpg or tif file. So, how does that apply to your 3D scene in 3ds Max, Maya, XSI, etc.?

Let's start with lighting. If you are using a fancy ray-tracing renderer that can do image-based lighting, then you should be able to make use of an HDRI in that lighting. These renderers include Mental Ray, RenderMan, Brazil, V-Ray, etc. Here is an example of a simple scene lit first with a low-dynamic-range 8-bit jpg, then the same scene lit with the high-dynamic-range .hdr file. Note how much more contrast there is in the HDRI-lit image, which much more closely approximates the actual lighting that was present when I shot the image sequence for the HDRI.
[Images: the scene lit with the low-dynamic-range 8-bit jpg vs. the same scene lit with the high-dynamic-range .hdr file]
Alternatively, you can use an HDRI just for reflections. In the example below you can see how the overbrights (colors brighter than white) in the HDRI really bring the reflective shaders to life. By comparison, the image that uses only an 8-bit jpg in the reflection environment seems dull and hardly reflective at all.
[Image: reflection comparison: HDRI in the reflection environment vs. the 8-bit jpg]
Finally, a comparison of an HDRI used for both lighting and reflections vs. an 8-bit jpg.
[Image: HDRI used for both lighting and reflections vs. the 8-bit jpg]
Okay, Now Make Your Own!

Before you get started, here is a short list of equipment and software you will need to follow along:
- Camera with adjustable exposure. This can be a point-and-shoot camera or a full digital SLR, just so long as you can manually set the exposure.
- Tripod. To put the camera on. If you don't have a tripod, you'll just need to find a way to hold the camera very still while you adjust the exposure settings... or just go buy a cheap tripod.
- Chrome sphere. This can be a glass or steel lawn-ornament, found at many garden stores. You can also use those little chrome medicine balls, a large machine bearing, if you have one, or anything else that is chromey and spherical.
- Something to hold the chrome sphere still. This can be something simple (I've used a clear plastic cup a few times) or something complicated like a C-Stand with clamps and such. You can just use whatever you have laying around that can keep the sphere in place without covering it up too much.
- Photoshop (CS2, CS3, or CS3 Extended). This will be used to assemble our HDRI. Having CS3 Extended gives you a larger 32-bit toolset, but CS3 or CS2 will do the assembly as well.
- HDRShop 1.0. This is the only program I know of, off the top of my head, that does the nice chrome-sphere-to-latitude/longitude panoramic transformation. If you've got something else that does it and you like it better, then you can use that.
What I am using:
- Nikon D200 with a 50mm f/1.4 lens, and a cable release
- Bogen/Manfrotto tripod
- Steel lawn-ornament sphere sitting on a clear plastic cup
[Images: the camera and tripod setup, and the steel sphere sitting on its plastic cup]
Some things to note:

During the course of shooting the image sequence, you will have to change the exposure for each photo. In most cases (we'll get into this in a sec), this means that you will have to touch the camera. Therefore, it will make your life easier later if you can attach your camera to something fairly heavy. Before I got my big Manfrotto tripod, I had an inexpensive Sunpak tripod that had a little hook on the bottom of the center post. When shooting an HDRI, or anything else with long exposures, I would find something heavy to hang from that hook (a camera bag, a small dumbbell with a loop of wire on it, etc.). This gave the tripod a bit more inertia and made it less susceptible to little bumps, wind, and so on.

On some cameras, such as my Nikon D200, there is an auto-bracketing feature. This allows you to set the camera to automatically take a given number of photos (mine maxes out at nine, so that is what I use), each one a certain exposure apart from the others. So on mine, I set it to take nine photos, each a stop apart, set the drive speed to fast, and then use the cable release so I don't have to touch the camera body itself. That way, I just set my initial correct exposure, turn on the bracketing, and hold down the button on the cable release, taking all nine exposures just as fast as the camera can go. This doesn't make much of a difference if you are just shooting for fun, but on the set of a film or commercial, where the line producer is yelling at everyone to get a move on and they all have to wait for you to run out there with your chrome sphere and camera, it can really be a lifesaver.

If none of the above makes any sense, then just hold tight, we'll get into the details in a bit and all of this should clear up.


Shooting the Photos: Find a good spot

The first thing for you to do is find a good spot to shoot your image sequence. This can be most anywhere, but note that in places surrounded by moving things, you will get some odd artifacts in your assembled HDRI. So if you set up your sphere and camera in the median of a freeway, or next to a moving train, you will get poor results. Also, remember that you will most likely be changing the camera settings by hand, resulting in a total shooting time of 2-3 minutes. So if it is very windy outside and the clouds overhead are moving quickly, they will end up looking a bit odd in your final image. The same applies to sidewalks, hallways, busy rooms, etc. The best places are those where you can set up your equipment undisturbed and work until you are satisfied with the images you have taken. In this tutorial, I am just using my back yard.

Shooting the Photos: Set up

Once you have picked out a good spot, get your camera mounted on the tripod and then do the following:

1) Clean off the chrome sphere and the camera lens. It can be hard to get a chrome sphere clean, especially the steel ones, which always seem to have micro-scratches that don't come out, but if you do the best you can, you'll end up with a better image in the end.

2) Level everything. It will help later if you can get the camera itself level on the tripod and also get the camera at the same height as the chrome sphere. Ideally, the center of the lens should be even with the center of the sphere, as seen from the side. If you don't have a level on your tripod, a small bubble level from the hardware store will work just as well; just be sure to level the camera on both axes.
If you don't level the camera and don't shoot at the same level as the sphere, you will get a sine-wave horizon in your final image, instead of a nice straight one. That sine-wave horizon in the unwrapped image will translate into a crooked horizon when you put that HDRI onto a sphere or use it as an environment map. This isn't the end of the world, but it does take an extra step to correct later.
[Images: leveling the camera and getting it even with the center of the sphere]
3) Frame your sphere up. It is always better to have some room around your sphere than to clip off an edge. If your camera has a zoom lens, you can use it to fit the sphere properly in the frame. If your camera has a digital zoom (most point-and-shoot cameras switch to this once they run out of optical zoom), DO NOT use it, as it will just give you a blurry, degraded photo, which is not much good to work from later. I recommend against shooting with a wide-angle lens here, since you would need to be closer to the sphere to get it to fill the frame. That would result in you and your camera being reflected larger in the sphere, and therefore harder to paint out later. Also, be mindful of what is being reflected in the sphere. In the image below, I've left a bright green microfiber cloth on the table next to the sphere, which I'll have to paint out later.
[Image: the sphere framed with room around it; a bright green microfiber cloth is visible on the table]
Shooting the Photos: Shooting

4) Find your base exposure. During the shooting of the image sequence, you should adjust the shutter speed to change exposure rather than changing the aperture (f-stop). This is because changing the aperture size also affects the camera's depth of field. Using a larger aperture (smaller f-stop number) can result in the front of the sphere being in focus while the edges are out of focus, giving you a big fuzzy spot when you unwrap the sphere.
I've found that the best route is to pick an f-stop that should afford plenty of depth of field, such as f/11, and then adjust the shutter speed until the camera's light meter says you have the correct exposure. Note this exposure.

5) Figure out your exposure range. In my shooting, I generally shoot nine total images. I do this because that is the limit of my camera's auto-bracketing, and it is therefore the fastest and easiest for me. If you are doing it by hand, you can shoot a larger range if you like. I have found that, in many cases, there isn't much info to be had beyond nine stops, as the long exposures are solid white, and the darker ones just end up being a tiny white dot where the sun is (when shooting outdoors). But use the information below to decide if you should shoot a wider range.

Figuring out what exposures to use: You will want to shoot images that are a stop apart. This means that, starting from the darkest image, each following image gets twice the amount of light. Let's say your camera tells you that at an f-stop of 11, the correct shutter speed is 1/30th of a second. If you want to shoot a series of nine images, the chart below shows what the range would be. You could then set the camera to the fastest shutter speed in your bracket, shoot an image, adjust the shutter speed, shoot another, and so on, until you have shot all nine.
[Chart: nine exposures at f/11, one stop apart, centered on the metered 1/30 sec: 1/500, 1/250, 1/125, 1/60, 1/30, 1/15, 1/8, 1/4, and 1/2 sec]
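If you'd rather compute the bracket than memorize the chart, here's a tiny Python sketch of the same arithmetic. The f/11 and 1/30 sec base come from the example above; the helper function is just illustrative, and note that real cameras round to nominal marked speeds (1/500 rather than the mathematically exact 1/480):

```python
def bracket(base_shutter, stops=9):
    """Return `stops` shutter speeds (in seconds), one stop apart,
    centered on base_shutter. Each step doubles the light."""
    half = stops // 2
    return [base_shutter * (2.0 ** i) for i in range(-half, half + 1)]

for t in bracket(1.0 / 30.0):
    # Print as a familiar fraction when the exposure is under a second.
    print(f"1/{round(1.0 / t)} sec" if t < 1.0 else f"{t:g} sec")
```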
6) Check your exposures. What you are looking for here is a bright exposure at one end of the range that is bright enough not to have any blacks left. That way you know you are seeing everything you can and no dark areas are being clipped off into black. At the other end, you want a dark image that has only some tiny bright spots wherever your light sources were. In theory, it's good to go all the way until there is nothing left in the dark frame, since that would mean you managed to capture the entire dynamic range. In practice, though, I have found this can be impossible, since camera shutters can only move so fast and apertures can only go so small. Getting an exposure where the sun is anything other than a white dot can be nearly impossible without the aid of some heavy-duty filters, which would, of course, make the rest of your image sequence quite dark. All that said, here is what I came up with:
[Image: my nine bracketed exposures, from brightest to darkest]
7) Extra Credit: Shooting another image sequence from another position can let you easily paint out the photographer and the stretchy area on the unwrapped image. If you like, you can now move your camera around the sphere 90 degrees and shoot another sequence with the same exposure settings. I use 90 degrees here because if you go around to the opposite side of the sphere, you will find that, when unwrapped, the photographer and the stretchy spot will have just swapped places in the two images, whereas with the 90-degree image, there will be clean areas in each that can be used for the other.

Also, the new camera position should be the same distance away from the sphere as the last, and if you are using a zoom lens, the focal length should be the same. This will help ensure that there are no strange perspective mismatches in your final images. With that in mind, you can shoot two sequences here and do everything twice, but for the sake of simplicity, I'm just going to continue with a single-sequence HDRI.

Putting It All Together


8) This is where modern software makes it all easy. Just fire up Photoshop and go to File > Automate > Merge to HDR. Browse to the photos you've gotten off your camera, select all nine (or however many you have in a single sequence) images, check the 'attempt to align' checkbox, which will let Photoshop try to align the images if they are off a little bit, and then hit the OK button. Photoshop should think on things for a while, and then bring up a dialog asking how you want to see the HDRI. You can just slide the slider around until you like the way the image looks. This should only change the way Photoshop is showing you the image, not the actual values of the image itself. Since the image has a higher dynamic range than your monitor, Photoshop uses that little slider setting to determine which chunk of the dynamic range it shows you.

Note: If, in the assembly process, Photoshop crashes a bunch (this happens more with CS2 than CS3), and especially if you shot your image sequence on a nice solid tripod, you can choose to NOT check the 'align source images' checkbox. This will speed things up and reduce the RAM requirements.
[Image: Photoshop's Merge to HDR dialog]
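As an aside, if Photoshop simply won't cooperate, OpenCV can do a comparable merge from a script. This is a sketch of an alternative route, not the tutorial's workflow; the filenames and exposure times below are placeholders for your own bracket:

```python
import cv2
import numpy as np

files = [f"sphere_{i}.jpg" for i in range(9)]  # your nine exposures
times = np.array([1/500, 1/250, 1/125, 1/60, 1/30,
                  1/15, 1/8, 1/4, 1/2], dtype=np.float32)  # shutter speeds, sec

images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge into a float radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

cv2.imwrite("sphere.hdr", hdr)  # Radiance .hdr, ready for HDRShop
```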
9) Trim to the sphere. So now you should have a fully assembled HDRI. Just crop the image down to the edges of the sphere, resulting in a square image. If you are using Photoshop CS3, there should be a little exposure slider at the bottom of the image window that will let you see the rest of the dynamic range of the image. Now save that image out as an .hdr formatted file.
[Image: the assembled HDRI cropped square to the edges of the sphere]
10) Use HDRShop to convert that to latitude/longitude format. Open up HDRShop, then open the file you just saved. Now head to Image > Panorama > Panoramic Transformations... That should bring up a dialog with a few panorama settings. First choose 'Latitude/Longitude' from the format dropdown on the right side. Next, set your width to 2048 (if you've got a super-high-res camera, you could go higher) and click OK. This should generate a new image in the lat/long format, which also happens to line up with the UV coordinates of a sphere, so it can be easily mapped onto one.
[Image: HDRShop's Panoramic Transformations dialog]
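For the curious, here is a rough numpy sketch of the math HDRShop is doing under the hood. It assumes an orthographic view of a perfect sphere, uses a nearest-neighbor lookup, and picks an arbitrary orientation convention, so treat it as an illustration rather than a replacement for HDRShop:

```python
import cv2
import numpy as np

ball = cv2.imread("sphere.hdr", cv2.IMREAD_UNCHANGED)  # square crop of the ball
H, W = ball.shape[:2]
OUT_W, OUT_H = 2048, 1024

# Direction vector for every output (lat/long) pixel. Here +z points back at
# the camera, so the photographer lands in the middle of the unwrapped image.
j, i = np.meshgrid(np.arange(OUT_W), np.arange(OUT_H))
lon = (j + 0.5) / OUT_W * 2.0 * np.pi - np.pi
lat = np.pi / 2.0 - (i + 0.5) / OUT_H * np.pi
D = np.stack([np.cos(lat) * np.sin(lon),
              np.sin(lat),
              np.cos(lat) * np.cos(lon)], axis=-1)

# Sphere normal that reflects the (orthographic) view ray into direction D.
# Directly behind the sphere this degenerates to zero: that's the blind spot.
N = D + np.array([0.0, 0.0, 1.0])
N /= np.linalg.norm(N, axis=-1, keepdims=True) + 1e-8

# Orthographic projection: the normal's x/y give the position on the ball photo.
x = ((N[..., 0] + 1.0) / 2.0 * (W - 1)).astype(int)
y = ((1.0 - (N[..., 1] + 1.0) / 2.0) * (H - 1)).astype(int)

cv2.imwrite("latlong.hdr", ball[y, x])  # nearest-neighbor lookup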
Now you should have an unwrapped HDRI that you can drop into a 3d scene to use for reflections or lighting.
[Image: the unwrapped latitude/longitude HDRI]
Clean Up

So now that you've made your HDRI, you may have noticed in the completed image that there are reflections of the photographer and a weird little puckery stretch at the edges. By doing an offset in Photoshop (Filter > Other > Offset, with wrap-around turned on), you can get a better look at the stretching, and it'll be easier to paint out this way. Also, in the image below, you can see some big ripples in the door and a few other areas. These are from the lumps in my steel sphere.
[Image: the offset panorama, showing the puckery stretch and the ripples from the lumpy sphere]
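If you'd rather script it, the same wrap-around offset is a one-liner with numpy (the file names here are placeholders):

```python
import cv2
import numpy as np

pano = cv2.imread("latlong.hdr", cv2.IMREAD_UNCHANGED)
shifted = np.roll(pano, pano.shape[1] // 2, axis=1)  # half the width, e.g. 1024px
cv2.imwrite("latlong_offset.hdr", shifted)
```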
One might ask, "Hey, where did that stretching come from?" I also run across a number of people who are under the impression that the chrome sphere can really only capture half of the panorama, or 180 degrees' worth. To both the question and the misconception, I offer this stunning diagram:
[Diagram: the chrome sphere reflects nearly the full 360 degrees; only the area directly behind it is a blind spot]
What I was hoping to show with the diagram is that the only spot the chrome sphere cannot reflect in our photos is the area directly behind itself. The panoramic transformation done by HDRShop assumes the chrome sphere can see the whole 360 degrees, so it fills in that blind spot as best it can.

Note that you can minimize the blind spot by being further away from the sphere and using a longer lens. BUT, be careful to use a sturdy tripod if you go this route, and also use the camera's self-timer when shooting each image. This will help reduce the vibrations felt by the camera, which would otherwise be magnified by the long lens.
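If you want a feel for the numbers, here's a back-of-the-envelope estimate of my own (an approximation, not from any reference): treating the ray at the sphere's silhouette as grazing, the blind cone's half-angle works out to roughly asin(radius / distance):

```python
import math

def blind_half_angle_deg(sphere_radius, camera_distance):
    """Approximate half-angle of the blind cone behind a mirror ball."""
    return math.degrees(math.asin(sphere_radius / camera_distance))

for d in (5, 10, 20):  # camera distance, in sphere radii
    print(f"distance {d}r -> blind cone half-angle "
          f"{blind_half_angle_deg(1, d):.1f} deg")
```

Doubling your distance roughly halves the blind spot, which is why backing up with a longer lens helps.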

Anyhow, back to cleanup. If you skipped the extra credit part above and have a single HDRI, all you need to do is a bit of work with the clone tool to cover up the stretching and maybe get rid of your own reflection. Here's mine with the stretch cloned out, and with the bright green cloth cloned out:
[Image: the panorama with the stretch and the green cloth cloned out]
Then I offset it another 1024 pixels, back to its original orientation, and cloned out my own reflection:
[Image: the panorama offset back to its original orientation, with the photographer cloned out]
If you did the extra credit, shot another image sequence from a different point of view, and turned that one into an HDRI as well, then you can use it here instead of all the cloning. Note that in order to use multiple layers in a 32-bit file in Photoshop, you will have to be running Photoshop CS3 Extended. If you've got all of the above, just drop both images into the same file and offset one until the objects in the two panoramas roughly line up. You can then apply a layer mask to the top layer and paint into that mask until you've covered up the undesirable parts. This can be a much better way to go if you've got a good chrome sphere and there is a great deal of detail in your environment, since the single-image cloning method can create some odd-looking areas.
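If you'd rather do that combine outside of Photoshop (or don't have CS3 Extended), here's a hedged numpy sketch of the same masked blend. The file names are placeholders, and the mask is one you'd paint yourself, white where the second panorama should show through:

```python
import cv2
import numpy as np

a = cv2.imread("latlong_A.hdr", cv2.IMREAD_UNCHANGED)  # main panorama
b = cv2.imread("latlong_B.hdr", cv2.IMREAD_UNCHANGED)  # 90-degree panorama, aligned
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
mask = mask[..., None]  # broadcast the mask across the color channels

cv2.imwrite("latlong_clean.hdr", a * (1.0 - mask) + b * mask)
```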

All done, now use it!