Today most digital images and imaging devices use 8 bits per channel, which limits the dynamic range of the device to about two orders of magnitude (256 levels), while the human eye can adapt to lighting conditions varying by ten orders of magnitude. When we photograph a real-world scene, bright regions may be overexposed while dark ones may be underexposed, so we cannot capture all details with a single exposure. HDR imaging works with images that use more than 8 bits per channel (usually 32-bit float values), allowing a much wider dynamic range.
There are different ways to obtain HDR images, but the most common one is to use photographs of the scene taken with different exposure values. To combine these exposures it is useful to know your camera's response function, and there are algorithms to estimate it. After the HDR image has been merged, it has to be converted back to 8 bits to view it on ordinary displays; this process is called tonemapping. Additional complexities arise when objects in the scene or the camera move between shots, since images with different exposures then have to be registered and aligned.
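When the shots are taken hand-held, OpenCV's median threshold bitmap (MTB) alignment can compensate for small camera shifts before merging. A minimal sketch, assuming the exposure images are already loaded into a vector<Mat> named images as in the listing below:

    // Align the exposures in place using median threshold bitmaps.
    // MTB compares median-thresholded bit patterns, so it is robust to the
    // large brightness differences between the exposures.
    Ptr<AlignMTB> align = createAlignMTB();
    align->process(images, images);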
In this tutorial we show how to generate and display an HDR image from an exposure sequence. In our case the images are already aligned and there are no moving objects. We also demonstrate an alternative approach called exposure fusion that produces a low dynamic range image directly. Each step of the HDR pipeline can be implemented with different algorithms, so take a look at the reference manual to see them all.
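The sample program below takes a single command-line argument: the path to a directory containing the exposure images together with a list.txt file. As the loadExposureSeq helper at the end of the listing shows, each line of list.txt pairs an image filename with the inverse of its exposure time; the filenames and values here are purely illustrative:

    img_0.png 16.0
    img_1.png 4.0
    img_2.png 1.0
    img_3.png 0.25

With these values img_0.png corresponds to an exposure of 1/16 s and img_3.png to 4 s.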
#include "opencv2/photo.hpp"
#include "opencv2/imgcodecs.hpp"
#include <vector>
#include <fstream>
using namespace cv;
using namespace std;

void loadExposureSeq(String, vector<Mat>&, vector<float>&);

int main(int argc, char** argv)
{
    // Load the exposure sequence and the corresponding exposure times.
    vector<Mat> images;
    vector<float> times;
    loadExposureSeq(argv[1], images, times);
    // Estimate the inverse camera response function (Debevec's method).
    Mat response;
    Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
    calibrate->process(images, response, times);
    // Merge the exposures into a single 32-bit float HDR image.
    Mat hdr;
    Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
    merge_debevec->process(images, hdr, times, response);
    // Tonemap the HDR image so it can be shown on an ordinary display.
    Mat ldr;
    Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
    tonemap->process(hdr, ldr);
    // Exposure fusion (Mertens) produces a displayable result without an HDR intermediate.
    Mat fusion;
    Ptr<MergeMertens> merge_mertens = createMergeMertens();
    merge_mertens->process(images, fusion);
    // Results are in [0, 1]; scale to [0, 255] before saving as 8-bit images.
    imwrite("fusion.png", fusion * 255);
    imwrite("ldr.png", ldr * 255);
    imwrite("hdr.hdr", hdr);
    return 0;
}

void loadExposureSeq(String path, vector<Mat>& images, vector<float>& times)
{
    path = path + std::string("/");
    ifstream list_file((path + "list.txt").c_str());
    string name;
    float val;
    while (list_file >> name >> val) {
        Mat img = imread(path + name);
        images.push_back(img);
        times.push_back(1 / val);  // the file stores inverse exposure times
    }
}
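As noted above, each stage has interchangeable implementations in the photo module. A minimal sketch that reuses the images and times vectors from the listing, swapping Debevec's method for Robertson's and the Durand tonemapper for Reinhard's (the output filename is illustrative):

    Mat response, hdr, ldr;
    // Robertson's iterative calibration and merge are drop-in replacements for Debevec's.
    Ptr<CalibrateRobertson> calibrate = createCalibrateRobertson();
    calibrate->process(images, response, times);
    Ptr<MergeRobertson> merge = createMergeRobertson();
    merge->process(images, hdr, times, response);
    // Any Tonemap implementation maps the 32-bit HDR result back to [0, 1].
    Ptr<TonemapReinhard> tonemap = createTonemapReinhard(2.2f);
    tonemap->process(hdr, ldr);
    imwrite("ldr_reinhard.png", ldr * 255);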