Multi-exposure image fusion based on image decomposition and color prior
2021, Vol. 26, No. 12, pp. 2800-2812
Received: 2020-06-30
Revised: 2020-12-15
Accepted: 2020-12-22
Published in print: 2021-12-16
DOI: 10.11834/jig.200295
Objective
Multi-exposure image fusion (MEF) synthesizes a set of low dynamic range (LDR) images captured at different exposure levels into a single image whose visual effect resembles that of a high dynamic range (HDR) image. Traditional multi-exposure fusion suffers, to varying degrees, from loss of detail, unclear boundaries, and partial color distortion. To make full use of the effective information in the source images, a two-scale multi-exposure image fusion method based on image decomposition and a color prior is proposed.
Method
First, fast guided filtering is used to decompose each image; the separated detail layer is enhanced to retain more detail information while reducing halo artifacts in the fused image. Second, based on a color prior, the difference between brightness and saturation is used to judge the exposure level of each image, and this difference is combined with image contrast to compute the multi-exposure fusion weights, preserving both the brightness and the contrast of the fused image. Finally, guided filtering is applied to optimize the weight maps, suppressing noise, increasing the correlation between pixels, and improving the visual quality of the fused image.
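The two-scale decomposition step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a plain (unsubsampled) guided filter with each image as its own guide rather than the fast guided filter, and the radius `r`, regularization `eps`, and detail gain of 1.5 are assumed values.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, separable, with edge padding."""
    k = 2 * r + 1
    ker = np.ones(k) / k
    pad = np.pad(img, r, mode='edge')
    rows = np.apply_along_axis(lambda m: np.convolve(m, ker, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, ker, mode='valid'), 0, rows)

def guided_filter(img, r=4, eps=1e-3):
    """Self-guided edge-preserving smoothing (He et al. 2013); returns the base layer."""
    mean_i = box_mean(img, r)
    var_i = box_mean(img * img, r) - mean_i * mean_i
    a = var_i / (var_i + eps)      # near 1 at strong edges, near 0 in flat regions
    b = mean_i - a * mean_i
    return box_mean(a, r) * img + box_mean(b, r)

# Two-scale decomposition: base layer plus enhanced detail layer
img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)   # synthetic stand-in for one exposure
base = guided_filter(img, r=4, eps=1e-3)
detail = img - base
detail_enhanced = 1.5 * detail     # gain of 1.5 is an assumed enhancement factor
```

Because the detail layer is defined as the residual `img - base`, the decomposition is exactly invertible; the enhancement simply rescales that residual before the layers are fused and recombined.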
Result
Experiments were conducted on 24 multi-exposure image sequences. From a subjective point of view, the proposed method improves the overall contrast and color saturation of the fused images while enhancing detail in both overexposed and underexposed regions. For objective evaluation, two different quality-assessment algorithms for multi-exposure fusion results were adopted; fusion performance improves under both, with mean scores of 0.982 and 0.970 respectively. Compared with the other algorithms, both structural-similarity indices improve, by 1.2% and 1.1% on average.
Conclusion
Subjective and objective evaluations confirm that the proposed method performs notably well in terms of image contrast, color saturation, and preservation of detail information, showing good fusion performance.
Objective
High-quality images with rich information and good visual effect are a central concern of digital imaging. Owing to the limited dynamic range of existing imaging equipment, a single exposure cannot record all the details of a scene, which seriously affects the visual effect and the retention of key information in the source images. There is a mismatch between the dynamic range of real scenes and that of existing imaging and display equipment and of the human eye. Dynamic range can be regarded as the brightness ratio between the brightest and darkest points of a natural scene. The dynamic range of the human visual system is about 10^5:1, whereas the dynamic range of images captured or displayed by digital imaging and display equipment is only about 10^2:1, significantly lower than that of the human visual system. Multi-exposure image fusion provides a simple and effective way to resolve this mismatch. It performs a weighted fusion of multiple images captured at different exposure levels, maximally retaining the information of the source images so that the fused image has a high-dynamic-range visual effect that matches the response of the human eye.
Method
Multi-exposure image fusion methods are usually categorized into spatial-domain and transform-domain methods. Spatial-domain methods either first divide the source sequences into image blocks according to certain rules and then fuse the blocks, or fuse directly at the pixel level. The fused images often exhibit problems such as unnatural transitions and uneven brightness distribution, which lower the structural similarity between the fused image and the sources. Transform-domain methods first decompose the source images into a transform domain and then fuse them according to certain rules. Image decomposition is mainly divided into multi-scale and two-scale decomposition. Multi-scale decomposition requires up-sampling and down-sampling operations, which cause a certain degree of information loss. Two-scale decomposition involves no such operations, avoiding that loss and, to a certain extent, the shortcomings of spatial-domain methods as well. With two-scale decomposition, an image can be split directly into base and detail layers using filters, but the choice of filter strongly affects the quality of the fused image. A new exposure fusion algorithm based on two-scale decomposition and a color prior is proposed to obtain results with the visual effect of high-quality HDR (high dynamic range) images: details of both overexposed and dark areas are preserved, and the fused image has good color saturation. The main contributions are as follows: 1) The difference between image brightness and image saturation is used to determine the degree of exposure, and this difference is combined with image contrast as a quality measure. This measure distinguishes overexposed and underexposed areas quickly and efficiently, takes the texture details of both into account, and improves the color saturation and contrast of the fused image. 2) A fusion method based on two-scale decomposition is presented; using the fast guided filter for decomposition reduces halo artifacts in the fused image to a certain extent, giving it a better visual effect. The workflow is as follows. First, each image is decomposed by fast guided filtering, and the obtained detail layer is enhanced, retaining more detail and reducing halo artifacts in the fused image. Next, based on the color prior, the difference between brightness and saturation is used to determine the degree of exposure, and this difference is combined with image contrast to compute the fusion weights, preserving the brightness and contrast of the fused images. Finally, guided filtering is used to optimize the weight maps, suppressing noise, increasing the correlation between pixels, and improving the visual effect of the fused images.
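The weighting and fusion steps can be sketched as follows, under stated assumptions: brightness and saturation are taken from the HSV model, the mapping from the brightness-saturation difference to a weight is a Gaussian of assumed width (the paper's exact formula may differ), contrast is measured by an absolute Laplacian, and a box blur stands in for the guided-filter refinement of the weight maps. The full method fuses base and detail layers separately; this sketch fuses pixels directly.

```python
import numpy as np

def smooth(w, r=4):
    """Box-blur stand-in for the guided-filter refinement of the weight maps."""
    k = 2 * r + 1
    ker = np.ones(k) / k
    pad = np.pad(w, r, mode='edge')
    rows = np.apply_along_axis(lambda m: np.convolve(m, ker, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, ker, mode='valid'), 0, rows)

def fusion_weight(rgb, sigma=0.2):
    """Exposure weight from the brightness-saturation difference times a contrast term."""
    v = rgb.max(axis=2)                                   # HSV value (brightness)
    s = (v - rgb.min(axis=2)) / np.maximum(v, 1e-6)       # HSV saturation
    # Large v - s signals poor exposure (bright but washed-out regions);
    # sigma = 0.2 is an assumed width, not taken from the paper.
    exposure = np.exp(-((v - s) ** 2) / (2 * sigma ** 2))
    gray = rgb.mean(axis=2)
    lap = np.abs(4 * gray - np.roll(gray, 1, 0) - np.roll(gray, -1, 0)
                 - np.roll(gray, 1, 1) - np.roll(gray, -1, 1))
    return exposure * (lap + 1e-6)                        # keep weights strictly positive

def fuse(images):
    """Pixel-wise weighted fusion of an exposure stack of float RGB images in [0, 1]."""
    w = np.stack([smooth(fusion_weight(im)) for im in images])
    w /= np.maximum(w.sum(axis=0, keepdims=True), 1e-12)  # normalize across exposures
    return (w[..., None] * np.stack(images)).sum(axis=0)

# Two flat synthetic "exposures": an underexposed and an overexposed frame
dark = np.full((16, 16, 3), 0.1)
bright = np.full((16, 16, 3), 0.9)
fused = fuse([dark, bright])
```

Because the per-pixel weights are normalized to sum to one across the stack, the fused value at every pixel is a convex combination of the source values, which bounds the output by the darkest and brightest inputs.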
Result
Experiments were conducted on 24 multi-exposure image sequences. From the perspective of subjective evaluation, the overall contrast and color saturation of the fused images improve, as do the details of both overexposed and underexposed areas. In terms of objective evaluation criteria, two different quality-evaluation algorithms were used on the fused results of the multi-exposure image sequences; the mean scores of the corresponding indicators reach 0.982 and 0.970 respectively. Both structural-similarity indices improve, by 1.2% and 1.1% on average.
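The objective scores above come from dedicated multi-exposure fusion quality metrics built on structural similarity. As a rough illustration only, a simplified global SSIM between a fused image and each source can be computed as below; this is not the metric used in the paper (which evaluates structure locally across the exposure stack), and the constants follow the common SSIM defaults.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single global SSIM value (no sliding window) between two images."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx * mx + my * my + c1) * (vx + vy + c2)))

# Score a "fused" image against each source and average, as a crude stand-in
src = np.linspace(0.0, 1.0, 32 * 32).reshape(32, 32)
sources = [0.5 * src, src]            # synthetic under-/well-exposed pair
fused_img = 0.75 * src                # synthetic fused result
score = np.mean([global_ssim(fused_img, s) for s in sources])
```

The score is 1 only when the two images match in mean, variance, and covariance; real MEF metrics aggregate such comparisons over local windows and across all exposures.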
Conclusion
Subjective and objective evaluations show that the proposed method achieves significant improvements in image contrast, color saturation, and detail retention, with good fusion performance. The algorithm has three main advantages: low complexity, simple implementation, and relatively fast running speed, which suit it to mobile devices. It can be applied to imaging equipment with low dynamic range to obtain ideal images.