Deep learning based backlight image enhancement method derived of zero-reference samples
2022, Vol. 27, No. 5: 1589-1603
Print publication date: 2022-05-16
Accepted: 2021-12-24
DOI: 10.11834/jig.210783
Zhiyin Wang, Erhu Zhang, Zhenghao Shi, Jinghong Duan. Deep learning based backlight image enhancement method derived of zero-reference samples[J]. Journal of Image and Graphics, 2022, 27(5): 1589-1603.
Objective
With the popularization of digital devices, photography has become a mainstream way of recording daily life. However, uncontrollable factors in the surrounding environment often leave users with backlight images. Most traditional image enhancement methods perform global enhancement and tend to either over-enhance or under-enhance the image. Most deep learning based enhancement methods, in turn, are designed for low-illumination image enhancement; such methods cannot handle the enhancement of the underexposed and overexposed regions of a backlight image at the same time, and they require paired datasets for network training.
Method
This paper proposes an attention-based backlight image enhancement network (ABIEN). The network learns pixel-wise mapping parameters between the backlight image and the enhanced image, which removes the dependence on reference images, and an attention mechanism makes the network focus on enhancing both the underexposed and overexposed regions. To cope with the lack of paired image datasets, the network learns the mapping parameters between the backlight image and the restored image and applies them iteratively to produce the enhanced result. To brighten underexposed regions while suppressing over-enhancement of overexposed regions, the attention mechanism helps the network attend to the enhancement of these two kinds of regions. To reduce the halos and artifacts that appear in most image restoration work, an original-resolution retention strategy is adopted: the feature information at every depth of the backbone network is fully exploited without changing the image size, which weakens the influence of such degradations on the enhanced image.
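To make the iterative pixel-wise mapping concrete, the following is a minimal sketch assuming a Zero-DCE-style quadratic curve applied per pixel; the function name, tensor shapes, and the exact curve form are illustrative assumptions rather than the paper's specification.

```python
import torch

def iterative_curve_enhance(x: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
    """Apply learned pixel-wise mapping parameters to a backlight image iteratively.

    x:     input image, shape (B, 3, H, W), values in [0, 1]
    alpha: per-pixel parameters predicted by the network, shape (B, 3 * n_iter, H, W)

    Hypothetical sketch assuming a Zero-DCE-style quadratic curve
    x_{k+1} = x_k + a_k * x_k * (1 - x_k); the paper's exact mapping may differ.
    """
    for a_k in torch.split(alpha, 3, dim=1):   # one (B, 3, H, W) parameter map per iteration
        x = x + a_k * x * (1.0 - x)            # pixel-wise curve; stays in [0, 1] when a_k is in [-1, 1]
    return x
```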
Result
The proposed method is compared with MSRCR (multi-scale retinex with color restoration), Fusion-based (a fusion-based method), Learning-based (learning-based restoration), NPEA (naturalness preserved enhancement algorithm), and ExCNet (exposure correction network). Subjectively, the enhanced images produced by the proposed method show better exposure, more faithful colors, and fewer artifacts. Objectively, the proposed method achieves the best LOE (lightness order error) score and also performs well on VLD (visibility level descriptor) and CDIQA (contrast-distorted image quality assessment). Its processing time is relatively short, so it can meet real-time requirements in practical scenarios.
Conclusion
By combining the attention mechanism with the original-resolution retention strategy, the proposed backlight image enhancement method helps the network learn feature information at all levels and fully exploit the image content, so that image details are restored well while the image brightness is corrected.
Objective
Digital photography has become part of everyday life. However, uncontrollable factors in the shooting environment often lead to backlight images. Without careful control of the lighting, important objects can disappear into the backlit areas, making backlighting a serious cause of image quality degradation. A backlight image arises when the photographed object lies between the light source and the lens, so the dynamic range of light within a single frame is extremely large. Because of the limitations of the photosensitive sensor, an ordinary camera cannot record every level of detail within its latitude, which leads to poor shooting results: the visual quality of the whole image degrades, colors in the meaningful regions deteriorate, and detail information is lost. Most current image enhancement methods focus on global enhancement and therefore tend to over-enhance or under-enhance backlight images. Moreover, deep learning based enhancement methods are mainly designed for low-illumination images and cannot handle the underexposed and overexposed regions of a backlight image at the same time. We present an attention-based backlight image enhancement network (ABIEN) that removes the need for paired image sets by learning a pixel-wise mapping between the backlight image and the enhanced image, and whose attention mechanism helps the trained network enhance both underexposed and overexposed regions.
Method
First, the proposed network is designed to learn the mapping parameters between the backlight image and the restored image, which removes the need for paired datasets; the enhanced image is then obtained by applying the learned parameters to the backlight image iteratively. Because the parameters are predicted per pixel, the method avoids the indiscriminate enhancement of earlier approaches and achieves targeted enhancement. Second, to brighten the underexposed regions while suppressing the overexposed ones, an attention mechanism is employed so that the network focuses on these two types of regions during training. Experiments show that the attention mechanism distinguishes the underexposed and overexposed regions of a backlight image more accurately and helps the network generate better mapping parameters. Finally, to address the artifacts and halos that appear in most image restoration work, we adopt an original-resolution retention strategy that extracts features at every depth of the backbone network without changing the spatial resolution. Feature extraction under this strategy avoids the information loss caused by single-scale (down-sampled) features, and the artifact and halo problems are further reduced.
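As an illustration of how attention and original-resolution retention could fit together, the sketch below keeps every convolution at the input resolution, concatenates the features from all depths, and re-weights them with a simple spatial attention gate before predicting the per-pixel mapping parameters. The class name, layer widths, and the sigmoid attention form are assumptions for illustration, not the exact ABIEN architecture.

```python
import torch
import torch.nn as nn

class AttentionGuidedMapper(nn.Module):
    """Hypothetical sketch: original-resolution feature extraction with multi-depth
    feature reuse, gated by a spatial attention map, predicting per-pixel parameters."""

    def __init__(self, channels: int = 32, n_iter: int = 8):
        super().__init__()
        # All convolutions use stride 1 and padding 1, so the spatial size never changes.
        self.convs = nn.ModuleList(
            [nn.Conv2d(3 if i == 0 else channels, channels, 3, padding=1) for i in range(4)]
        )
        self.attention = nn.Sequential(            # single-channel spatial attention in (0, 1)
            nn.Conv2d(4 * channels, 1, 1), nn.Sigmoid()
        )
        self.head = nn.Conv2d(4 * channels, 3 * n_iter, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats, h = [], x
        for conv in self.convs:                    # original resolution preserved throughout
            h = torch.relu(conv(h))
            feats.append(h)
        fused = torch.cat(feats, dim=1)            # reuse features from every depth
        fused = fused * self.attention(fused)      # emphasize exposure-critical regions
        return torch.tanh(self.head(fused))        # per-pixel mapping parameters in [-1, 1]
```

The predicted parameter tensor could then drive an iterative pixel-wise mapping such as the one sketched after the Method summary of the abstract above.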
Result
Compared with multi-scale retinex with color restoration (MSRCR), the fusion-based method (Fusion-based), learning-based restoration (Learning-based), the naturalness preserved enhancement algorithm (NPEA), and the exposure correction network (ExCNet), the images enhanced by our method show better exposure, more faithful colors, and fewer artifacts. Lightness order error (LOE), the visibility level descriptor (VLD), and contrast-distorted image quality assessment (CDIQA) are used to evaluate the images restored by the different methods. LOE measures changes in the order of image lightness caused by halos, artifacts, contours, and ringing, which degrade the perceived visual quality; a smaller LOE value indicates a better restoration. VLD is computed as the ratio of the gradients of the visible edges before and after restoration, and a higher VLD value indicates better visual quality. CDIQA can be regarded as an indicator of the content richness of an image, and a higher CDIQA value indicates better image quality. The comparison shows that our method achieves the best LOE value and competitive VLD and CDIQA scores. Its running time is also relatively short, which makes it suitable for real-time application in real scenarios.
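For reference, the LOE computation described above can be sketched as follows; the sub-sampling step and function name are implementation conveniences assumed here, not details taken from the paper.

```python
import numpy as np

def lightness_order_error(original: np.ndarray, enhanced: np.ndarray, step: int = 50) -> float:
    """Count, per sampled pixel, how many pairwise lightness-order relations are flipped
    by the enhancement (lower is better). Inputs are H x W x 3 arrays; `step` sub-samples
    the pixel grid to keep the pairwise comparison tractable (an assumed implementation
    choice, not part of the metric's definition)."""
    L  = original.max(axis=2)[::step, ::step].ravel()    # lightness = max over RGB channels
    Le = enhanced.max(axis=2)[::step, ::step].ravel()
    order_in  = L[:, None] >= L[None, :]                 # pairwise lightness order, input
    order_out = Le[:, None] >= Le[None, :]               # pairwise lightness order, output
    return float(np.logical_xor(order_in, order_out).sum(axis=1).mean())
```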
Conclusion
Our backlight image enhancement method integrates the attention mechanism and the original-resolution retention strategy into feature learning at all layers of the network and fully mines the detail information of the image. Subjective and objective evaluations were conducted to verify the superiority of the proposed approach. The experimental results show that it satisfactorily corrects the coexisting underexposure and overexposure in backlight images and restores image details better than the competing methods. In addition, artifacts are effectively suppressed in both backlit and non-backlit regions without introducing annoying side effects. Its high processing efficiency gives the proposed method good application prospects.
Keywords: backlight image; image enhancement; convolutional neural network (CNN); attention mechanism; zero-reference sample
Afifi M, Abdelhamed A, Abuolaim A, Punnappurath A and Brown M S. 2020. CIE XYZ net: unprocessing images for low-level computer vision tasks [EB/OL]. [2021-06-23]. https://arxiv.org/pdf/2006.12709.pdf
Buades A, Lisani J L, Petro A B and Sbert C. 2020. Backlit images enhancement using global tone mappings and image fusion. IET Image Processing, 14(2): 211-219 [DOI: 10.1049/iet-ipr.2019.0814]
Buchsbaum G. 1980. A spatial processor model for object colour perception. Journal of the Franklin Institute, 310(1): 1-26 [DOI: 10.1016/0016-0032(80)90058-7]
Guo C L, Li C Y, Guo J C, Loy C C, Hou J H, Kwong S and Cong R M. 2020. Zero-reference deep curve estimation for low-light image enhancement//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 1777-1786 [DOI: 10.1109/CVPR42600.2020.00185]
Hautière N, Tarel J P, Aubert D and Dumont É. 2008. Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Analysis and Stereology, 27(2): 87-95 [DOI: 10.5566/ias.v27.p87-95]
He K M, Sun J and Tang X O. 2011. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12): 2341-2353 [DOI: 10.1109/TPAMI.2010.168]
Hu J, Shen L, Albanie S, Sun G and Wu E H. 2020. Squeeze-and-excitation networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(8): 2011-2023 [DOI: 10.1109/TPAMI.2019.2913372]
Im J, Yoon I, Hayes M H and Paik J. 2013. Dark channel prior-based spatially adaptive contrast enhancement for back lighting compensation//Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Vancouver, Canada: IEEE: 2464-2468 [DOI: 10.1109/ICASSP.2013.6638098]
Jiang K, Wang Z Y, Yi P, Chen C, Han Z, Lu T, Huang B J and Jiang J J. 2021. Decomposition makes better rain removal: an improved attention-guided deraining network. IEEE Transactions on Circuits and Systems for Video Technology, 31(10): 3981-3995 [DOI: 10.1109/TCSVT.2020.3044887]
Jobson D J, Rahman Z and Woodell G A. 1997. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Transactions on Image Processing, 6(7): 965-976 [DOI: 10.1109/83.597272]
Kim N, Lee S, Chon E, Hayes M H and Paik J. 2013. Adaptively partitioned block-based backlit image enhancement for consumer mobile devices//Proceedings of 2013 IEEE International Conference on Consumer Electronics. Las Vegas, USA: IEEE: 393-394 [DOI: 10.1109/ICCE.2013.6486944]
Lee C, Li C and Kim C S. 2012. Contrast enhancement based on layered difference representation//Proceedings of the 19th IEEE International Conference on Image Processing. Orlando, USA: IEEE: 965-968 [DOI: 10.1109/ICIP.2012.6467022]
Li C Y, Guo C L, Han L H, Jiang J, Cheng M M, Gu J W and Loy C C. 2021. Low-light image and video enhancement using deep learning: a survey [EB/OL]. [2021-11-05]. https://arxiv.org/pdf/2104.10729.pdf
Li X G, Yang F F, Lam K M, Zhuo L and Li J F. 2020. Blur-Attention: a boosting mechanism for non-uniform blurred image restoration [EB/OL]. [2021-08-18]. https://arxiv.org/pdf/2008.08526.pdf
Li Z H and Wu X L. 2018. Learning-based restoration of backlit images. IEEE Transactions on Image Processing, 27(2): 976-986 [DOI: 10.1109/TIP.2017.2771142]
Lim S and Kim W. 2020. DSLR: deep stacked Laplacian restorer for low-light image enhancement. IEEE Transactions on Multimedia, 23: 4272-4284 [DOI: 10.1109/TMM.2020.3039361]
Lin T Y, Maire M, Belongie S, Bourdev L, Girshick R, Hays J, Perona P, Ramanan D, Zitnick C L and Dollar P. 2014. Microsoft COCO: common objects in context [EB/OL]. [2021-06-21]. https://arxiv.org/pdf/1405.0312.pdf
Liu Y T and Li X. 2020. No-reference quality assessment for contrast-distorted images. IEEE Access, 8: 84105-84115 [DOI: 10.1109/ACCESS.2020.2991842]
Loh Y P and Chan C S. 2019. Getting to know low-light images with the exclusively dark dataset. Computer Vision and Image Understanding, 178: 30-42 [DOI: 10.1016/j.cviu.2018.10.010]
Lore K G, Akintayo A and Sarkar S. 2017. LLNet: a deep autoencoder approach to natural low-light image enhancement. Pattern Recognition, 61: 650-662 [DOI: 10.1016/j.patcog.2016.06.008]
Ma K D, Zeng K and Wang Z. 2015. Perceptual quality assessment for multi-exposure image fusion. IEEE Transactions on Image Processing, 24(11): 3345-3356 [DOI: 10.1109/TIP.2015.2442920]
Mertens T, Kautz J and Van Reeth F. 2007. Exposure fusion//Proceedings of the 15th Pacific Conference on Computer Graphics and Applications. Maui, USA: IEEE: 382-390 [DOI: 10.1109/PG.2007.17]
Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241 [DOI: 10.1007/978-3-319-24574-4_28]
Shan C W, Zhang Z Z and Chen Z B. 2019. A coarse-to-fine framework for learned color enhancement with non-local attention//Proceedings of 2019 IEEE International Conference on Image Processing. Taipei, China: IEEE: 949-953 [DOI: 10.1109/ICIP.2019.8803052]
Shocher A, Cohen N and Irani M. 2018. Zero-shot super-resolution using deep internal learning//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 3118-3126 [DOI: 10.1109/CVPR.2018.00329]
Wang Q H, Fu X Y, Zhang X P and Ding X H. 2016. A fusion-based method for single backlit image enhancement//Proceedings of 2016 IEEE International Conference on Image Processing. Phoenix, USA: IEEE: 4077-4081 [DOI: 10.1109/ICIP.2016.7533126]
Wang R X, Zhang Q, Fu C W, Shen X Y, Zheng W S and Jia J Y. 2019. Underexposed photo enhancement using deep illumination estimation//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 6842-6850 [DOI: 10.1109/CVPR.2019.00701]
Wang S H, Zheng J, Hu H M and Li B. 2013. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Transactions on Image Processing, 22(9): 3538-3548 [DOI: 10.1109/tip.2013.2261309]
Wang W J, Wei C, Yang W H and Liu J Y. 2018. GLADNet: low-light enhancement network with global awareness//Proceedings of the 13th IEEE International Conference on Automatic Face and Gesture Recognition. Xi'an, China: IEEE: 751-755 [DOI: 10.1109/FG.2018.00118]
Wei C, Wang W J, Yang W H and Liu J Y. 2018. Deep retinex decomposition for low-light enhancement [EB/OL]. [2021-08-14]. https://arxiv.org/pdf/1808.04560.pdf
Yuan L and Sun J. 2012. Automatic exposure correction of consumer photographs//Proceedings of the 12th European Conference on Computer Vision. Florence, Italy: Springer: 771-785 [DOI: 10.1007/978-3-642-33765-9_55]
Zhang L, Zhang L J, Liu X, Shen Y, Zhang S M and Zhao S J. 2019. Zero-shot restoration of back-lit images using deep internal learning//Proceedings of the 27th ACM International Conference on Multimedia. Nice, France: ACM: 1623-1631 [DOI: 10.1145/3343031.3351069]
Zhang Y H, Guo X J, Ma J Y, Liu W and Zhang J W. 2021b. Beyond brightening low-light images. International Journal of Computer Vision, 129(4): 1013-1037 [DOI: 10.1007/s11263-020-01407-x]
Zhang Y L, Tian Y P, Kong Y, Zhong B N and Fu Y. 2021a. Residual dense network for image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(7): 2480-2495 [DOI: 10.1109/TPAMI.2020.2968521]