Low-light image enhancement and denoising with internal and external priors
Vol. 28, Issue 9, Pages 2844-2855 (2023)
Published: 16 September 2023
DOI: 10.11834/jig.220707
Du Shuangli, Dang Hui, Zhao Minghua, Shi Zhenghao. 2023. Low-light image enhancement and denoising with internal and external priors. Journal of Image and Graphics, 28(09):2844-2855
Objective
Most existing low-light image enhancement algorithms amplify noise, and when applied to extremely low-light images they suffer from insufficient brightness improvement and color distortion. To address these issues, this paper proposes a Retinex (retina cortex)-based enhancement and denoising method.
Method
To enhance extremely low-light images, the global illumination of the scene is first estimated using the dark channel prior; if the illumination is below 0.5, an initial illumination correction is applied to the image. Second, a sequential Retinex decomposition model is proposed so that all the noise in the low-light image is carried by the reflectance component; based on the decomposition result, the enhanced noise image is obtained via Gamma correction. Finally, a denoising mechanism based on dual, complementary internal and external prior constraints is proposed: the nonlocal self-similarity principle is used to build an internal prior constraint for the reflectance component, and deep learning is used to build an external prior constraint for the enhanced noise image, so that the two constraints restrict each other.
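As a toy illustration of the decomposition-then-Gamma idea, the sketch below splits an image into illumination and reflectance and brightens the illumination. This is a hedged simplification: the paper solves a regularized sequential decomposition model, whereas here the illumination is crudely approximated by the per-pixel channel maximum, which is an illustrative assumption, not the paper's estimator.

```python
import numpy as np

def crude_retinex_decompose(s, eps=1e-6):
    """Split S = L * R, with L approximated by the per-pixel maximum over
    RGB channels (a common rough initialization, not the paper's
    regularized sequential model). s: float array in [0, 1], (H, W, 3)."""
    L = s.max(axis=2, keepdims=True)
    R = s / (L + eps)
    return L, R

def enhance(s, gamma=2.2):
    """Brighten by Gamma-correcting the illumination and recombining.
    Any noise in S stays in R, so it is amplified along with the image,
    which is exactly why a denoising stage is needed afterwards."""
    L, R = crude_retinex_decompose(s)
    return np.clip(np.power(L, 1.0 / gamma) * R, 0.0, 1.0)
```

Note how the decomposition itself is lossless (L * R reconstructs S up to the small epsilon), so whatever the illumination estimate misses ends up in the reflectance layer.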
Result
The proposed algorithm is compared with six other algorithms on 140 generally low-light images and 162 extremely low-light images (with normal-exposure reference images) through subjective visual comparison and objective metrics. The results show clear advantages in brightness improvement, color fidelity, and denoising. For generally low-light images, the method achieves the second-best values for both BTMQI (blind tone-mapped quality index) and NIQE (natural image quality evaluator). For extremely low-light images, it achieves the best values for all three of NIQMC (no-reference image quality metric for contrast distortion), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM): the PSNR of the other algorithms ranges from 8 to 18.35 dB and their SSIM from 0.3 to 0.78, whereas the proposed algorithm reaches 18.94 dB and 0.82, a clear advantage.
Conclusion
The proposed algorithm can not only enhance low-light images under different illumination conditions but also effectively remove noise from the images, with stable performance.
Objective
Low-light image enhancement has been studied extensively in the past few decades as one of the most challenging image processing problems. Images taken in low-light conditions usually contain extremely dark areas and unexpected noise. Many impressive methods, including cognition-based and learning-based approaches, have been proposed to improve image brightness and recover image details and color information, and remarkable enhancement results have been achieved by deep-learning-based techniques. Enhancement methods based on supervised learning require low-light and normal-light image pairs; however, no unique or well-defined normal-light ground truth exists. In addition, models trained in a direct image-to-image transformation manner, even with generative adversarial networks, tend to show a bias toward a certain range of luminance values and scenes. Approaches based on the retina cortex (Retinex) represent a branch of cognition-based methods, but they tend to amplify the noise hidden in dark images. Some attempts at noise suppression have been introduced; they focus on utilizing the internal prior in the input image to distinguish noise from image texture. Their denoising performance is limited, and image texture is often removed together with the noise, leaving a blurry background. Additionally, most of these enhancement methods are designed for generally low-light images; when applied to extremely low-light images, they produce insufficient brightness improvement and obvious color deterioration. This study proposes a low-light image enhancement and denoising method that addresses these issues by combining internal and external priors.
Method
We regard extremely low-light image enhancement as a two-stage illumination correction task. First, the global illumination of the scene is estimated based on the well-known dark channel prior. If the global illumination is lower than 0.5, the input image is regarded as an extremely low-light image, and an initial brightness correction is performed. If the global illumination is greater than or equal to 0.5, the input image is a generally low-light image, and no initial correction is required. Second, a sequential Retinex decomposition model is proposed to decompose a low-light image into an illumination component multiplied by a reflectance component. An L1-norm regularization term is applied to the illumination gradient under the assumption that the illumination is spatially piecewise smooth. Unlike methods that fit the illumination layer to a pre-estimate, our method fits the illumination layer to the low-light image in the RGB color space, so that all the noise is carried by the reflectance layer. Based on the Retinex decomposition result, the enhanced noise image is produced with Gamma correction. Finally, a denoising technique is proposed based on a dual, complementary prior constraint: the nonlocal self-similarity property is utilized to construct the internal prior for the reflectance component, and a deep learning technique is utilized to construct the external prior constraint for the enhanced noise image, so that the internal and external priors restrict each other. The proposed denoising model is solved by an alternating optimization strategy.
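The two-stage illumination check described above can be sketched in NumPy as follows. The 0.5 threshold follows the paper; the dark-channel patch size, the top-pixel fraction, and the simple Gamma lift used for the initial correction are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then a local minimum filter.
    img: float array in [0, 1], shape (H, W, 3)."""
    min_rgb = img.min(axis=2)
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def global_illumination(img, patch=15, top=0.001):
    """Estimate global scene brightness from the brightest dark-channel
    pixels, analogous to atmospheric-light estimation in dehazing
    (an assumed realization of the paper's dark-channel-based estimate)."""
    dc = dark_channel(img, patch)
    n = max(1, int(top * dc.size))
    idx = np.argsort(dc.ravel())[-n:]      # brightest dark-channel pixels
    return float(img.reshape(-1, 3)[idx].mean())

def initial_correction(img, thresh=0.5, gamma=0.5):
    """Stage 1: brighten only images judged extremely dark."""
    if global_illumination(img) < thresh:
        return np.power(img, gamma)        # illustrative Gamma lift
    return img                             # generally low-light: pass through
```

The point of the gate is that a generally low-light image skips the first correction entirely, so the sequential Retinex decomposition always receives an input within a similar brightness range.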
Result
We compare the proposed method with six existing enhancement algorithms, including two Retinex-based traditional approaches, two deep learning approaches, and two Retinex-based learning approaches, to verify its effectiveness. We select 140 generally low-light images (global illumination > 0.5) from the commonly used DICM, LIME, and ExDark datasets, and 162 extremely low-light images (global illumination < 0.15) from the LOL dataset for testing. For the generally low-light images, no normal-exposure image exists for reference. Both visual and quantitative evaluations are provided. Three no-reference quality assessment metrics, namely the blind tone-mapped quality index (BTMQI), the no-reference image quality metric for contrast distortion (NIQMC), and the natural image quality evaluator (NIQE), and two full-reference metrics, the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), are utilized for evaluation. The visual comparisons show that our method has advantages in brightness improvement, color fidelity, and denoising. For the generally low-light images, the quantitative comparisons show that our method achieves the second-best results for BTMQI and NIQE, and its NIQMC result is close to those of the two Retinex-based traditional methods. For extremely low-light images, our method achieves the best results for NIQMC, PSNR, and SSIM. The PSNR values obtained by the other algorithms range from 8 to 18.35 dB, and their SSIM values range from 0.3 to 0.78, whereas our algorithm reaches 18.94 dB and 0.82, respectively, showing noticeable advantages. The qualitative and quantitative results show that the proposed algorithm can enhance low-light images under different illumination conditions and effectively remove noise hidden in the images, and its performance is relatively stable.
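For reference, the two full-reference metrics used above can be computed as in the minimal sketch below. The PSNR matches the standard definition; the SSIM here is a simplified single-window (global) variant for illustration, not the locally windowed SSIM typically used in published evaluations.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, test, peak=1.0):
    """Single-window SSIM: the standard luminance/contrast/structure terms
    computed once over the whole image (a simplification of windowed SSIM)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    x, y = ref.astype(np.float64), test.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Both metrics are only meaningful when a normal-exposure reference exists, which is why the paper reports them for the LOL-based extremely low-light set but relies on BTMQI, NIQMC, and NIQE for the reference-free generally low-light images.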
Conclusion
This paper proposes a novel Retinex-based low-light image enhancement and denoising method that can be used for both generally and extremely low-light images. The conflict between brightness increase and color distortion in the extremely low-light enhancement task is effectively resolved by first transforming an extremely low-light image into a generally low-light one. A dual, complementary constraint built on the internal and external priors removes the amplified noise. The experiments demonstrate that this constraint balances noise removal and texture preservation, keeping edges in the enhanced image sharp.
Retinex decomposition; low-light image enhancement; dark channel prior; environmental illumination estimation; dual complementary prior constraint; denoising