Low-light image enhancement based on illumination and scene texture attention maps

Zhao Minghua, Wen Yichun, Du Shuangli, Hu Jing, Shi Cheng, Li Peng (School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China)

Abstract
Objective Existing low-light image enhancement algorithms often suffer from local under-enhancement, over-enhancement, and color deviation, and when applied to extremely low-light images they are further troubled by noise amplification and loss of detail. To address these problems, a low-light image enhancement algorithm based on illumination and scene texture attention maps is proposed. Method First, color equalization is applied to the low-light image to reduce the influence of color deviation on the attention-map estimation module. Second, the minimum channel constraint map of the low-light image is used to estimate the illumination and texture attention maps of the normally exposed image, providing information guidance for the subsequent enhancement module. Then, an enhancement module combining global and local processing is designed: the estimated illumination and scene texture attention maps guide brightness improvement and noise suppression, and the global enhancement result is divided into image patches for local optimization, which further improves enhancement performance and effectively avoids local under-enhancement and over-enhancement. Result The proposed algorithm is compared with two traditional methods and four deep learning methods. Both subjective visual inspection and objective metrics show that the enhanced results achieve excellent performance in terms of brightness, contrast, and noise suppression. On the VV (Vasileios Vonikakis) dataset, the proposed method obtains the best BTMQI (blind tone-mapped quality index) and NIQMC (no-reference image quality metric for contrast distortion) scores; on 178 ordinary low-light images it obtains the second-best BTMQI and NIQMC values, while showing clear advantages in texture prominence and noise suppression. Conclusion Extensive qualitative and quantitative experiments show that the proposed method effectively improves image brightness and contrast and suppresses noise while highlighting textures in dark regions. For extremely low-light images, it has clear advantages in color restoration, detail and texture recovery, and noise suppression. The code is available at https://github.com/shuanglidu/LLIE_CEIST.git.
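To make the preprocessing steps concrete, the following is a minimal NumPy/SciPy sketch of the color equalization and minimum channel constraint map described above, assuming the per-channel illumination intensity is approximated by a dark-channel-style local minimum. The function names and the exact gain formula are illustrative assumptions and may differ from the paper's implementation.

import numpy as np
from scipy.ndimage import minimum_filter

def color_equalize(img, patch=15, eps=1e-6):
    """Balance the RGB channels of a low-light image (float array in [0, 1]).

    The per-channel illumination intensity is approximated by averaging a
    dark-channel-style local minimum over the image; each channel is then
    scaled toward the mean intensity so the three channels become similar.
    (Hypothetical formulation, not necessarily the paper's exact estimator.)
    """
    local_min = minimum_filter(img, size=(patch, patch, 1))   # per-channel local minima
    channel_intensity = local_min.mean(axis=(0, 1))           # one estimate per RGB channel
    gains = channel_intensity.mean() / (channel_intensity + eps)
    return np.clip(img * gains, 0.0, 1.0)

def min_channel_map(img):
    """Minimum channel constraint map: pixel-wise minimum over the RGB channels,
    used here as a guidance input for attention-map estimation."""
    return img.min(axis=2, keepdims=True)

Under these assumptions, the minimum channel map of the color-equalized image would serve as the extra guidance input to the attention-map estimation module described above.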
Keywords
Low-light image enhancement algorithm based on illumination and scene texture attention map

Zhao Minghua, Wen Yichun, Du Shuangli, Hu Jing, Shi Cheng, Li Peng(School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China)

Abstract
Objective Owing to the lack of sufficient environmental light, images captured in low-light scenes often suffer from several kinds of degradation, such as low visibility, low contrast, intensive noise, and color distortion. Such degradations not only lower the visual perception quality of the images but also reduce the performance of subsequent middle- and high-level vision tasks, such as object detection and recognition, semantic segmentation, and automatic driving. Therefore, images taken under low-light conditions should be enhanced before subsequent use. Low-light image enhancement is one of the most important low-level vision tasks; it aims at improving illumination and recovering the details of dark, noisy regions, and it has been intensively studied. Many impressive traditional methods and deep learning-based methods have been proposed. The traditional image processing approaches mainly include value mapping (such as histogram equalization and gamma correction) and model-based methods (such as the Retinex model and the atmospheric scattering model). However, they improve image quality only from a single perspective, such as contrast or dynamic range, and neglect degradations such as noise and the loss of scene details. In contrast, with the great development of deep neural networks in low-level computer vision, deep learning-based methods can simultaneously optimize the enhancement results from multiple perspectives, such as brightness, color, and contrast; thus, the enhancement performance is significantly improved. Although significant progress has been achieved, existing deep learning-based enhancement methods still show drawbacks such as under-enhancement, over-enhancement, and color distortion in local areas, and their results can be inconsistent with the visual characteristics of human eyes. In addition, given the high degree of distortion in extremely low-light images, recovering scene details and suppressing noise amplification during enhancement is usually difficult. Therefore, increased attention should be paid to low-light image enhancement methods. To this end, a low-light image enhancement algorithm based on illumination and scene texture attention maps is proposed in this paper. Method First, unlike in normal-light images, the illumination intensities of the RGB channels differ markedly in low-light images, leading to apparent color distortion. Color equalization is therefore performed on the low-light image to reduce the influence of color distortion on the attention-map estimation module. We implement color equalization using the per-channel illumination intensities estimated with the dark channel prior, so that the light intensity of each channel becomes similar. Second, considering that the minimum channel constraint map suppresses noise and highlights texture, we estimate the illumination and texture attention map of the normal-exposure image from the minimum channel constraint map of the low-light image, which provides information guidance for the subsequent enhancement module. To this end, an attention-map estimation module based on the U-Net architecture is proposed. Third, an enhancement module is developed to improve image quality from the perspectives of both the whole image and local patches. In the global enhancement module, the estimated illumination and scene texture attention map guides illumination adjustment and noise suppression. The attention mechanism enables the network to allocate different degrees of attention to areas of different brightness in low-light images during training, helping the network focus on useful information effectively. The globally enhanced result is then divided into small patches to deal with under-enhancement and over-enhancement in local areas and to further improve the results. Result To verify the effectiveness of the proposed method, we compare it with six state-of-the-art enhancement methods, including two traditional methods, semi-decoupled decomposition (SDD) and the plug-and-play Retinex model (PnPR), and four deep learning-based methods: EnlightenGAN, zero-reference deep curve estimation (Zero-DCE), the Retinex-based deep unfolding network (URetinex-Net), and signal-to-noise-ratio-aware low-light image enhancement (SNR-aware). We use DICM (digital images from commercial cameras), LIME (low-light image enhancement), MEF (multi-exposure image fusion), and 9 other datasets to construct a test set of 178 low-light images. These low-light images have no normal-exposure reference images. Quantitative and qualitative evaluations are performed. For the quantitative evaluation, the natural image quality evaluator (NIQE), the blind tone-mapped quality index (BTMQI), and the no-reference image quality metric for contrast distortion (NIQMC) are used to assess image quality. NIQE examines the image against a designed model of natural images. BTMQI evaluates the perceptual quality of a tone-mapped image by analyzing its naturalness and structure. For NIQE and BTMQI, the lower the value, the higher the natural quality of the image. NIQMC evaluates image quality by computing the contrast between the local properties of image blocks and the properties of their related blocks; the higher the score, the better the image quality. On the challenging VV dataset, our method obtains the best results for the BTMQI and NIQMC indicators. Experiments on the 178 low-light images show that our method achieves the second-best values for the BTMQI and NIQMC metrics, while its advantages in texture prominence and noise suppression are significant. Conclusion Experimental results indicate that the results enhanced by our method achieve the expected visual effects in terms of brightness, contrast, and noise suppression. In addition, our method can produce the expected enhancement results for extremely low-light images.
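As a rough illustration of the Method paragraph, the following PyTorch sketch wires together a toy attention estimator, an attention-guided global enhancer, and a patch-wise local refinement pass. All class and function names (AttentionEstimator, GlobalEnhancer, local_refine) are hypothetical placeholders rather than the paper's actual networks; in particular, the real attention estimator is a U-Net, which is replaced here by a few convolutions for brevity.

import torch
import torch.nn as nn

class AttentionEstimator(nn.Module):
    """Toy stand-in for the U-Net-based attention-map estimator: it takes the
    color-equalized image plus its minimum channel constraint map and predicts
    a single-channel illumination/texture attention map in [0, 1]."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, img, min_map):
        return self.body(torch.cat([img, min_map], dim=1))

class GlobalEnhancer(nn.Module):
    """Attention-guided enhancement network (illustrative only)."""
    def __init__(self, in_ch=4, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.body(x)

def local_refine(coarse, refiner, patch=64):
    """Split the globally enhanced result into patches and refine each patch,
    mimicking the global-to-local strategy described above."""
    _, _, h, w = coarse.shape
    out = coarse.clone()
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = coarse[:, :, y:y + patch, x:x + patch]
            out[:, :, y:y + patch, x:x + patch] = refiner(tile)
    return out

# Usage sketch (shapes only; every module here is hypothetical).
img = torch.rand(1, 3, 256, 256)               # color-equalized low-light input
min_map = img.min(dim=1, keepdim=True).values  # minimum channel constraint map
attn = AttentionEstimator()(img, min_map)      # illumination/texture attention
coarse = GlobalEnhancer(in_ch=4)(torch.cat([img, attn], dim=1))
final = local_refine(coarse, refiner=GlobalEnhancer(in_ch=3))

The two-stage call order mirrors the description above: the attention map conditions the global enhancement, and the patch loop models the local optimization that counters residual under- or over-enhancement.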
Keywords
