
Published: 2017-10-16
DOI: 10.11834/jig.170151
2017 | Volume 22 | Number 10




Image Analysis and Recognition









Video shadow elimination algorithm combining HSV with texture features
Wu Minghu, Song Ranran, Liu Min
1. Hubei Key Laboratory for High-efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, Hubei University of Technology, Wuhan 430068, China;
2. Hubei Collaborative Innovation Center for High-efficiency Utilization of Solar Energy, Hubei University of Technology, Wuhan 430068, China

Abstract

Objective In video surveillance target detection, shadows in the scene directly affect detection accuracy, so shadow suppression algorithms are particularly important. The widely used HSV (hue, saturation, value) shadow suppression method suffers from an unstable luminance-ratio threshold, which causes parts of the moving target to be misclassified as shadow. To address this problem, this paper proposes a video shadow elimination method that combines HSV with texture features. Method First, a conventional Gaussian mixture model is used to build the background from the input images and to extract the foreground in gray space; shadows are then detected in HSV space by thresholding the luminance ratio, and the two results are combined to obtain the moving target. To handle foreground pixels misclassified as shadow because of the unstable luminance-ratio threshold, the LBP (local binary pattern) operator combined with the OTSU threshold is used to extract part of the moving target. Finally, the partial target extracted by LBP and OTSU is OR-ed with the target detected in HSV space, which removes the shadow of the moving target. Result Shadowed videos of several scenes from the CVPR-ATON and CAVIAR standard video libraries are used, and the proposed algorithm is compared with the SNP, SP, DNM1, and DNM2 algorithms. Experimental results show that the proposed algorithm improves the average of the shadow detection rate and the shadow discrimination rate by about 10%. Conclusion The proposed video shadow elimination algorithm combines HSV with texture features, removes shadows effectively in different environments while keeping the moving target intact, and is applicable to intelligent video surveillance, remote sensing imagery, and human-computer interaction.

Key words

shadow elimination; HSV color space; luminance; LBP operator; OTSU threshold; applicability

Video shadow elimination algorithm by combining HSV with texture features
Wu Minghu, Song Ranran, Liu Min
1. Hubei Key Laboratory for High-efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, Hubei University of Technology, Wuhan 430068, China;
2. Hubei Collaborative Innovation Center for High-efficiency Utilization of Solar Energy, Hubei University of Technology, Wuhan 430068, China
Supported by: National Natural Science Foundation of China (61471162); Science and Technology Support Program (R&D) Project of Hubei Province, China (2015BAA115)

Abstract

Objective In video surveillance target detection, shadows in the scene directly affect the accuracy of target detection, so shadow suppression algorithms are particularly important. The traditional algorithm that detects shadows in the HSV (hue, saturation, value) color space is widely used. Inspired by the color perception mechanism of the human visual system, it detects shadows from the luminance ratio between the current video frame and the background model. We propose a shadow elimination algorithm that combines HSV spatial features with texture features to overcome the shortcoming of the luminance-ratio threshold, which causes the moving target to be mistaken for shadow. Method The Gaussian mixture model can effectively overcome interference caused by illumination changes and periodic disturbances of the background image. First, a mixture of Gaussians (typically 3 to 5 components) is used to characterize each pixel of the input images, and the mixture model is updated with each newly arrived frame. Each pixel of the current image is matched against the mixture model; if the match succeeds, the pixel belongs to the background, otherwise it belongs to the foreground. The algorithm based on the HSV color space can detect shadows accurately by computing the luminance ratio between the current video frame and the background model, because the hue and saturation of a shadowed pixel are close to those of the background, while its luminance is lower than that of the background pixel. The luminance ratio between them usually lies between 0.7 and 1. Therefore, the moving target can be obtained by combining the foreground detected by the Gaussian mixture background model with the shadow detected by the HSV-based method. Traditional HSV-based algorithms can obtain accurate detection results, but parts of the moving target are often seriously mistaken for shadows in the video frames. To overcome this problem, we use texture features, namely the local binary pattern (LBP) and OTSU thresholding, to extract the moving target. LBP is an operator describing gray-scale variation. A small threshold is selected and compared with the difference between the gray value of the central pixel and the gray value of each neighboring pixel. If the difference is greater than the threshold, the neighbor is marked as 1; otherwise, it is marked as 0. In this way, we obtain a description of the texture change at the location of the central pixel. The LBP operator extracts local texture features from the original gray levels of the image. OTSU is a maximum between-class variance method. If the shadow and the target have a large between-class variance, the two parts differ considerably; when part of the target is regarded as shadow, or part of the shadow is regarded as target, the difference between the two parts becomes smaller. Thus, the segmentation with the largest between-class variance yields the minimum probability of misclassification. According to the gray-level features of the image, OTSU divides the image into shadow and target. The complete moving target is obtained by an OR operation that combines the partial foregrounds extracted by OTSU and the LBP operator with the result extracted in HSV space.
Result The proposed algorithm is applied to several shadowed videos from the CVPR-ATON and CAVIAR standard video libraries. Experimental results show that when the threshold of the luminance ratio used to detect shadows in the HSV color space remains unchanged, the moving target is extracted accurately and its shadow is essentially eliminated. Compared with traditional algorithms such as the statistical parametric (SP) approach, the statistical nonparametric (SNP) approach, and two deterministic non-model approaches (DNM1, DNM2), the proposed algorithm obtains better results. The proposed algorithm improves the average of the shadow detection rate and the shadow discrimination rate by about 10% over the aforementioned algorithms. Although the shadow discrimination rate is 1.6% lower than that of the DNM2 algorithm on the Intelligent Room video and 2.9% lower than that of the SP algorithm on the Laboratory video, the algorithm improves the shadow detection rate by 29% and 27.2%, respectively. In the real-time test, the algorithm processes 12 to 15 frames per second, which satisfies real-time requirements. Conclusion Although traditional HSV-based algorithms are effective for shadow elimination, the moving target may easily be interpreted as shadow. Texture features comprising LBP and OTSU can compensate for this shortcoming, so we propose a video shadow elimination algorithm that combines HSV with texture features. Compared with other algorithms, our method obtains more accurate shadow detection results and clear advantages in terms of the average shadow detection rate and shadow discrimination rate. It can be applied to intelligent video surveillance, remote sensing images, and human-computer interaction. Our future work will focus on improving real-time performance.

Key words

shadow elimination; HSV (hue, saturation, value) color space; value; LBP (local binary pattern) operator; OTSU threshold; applicability

0 Introduction

With continuing economic and social development and steadily improving computer performance, intelligent video processing is becoming increasingly widespread, and the requirements for extracting moving targets from video are growing accordingly, so shadow elimination is crucial. Effectively removing the shadow of a moving target not only improves the performance of moving-target detection in video analysis, but also plays a vital role in target recognition and behavior analysis in video surveillance systems [1-2]. Shadow removal is therefore a key and difficult problem in image processing.

In recent years, moving-shadow removal has gradually become a hot research topic in intelligent video surveillance and has attracted extensive study by experts and scholars at home and abroad [3-13]. Existing methods can be roughly grouped into two categories: deterministic approaches and statistical approaches. The former judge shadow and non-shadow regions from the environment, depending on whether prior knowledge of the background, moving targets, and illumination is needed to build a model; they can be further divided into deterministic non-model methods [3] and deterministic model methods [4]. Statistical approaches distinguish shadow from non-shadow using the probability values of background and shadow pixels, and can be divided into parametric [5] and nonparametric [6] methods. Abroad, Prati et al. [7] gave the first detailed summary of moving-shadow detection methods in 2003, dividing them into four categories, selecting well-performing models from each category for comparative analysis, and proposing a set of evaluation metrics for shadow detection; the classical HSV color-space shadow detection method [8] was also covered there. More recently, Al-Najdawi et al. [9] and Sanin et al. [10] comprehensively analyzed and compared shadow detection models from different perspectives, refining the work of Prati et al. In China, Yin et al. [11] adopted a shadow detection method combining chromaticity and texture invariance; Qiu et al. [12] removed shadows by combining color and edge features; Ai et al. [13] proposed a region-pairing shadow detection algorithm.

The HSV color space represents color by hue, saturation, and value (luminance), which is closely related to human visual perception and can therefore express the gray-level difference between a moving target and its shadow more accurately. However, the traditional HSV shadow elimination method determines the shadow region only through a threshold on the luminance ratio between the video frame and the background frame; under different backgrounds and illumination conditions this threshold varies and is hard to determine, so moving targets are frequently misdetected as shadow. To address this shortcoming of the HSV model, a video shadow elimination algorithm combining HSV with texture features is proposed. First, the background is built by Gaussian mixture background modeling and the foreground is extracted in gray space. Then the shadow of the target is detected in the HSV color space by thresholding the luminance ratio between the video frame and the background frame, and the result is combined with the foreground obtained from the Gaussian mixture model to yield the moving target. Next, the LBP operator [14] and the OTSU threshold [15] are each applied to the foreground extracted by the Gaussian mixture model to obtain partial moving targets, which are OR-ed to yield a moving target; this is finally OR-ed again with the moving target obtained from the HSV step, which removes the shadow of the moving target. Tests on shadowed videos from the CVPR-ATON standard video library and videos from the CAVIAR standard video library show that the method is feasible. The framework of the algorithm is shown in Fig. 1 and consists mainly of foreground extraction and shadow removal, where shadow removal ORs the shadow-free moving target obtained in the HSV color space with the shadow-free moving target obtained by the LBP operator and the OTSU threshold.

Fig. 1 Flow chart of the proposed algorithm

1 Foreground extraction

A Gaussian mixture background model is used to extract the moving target. Each pixel in the image is modeled by $ K $ Gaussian distributions. For a newly observed pixel value $ {X_t} $ at time $ t $, it is checked against the existing $ K $ Gaussians: if the condition $ \left| {{X_t} - {\mu _{i, t - 1}}} \right| \le 2.5{\sigma _i} $ holds, where $ {\mu _{i,t}} $ and $ {\sigma _i} $ are the mean and standard deviation of the $i$-th Gaussian, the pixel value is said to match that Gaussian. If $ {X_t} $ matches none of them, a new Gaussian is introduced, or the distribution with the lowest priority $ {\lambda _{i,t}} $ is replaced by a new Gaussian whose mean is $ {X_t} $.

If the $i$-th distribution matches, set $ {M_{i,t}} = 1 $; otherwise $ {M_{i, t}} = 0 $. Unmatched distributions keep their means and variances unchanged. Let $ {\omega _{i, t}} $ be the weight of the $i$-th Gaussian, with $ {\omega _{1,t}} + {\omega _{2,t}} + \cdots + {\omega _{K - 1,t}} + {\omega _{K,t}} = 1 $. The weight $ {\omega _{i, t}} $ is updated as

$ {\omega _{i, t}} = \left( {1 - \alpha } \right) \cdot {\omega _{i, t - 1}} + \alpha \left( {{M_{i, t}}} \right) $ (1)

where $\alpha $ is a user-defined learning rate with 0≤$\alpha $≤1; the speed of background updating depends on the magnitude of $\alpha $.

Finally, the $K$ Gaussian distributions of each pixel are sorted by $ {\lambda _{i,t}} $, and the first $B$ distributions are selected to represent the background model. Then $ {X_t} $ is matched against these $B$ Gaussians: if $ {X_t} $ matches one of them, the pixel is judged to be a background point; otherwise it is a foreground point.
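The foreground extraction step can be illustrated with a short sketch. The paper's implementation uses OpenCV 2.4.9 in C++; the Python sketch below instead relies on OpenCV's built-in MOG2 background subtractor, which implements a comparable per-pixel mixture of Gaussians, so the function name extract_foreground and all parameter values are illustrative assumptions rather than the paper's exact settings.

```python
import cv2

def extract_foreground(video_path, history=200, var_threshold=16):
    """Yield (frame, binary foreground mask) pairs; the mask still contains shadows."""
    cap = cv2.VideoCapture(video_path)
    # detectShadows=False: shadow handling is done separately (Sec. 2), not by MOG2.
    mog = cv2.createBackgroundSubtractorMOG2(history=history,
                                             varThreshold=var_threshold,
                                             detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        yield frame, mog.apply(gray)   # mask: 255 = foreground (target + shadow), 0 = background
    cap.release()
```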

2 Shadow removal

2.1 HSV color-space shadow detection model

From visual perception, a shadow has two main properties: its luminance is lower than that of the background region onto which it is cast, and it is attached to the object casting it and moves in the same way as that object. HSV shadow detection therefore uses the hue, saturation, and value information of the HSV space. The shadow detection decision function is

$ S{P_K}\left( {x, y} \right) = \left\{ \begin{array}{l} 1\;\;\;\alpha \le \frac{{I_K^V\left( {x, y} \right)}}{{B_K^V\left( {x, y} \right)}} \le \beta \wedge \\ \;\;\;\;\left( {I_K^S\left( {x, y} \right) - B_K^S\left( {x, y} \right)} \right) \le {\tau _s} \wedge \\ \;\;\;\;\;\left| {I_K^H\left( {x, y} \right) - B_K^H\left( {x, y} \right)} \right| \le {\tau _H}\\ 0\;\;\;\;{\rm{otherwise}} \end{array} \right. $ (2)

where $ I_K^H\left( {x, y} \right) $, $ {I_K^S\left( {x, y} \right)} $, $ {I_K^V\left( {x, y} \right)} $ and $ B_K^H\left( {x, y} \right) $, $ B_K^S\left( {x, y} \right) $, $ B_K^V\left( {x, y} \right) $ are the hue, saturation, and value of the pixel at position $(x,y)$ in the video frame and the background frame at time $K$, respectively. $ S{P_K}\left( {x, y} \right) = 1 $ means the pixel at $(x,y)$ at time $K$ is shadow; otherwise it belongs to the target. $ {\tau _s} $ and $ {\tau _H} $ are the thresholds on the saturation and hue differences, $\alpha $ is related to the strength of the shadow luminance, and $\beta $ is related to the strength of the light. Shadow detection relies mainly on the luminance ratio between the video frame and the background frame, while the saturation and hue differences have little influence.
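A minimal sketch of the decision function in Eq. (2) follows. The helper name hsv_shadow_mask and the concrete values of $\alpha $, $\beta $, $ {\tau _s} $, and $ {\tau _H} $ are illustrative assumptions (Section 4 only states that a luminance-ratio threshold of 0.7 to 1 works best and that large hue/saturation thresholds suffice); OpenCV's 8-bit HSV ranges (H in 0-179, S and V in 0-255) are assumed, and hue wrap-around is ignored for simplicity.

```python
import cv2
import numpy as np

def hsv_shadow_mask(frame_bgr, background_bgr, alpha=0.7, beta=0.95,
                    tau_s=60, tau_h=60):
    """Per-pixel shadow test of Eq. (2): 255 where the pixel is classified as shadow."""
    frame_hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    bg_hsv = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h_f, s_f, v_f = cv2.split(frame_hsv)
    h_b, s_b, v_b = cv2.split(bg_hsv)

    ratio = v_f / (v_b + 1e-6)                   # luminance ratio I_V / B_V
    cond_v = (ratio >= alpha) & (ratio <= beta)  # shadow darkens the background
    cond_s = (s_f - s_b) <= tau_s                # saturation changes little
    cond_h = np.abs(h_f - h_b) <= tau_h          # hue changes little (wrap-around ignored)
    return (cond_v & cond_s & cond_h).astype(np.uint8) * 255
```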

2.2 Shadow removal with texture features

2.2.1 Improvement of the LBP operator

The principle of LBP is to select a small threshold and compare it with the difference between the gray value of each neighboring pixel and the gray value of the central pixel; if the difference exceeds the threshold, the neighbor is marked as 1, otherwise 0. Reading the marks in clockwise order then gives a binary string, which serves as the LBP code of the central pixel over its 8-neighborhood. The operator is defined as

$ \begin{array}{l} LB{P_{Q, R}}\left( {{x_c}, {y_c}} \right) = \sum\limits_{q = 0}^{Q - 1} {s\left( {{g_q} - {g_c}} \right)} {2^q}\\ \;\;\;\;\;\;\;\;s\left( u \right) = \left\{ \begin{array}{l} 1\;\;\;\;u \ge {T_{{\rm{lbp}}}}\\ 0\;\;\;\;u < {T_{{\rm{lbp}}}} \end{array} \right. \end{array} $ (3)

where $Q$ is the number of pixels in the neighborhood, $R$ is the radius of the circular neighborhood, $ LB{P_{Q, R}}\left( {{x_c}, {y_c}} \right) $ denotes the LBP code of the pixel at $ \left( {{x_c}, {y_c}} \right) $, $ {{g_c}} $ is the gray value of the central pixel, and $ {{g_q}} $ is the gray value of a pixel in the circular neighborhood. To increase the robustness of the LBP operator, $ {T_{{\rm{lbp}}}} $ is chosen as a relatively small threshold.

Normally $ {T_{{\rm{lbp}}}} $ is taken in the range 2≤$ {T_{{\rm{lbp}}}} $≤5; in this paper, to eliminate the shadow completely, the threshold is raised to 7≤$ {T_{{\rm{lbp}}}} $≤10, with which part of the moving target can still be extracted. We use $ {T_{{\rm{lbp}}}} $=8.
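For reference, a direct numpy sketch of the thresholded 8-neighborhood LBP of Eq. (3) with the raised threshold $ {T_{{\rm{lbp}}}} $=8 is shown below; the function name lbp_image is an illustrative assumption.

```python
import numpy as np

def lbp_image(gray, t_lbp=8):
    """Thresholded LBP code of Eq. (3) for Q = 8, R = 1; border pixels are left as 0."""
    g = gray.astype(np.int32)
    # 8 neighbours of the centre pixel, read in clockwise order
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = g.shape
    centre = g[1:h-1, 1:w-1]
    code = np.zeros_like(centre, dtype=np.uint8)
    for q, (dy, dx) in enumerate(offsets):
        neighbour = g[1+dy:h-1+dy, 1+dx:w-1+dx]
        # s(g_q - g_c) = 1 when the difference reaches the threshold T_lbp
        code |= ((neighbour - centre >= t_lbp).astype(np.uint8) << q)
    out = np.zeros_like(gray, dtype=np.uint8)
    out[1:h-1, 1:w-1] = code
    return out
```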

2.2.2 OTSU threshold

OTSU divides the image into shadow and target according to its gray-level characteristics. The larger the between-class variance of shadow and target, the greater the difference between the two parts that make up the image; misclassifying part of the target as shadow, or part of the shadow as target, reduces this difference. Therefore, the segmentation that maximizes the between-class variance minimizes the probability of misclassification.

Let the gray levels of the image be $ 0, 1, \cdots, L - 1 $, let $ {n_i} $ be the number of pixels with gray value $i$, and let $ N = {n_0} + {n_1} + \cdots + {n_{L - 1}} $ be the total number of pixels. The probability that a pixel has gray value $i$ is $ {p_i} = {n_i}/N $, with $ {p_0} + {p_1} + \cdots + {p_{L - 1}} = 1 $, and the overall mean gray level of the image is $ {\mu _T} = {p_1} + 2{p_2} + \cdots + \left( {L - 2} \right){p_{L - 2}} + \left( {L - 1} \right){p_{L - 1}} $. Let $ {\mathit{\boldsymbol{C}}_0} $ and $ {\mathit{\boldsymbol{C}}_1} $ be the two pixel classes, $ {\mathit{\boldsymbol{C}}_0} = \left[{0, \cdots, k} \right] $ and $ {\mathit{\boldsymbol{C}}_1} = \left[{k + 1, \cdots, L-1} \right] $, with means $ {\mu _0}\left( k \right) $ and $ {\mu _1}\left( k \right) $. Letting $ {\omega _0}\left( k \right) = {p_0} + {p_1} + \cdots + {p_k}, \;{\omega _1}\left( k \right) = 1 - {\omega _0}\left( k \right) $ and $ \mu \left( k \right) = {p_1} + 2{p_2} + \cdots + k{p_k} $, the between-class variance $ \sigma _B^2 $ is computed as

$ \begin{array}{l} \sigma _B^2\left( k \right) = {\omega _0}\left( k \right){\left[{\frac{{\mu \left( k \right)}}{{{\omega _0}\left( k \right)}}-{\mu _T}} \right]^2} + \\ \;\;\;\;{\omega _1}\left( k \right){\left[{\frac{{{\mu _T}-\mu \left( k \right)}}{{1-{\omega _0}\left( k \right)}}-{\mu _T}} \right]^2} \end{array} $ (4)

The optimal threshold $ {{k^ * }} $ is selected as

$ \sigma _B^2\left( {{k^ * }} \right) = \mathop {\max }\limits_{0 \le k < L - 1} \sigma _B^2\left( k \right) $ (5)
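The threshold search of Eqs. (4) and (5) can be sketched as follows. The sketch uses the closed form $ \sigma _B^2\left( k \right) = {\left[{{\mu _T}{\omega _0}\left( k \right)-\mu \left( k \right)} \right]^2}/\left[{{\omega _0}\left( k \right)\left( {1-{\omega _0}\left( k \right)} \right)} \right] $, which is algebraically equivalent to Eq. (4); in practice the same threshold is also returned by OpenCV's cv2.threshold with the THRESH_OTSU flag. The function name otsu_threshold is an illustrative assumption.

```python
import numpy as np

def otsu_threshold(gray, levels=256):
    """Return k* of Eq. (5) by scanning every candidate threshold k."""
    hist = np.bincount(gray.ravel(), minlength=levels).astype(np.float64)
    p = hist / hist.sum()                      # p_i = n_i / N
    mu_t = np.sum(np.arange(levels) * p)       # total mean gray level
    omega0 = np.cumsum(p)                      # omega_0(k)
    mu_k = np.cumsum(np.arange(levels) * p)    # mu(k)
    valid = (omega0 > 0) & (omega0 < 1)        # both classes must be non-empty
    sigma_b = np.zeros(levels)
    sigma_b[valid] = (mu_t * omega0[valid] - mu_k[valid]) ** 2 / \
                     (omega0[valid] * (1.0 - omega0[valid]))
    return int(np.argmax(sigma_b))             # k maximising the between-class variance
```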

2.3 Shadow removal combining HSV with texture features

In the traditional approach, for each video frame the background is built by adaptive Gaussian mixture background modeling, the foreground is extracted, and the shadow of the moving target is detected in the HSV color-space model; the color-based detection identifies almost all shadow pixels. However, the luminance-ratio bounds $ \alpha $ and $ \beta $ generally depend on the specific scene, and this variation causes darker regions of the moving target, or regions whose hue is similar to the background, to be detected as moving shadow. Clearly, using color information alone in shadow detection does not yield satisfactory results.

To overcome this shortcoming of the HSV color-space model, this paper processes the foreground separately with the improved $ {\rm{LBP}} $ operator and with the OTSU threshold. Since each of them recovers only part of the moving target, with the shadow essentially eliminated, the two results are OR-ed to obtain a moving target. When detecting shadow with the HSV color-space model, the luminance-ratio bounds $ \alpha $ and $ \beta $ can then be fixed so that as much shadow as possible is detected. Although the moving target extracted by combining this with the video frame loses some regions, OR-ing it once more with the shadow-free moving target produced by the $ {\rm{LBP}} $ operator and the OTSU threshold yields an accurate, shadow-free moving target.

3 Algorithm procedure

Input: video frames.

Output: moving target with shadow removed.

For t=1 to T

Build the background of the video frames with the Gaussian mixture model.

End for

For t=T to T_video

1) Extract the moving target with its shadow from the video frame using the Gaussian mixture model and the built background.

2) Detect the shadow of the moving target with Eq. (2).

3) Compute a binary image of the target with its shadow removed using Eq. (3).

4) Compute a binary image of the target with its shadow removed using Eqs. (4) and (5).

5) Subtract the shadow image obtained in step 2) from the image obtained in step 1) to obtain a binary image of the target with its shadow removed.

6) OR the three binary images obtained in steps 3) to 5) to obtain the final binary image of the target with its shadow removed (see the sketch after this procedure).

End for
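A minimal sketch of steps 1) to 6), reusing the hypothetical helpers sketched in Sections 1 and 2 (extract_foreground, hsv_shadow_mask, lbp_image, otsu_threshold), is given below. The binarization of the LBP response and of the OTSU-thresholded gray image inside the foreground region are illustrative choices, not details taken from the paper.

```python
import cv2
import numpy as np

def remove_shadow(frame_bgr, background_bgr, fg_mask):
    """Combine the HSV and texture cues to return a shadow-free target mask (0/255)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Step 2): shadow pixels inside the raw foreground, via Eq. (2)
    shadow = hsv_shadow_mask(frame_bgr, background_bgr) & fg_mask

    # Step 3): partial target from the LBP texture response inside the foreground
    # (non-zero LBP code used as a simple, illustrative binarization)
    lbp = lbp_image(gray)
    target_lbp = ((lbp > 0).astype(np.uint8) * 255) & fg_mask

    # Step 4): partial target from OTSU segmentation of the foreground region
    k = otsu_threshold(gray[fg_mask > 0]) if np.any(fg_mask) else 0
    target_otsu = ((gray > k).astype(np.uint8) * 255) & fg_mask

    # Step 5): HSV-based target = raw foreground minus detected shadow
    target_hsv = cv2.subtract(fg_mask, shadow)

    # Step 6): OR the three partial results to recover the full, shadow-free target
    return target_hsv | target_lbp | target_otsu
```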

4 Experimental results

The algorithm is implemented on the VS2013 development platform with the OpenCV 2.4.9 library, on a PC with an Intel Pentium Dual-Core 2.70 GHz CPU and 4 GB of RAM.

The proposed shadow removal algorithm is tested on shadowed videos from the CVPR-ATON standard video library (Intelligent Room, Campus, Laboratory) and a video from the CAVIAR standard video library (OneLeaveShopReenter1cor). In the HSV shadow detection, relatively large thresholds on the hue and saturation components suffice, a luminance-ratio threshold of 0.7 to 1 gives the most balanced results, and the LBP threshold $ {T_{lbp}} $=8 works best.

From the result for frame 60 of Campus_raw in Fig. 2, most of the car's shadow has been eliminated and the moving target is detected fairly accurately. The result for the car is slightly worse because the video is blurred and noisy, and part of the moving target has low luminance and is easily confused with shadow. Fig. 2 also shows that the algorithm removes the shadows of pedestrians well: the shadow is essentially eliminated and the moving target is detected accurately. This is because HSV shadow detection itself performs well, and the proposed algorithm first uses the HSV method to remove as much shadow as possible, at which point the extracted moving target suffers a large loss; this target is then OR-ed with the partial targets extracted by the $ {\rm{LBP}} $ operator and the OTSU threshold in the texture-feature step, so the final moving target is accurate and its shadow is essentially eliminated.

Fig. 2 Results of shadow elimination ((a) original video frames; (b) extracted foreground; (c) shadow elimination)

To ensure the reliability of the experimental results, the shadow detection rate $ \eta $ and the shadow discrimination rate $ \xi $ are used as performance metrics [16], and their mean $ Avg $ is used for further analysis [17]. They are defined as

$ \eta = \frac{{T{P_s}}}{{T{P_s} + F{N_s}}} \times 100\% $ (6)

$ \xi = \frac{{T{P_F}}}{{F{N_F} + T{P_F}}} \times 100\% $ (7)

$ Avg = \frac{{\eta + \xi }}{2} $ (8)

where $ {T{P_s}} $ is the number of shadow pixels detected correctly, $ {F{N_s}} $ is the number of shadow pixels misdetected as foreground pixels, $ {T{P_F}} $ is the number of foreground pixels detected correctly, and $ {F{N_F}} $ is the number of foreground pixels misdetected as shadow pixels. By manually labeling the target and the shadow in several different frames of the videos and running the proposed detector, $ T{P_s} $, $ F{N_s} $, $ T{P_F} $, and $ F{N_F} $ can be computed.
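A small sketch of how $ \eta $, $ \xi $, and $ Avg $ can be computed from a hand-labeled frame is given below; the function name shadow_metrics and the assumption that the labels are supplied as boolean pixel masks are illustrative.

```python
import numpy as np

def shadow_metrics(detected_shadow, gt_shadow, gt_foreground):
    """Eqs. (6)-(8) from boolean masks: detector output vs. manual shadow/foreground labels."""
    tp_s = np.sum(detected_shadow & gt_shadow)        # shadow correctly detected
    fn_s = np.sum(~detected_shadow & gt_shadow)       # shadow missed (kept as foreground)
    tp_f = np.sum(~detected_shadow & gt_foreground)   # foreground correctly kept
    fn_f = np.sum(detected_shadow & gt_foreground)    # foreground misdetected as shadow
    eta = 100.0 * tp_s / (tp_s + fn_s)                # shadow detection rate, Eq. (6)
    xi = 100.0 * tp_f / (tp_f + fn_f)                 # shadow discrimination rate, Eq. (7)
    return eta, xi, (eta + xi) / 2.0                  # Avg, Eq. (8)
```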

Table 1 compares the proposed shadow elimination algorithm with the SNP, SP, DNM1, and DNM2 algorithms summarized in [1]. On the three metrics (shadow detection rate, shadow discrimination rate, and their average), the proposed algorithm outperforms the compared algorithms in most cases; in particular the average, which reflects overall performance, is improved by about 10% over the other algorithms. This is because the proposed algorithm combines the HSV shadow detection method with the texture-feature shadow removal method. Analysis of shadows shows that the physical properties of the background region onto which a shadow is cast do not change, so the texture of the shadow region is similar to that of the background region; in addition, the luminance of the shadow is lower than that of the background region onto which it is cast. The $ {\rm{LBP}} $ operator extracts local texture features from the original gray levels of the image, and the OTSU threshold divides the image into target and shadow according to its gray-level characteristics. Based on these two properties, the $ {\rm{LBP}} $ operator and the OTSU threshold in the texture-feature step remove shadows effectively.

Table 1 Comparison of shadow elimination algorithms (unit: %)

Test sequence      Metric   SNP    SP     DNM1   DNM2   Proposed
Intelligent Room   $\eta$   72.8   76.2   78.6   62.0   91.0
                   $\xi$    88.9   90.7   90.3   93.9   92.3
                   $Avg$    80.9   83.5   84.5   78.0   91.7
Campus             $\eta$   80.5   72.4   82.9   69.1   83.2
                   $\xi$    63.7   74.1   86.6   63.0   87.6
                   $Avg$    72.1   73.3   84.8   66.1   85.4
Laboratory         $\eta$   84.0   64.8   76.2   60.3   92.0
                   $\xi$    92.3   95.3   89.8   81.5   92.4
                   $Avg$    88.2   80.1   83.0   70.9   92.2
CAVIAR             $\eta$   61.4   92.7   93.3   78.2   94.2
                   $\xi$    87.9   74.4   79.1   73.4   90.1
                   $Avg$    74.2   83.6   86.2   75.8   92.2

Although the shadow discrimination rate of the proposed algorithm is 1.6% lower than that of the DNM2 algorithm on Intelligent Room and 2.9% lower than that of the SP algorithm on the Laboratory video, its shadow detection rate is higher by 29% and 27.2%, respectively, so its overall performance is superior. In the real-time test, the algorithm processes 12 to 15 frames per second, which satisfies real-time requirements.

5 Conclusion

To remove the shadows of moving targets in video effectively and to overcome the shortcoming of the traditional HSV shadow detection algorithm, in which the moving target is misdetected as shadow, this paper exploits the ability of the LBP operator and the OTSU threshold, as texture features, to extract partial moving targets, and proposes a method that combines HSV-space shadow detection with texture features. Experimental results show that the algorithm mitigates the problem of moving targets being misjudged as shadow when the luminance-ratio threshold in HSV space becomes unstable under changes in background and illumination. In different scenes, it obtains better detection results than other classical algorithms, with better applicability and robustness. Future work will study how to further improve real-time performance and stability, and apply the method to intelligent video surveillance in more complex environments.

References

  • [1] Dollar P, Wojek C, Schiele B, et al. Pedestrian detection: an evaluation of the state of the art[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(4): 743–761. [DOI:10.1109/TPAMI.2011.155]
  • [2] Tian J D, Sun J, Tang Y D. Tricolor attenuation model for shadow detection[J]. IEEE Transactions on Image Processing, 2009, 18(10): 2355–2363. [DOI:10.1109/TIP.2009.2026682]
  • [3] Mcfeely R, Glavin M, Jones E. Shadow identification for digital imagery using colour and texture cues[J]. IET Image Processing, 2012, 6(2): 148–159. [DOI:10.1049/iet-ipr.2010.0083]
  • [4] Fang L Z, Qiong W Y, Sheng Y Z. A method to segment moving vehicle cast shadow based on wavelet transform[J]. Pattern Recognition Letters, 2008, 29(16): 2182–2188. [DOI:10.1016/j.patrec.2008.08.009]
  • [5] Meher S K, Murty M N. Efficient method of moving shadow detection and vehicle classification[J]. AEU-International Journal of Electronics and Communications, 2013, 67(8): 665–670. [DOI:10.1016/j.aeue.2013.02.001]
  • [6] Choi J M, Yoo Y J, Choi J Y. Adaptive shadow estimator for removing shadow of moving object[J]. Computer Vision and Image Understanding, 2010, 114(9): 1017–1029. [DOI:10.1016/j.cviu.2010.06.003]
  • [7] Prati A, Mikic I, Trivedi M M, et al. Detecting moving shadows: algorithms and evaluation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(7): 918–923. [DOI:10.1109/TPAMI.2003.1206520]
  • [8] Cucchiara R, Grana C, Piccardi M, et al. Improving shadow suppression in moving object detection with HSV color information[C]//Proceedings of 2001 IEEE Intelligent Transportation Systems. Oakland, CA, USA: IEEE, 2001: 334-339. [DOI:10.1109/ITSC.2001.948679]
  • [9] Al-Najdawi N, Bez H E, Singhai J, et al. A survey of cast shadow detection algorithms[J]. Pattern Recognition Letters, 2012, 33(6): 752–764. [DOI:10.1016/j.patrec.2011.12.013]
  • [10] Sanin A, Sanderson C, Lovell B C. Shadow detection: a survey and comparative evaluation of recent methods[J]. Pattern Recognition, 2012, 45(4): 1684–1695. [DOI:10.1016/j.patcog.2011.10.001]
  • [11] Yin B C, Liu Y, Wang Z F. Moving shadow detection by combining chromaticity and texture invariance[J]. Journal of Image and Graphics, 2014, 19(6): 896–905. [殷保才, 刘羽, 汪增福. 结合色度和纹理不变性的运动阴影检测[J]. 中国图象图形学报, 2014, 19(6): 896–905. ] [DOI:10.11834/jig.20140610]
  • [12] Qiu Y C, Zhang Y Y, Liu C M. Vehicle shadow removal with multi-feature fusion[J]. Journal of Image and Graphics, 2015, 20(3): 311–319. [邱一川, 张亚英, 刘春梅. 多特征融合的车辆阴影消除[J]. 中国图象图形学报, 2015, 20(3): 311–319. ] [DOI:10.11834/jig.20150302]
  • [13] Ai W L, Wu Z H, Liu Y L. Outdoor shadow detection with paired regions[J]. Journal of Image and Graphics, 2015, 20(4): 551–558. [艾维丽, 吴志红, 刘艳丽. 结合区域配对的室外阴影检测[J]. 中国图象图形学报, 2015, 20(4): 551–558. ] [DOI:10.11834/jig.20150412]
  • [14] Heikkila M, Pietikainen M. A texture-based method for modeling the background and detecting moving objects[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(4): 657–662. [DOI:10.1109/TPAMI.2006.68]
  • [15] He Z Y, Sun L N, Chen G L. Fast computation of threshold based on Otsu criterion[J]. Acta Electronica Sinica, 2013, 41(2): 267–272. [何志勇, 孙立宁, 陈国立. Otsu准则下分割阈值的快速计算[J]. 电子学报, 2013, 41(2): 267–272. ] [DOI:10.3969/j.issn.0372-2112.2013.02.010]
  • [16] Li N, Xie S Y, Xie Y B. Application of grey relational theory in moving object detection from video sequences[J]. Journal of Computer-Aided Design & Computer Graphics, 2009, 21(5): 663–667. [李楠, 谢松云, 谢玉斌. 灰关联分析在视频运动目标检测中的应用[J]. 计算机辅助设计与图形学学报, 2009, 21(5): 663–667. ]
  • [17] Joshi A J, Papanikolopoulos N P. Learning to detect moving shadows in dynamic environments[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(11): 2055–2063. [DOI:10.1109/TPAMI.2008.150]