区域颜色属性空间直方图背景建模
Region spatiogram in color names for background modeling
2019, 24(5): 714-723
收稿: 2018-09-07
修回: 2018-11-05
纸质出版: 2019-05-16
DOI: 10.11834/jig.180524

目的
为了能在光照变化、动态背景干扰这一类复杂场景中实时、准确地分割出运动前景,针对传统的基于颜色特征和基于像素的方法的不足,提出一种在颜色属性空间进行区域直方图建模的运动目标检测方法。
方法
首先将RGB颜色空间映射到更为稳健的低维颜色属性空间,以颜色属性为特征在像素的局部范围内建立直方图,同时记录直方图每一个分区中像素的空间信息,使用K个空间直方图构成每个像素的背景模型,每个直方图根据其匹配度赋予不同的权重。降维的颜色属性提高了模型的鲁棒性和检测的时效性,空间直方图引入的位置信息提高了背景模型的准确性。然后通过学习率α_b和α_ω来控制各模型直方图及其权重的更新,以提高模型的适应性。在标准测试数据集的所有视频序列中进行了实验,通过分析综合性能指标(F1)及平均假阴性(FN)曲线,确定了算法中涉及参数的合理取值范围。
结果
对实验结果定性和定量的分析表明,本文方法能够得到良好的前景检测效果,尤其在多模态场景和光线变化的复杂场景中能显著提高检测性能。各类场景的平均综合性能指标(average F1)相比性能突出的方法ViBe、LOBSTER(local binary similarity segmenter)和DECOLOR(detecting contiguous outliers in the low-rank representation)分别提高了0.65%、3.86%和3.9%,并通过GPU并行加速实现运动目标的实时检测。
结论
在复杂视频环境下的运动目标检测中,相比已有方法,本文方法能够更为准确地分割出运动前景,是一种实时、有效的检测方法,具有一定的实用价值。
Objective
In recent years, intelligent video analysis has become an important research area in computer vision. Moving object detection aims to capture the moving foreground in all types of surveillance environments and is thus an essential foundation for subsequent video processing, including target tracking and object segmentation. Traditional methods often model the background in a color feature space and at the level of single pixels. Traditional color features are easily disturbed by illumination and shadow, and a single pixel cannot reflect the regional spatial relations between pixels. To detect the moving foreground precisely and in real time in complex video sequences, including those with illumination changes and dynamic backgrounds, we propose a moving object detection method based on background modeling with region spatiograms in the color name space. Color names are the linguistic labels that humans attach to colors; their learning is achieved with the PLSA model, which in effect maps the RGB space to a robust 11-dimensional color name (CN) space. Modeling the background in the color name space addresses illumination variation. A histogram is a zeroth-order tool for feature description that is robust to scale and rotation variation, whereas a second-order spatiogram additionally contains the spatial mean and covariance of each histogram bin. Thus, the spatiogram retains extensive information about the geometry of image patches and captures the global positions of pixels rather than only their pairwise relationships. Therefore, using spatiograms in the color name space for background modeling is necessary.
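For reference, a second-order spatiogram over B bins, and one widely cited similarity measure between two spatiograms (following Ó Conaire et al.; the exact formulation used in this paper may differ), can be written in LaTeX as

S = \{(n_b, \mu_b, \Sigma_b)\}_{b=1}^{B},

where n_b is the normalized count of pixels whose feature falls into bin b, and \mu_b and \Sigma_b are the mean and covariance of their image coordinates, and

\rho(S, S') = \sum_{b=1}^{B} \sqrt{n_b n'_b} \; 8\pi \, |\Sigma_b \Sigma'_b|^{1/4} \, \mathcal{N}\big(\mu_b;\, \mu'_b,\, 2(\Sigma_b + \Sigma'_b)\big),

with \mathcal{N}(x; m, C) denoting a Gaussian density evaluated at x. A zeroth-order histogram keeps only n_b; the second-order terms \mu_b and \Sigma_b are what allow the model to distinguish regions with identical color content but different spatial layout.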
Method
A novel method for moving object detection was proposed. First, we mapped the RGB color space to a more robust, lower-dimensional color name space. Then, we established spatiograms over the local region of each pixel, characterized by the color name feature, and recorded the spatial information of the pixels in every bin. The background model of every pixel comprised K spatiograms, which were given different weights according to their matching rates. The color name feature obtained by dimension reduction enhanced the robustness of the models and the timeliness of detection, and the spatial information introduced by the spatiograms enhanced the accuracy of the background model. To enhance the adaptivity of the models, the approach controlled the update of the model spatiograms and their weights through the learning rates α_b and α_ω. We conducted experiments on all video sequences of the standard test dataset CDnet (changedetection.net), which includes different challenges such as illumination variation, moving shadows, and multimodal backgrounds. The parameters in the algorithm, namely the model size K, the thresholds T_B and T_p, and the learning rates α_b and α_ω, were determined through analysis of the comprehensive performance measure F1 and the averaged false negative curves.
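A minimal per-pixel sketch of this modeling and update scheme is given below in Python/NumPy. The similarity score, the way T_p and T_B gate the decisions, and the replacement policy are simplified placeholders standing in for the paper's exact definitions, not the authors' implementation:

import numpy as np

class PixelModel:
    """Sketch: K weighted spatiogram models for one pixel's local region."""

    def __init__(self, K=3, T_p=0.6, T_B=0.7, alpha_b=0.05, alpha_w=0.01, bins=11):
        self.K, self.T_p, self.T_B = K, T_p, T_B
        self.alpha_b, self.alpha_w = alpha_b, alpha_w        # learning rates
        self.n  = np.zeros((K, bins))                        # bin counts per model
        self.mu = np.zeros((K, bins, 2))                     # spatial means per bin
        self.w  = np.full(K, 1.0 / K)                        # model weights

    def similarity(self, k, n_obs, mu_obs):
        # Placeholder score: bin-count overlap damped by spatial-mean distance
        # (a full spatiogram measure would also use the bin covariances).
        overlap = np.sqrt(self.n[k] * n_obs)
        dist = np.linalg.norm(self.mu[k] - mu_obs, axis=-1)
        return float(np.sum(overlap * np.exp(-dist)))

    def step(self, n_obs, mu_obs):
        """Classify the pixel (True = background) and update the K models."""
        scores = np.array([self.similarity(k, n_obs, mu_obs) for k in range(self.K)])
        k = int(scores.argmax())
        matched = scores[k] >= self.T_p
        if matched:
            # blend the matched spatiogram toward the observation (rate alpha_b)
            self.n[k]  = (1 - self.alpha_b) * self.n[k]  + self.alpha_b * n_obs
            self.mu[k] = (1 - self.alpha_b) * self.mu[k] + self.alpha_b * mu_obs
            # raise the matched model's weight and decay the others (rate alpha_w)
            self.w *= 1 - self.alpha_w
            self.w[k] += self.alpha_w
        else:
            # no model matched: replace the weakest model with the observation
            j = int(self.w.argmin())
            self.n[j], self.mu[j], self.w[j] = n_obs.copy(), mu_obs.copy(), self.alpha_w
        self.w /= self.w.sum()
        # background if the matched model carries enough weight (gate via T_B)
        return bool(matched and self.w[k] >= self.T_B / self.K)

In use, one such object would be kept per pixel; each frame, n_obs and mu_obs are the color-name histogram and per-bin mean coordinates computed over that pixel's local window.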
Result
The quantitative and qualitative analyses indicate that the proposed method can achieve the expected results. The method obtains outstanding performance in challenging scenes, including those with illumination changes and multimodal backgrounds. Compared with ViBe, LOBSTER (local binary similarity segmenter), and DECOLOR (detecting contiguous outliers in the low-rank representation), the method improves the average comprehensive performance measure F1 over all scenes by 0.65%, 3.86%, and 3.9%, respectively. Because the modeling of each pixel over its local region is independent of the other pixels, the computation is naturally concurrent; real-time detection is thus achieved with GPU parallel acceleration, which improves time efficiency.
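Since every pixel's model is read and updated independently, the per-pixel loop maps directly onto one GPU thread per pixel. The fragment below only illustrates that mapping, using Numba's CUDA support as a stand-in (the paper does not state which GPU framework it uses):

from numba import cuda

@cuda.jit
def update_all_pixels(cn_labels, model_n, model_mu, model_w, fg_mask):
    # one thread per pixel; each thread touches only its own model slice
    x, y = cuda.grid(2)
    if x < fg_mask.shape[1] and y < fg_mask.shape[0]:
        # ... per-pixel spatiogram matching and update, as in the sketch above ...
        fg_mask[y, x] = 0  # placeholder: write the foreground/background decision

# hypothetical launch: update_all_pixels[(W // 16 + 1, H // 16 + 1), (16, 16)](...)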
Conclusion
The robust color name space effectively addresses illumination variation, and the multiple spatiogram models effectively match multimodal backgrounds such as waving trees, water surfaces, and fountains. Therefore, the algorithm can segment the moving foreground in complex video environments more accurately than existing methods. It is a real-time and effective detection algorithm with practical value in intelligent video analysis.
Dong J N, Yang C H. Moving object detection using improved Gaussian mixture models based on spatial constraint[J]. Journal of Image and Graphics, 2016, 21(5):588-594.
董俊宁, 杨词慧.空间约束混合高斯运动目标检测[J].中国图象图形学报, 2016, 21(5):588-594.[DOI:10.11834/jig.20160506]
Aqel S, Aarab A, Sabri M A. Shadow detection and removal for traffic sequences[C]//Proceedings of 2016 International Conference on Electrical and Information Technologies. Tangiers, Morocco: IEEE, 2016: 168-173.[DOI:10.1109/EITech.2016.7519583]
Bouwmans T. Traditional and recent approaches in background modeling for foreground detection: an overview[J]. Computer Science Review, 2014, 11-12:31-66.[DOI:10.1016/j.cosrev.2014.04.001]
Stauffer C, Grimson W E L. Adaptive background mixture models for real-time tracking[C]//Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Fort Collins, CO, USA: IEEE, 1999: 246-252.[DOI:10.1109/CVPR.1999.784637]
Kim K, Chalidabhongse T H, Harwood D, et al. Real-time foreground-background segmentation using codebook model[J]. Real-Time Imaging, 2005, 11(3):172-185.[DOI:10.1016/j.rti.2004.12.004]
Jin J, Dang J W, Wang Y P, et al. Application of adaptive low-rank and sparse decomposition in moving objections detection[J]. Journal of Frontiers of Computer Science and Technology, 2016, 10(12):1744-1751.
金静, 党建武, 王阳萍, 等.自适应低秩稀疏分解在运动目标检测中的应用[J].计算机科学与探索, 2016, 10(12):1744-1751.[DOI:10.3778/j.issn.1673-9418.1603092]
Liu X, Zhong B N, Zhang M S, et al. Motion saliency extraction via tensor based low-rank recovery and block-sparse representation[J]. Journal of Computer-Aided Design & Computer Graphics, 2014, 26(10):1753-1763.
柳欣, 钟必能, 张茂胜, 等.基于张量低秩恢复和块稀疏表示的运动显著性目标提取[J].计算机辅助设计与图形学学报, 2014, 26(10):1753-1763.
Maddalena L, Petrosino A. The SOBS algorithm: what are the limits?[C]//Proceedings of 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. Providence, RI, USA: IEEE, 2012: 21-26.[DOI:10.1109/CVPRW.2012.6238922]
Hofmann M, Tiefenbacher P, Rigoll G. Background segmentation with feedback: the pixel-based adaptive segmenter[C]//Proceedings of 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. Providence, RI, USA: IEEE, 2012: 38-43.[DOI:10.1109/CVPRW.2012.6238925]
Barnich O, Van Droogenbroeck M. ViBe: a universal background subtraction algorithm for video sequences[J]. IEEE Transactions on Image Processing, 2011, 20(6):1709-1724.[DOI:10.1109/TIP.2010.2101613]
Braham M, Van Droogenbroeck M. Deep background subtraction with scene-specific convolutional neural networks[C]//Proceedings of 2016 International Conference on Systems, Signals and Image Processing. Bratislava, Slovakia: IEEE, 2016: 1-4.[DOI:10.1109/IWSSIP.2016.7502717]
Van De Weijer J, Schmid C, Verbeek J, et al. Learning color names for real-world applications[J]. IEEE Transactions on Image Processing, 2009, 18(7):1512-1523.[DOI:10.1109/TIP.2009.2019809]
Conaire C O, O'Connor N E, Smeaton A F. An improved spatiogram similarity measure for robust object localisation[C]//Proceedings of 2007 IEEE International Conference on Acoustics, Speech and Signal Processing. Honolulu, HI, USA: IEEE, 2007: I-1069-I-1072.[DOI:10.1109/ICASSP.2007.366096]
Goyette N, Jodoin P M, Porikli F, et al. Changedetection.net: a new change detection benchmark dataset[C]//Proceedings of 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. Providence, RI, USA: IEEE, 2012: 1-8.[DOI:10.1109/CVPRW.2012.6238919]
St-Charles P L, Bilodeau G A. Improving background subtraction using local binary similarity patterns[C]//Proceedings of 2014 IEEE Winter Conference on Applications of Computer Vision. Steamboat Springs, CO, USA: IEEE, 2014: 509-515.[DOI:10.1109/WACV.2014.6836059]
Gao Z, Cheong L, Wang Y X. Block-sparse RPCA for salient motion detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(10):1975-1987.[DOI:10.1109/TPAMI.2014.2314663]