结合置信度加权融合与视觉注意机制的前景检测
Foreground detection via fusing confidence by weight and visual attention mechanism
2021, Vol. 26, No. 10: 2462-2472
Print publication date: 2021-10-16
Accepted: 2020-09-29
DOI: 10.11834/jig.200367
Keyang Cheng, Shuang Sun, Wenshan Wang, Wenxi Shi, Peng Li, Yongzhao Zhan. Foreground detection via fusing confidence by weight and visual attention mechanism[J]. Journal of Image and Graphics, 2021,26(10):2462-2472.
Objective
In video foreground detection, pixel-level background subtraction produces results with clear contours and high flexibility. However, pixel-level classification methods based on sample consistency cannot use pixel information effectively and fail to detect the foreground in complex situations such as color camouflage and static foreground. To solve this problem, a foreground detection method based on confidence-weighted fusion and visual attention is proposed.
Method
The foreground is judged by the weighted fused sum of the color confidence and texture confidence of the samples, and the confidence and weights of the samples are updated adaptively. A visual attention mechanism combining color saliency and texture difference is constructed over divided subsequences to identify static foreground targets, and the background model is kept dynamically updated with the strategy of replacing the sample with the minimum confidence.
Result
The proposed method is evaluated on the CDW 2014 (change detection workshops 2014) and SBM-RGBD (scene background modeling red-green-blue-depth) datasets. Compared with five mainstream algorithms, its recall and precision are improved by 2.66% and 1.48%, respectively, over the second-best algorithm, giving the best overall performance.
Conclusion
The proposed algorithm improves the precision and recall of foreground detection in complex situations such as color camouflage and static foreground, and achieves better detection results on public datasets. It can be applied to video surveillance involving such complex situations as color camouflage and static foreground.
Objective
In the field of intelligent video surveillance, video target detection serves as a bottom-level task for high-level video analysis technologies such as target tracking and re-identification, and the false and missed detections made at this low level are amplified layer by layer. Therefore, improving the accuracy of foreground target detection has important research value. In video foreground detection, pixel-level background subtraction yields clear and flexible results. However, pixel-level classification methods based on sample consistency cannot make full use of pixel information and cannot obtain a complete foreground mask in complex situations such as color camouflage and static objects, producing falsely detected foreground pixels and missing foreground. An algorithm based on confidence-weighted fusion and visual attention is proposed to solve this problem effectively.
Method
The advantage of this method is that it makes full use of sample confidence to construct the background model, combines secondary detection in the color and texture dimensions to overcome color camouflage effectively, and constructs an attention mechanism to detect static foreground. The proposed model contains three modules. First, considering that the foreground may be missed in either dimension alone, the foreground is determined by the weighted sum of color confidence and texture confidence. The color confidence and texture confidence of strongly correlated samples are summed and then fused by weight; if the fused value is less than the minimum threshold, the pixel is judged as foreground; otherwise, it is background.
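The decision rule of this first module can be summarized by the following sketch (a minimal Python illustration; the function and parameter names, and the assumption that the per-sample color and texture confidences have already been computed, are ours rather than the paper's):

```python
import numpy as np

def classify_pixel(color_conf, texture_conf, w_color, w_texture, t_min):
    """Return True if the pixel is foreground, False if background.

    color_conf, texture_conf: confidences of the strongly correlated
    background samples for this pixel (1D arrays); w_color, w_texture:
    fusion weights. A low fused confidence means the pixel matches the
    background model poorly, so it is judged as foreground.
    """
    fused = w_color * np.sum(color_conf) + w_texture * np.sum(texture_conf)
    return fused < t_min
```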
Then, the confidence and weights of the samples are updated adaptively. For pixels detected as background, the sample template with the minimum confidence in the model is replaced by the current pixel information. If the distance between the current pixel and a sample in the model is within the given distance threshold, the sample is regarded as valid; the confidence of valid samples is increased and that of invalid samples is reduced, so that valid samples are preserved from replacement as much as possible.
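A sketch of this update step for a single pixel position is given below (NumPy arrays and a hypothetical confidence step are assumed; the paper's exact distance measure and update amounts are not specified in the abstract):

```python
import numpy as np

def update_background_model(samples, confidences, pixel, dist_thresh, conf_step=0.05):
    """Adaptive update for a pixel classified as background.

    samples: (N, 3) float array of color templates for this pixel position;
    confidences: (N,) float array of their confidence values.
    """
    dists = np.linalg.norm(samples - pixel, axis=1)
    valid = dists < dist_thresh
    confidences[valid] += conf_step    # valid samples gain confidence
    confidences[~valid] -= conf_step   # invalid samples lose confidence
    weakest = np.argmin(confidences)   # the least trusted template...
    samples[weakest] = pixel           # ...is replaced by the current pixel
```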
The static foreground is determined by constructing a visual attention mechanism over each subsequence, and the background model is kept dynamically updated with the strategy of replacing the minimum-confidence samples. The core of this step is a visual attention mechanism that judges whether a region belongs to the background based on color saliency and the texture similarity between the region and the background. The pixel classification method based on confidence-weighted fusion and the static foreground detection based on visual attention extract the moving foreground and the still foreground, respectively, and the two are combined into a whole: the foreground mask obtained by confidence-weighted pixel classification serves as the candidate region for static foreground detection. Computing the texture difference also requires the samples of the constructed background model, which here is the updated background model. The pixels detected as still foreground also cover pixels falsely classified as background by the confidence-weighted pixel classification and guide the updating of the model samples. In still foreground detection, when no candidate foreground region exists in the first frame of a subsequence, static foreground detection is skipped for that subsequence, which improves the efficiency of the algorithm.
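The visual attention test for a candidate region might look as follows (a simplified stand-in: the abstract does not give the paper's exact color saliency and texture difference measures, so the two cues below are illustrative definitions of ours):

```python
import numpy as np

def color_saliency(region, surround):
    """Illustrative cue: distance of the region's mean color from the
    mean color of its surrounding pixels (both H x W x 3 arrays)."""
    return np.linalg.norm(region.mean(axis=(0, 1)) - surround.mean(axis=(0, 1)))

def texture_difference(region, bg_region):
    """Illustrative cue: mean absolute difference of gradient magnitudes
    between the region and the modeled background."""
    g_r = np.abs(np.gradient(region.mean(axis=2)))
    g_b = np.abs(np.gradient(bg_region.mean(axis=2)))
    return float(np.mean(np.abs(g_r[0] - g_b[0]) + np.abs(g_r[1] - g_b[1])))

def is_static_foreground(regions, surrounds, bg_regions, sal_t, tex_t):
    """A candidate region observed over one subsequence is kept as static
    foreground when, on average, it stays color-salient and texturally
    distinct from the background model; otherwise it is absorbed into
    the background."""
    sal = np.mean([color_saliency(r, s) for r, s in zip(regions, surrounds)])
    tex = np.mean([texture_difference(r, b) for r, b in zip(regions, bg_regions)])
    return sal > sal_t and tex > tex_t
```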
Result
To evaluate the performance of the proposed algorithm, 10 groups of video sequences are randomly selected from the SBM-RGBD (scene background modeling red-green-blue-depth) and CDW 2014 (change detection workshops 2014) datasets for the experiments. Compared with the contrast algorithms, the proposed algorithm performs better on most video sequences and can detect static and camouflaged foreground targets. Overall, from a qualitative point of view, the proposed algorithm outperforms the five other algorithms in static foreground and color camouflage detection. Its recall and precision are improved by 2.66% and 1.48%, respectively, compared with the second-best algorithm.
Conclusion
Quantitative and qualitative analyses of the experiments show that the proposed algorithm is superior to the compared algorithms, improves the precision and recall of foreground detection in complex situations with color camouflage and still objects, and achieves a better detection effect. The experimental results show that the proposed algorithm can effectively detect foreground targets in complex scenes involving camouflage and static foreground. In addition, it can be applied to foreground target detection in actual monitoring scenes.
Keywords: object detection; foreground detection; confidence; color camouflage; visual attention; static foreground
References
Camplani M, Maddalena L, Alcover G M, Petrosino A and Salgado L. 2017. A benchmarking framework for background subtraction in RGBD videos//Proceedings of International Conference on Image Analysis and Processing. Catania, Italy: Springer: 219-229[DOI: 10.1007/978-3-319-70742-6_21]
Fang W T, Zhang T T, Zhao C Q, Soomro D B, Taj R and Hu H B. 2018. Background subtraction based on random superpixels under multiple scales for video analytics. IEEE Access, 6: 33376-33386[DOI:10.1109/ACCESS.2018.2846678]
Guo L L, Xu D and Qiang Z P. 2016. Background subtraction using local SVD binary pattern//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Las Vegas, USA: IEEE: 1159-1167[DOI: 10.1109/CVPRW.2016.148]
Hernandez-Lopez F J and Rivera M. 2014. Change detection by probabilistic segmentation from monocular view. Machine Vision and Applications, 25(5): 1175-1195[DOI:10.1007/s00138-013-0564-3]
Hofmann M, Tiefenbacher P and Rigoll G. 2012. Background segmentation with feedback: the pixel-based adaptive segmenter//Proceedings of 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. Providence, USA: IEEE: 38-43[DOI: 10.1109/CVPRW.2012.6238925]
Isik S, Özkan K, Günal S and Gerek Ö N. 2018. SWCD: a sliding window and self-regulated learning-based background updating method for change detection in videos. Journal of Electronic Imaging, 27(2): #023002[DOI:10.1117/1.JEI.27.2.023002]
Jeyabharathi D and Dejey. 2018. New feature descriptor: extended symmetrical-diagonal hexadecimal pattern for efficient background subtraction and object tracking. Computers and Electrical Engineering, 66: 454-473[DOI:10.1016/j.compeleceng.2017.11.001]
Jiang S Q and Lu X B. 2018. WeSamBE: a weight-sample-based method for background subtraction. IEEE Transactions on Circuits and Systems for Video Technology, 28(9): 2105-2115[DOI:10.1109/TCSVT.2017.2711659]
Jin J, Dang J W, Wang Y P and Zhai F W. 2019. Region spatiogram in color names for background modeling. Journal of Image and Graphics, 24(5): 714-723
Lin Y W, Tong Y, Cao Y, Zhou Y J and Wang S. 2017. Visual-attention-based background modeling for detecting infrequently moving objects. IEEE Transactions on Circuits and Systems for Video Technology, 27(6): 1208-1221[DOI:10.1109/TCSVT.2016.2527258]
Maddalena L and Petrosino A. 2012. The SOBS algorithm: what are the limits?//Proceedings of 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. Providence, USA: IEEE: 21-26[DOI: 10.1109/CVPRW.2012.6238922]
Ortego D, Sanmiguel J C and Martínez J M. 2019. Hierarchical improvement of foreground segmentation masks in background subtraction. IEEE Transactions on Circuits and Systems for Video Technology, 29(6): 1645-1658[DOI:10.1109/TCSVT.2018.2851440]
St-Charles P L, Bilodeau G A and Bergevin R. 2015. SuBSENSE: a universal change detection method with local adaptive sensitivity. IEEE Transactions on Image Processing, 24(1): 359-373[DOI:10.1109/TIP.2014.2378053]
St-Charles P L, Bilodeau G A and Bergevin R. 2016. Universal background subtraction using word consensus models. IEEE Transactions on Image Processing, 25(10): 4768-4781[DOI:10.1109/TIP.2016.2598691]
Szwoch G. 2016. Extraction of stable foreground image regions for unattended luggage detection. Multimedia Tools and Applications, 75(2): 761-786[DOI:10.1007/s11042-014-2324-4]
Varghese A and Sreelekha G. 2017. Sample-based integrated background subtraction and shadow detection. IPSJ Transactions on Computer Vision and Applications, 9(1): #25[DOI:10.1186/s41074-017-0036-1]
Wang R Q, Zheng L and Wang B. 2017. Foreground object detection based on improved PBAS. Computer Science, 44(5): 294-298, 313[DOI:10.11896/j.issn.1002-137X.2017.05.054]
Wang W, Wang X P and Liang J C. 2020. An improved ViBe algorithm based on adaptive detection of moving targets. Journal of Measurement Science and Instrumentation, 11(2): 126-134[DOI:10.3969/j.issn.1674-8042.2020.02.004]
Wang Y, Jodoin P M, Porikli F, Konrad J, Benezeth Y and Ishwar P. 2014. CDnet 2014: an expanded change detection benchmark dataset//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Columbus, USA: IEEE: 393-400[DOI: 10.1109/CVPRW.2014.126]
Zeng D D, Zhu M, Xu F and Zhou T X. 2018. Extended scale invariant local binary pattern for background subtraction. IET Image Processing, 12(8): 1292-1302[DOI:10.1049/iet-ipr.2016.1026]
Zhong X, Wang M, Zhang Q and Li L. 2017. Moving object detection by fusing texture and color features with confidence. Application Research of Computers, 34(7): 2196-2201[DOI:10.3969/j.issn.1001-3695.2017.07.0600]
Zhu W J, Wang G L, Tian J, Qiao Z T and Gao F Q. 2018. Detection of moving objects in complex scenes based on multiple features. Acta Optica Sinica, 38(6): #0612004[DOI:10.3788/AOS201838.0612004]