Saliency object detection based on multiple features and prior information
2020, Vol. 25, No. 2, Pages 321-332
Received: 2019-04-12
Revised: 2019-07-04
Accepted: 2019-07-11
Published in print: 2020-02-16
DOI: 10.11834/jig.190128
Objective
Saliency object detection of images is an important research topic in computer vision. To address the problems that existing saliency detection results depict texture details poorly and show incomplete edge contours, a saliency object detection method that fuses multiple features with prior information is proposed; the method can extract the salient regions of an image efficiently and comprehensively.
Method
First, the set of points of interest of the image is extracted and the global contrast map is computed, and a Bayesian method fuses the convex hull and the global contrast map into the contrast feature map. Color spatial maps are obtained from color histograms at multiple scales, the minimum information entropy is computed according to information entropy theory, and the color spatial map at that scale is taken as the color feature map. The unsharp mask method is used to improve image sharpness, and the local binary pattern (LBP) operator yields the texture feature map. Then, the graph regularization (GR) and manifold ranking (MR) algorithms produce the center prior map and the edge prior map. Finally, a cellular automaton fuses the contrast feature map, color feature map, texture feature map, center prior map, and edge prior map into a primary saliency map, which is refined by a fast guided filter to obtain the final saliency map.
Result
The proposed algorithm is validated on two public datasets, MSRA10K and ECSSD, and compared with 12 popular algorithms with open-source code. Experimental results show clear improvements in the precision-recall (PR) curve, receiver operating characteristic (ROC) curve, F-measure, mean absolute error (MAE), and structure measure (S-measure); the overall performance is better than that of the compared algorithms.
Conclusion
The proposed algorithm makes full use of the contrast, color, and texture features of the image and adopts center prior and edge prior algorithms. It extracts the salient regions comprehensively while preserving the texture and detail information of the image well, makes the edge contours more complete, satisfies the hierarchical and detail requirements of the human eye, and has a certain applicability.
Objective
Saliency object detection has been widely used in many fields, such as image matching. Although current saliency object detection algorithms have achieved good results, the following problems still exist: texture details are not obvious and edge contours are incomplete. In addition, the saliency detection results of an image are influenced by many factors, such as contrast and texture, and the reliability of saliency detection based on a single saliency factor is low. Hence, to solve these problems, a saliency object detection method based on multiple features and prior information is proposed. This method can obtain final saliency maps with prominent salient areas, high brightness contrast, clear levels, distinct texture details, and complete edge contours.
Method
First, the convex hulls of the image are extracted, the points near the boundary in the convex hulls are removed, and the set of points of interest (i.e., the hull) is preserved. Meanwhile, the superpixel segmentation method is used to obtain compact image blocks of uniform size; the contrast and spatial distribution of each image block are calculated and fused linearly to obtain the global contrast map. The prior probability and likelihood probability are then calculated from the hull and the global contrast map, and the Bayesian algorithm is utilized to obtain the contrast feature map.
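As a rough illustration of this superpixel-based global-contrast step, the following Python sketch (not the authors' code; the SLIC parameters and the spatial weighting constant are assumptions of this example) computes a contrast value for each superpixel from its CIELab color distance to all other superpixels, down-weighted by spatial distance:

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def global_contrast_map(img_rgb, n_segments=200):
    # Superpixel-level global contrast: color distance to all other
    # superpixels, down-weighted by spatial distance (illustrative weighting).
    lab = rgb2lab(img_rgb)
    labels = slic(img_rgb, n_segments=n_segments, compactness=10, start_label=0)
    n = labels.max() + 1
    h, w = labels.shape
    colors = np.array([lab[labels == k].mean(axis=0) for k in range(n)])
    ys, xs = np.mgrid[0:h, 0:w]
    centers = np.array([[ys[labels == k].mean() / h,
                         xs[labels == k].mean() / w] for k in range(n)])
    sal = np.zeros(n)
    for k in range(n):
        d_color = np.linalg.norm(colors - colors[k], axis=1)
        d_space = np.linalg.norm(centers - centers[k], axis=1)
        sal[k] = np.sum(d_color * np.exp(-d_space ** 2 / 0.25))
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    return sal[labels]          # per-pixel global contrast map in [0, 1]
```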
Under multi-scale conditions, the color histogram of the image is calculated and used to obtain a color spatial map. In accordance with information entropy theory, the information entropy of each color spatial map is calculated, the minimum information entropy is obtained, and the color spatial map at that scale is taken as the color feature map.
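The scale-selection step can be sketched as follows, assuming that "scale" refers to the number of quantization bins per color channel and that the input is an 8-bit RGB image; the candidate scales are illustrative:

```python
import numpy as np

def color_histogram_entropy(img_rgb, bins):
    # Shannon entropy of the quantized RGB histogram (bins**3 color cells);
    # img_rgb is assumed to be an 8-bit image.
    q = (img_rgb.reshape(-1, 3) // (256 // bins)).astype(np.int64)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    p = np.bincount(idx, minlength=bins ** 3).astype(float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def minimum_entropy_scale(img_rgb, scales=(4, 8, 16, 32)):
    # Keep the quantization scale whose color histogram has minimum entropy.
    return min(scales, key=lambda b: color_histogram_entropy(img_rgb, b))
```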
The unsharp mask method is adopted to improve the sharpness of the original image, enhance its edges, and highlight other details, and the local binary pattern (LBP) operator is employed to obtain the texture feature map of the image.
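A minimal sketch of this sharpening-plus-texture step is shown below; the unsharp-mask amount, Gaussian sigma, and LBP neighborhood are illustrative parameters rather than values reported in the paper:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def texture_feature_map(gray, amount=1.0, sigma=3.0, n_points=8, radius=1):
    # Unsharp masking: add the difference between the image and its blurred
    # version back to the image, then describe texture with a uniform LBP.
    gray = gray.astype(np.float32)
    blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
    sharpened = np.clip(gray + amount * (gray - blurred), 0, 255)
    lbp = local_binary_pattern(sharpened, n_points, radius, method="uniform")
    return lbp / lbp.max()      # texture feature map rescaled to [0, 1]
```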
The graph regularization (GR) and manifold ranking (MR) algorithms are then used to obtain the center prior map and the edge prior map.
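For the edge (boundary) prior, the closed-form manifold ranking of Yang et al. (2013b) can be sketched as follows, assuming a precomputed superpixel affinity matrix W and an indicator vector y that marks boundary superpixels as queries:

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    # Closed-form manifold ranking: f = (D - alpha * W)^(-1) y, where D is the
    # degree matrix of the superpixel graph and y indicates the query nodes.
    D = np.diag(W.sum(axis=1))
    return np.linalg.solve(D - alpha * W, y)

# Illustrative use for an edge (boundary) prior: rank all superpixels against
# boundary queries and take the complement, so background-like regions score low.
# edge_prior = 1 - normalized(manifold_ranking(W, boundary_indicator))
```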
Finally, the primary saliency map is obtained by using a cellular automaton to fuse the contrast feature map, color feature map, texture feature map, center prior map, and edge prior map. The primary saliency map is then optimized with a fast guided filter to obtain the final saliency map.
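The final refinement can be approximated with the fast guided filter of He and Sun (2015); in the hedged sketch below, the grayscale original image is assumed to serve as the guide, and the radius, regularization eps, and subsampling ratio s are illustrative:

```python
import cv2
import numpy as np

def fast_guided_filter(guide, src, radius=8, eps=1e-3, s=4):
    # Fast guided filter: fit the local linear model q = a*I + b on a
    # subsampled image, then upsample the smoothed coefficients.
    I = guide.astype(np.float32)
    p = src.astype(np.float32)
    h, w = I.shape
    I_s = cv2.resize(I, (w // s, h // s), interpolation=cv2.INTER_NEAREST)
    p_s = cv2.resize(p, (w // s, h // s), interpolation=cv2.INTER_NEAREST)
    r = max(radius // s, 1)
    box = lambda x: cv2.boxFilter(x, -1, (2 * r + 1, 2 * r + 1))
    mean_I, mean_p = box(I_s), box(p_s)
    cov_Ip = box(I_s * p_s) - mean_I * mean_p
    var_I = box(I_s * I_s) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    mean_a = cv2.resize(box(a), (w, h), interpolation=cv2.INTER_LINEAR)
    mean_b = cv2.resize(box(b), (w, h), interpolation=cv2.INTER_LINEAR)
    return mean_a * I + mean_b  # refined saliency map
```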
Result
To confirm the availability and accuracy of the proposed algorithm, its performance is tested on two open datasets, namely, MSRA10K and ECSSD (extended complex scene saliency dataset). The MSRA10K dataset is one of the most frequently used datasets for comparing saliency detection results. It contains 10 000 images and their corresponding ground truth images; the salient objects are surrounded by manually marked bounding boxes, and the backgrounds are simple. The ECSSD dataset contains 1 000 images and their corresponding ground truth images. The images in this dataset contain multiple targets, are close to natural images, and have extremely complex backgrounds. Under the same experimental environment, 200 images are randomly selected from each dataset, and the proposed multi-information fusion method is compared with 12 popular saliency object detection methods with open-source code. Experimental results show that the proposed saliency object detection method based on multiple features and prior information is significantly improved in terms of PR (precision-recall) curves, ROC (receiver operating characteristic) curves, F-measure, MAE (mean absolute error), and S-measure. Its overall performance is better than that of the compared algorithms, and it solves the aforementioned problems well. On the MSRA10K and ECSSD datasets, the PR curves of the proposed algorithm are the closest to the upper right corner; the DCL (diffusion-based compactness and local contrast) algorithm is close to our algorithm, and both are higher than the other compared algorithms. On the MSRA10K dataset, the ROC curves of the BSCA (background-based map optimized via single-layer cellular automata) and DCL algorithms are closer to the upper left corner than ours; on the ECSSD dataset, our ROC curve is close to that of the DCL algorithm, and both are better than the other compared methods. The F-measure values of our algorithm are the highest and reach 0.944 49 and 0.855 73; the values of the popular SACS (self-adaptively weighted co-saliency detection via rank constraint), BSCA, DCL, and WMR (weighted manifold ranking) algorithms are slightly lower, which indicates that our algorithm has the best overall performance. The MAE values of our algorithm are the smallest and reach 0.070 8 and 0.125 71, indicating that our algorithm has the best detection effect. The S-measure values of our algorithm are the highest and reach 0.913 26 and 0.818 88, indicating that the saliency maps of our algorithm are the most similar to the structure of the ground truth images and that the detection effect is excellent.
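For reference, the F-measure and MAE reported above can be computed as in the sketch below; the adaptive threshold of twice the mean saliency and the weight beta^2 = 0.3 follow common practice in the saliency literature, as the abstract does not state the exact protocol:

```python
import numpy as np

def f_measure(sal, gt, beta2=0.3):
    # Precision/recall at an adaptive threshold (twice the mean saliency),
    # combined with the weighting beta^2 = 0.3 customary for saliency evaluation.
    binary = sal >= 2.0 * sal.mean()
    gt = gt > 0.5
    tp = np.logical_and(binary, gt).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (gt.sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)

def mae(sal, gt):
    # Mean absolute error between a saliency map and its ground truth, both in [0, 1].
    return np.abs(sal.astype(float) - gt.astype(float)).mean()
```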
Conclusion
In this study, a saliency object detection method based on multiple features and prior information is proposed. This method fully combines the advantages of contrast features, color features, texture features, center prior information, and edge prior information. It comprehensively extracts the salient region and preserves the texture and detail information of the image well; thus, the edge contour is more complete. The proposed method also satisfies the hierarchical and detail requirements of the human eye and has a certain applicability. However, it does not handle the non-salient regions of complex images perfectly; optimization will be considered in future research.
Cao X C, Tao Z Q, Zhang B, Fu H Z and Feng W. 2014. Self-adaptively weighted co-saliency detection via rank constraint. IEEE Transactions on Image Processing, 23(9):4175-4186[DOI:10.1109/TIP.2014.2332399]
Cheng M M, Mitra N J, Huang X L, Torr P H S and Hu S M. 2015. Global contrast based salient region detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3):569-582[DOI:10.1109/TPAMI.2014.2345401]
Cui L L, Xu J L, Xu G and Wu Q. 2018. Image saliency detection method based on a pair of feature maps. Journal of Image and Graphics, 23(4):583-594[DOI:10.11834/jig.170367]
Fan D P, Cheng M M, Liu Y and Li T. 2017. Structure-measure: a new way to evaluate foreground maps//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 4548-4557[DOI:10.1109/ICCV.2017.487]
Gallea R, Ardizzone E and Pirrone R. 2014. Physical metaphor for streaming media retargeting. IEEE Transactions on Multimedia, 16(4):971-979[DOI:10.1109/TMM.2014.2305917]
Gao Y, Shi M J, Tao D C and Xu C. 2015. Database saliency for fast image retrieval. IEEE Transactions on Multimedia, 17(3):359-369[DOI:10.1109/TMM.2015.2389616]
Goferman S, Zelnik-Manor L and Tal A. 2012. Context-aware saliency detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(10):1915-1926[DOI:10.1109/TPAMI.2011.272]
He K M and Sun J. 2015. Fast guided filter[EB/OL].[2019-03-30]. https://arxiv.org/pdf/1505.00996.pdf
He K M, Sun J and Tang X. 2011. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12):2341-2353[DOI:10.1109/TPAMI.2010.168]
Hou Y H, Wang P C, Xiang W and Gao Z M. 2015. A novel rate control algorithm for video coding based on fuzzy-PID controller. Signal, Image and Video Processing, 9(4):875-884[DOI:10.1007/s11760-013-0518-2]
Itti L, Koch C and Niebur E. 1998. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254-1259[DOI:10.1109/34.730558]
Jiang H Z, Wang J D, Yuan Z J and Liu T. 2011. Automatic salient object segmentation based on context and shape prior//Proceedings of the 22nd British Machine Vision Conference. Dundee, UK: BMVA Press: 1-12[DOI:10.5244/C.25.110]
Li J, Liu Y G, Du S L and Xun Z Y. 2017. Precise image matching via multi-resolution analysis and least square optimization. Control Theory & Applications, 34(6):811-819[DOI:10.7641/CTA.2017.60560]
Qin Y, Lu H C, Xu Y Q and Wang H. 2015. Saliency detection via cellular automata//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE: 110-119[DOI:10.1109/CVPR.2015.7298606]
Qudsia M and Saeed M. 2017. Gaussian cellular automata model for the classification of points inside 2D grid patterns//Proceedings of 2017 International Conference on Frontiers of Information Technology. Islamabad, Pakistan: IEEE: 350-355[DOI:10.1109/FIT.2017.00069]
Sakarya U, Demirkesen C and Teke M. 2014. Unsharp masking filter based shadow-invariant feature extraction for hyperspectral signatures//Proceedings of the 22nd Signal Processing and Communications Applications Conference. Trabzon, Turkey: IEEE: 293-296[DOI:10.1109/SIU.2014.6830223]
Shen X and Wu Y. 2012. A unified approach to salient object detection via low rank matrix recovery//Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, RI, USA: IEEE: 853-860[DOI:10.1109/CVPR.2012.6247758]
Tang C, Wu J, Zhang C Q and Wang P C. 2017. Salient object detection via weighted low rank matrix recovery. IEEE Signal Processing Letters, 24(4):490-494[DOI:10.1109/LSP.2016.2620162]
Wang J P, Lu H C, Li X H and Tong N. 2015. Saliency detection via background and foreground seed selection. Neurocomputing, 152:359-368[DOI:10.1016/j.neucom.2014.10.056]
Wang Z H, Liao K, Xiong J L and Zhang Q. 2014. Moving object detection based on temporal information. IEEE Signal Processing Letters, 21(11):1403-1407[DOI:10.1109/lsp.2014.2338056]
Xie Y, Lu H and Yang M H. 2013. Bayesian saliency via low and mid level cues. IEEE Transactions on Image Processing, 22(5):1689-1698[DOI:10.1109/TIP.2012.2216276]
Xu J L. 2018. Research on face recognition algorithm based on improved LBP operator. Huainan: Anhui University of Science & Technology [http://cdmd.cnki.com.cn/Article/CDMD-10361-1018183052.htm]
Xu T, Jia S M and Zhang G L. 2017. Salient subtle region accurate detection via cellular automata multi-scale optimization. Optics and Precision Engineering, 25(5):1312-1321[DOI:10.3788/OPE.20172505.1312]
Yang C, Zhang L H and Lu H C. 2013a. Graph-regularized saliency detection with convex-hull-based center prior. IEEE Signal Processing Letters, 20(7):637-640[DOI:10.1109/LSP.2013.2260737]
Yang C, Zhang L H, Lu H C and Ruan X. 2013b. Saliency detection via graph-based manifold ranking//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, OR, USA: IEEE: 3166-3173[DOI:10.1109/CVPR.2013.407]
Yuan Q, Cheng Y F and Chen X Q. 2018. Saliency detection based on multiple priorities and comprehensive contrast. Journal of Image and Graphics, 23(2):239-248[DOI:10.11834/jig.170381]
Zhang L, Gu Z and Li H. 2013. SDSP: a novel saliency detection method by combining simple priors//Proceedings of 2013 IEEE International Conference on Image Processing. Melbourne, VIC, Australia: IEEE: 171-175[DOI:10.1109/ICIP.2013.6738036]
Zhang W, Borji A, Wang Z, Callet P and Liu H T. 2016. The application of visual saliency models in objective image quality assessment: a statistical evaluation. IEEE Transactions on Neural Networks and Learning Systems, 27(6):1266-1278[DOI:10.1109/TNNLS.2015.2461603]
Zhang Y, Zhang Z L, Shen Z K and Lu X Y. 2008. The images tracking algorithm using particle filter based on dynamic salient features of targets. Acta Electronica Sinica, 36(12):2306-2311, 2305[DOI:10.3321/j.issn:0372-2112.2008.12.006]
Zhou L, Yang Z H, Yuan Q, Zhou Z T and Hu D W. 2015. Salient region detection via integrating diffusion-based compactness and local contrast. IEEE Transactions on Image Processing, 24(11):3308-3320[DOI:10.1109/tip.2015.2438546]
Zhou S J, Ren F J, Du J and Yang S. 2017. Salient region detection based on the integration of background-bias prior and center-bias prior. Journal of Image and Graphics, 22(5):584-595[DOI:10.11834/jig.160387]
Zhu X Z, Tang C, Wang P C, Xu H Y, Wang M H, Chen J J and Tian J. 2018. Saliency detection via affinity graph learning and weighted manifold ranking. Neurocomputing, 312:239-250[DOI:10.1016/j.neucom.2018.05.106]