Saliency detection method with irregular pixel clusters
2020, Vol. 25, No. 9, pp. 1837-1847
Received: 2019-11-22; Revised: 2020-03-20; Accepted: 2020-03-27; Published in print: 2020-09-16
DOI: 10.11834/jig.190587
Objective
Detecting salient objects with complex structural information is a key focus and difficulty in saliency detection research. Traditional block-based detection algorithms mainly operate on relatively regular image blocks and cannot fully exploit the irregular structure and texture information of the image during computation, which affects their accuracy. To address this problem, this paper proposes a saliency detection algorithm based on irregular pixel clusters.
Method
The color space is first quantized according to the color information of the pixels, the color centers of the image are found, and the color of each pixel is replaced by that of its nearest color center. Irregular pixel clusters are then formed from connected domains of pixels sharing the same color label, with the center of each connected domain taken as the cluster's position and the color of the corresponding color center taken as the cluster's overall color. A contrast prior map is obtained from the global contrast of the pixel clusters, and a center prior map is computed by estimating the center of the salient object with a coarse target-positioning method. The contrast prior map and the center prior map are then combined to obtain an initial saliency map. Finally, to make the saliency map highlight the salient object more uniformly, a graph model and morphological operations are used to refine the initial saliency map.
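As a rough illustration of the cluster-construction step described above (not the authors' implementation), the following Python sketch quantizes colors with K-means and treats each connected region of a single color label as one irregular pixel cluster, recording its centroid, color, and size. The library choices (scikit-learn, SciPy) and the number of color centers are assumptions made for the example.

```python
# Minimal sketch of irregular pixel-cluster construction, assuming an RGB
# image array of shape (H, W, 3). Not the authors' code.
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage

def build_pixel_clusters(image_rgb, n_colors=8):
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)

    # Quantize the color space: each pixel is assigned to its nearest color center.
    km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
    label_map = km.labels_.reshape(h, w)
    centers = km.cluster_centers_          # one quantized color per label

    clusters = []
    for c in range(n_colors):
        # Connected regions sharing the same color label form irregular pixel clusters.
        comp_map, n_comp = ndimage.label(label_map == c)
        for comp_id in range(1, n_comp + 1):
            ys, xs = np.nonzero(comp_map == comp_id)
            clusters.append({
                "color": centers[c],                  # cluster color = its color center
                "center": (ys.mean(), xs.mean()),     # cluster position = region centroid
                "size": ys.size,                      # number of pixels in the cluster
            })
    return clusters
```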
Result
The proposed algorithm is compared with five algorithms widely regarded as the best performing and validated on five groups of images, using the precision-recall (PR) curve and the F-measure (the harmonic mean of precision and recall) as objective evaluation metrics. The results show that the proposed algorithm performs well on the PR curve relative to the other algorithms, improves the F-measure by 0.3 over each of the other five algorithms, and yields better visual results.
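For reference, the F-measure combines precision and recall as below. The abstract describes it as their harmonic mean (beta = 1), while much saliency-detection work, following Achanta et al. (2009), weights precision more heavily with beta-squared = 0.3; the exact value used in the paper is not stated in the abstract.

```latex
% General weighted F-measure; beta = 1 gives the harmonic mean,
% beta^2 = 0.3 is the common choice in saliency detection.
F_{\beta} = \frac{(1+\beta^{2})\,\mathrm{Precision}\cdot\mathrm{Recall}}
                 {\beta^{2}\,\mathrm{Precision}+\mathrm{Recall}}
```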
Conclusion
By partitioning pixel clusters more reasonably and coarsely locating the target object, the proposed method better accounts for the structural and textural characteristics of the image. It achieves good detection results in saliency detection and generalizes well.
Objective
Saliency detection is a technique that uses algorithms to simulate human visual characteristics. It aims to identify the most conspicuous objects or regions in an image and serves as a first step in image analysis and synthesis, allowing computational resources to be allocated preferentially in subsequent processing. The technique has been widely used in many visual applications, such as segmentation of regions of interest, object recognition, image adaptive compression, and image retrieval. In most traditional methods, the basic unit of saliency detection is formed by oversegmenting the image into regular regions, usually n × n square blocks. Final saliency maps consist of these regions with their saliency scores, which leads to a boundary block effect in the final saliency map. The performance of these models depends on whether the segmentation results fit the boundary of the salient object and on the accuracy of feature extraction. Such methods work well on salient targets with relatively regular structure and texture. However, in the real world, salient objects and backgrounds are often characterized by complex textures and irregular structures, and these approaches cannot produce satisfactory results on such images, which yields low accuracy. To address the limitations of past algorithms, we propose a salient object detection algorithm based on irregular pixel clusters. The algorithm takes the structural and color features of the object into account and fits the object boundary more closely to a certain extent, thus increasing precision and recall.
Method
In the proposed algorithm, the input image is first preprocessed by bilateral filtering and mean shift to reduce scattered dots in the picture. Then, in the RGB (red-green-blue) color space, the K-means algorithm is used to quantize the colors of the image, and the cluster centers and their color values are obtained and saved to speed up subsequent calculations. Next, irregular pixel clusters are formed from the connected domains of pixels with the same color label; the center of each connected domain is taken as the location of the cluster, and the color corresponding to the color label of the connected domain is taken as the color of the cluster. For the contrast prior, the saliency score of each pixel cluster is determined by the color statistics of the input image. Specifically, the saliency score of a pixel cluster is defined by its color contrast with all other pixel clusters in the image, the size of the pixel cluster, and the probability of the corresponding color appearing in the picture. For the center prior map, the center of the salient target is first estimated by a coarse target-positioning method; then, on the basis of the distance between each cluster and this center, the saliency score of each pixel cluster is calculated, which forms the center prior map. The contrast prior map is then combined with the center prior map to obtain an initial saliency map. Lastly, to make the saliency map highlight the salient target uniformly, a graph model and morphological operations are introduced, given their outstanding performance in image segmentation tasks. In this manner, the final saliency map is obtained.
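The abstract does not give the exact scoring formulas, so the Python sketch below only illustrates one plausible reading of the two priors: a cluster's contrast score is its color distance to every other cluster weighted by that cluster's occurrence probability, and its center score falls off with distance from the coarsely estimated target center. The Gaussian falloff, the weighting, and the multiplicative combination are assumptions, not the authors' method.

```python
# Illustrative sketch of the contrast prior and center prior; the exact
# weights, falloff, and combination rule are assumptions.
import numpy as np

def contrast_prior(clusters, total_pixels):
    """Saliency of a cluster from its color contrast with all other clusters,
    weighted by the other clusters' occurrence probability (size / total pixels)."""
    colors = np.array([c["color"] for c in clusters], dtype=np.float64)
    probs = np.array([c["size"] for c in clusters], dtype=np.float64) / total_pixels
    scores = np.zeros(len(clusters))
    for i in range(len(clusters)):
        color_dist = np.linalg.norm(colors - colors[i], axis=1)  # contrast to every cluster
        scores[i] = np.sum(probs * color_dist)                   # probability-weighted sum
    return scores / (scores.max() + 1e-12)

def center_prior(clusters, target_center, image_diag, sigma=0.25):
    """Saliency decays with a cluster's distance from the coarsely located target center."""
    centers = np.array([c["center"] for c in clusters], dtype=np.float64)
    dist = np.linalg.norm(centers - np.asarray(target_center), axis=1) / image_diag
    return np.exp(-(dist ** 2) / (2 * sigma ** 2))

def initial_saliency(clusters, total_pixels, target_center, image_diag):
    # Combine the two priors (simple multiplication here; the paper's rule is not
    # specified in the abstract).
    return contrast_prior(clusters, total_pixels) * center_prior(
        clusters, target_center, image_diag)
```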
Result
To test the recognition performance of the proposed algorithm, we compare our model with five well-performing saliency models on two public datasets, namely DUT-OMRON (Dalian University of Technology and OMRON Corporation) and the Microsoft Research Asia (MSRA) salient object database. The quantitative evaluation metrics are the F-measure and precision-recall (PR) curves, and we provide several saliency maps of each method for comparison. Experimental results show that the proposed algorithm achieves a clear performance improvement over the previous algorithms and better visual quality on both the MSRA and DUT-OMRON datasets; the saliency maps show that our model produces refined results. Compared with the detection results of frequency-tuned salient region detection (FT), luminance contrast (LC), histogram-based contrast (HC), region-based contrast (RC), and minimum barrier salient object detection (MB) on MSRA, the F-measure (higher is better) increases by 47.37%, 61.29%, 31.05%, 2.73%, and 5.54%, respectively. On DUT-OMRON, the F-measure increases by 75.40%, 92.10%, 63.50%, 8.83%, and 16.34%, respectively. These comparative experiments demonstrate that the fusion algorithm improves saliency detection. In addition, a series of comparative experiments on MSRA is conducted to show the advantages of our algorithm.
Conclusion
In this study, a saliency detection algorithm based on irregular pixel clusters is proposed. The algorithm comprises three parts: irregular pixel clusters, which are constructed from the color information of the image; an initial saliency map, which is obtained by fusing the contrast prior and the center prior; and a final saliency map, which is obtained by refining the initial saliency map with a graph model. Experimental results show that our model improves detection performance and outperforms several of the best-performing saliency approaches.
Achanta R, Hemami S, Estrada F and Süsstrunk S. 2009. Frequency-tuned salient region detection//Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami: IEEE: 1597-1604[DOI:10.1109/CVPR.2009.5206596]
Achanta R, Shaji A, Smith K, Lucchi A, Fua P and Süsstrunk S. 2012. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2274-2282[DOI:10.1109/TPAMI.2012.120]
Cheng M M, Zhang G X, Mitra N J, Huang X L and Hu S M. 2011. Global contrast based salient region detection//Proceedings of 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Providence: IEEE: 409-416[DOI:10.1109/CVPR.2011.5995344]
Goferman S, Zelnik-Manor L and Tal A. 2012. Context-aware saliency detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(10):1915-1926[DOI:10.1109/TPAMI.2011.272]
Harel J, Koch C and Perona P. 2006. Graph-based visual saliency//Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference. Cambridge: MIT Press: 545-552[DOI:10.7551/mitpress/7503.003.0073]
Itti L, Koch C and Niebur E. 1998. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254-1259[DOI:10.1109/34.730558]
Jiang H, Wang J, Yuan Z, Liu T, Zheng N and Li S. 2011. Automatic salient object segmentation based on context and shape prior//Proceedings of the 22nd British Machine Vision Conference. Dundee: BMVC: 110.1-110.12[DOI:10.5244/C.25.110]
Jiang H, Wang J, Yuan Z, Wu Y, Zheng N and Li S. 2013. Salient object detection: a discriminative regional feature integration approach//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Portland: IEEE: 2083-2090
Judd T, Ehinger K, Durand F and Torralba A. 2010. Learning to predict where humans look//Proceedings of the 12th IEEE International Conference on Computer Vision. Kyoto: IEEE: 2106-2113[DOI:10.1109/ICCV.2009.5459462]
Li B, Lu C Y, Jin L B and Leng C C. 2016. Saliency detection based on lazy random walk. Journal of Image and Graphics, 21(9):1191-1201[DOI:10.11834/jig.20160908]
Liu T, Zheng N N, Tang X and Shum H Y. 2007. Learning to detect a salient object//Proceedings of 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Minneapolis: IEEE: 1-8[DOI:10.1109/CVPR.2007.383047]
Liu Y, Han J G, Zhang Q and Wang L. 2019. Salient object detection via two-stage graphs. IEEE Transactions on Circuits and Systems for Video Technology, 29(4):1023-1037[DOI:10.1109/TCSVT.2018.2823769]
Margolin R, Zelnik-Manor L and Tal A. 2014. How to evaluate foreground maps//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus: IEEE: 248-255[DOI:10.1109/CVPR.2014.39]
Wang Q S, Zheng W and Piramuthu R. 2016. GraB: visual saliency via novel graph model and background priors//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas: IEEE: 535-543[DOI:10.1109/CVPR.2016.64]
Yang C, Zhang L H, Lu H C, Ruan X and Yang M H. 2013. Saliency detection via graph-based manifold ranking//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland: IEEE: 3166-3173[DOI:10.1109/CVPR.2013.407]
Yuan Q, Cheng Y F and Chen X Q. 2018. Saliency detection based on multiple priorities and comprehensive contrast. Journal of Image and Graphics, 23(2):239-248[DOI:10.11834/jig.170381]
Zhai Y and Shah M. 2006. Visual attention detection in video sequences using spatiotemporal cues//Proceedings of the 14th ACM International Conference on Multimedia. Santa Barbara: ACM: 815-824[DOI:10.1145/1180639.1180824]
Zhang J M, Sclaroff S, Lin Z, Shen X H, Price B and Mech R. 2015. Minimum barrier salient object detection at 80 FPS//Proceedings of 2015 IEEE International Conference on Computer Vision (ICCV). Santiago: IEEE: 1404-1412[DOI:10.1109/ICCV.2015.165]
Zhang L Y, Tong M H, Marks T K, Shan H H and Cottrell G W. 2008. SUN: a Bayesian framework for saliency using natural statistics. Journal of Vision, 8(7):32[DOI:10.1167/8.7.32]
Zhang Q, Lin J J and Xie Z G. 2016. Structure extraction and region contrast based salient object detection//Proceedings of the 8th International Conference on Digital Image Processing. Chengdu: SPIE, 10033: #100330K[DOI:10.1117/12.2244930]