Image saliency detection based on background-absorbing Markov chain
Vol. 23, Issue 6, Pages 857-865 (2018)
Received: 08 September 2017
Revised: 08 December 2017
Published: 16 June 2018
DOI: 10.11834/jig.170492

Objective
Existing Markov-chain-based salient object detection methods use simple linear iterative clustering (SLIC) to obtain superpixel blocks as graph nodes and then duplicate the four image boundaries as absorbing nodes. The quality of the SLIC segmentation directly affects the subsequent results. Moreover, in many images the salient object occupies one or two of the boundaries, especially for portraits and sculptures; directly using all four boundaries as the nodes to duplicate inevitably degrades the final result. To address these drawbacks, this paper proposes a background-absorbing Markov method for salient object detection.
Method
First, differential screening removes the boundary with the largest difference, and the nodes on the remaining three boundaries are duplicated as the absorbing nodes of the Markov chain; the initial saliency map is obtained by computing the absorbed time of the transient nodes. Nodes that are likely background are then selected from the initial saliency map and duplicated as absorbing nodes, and a second absorption computation yields a refined saliency map. Finally, the multilayer saliency maps are fused to produce the final saliency map.
Result
On three public datasets, ASD, DUT-OMRON, and SED, comparative experiments against 12 current mainstream algorithms show clear improvements in PR curves, F-measure, and visual quality. The F-measure values on the three datasets are 0.903, 0.544 7, and 0.775 6, respectively, verifying the effectiveness of the algorithm.
Conclusion
To address the drawbacks of SLIC segmentation and of duplicating the superpixel blocks on the four image boundaries as absorbing nodes, this paper proposes a background-absorbing Markov model for salient object detection. Experiments show that the method is suitable for bottom-up image salient object detection, particularly for images containing portraits or sculptures, and that it can be applied to image retrieval, object recognition, image segmentation, image compression, and other fields.
Objective
The method of saliency detection via absorbing Markov chain uses the simple linear iterative clustering (SLIC) method to obtain superpixels as graph nodes. A k-regular graph is then constructed, and each edge weight is computed from the difference between the CIELAB values of its two nodes. The superpixels on the boundaries are duplicated as absorbing nodes, and the absorbed time of the transient nodes on the Markov chain is calculated. If the absorbed time is small, the transient node is similar to the absorbing nodes and is probably a background node; conversely, if the absorbed time is large, the transient node is dissimilar to the absorbing nodes and is a salient node. In practice, the number of superpixels obtained by the SLIC method influences the resulting saliency maps: if the superpixels are too large, detailed information is ignored; if they are too small, the global cue is missed. Moreover, salient objects often occupy one or two boundaries of an image, especially for portraits and sculptures, so duplicating all four boundaries as absorbing nodes degrades the final saliency results. Considering these drawbacks, we propose an improved method that uses background nodes as the absorbing nodes on the absorbing Markov chain and fuses multilayer images to restrain the influence of the uncertain number of superpixels produced by the SLIC method.
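The absorbed-time quantity at the core of this model can be sketched as follows. The transition matrix `P` below is a toy example for illustration only, not the graph actually built from SLIC superpixels and CIELAB edge weights:

```python
import numpy as np

def absorbed_time(P, n_transient):
    """Expected number of steps before absorption for each transient state.

    P is a row-stochastic transition matrix ordered so that the first
    n_transient states are transient and the rest are absorbing. With Q
    the transient-to-transient block, the fundamental matrix
    N = (I - Q)^-1 holds expected visit counts, and N @ 1 is the expected
    absorbed time. A larger time means the node is less similar to the
    absorbing (background) nodes, i.e. more likely salient.
    """
    Q = P[:n_transient, :n_transient]
    N = np.linalg.inv(np.eye(n_transient) - Q)
    return N @ np.ones(n_transient)

# Tiny 3-state chain: states 0 and 1 are transient, state 2 is absorbing.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
])
t = absorbed_time(P, 2)
```

Normalizing `t` to [0, 1] then gives per-superpixel saliency values.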
Method
First, we determine the boundary selection. We obtain the nodes of each of the four image boundaries from the SLIC segmentation, duplicate them as absorbing nodes, and thereby obtain four saliency maps. We then calculate the pairwise differences of these maps: for each map, its total difference from the other three is computed, the boundary whose map differs most is removed, and the nodes on the remaining three boundaries are duplicated as absorbing nodes. The initial saliency map is then obtained by calculating the absorbed time of the transient nodes on the absorbing Markov chain. Second, to further optimize the algorithm, the number of absorbing nodes should be increased, because the boundary nodes may represent only part of the background. We add nodes that are probably background, selected from the initial saliency map by a threshold: if the initial saliency value of a node is lower than the threshold, the node is considered a background node. The selected boundary and background nodes are duplicated as absorbing nodes, the absorbed time of the transient nodes is recalculated, and the pixel-level saliency values are obtained from the superpixel saliency values. Finally, we fuse the multiple pixel-level saliency maps, obtained with different numbers of SLIC superpixels, and take their average as the final result.
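The boundary-screening step described above can be sketched as follows. The summed-absolute-difference measure and the map shapes are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def keep_boundaries(maps):
    """maps: dict mapping each boundary name ('top', 'bottom', 'left',
    'right') to the saliency map obtained when only that boundary's
    superpixels are duplicated as absorbing nodes.

    For each boundary, sum the absolute difference of its map against
    the other three maps; the boundary whose map differs most is
    removed, and the remaining three supply the absorbing nodes.
    """
    names = list(maps)
    diff = {a: sum(np.abs(maps[a] - maps[b]).sum()
                   for b in names if b != a)
            for a in names}
    worst = max(diff, key=diff.get)
    return [n for n in names if n != worst]

# Toy example: the 'top' map disagrees with the other three,
# so 'top' is the boundary that gets dropped.
maps = {
    'top': np.ones((4, 4)),
    'bottom': np.zeros((4, 4)),
    'left': np.zeros((4, 4)),
    'right': np.zeros((4, 4)),
}
kept = keep_boundaries(maps)
```

The final fusion step is then a simple average over the pixel-level maps computed at different superpixel counts, e.g. `np.mean(np.stack(map_list), axis=0)`.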
Result
We evaluate the effectiveness of our method on three benchmark datasets: ASD, DUT-OMRON, and SED. We compare our method with 12 recent state-of-the-art saliency detection methods, namely, MC, CA, FT, SEG, BM, SWD, SF, GCHC, LMLC, PCA, MS, and MST. ASD contains 1 000 simple images; DUT-OMRON contains 5 168 complex images; SED includes 200 images, of which 100 have one salient object and the other 100 have two salient objects. The experimental results show that the improved algorithm is efficient and outperforms the 12 state-of-the-art methods in precision-recall curves (PR curves) and F-measure. Precision is the ratio of correctly predicted salient pixels to all predicted salient pixels; recall is the ratio of correctly predicted salient pixels to the number of ground-truth salient pixels; the F-measure is an overall performance measure. Visual comparisons on examples selected from the three datasets are also shown. The F-measure values on ASD, DUT-OMRON, and SED are 0.903, 0.544 7, and 0.775 6, respectively, higher than those of the other 12 methods.
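The evaluation metrics defined above can be sketched as follows. The weighting β² = 0.3 is the value conventionally used in the saliency-detection literature; it is an assumption here, since the abstract does not state it:

```python
import numpy as np

def precision_recall_f(pred, gt, beta2=0.3):
    """pred, gt: binary masks of predicted and ground-truth salient
    pixels. Precision = true salient pixels / all predicted salient
    pixels; recall = true salient pixels / all ground-truth salient
    pixels; F = (1 + beta2) * P * R / (beta2 * P + R).
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0, 0.0, 0.0
    f = (1 + beta2) * precision * recall / (beta2 * precision + recall)
    return precision, recall, f

# Toy example: two pixels predicted salient, one of them correct.
p, r, f = precision_recall_f([[1, 1], [0, 0]], [[1, 0], [0, 0]])
```

Sweeping a binarization threshold over a real-valued saliency map and recomputing precision and recall at each threshold yields the PR curve.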
Conclusion
An image can be divided into a complex background and the regions that interest the human eye; visual saliency detection extracts these regions of interest by simulating the human visual system with a computer. We propose an improved model based on background absorbing nodes and image fusion, addressing the drawbacks of duplicating all four image boundaries as absorbing nodes and of the uncertain number of superpixels produced by the SLIC method. The experiments show that the method is efficient and applicable to bottom-up image saliency detection, especially for images containing portraits or sculptures. It can also be applied to many fields, such as image retrieval, object recognition, image segmentation, and image compression.
Zhu G Y, Zheng Y F, Doermann D, et al. Signature detection and matching for document image retrieval[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(11):2015-2031. [DOI:10.1109/TPAMI.2008.237]
Wu J F. The research of image retrieval algorithm based on visual saliency[D]. Dalian: Dalian Maritime University, 2017. http://cdmd.cnki.com.cn/Article/CDMD-10151-1017054261.htm
吴俊峰. 基于视觉显著性的图像检索算法研究[D]. 大连: 大连海事大学, 2017.
Rutishauser U, Walther D, Koch C, et al. Is bottom-up attention useful for object recognition?[C]//Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington DC, USA: IEEE, 2004, 2: Ⅱ-37-Ⅱ-44. [DOI:10.1109/CVPR.2004.1315142]
Ko B C, Nam J Y. Object-of-interest image segmentation based on human attention and semantic region clustering[J]. Journal of the Optical Society of America A, 2006, 23(10):2462-2470. [DOI:10.1364/JOSAA.23.002462]
Zhang G X, Cheng M M, Hu S M, et al. A shape-preserving approach to image resizing[J]. Computer Graphics Forum, 2009, 28(7):1897-1906. [DOI:10.1111/j.1467-8659.2009.01568.x]
Ao H H. Research on applications based on visual saliency[D]. Hefei: University of Science and Technology of China, 2013.
敖欢欢. 视觉显著性应用研究[D]. 合肥: 中国科学技术大学, 2013. [DOI:10.7666/d.Y2354193]
Achanta R, Hemami S, Estrada F, et al. Frequency-tuned salient region detection[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Miami, Florida, USA: IEEE, 2009: 1597-1604. [DOI:10.1109/CVPR.2009.5206596]
Li B, Lu C Y, Jin L B, et al. Saliency detection based on lazy random walk[J]. Journal of Image and Graphics, 2016, 21(9):1191-1201.
李波, 卢春园, 金连宝, 等. 惰性随机游走视觉显著性检测算法[J]. 中国图象图形学报, 2016, 21(9):1191-1201. [DOI:10.11834/jig.20160908]
Xu W, Tang Z M. Exploiting hierarchical prior estimation for salient object detection[J]. Acta Automatica Sinica, 2015, 41(4):799-812.
徐威, 唐振民. 利用层次先验估计的显著性目标检测[J]. 自动化学报, 2015, 41(4):799-812. [DOI:10.16383/j.aas.2015.c140281]
Borji A, Sihite D N, Itti L. Probabilistic learning of task-specific visual attention[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Providence, Rhode Island, USA: IEEE, 2012: 470-477. [DOI:10.1109/CVPR.2012.6247710]
Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11):1254-1259. [DOI:10.1109/34.730558]
Harel J, Koch C, Perona P. Graph-based visual saliency[C]//Proceedings of the 19th International Conference on Neural Information Processing Systems. Kitakyushu, Japan: ACM, 2006: 545-552.
Hou X D, Zhang L Q. Saliency detection: a spectral residual approach[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Minneapolis, Minnesota, USA: IEEE, 2007: 1-8. [DOI:10.1109/CVPR.2007.383267]
Jiang B W, Zhang L H, Lu H C, et al. Saliency detection via absorbing Markov chain[C]//Proceedings of IEEE International Conference on Computer Vision. Sydney, NSW, Australia: IEEE, 2013: 1665-1672. [DOI:10.1109/ICCV.2013.209]
Qin Y, Lu H C, Xu Y Q, et al. Saliency detection via cellular automata[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015: 110-119. [DOI:10.1109/CVPR.2015.7298606]
Tu W C, He S F, Yang Q X, et al. Real-time salient object detection with a minimum spanning tree[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 2334-2342. [DOI:10.1109/CVPR.2016.256]
Achanta R, Shaji A, Smith K, et al. SLIC superpixels compared to state-of-the-art superpixel methods[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(11):2274-2282. [DOI:10.1109/TPAMI.2012.120]
Lyu J Y, Tang Z M. Improved salient object detection based on absorbing Markov chain[J]. Journal of Nanjing University of Science and Technology, 2015, 39(6):674-679.
吕建勇, 唐振民. 一种改进的马尔可夫吸收链显著性目标检测方法[J]. 南京理工大学学报, 2015, 39(6):674-679. [DOI:10.14177/j.cnki.32-1397n.2015.39.06.007]
Tong N, Lu H C, Zhang L H, et al. Saliency detection with multi-scale superpixels[J]. IEEE Signal Processing Letters, 2014, 21(9):1035-1039. [DOI:10.1109/LSP.2014.2323407]
Wang W H, Zhou J B, Gao S B, et al. Improved multi-scale saliency detection based on HSV space[J]. Computer Engineering & Science, 2017, 39(2):354-370.
王文豪, 周静波, 高尚兵, 等. 基于HSV空间改进的多尺度显著性检测[J]. 计算机工程与科学, 2017, 39(2):354-370. [DOI:10.3969/j.issn.1007-130X.2017.02.022]
Wang H L, Luo B. Saliency detection based on hierarchical graph integration[J]. Journal of Frontiers of Computer Science and Technology, 2016, 10(12):1752-1762.
王慧玲, 罗斌. 层次图融合的显著性检测[J]. 计算机科学与探索, 2016, 10(12):1752-1762. [DOI:10.3778/j.issn.1673-9418.1607044]
Goferman S, Zelnik-Manor L, Tal A. Context-aware saliency detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(10):1915-1926. [DOI:10.1109/TPAMI.2011.272]
Rahtu E, Kannala J, Salo M, et al. Segmenting salient objects from images and videos[C]//Proceedings of the 11th European Conference on Computer Vision. Heraklion, Crete, Greece: Springer, 2010: 366-379. [DOI:10.1007/978-3-642-15555-0_27]
Xie Y L, Lu H C. Visual saliency detection based on Bayesian model[C]//Proceedings of the 18th IEEE International Conference on Image Processing. Brussels, Belgium: IEEE, 2011: 645-648. [DOI:10.1109/ICIP.2011.6116634]
Duan L J, Wu C P, Miao J, et al. Visual saliency detection by spatially weighted dissimilarity[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Colorado Springs, CO, USA: IEEE, 2011: 473-480. [DOI:10.1109/CVPR.2011.5995676]
Perazzi F, Krähenbühl P, Pritch Y, et al. Saliency filters: contrast based filtering for salient region detection[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Providence, Rhode Island, USA: IEEE, 2012: 733-740. [DOI:10.1109/CVPR.2012.6247743]
Yang C, Zhang L H, Lu H C. Graph-regularized saliency detection with convex-hull-based center prior[J]. IEEE Signal Processing Letters, 2013, 20(7):637-640. [DOI:10.1109/LSP.2013.2260737]
Xie Y L, Lu H C, Yang M H. Bayesian saliency via low and mid level cues[J]. IEEE Transactions on Image Processing, 2013, 22(5):1689-1698. [DOI:10.1109/TIP.2012.2216276]
Margolin R, Tal A, Zelnik-Manor L. What makes a patch distinct?[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Portland, Oregon, USA: IEEE, 2013: 1139-1146. [DOI:10.1109/CVPR.2013.151]
Yang C, Zhang L H, Lu H C, et al. Saliency detection via graph-based manifold ranking[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Portland, Oregon, USA: IEEE, 2013: 3166-3173. [DOI:10.1109/CVPR.2013.407]
Alpert S, Galun M, Basri R, et al. Image segmentation by probabilistic bottom-up aggregation and cue integration[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Minneapolis, MN, USA: IEEE, 2007: 1-8. [DOI:10.1109/CVPR.2007.383017]