Saliency detection based on the background block reselection method
2020, Vol. 25, No. 6, pp. 1104-1115
Received: 2019-07-05; Revised: 2019-12-12; Accepted: 2019-12-19; Published in print: 2020-06-16
DOI: 10.11834/jig.190317
Objective
Most saliency detection algorithms use a background prior to improve performance, but traditional models simply take the border regions of the image as the background, which causes false detections when a salient object touches the image boundary. To apply the background prior more accurately, we propose a saliency detection method that incorporates a background block reselection process.
Method
A background prior, a center prior, and a color distribution feature are used to obtain a seed vector and construct a diffusion matrix; diffusing the seed vector yields a preliminary saliency map, which is fed back through the diffusion step to produce a two-layer saliency map. Following the idea of the Fisher criterion, a background block reselection process is built on the two-layer saliency map; the selected background blocks form a background vector, from which a new diffusion matrix is constructed, and diffusion yields a background saliency map. The background saliency map and the two-layer saliency map are then fused nonlinearly to obtain the final saliency map.
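The abstract does not give the exact form of the Fisher-criterion-based reselection. A minimal sketch, assuming the process picks the threshold on the two-layer saliency scores of boundary blocks that maximizes the Fisher separation (m1 - m2)^2 / (v1 + v2), and keeps only the low-saliency blocks as background (all function names and the toy scores are illustrative, not from the paper):

```python
import numpy as np

def fisher_threshold(values):
    """Pick the threshold t that maximizes the Fisher criterion
    (m1 - m2)^2 / (v1 + v2) between the two groups split at t."""
    values = np.asarray(values, dtype=float)
    candidates = np.unique(values)[:-1]  # splitting at the max leaves one group empty
    best_t, best_score = None, -np.inf
    for t in candidates:
        lo, hi = values[values <= t], values[values > t]
        score = (lo.mean() - hi.mean()) ** 2 / (lo.var() + hi.var() + 1e-12)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

def reselect_background(boundary_scores):
    """Keep only boundary blocks whose two-layer saliency falls below the
    Fisher threshold, i.e. blocks that are confidently background."""
    t = fisher_threshold(boundary_scores)
    return [i for i, s in enumerate(boundary_scores) if s <= t]

# Two of five boundary blocks belong to a salient object touching the border.
scores = [0.05, 0.08, 0.10, 0.75, 0.82]
print(reselect_background(scores))  # → [0, 1, 2]
```

This is the key difference from treating every border block as background: the two high-saliency blocks are excluded from the background vector.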
Result
The proposed method is compared with six algorithms on five public datasets. On MSRA10K (Microsoft Research Asia 10K), it achieves the lowest mean absolute error (MAE); compared with the multi-feature diffusion-based salient object detection algorithm (LMH), the F-measure improves by 0.84% and MAE drops by 1.9%. On ECSSD (extended complex scene saliency dataset), MAE is second best and the F-measure is best, a 1.33% improvement over LMH. On SED2 (segmentation evaluation database 2), both MAE and F-measure are second best; relative to LMH, the F-measure increases by 0.7% and MAE decreases by 0.93%. In the subjective comparison, the detected salient objects are more complete and have higher confidence than those of LMH; in the objective comparison, recall is consistently better than LMH's.
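The MAE and F-measure figures above follow the standard saliency-benchmark definitions; a minimal sketch (the β² = 0.3 weighting and the fixed binarization threshold of 0.5 are the common conventions in this literature, assumed here rather than taken from the paper):

```python
import numpy as np

def mae(saliency, gt):
    """Mean absolute error between a [0,1] saliency map and binary ground truth."""
    return float(np.mean(np.abs(saliency - gt)))

def f_measure(saliency, gt, beta2=0.3, thresh=0.5):
    """F-measure at a fixed binarization threshold; beta^2 = 0.3 weights
    precision more heavily than recall, as is usual in saliency evaluation."""
    pred = saliency >= thresh
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return float((1 + beta2) * precision * recall / (beta2 * precision + recall))

gt = np.array([[0, 1], [1, 1]], dtype=bool)
sal = np.array([[0.1, 0.9], [0.8, 0.4]])
print(round(mae(sal, gt), 3))        # → 0.25
print(round(f_measure(sal, gt), 3))  # → 0.897
```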
Conclusion
The proposed saliency detection model applies the background prior more effectively, improving both subjective and objective detection results.
Objective
Many saliency detection algorithms use background priors to improve performance. However, most traditional models simply use the edge regions of an image as the background region, resulting in false detections when a salient object touches the image boundary. To apply background priors more accurately, we propose a saliency detection method that integrates a background block reselection process.
Method
First, the original image is segmented using a superpixel segmentation algorithm, namely, simple linear iterative clustering (SLIC), to generate a superpixel image. Then, a background prior, a center prior, and a color distribution feature are used to select a subset of superpixel blocks to form a seed vector, from which a diffusion matrix is constructed. Second, the seed vector is diffused by the diffusion matrix to obtain a preliminary saliency map; this map is then used as input and diffused again to obtain a two-layer saliency map that captures higher-level features. Third, we develop a background block reselection process following the idea of the Fisher criterion: the two-layer saliency map is fed into the reselection algorithm to extract background blocks, the selected blocks form a background vector, and a new diffusion matrix is constructed from it. The seed vector is diffused by this new diffusion matrix to obtain a background saliency map. Fourth, the background and two-layer saliency maps are nonlinearly fused to obtain the final saliency map.
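The abstract does not specify how the diffusion matrix is built; a minimal sketch of the diffusion step, assuming the common manifold-ranking-style construction (I - αS)⁻¹ over a Gaussian affinity of superpixel features (σ, α, and the toy 1-D features are illustrative assumptions, not the paper's actual parameters):

```python
import numpy as np

def diffusion_matrix(features, sigma=0.1, alpha=0.99):
    """Manifold-ranking-style diffusion matrix from superpixel features:
    Gaussian affinity, symmetric normalization, then (I - alpha * S)^-1."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    w = np.exp(-d ** 2 / (2 * sigma ** 2))   # Gaussian affinity
    np.fill_diagonal(w, 0)
    deg = w.sum(axis=1)
    s = w / np.sqrt(np.outer(deg, deg))      # symmetric normalization
    return np.linalg.inv(np.eye(len(features)) - alpha * s)

def diffuse(A, seed):
    """Propagate a seed vector through the diffusion matrix, rescale to [0,1]."""
    sal = A @ seed
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# Toy example: 4 "superpixels" with 1-D color features; seed marks the last one.
feats = np.array([[0.1], [0.12], [0.8], [0.82]])
seed = np.array([0.0, 0.0, 0.0, 1.0])
print(diffuse(diffusion_matrix(feats), seed))
```

Saliency spreads from the seed to the feature-similar superpixel (index 2) while the dissimilar ones stay near zero, which is the behavior the method relies on for both the seed and background vectors.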
Result
Experiments are performed on five public datasets: Microsoft Research Asia 10K (MSRA10K), the extended complex scene saliency dataset (ECSSD), Dalian University of Technology and OMRON Corporation (DUT-OMRON), the salient object dataset (SOD), and segmentation evaluation database 2 (SED2). Our method is compared with six recent algorithms, namely, generic promotion of diffusion-based salient object detection (GP), inner and inter label propagation: salient object detection in the wild (LPS), saliency detection via cellular automata (BSCA), salient object detection via structured matrix decomposition (SMD), salient region detection using a diffusion process on a two-layer sparse graph (TSG), and salient object detection via a multi-feature diffusion-based method (LMH), using three evaluation indicators: the precision-recall (PR) curve, the F-measure, and the mean absolute error (MAE). On the MSRA10K dataset, our method achieves the lowest MAE among all compared algorithms; relative to the baseline algorithm LMH, the F-measure increases by 0.84% and MAE decreases by 1.9%. On the ECSSD dataset, MAE is second best and the F-measure is the best among all methods; the F-measure improves by 1.33% over LMH. On the SED2 dataset, MAE and F-measure are both second best; compared with LMH, the F-measure increases by 0.7% and MAE decreases by 0.93%. We also extract the background saliency map and the final saliency map generated by our method and compare them with the corresponding high-level and final saliency maps of LMH. The experiments show that our method performs better at the subjective level: the salient objects in its saliency maps are more complete and exhibit higher confidence, consistent with its higher recall in the objective comparison. In addition, we experimentally verify the dynamic threshold selection in the proposed background block reselection process. The F-measures obtained on three datasets (MSRA10K, SOD, and SED2) are better than those of the corresponding static processes; on ECSSD, performance is essentially the same as with the static process, while on DUT-OMRON it is worse. We therefore conduct a theoretical analysis and verify it experimentally by enlarging the selection interval of the background blocks.
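The PR curves used in the comparison are produced in the standard way, by sweeping a binarization threshold over each saliency map against the binary ground truth; a minimal sketch (function name and toy arrays are illustrative):

```python
import numpy as np

def pr_curve(saliency, gt, n_thresh=256):
    """Precision-recall pairs from sweeping a binarization threshold
    over a [0,1] saliency map, as in standard saliency benchmarking."""
    gt = gt.astype(bool)
    precisions, recalls = [], []
    for t in np.linspace(0, 1, n_thresh, endpoint=False):
        pred = saliency > t
        tp = np.logical_and(pred, gt).sum()
        precisions.append(tp / max(pred.sum(), 1))
        recalls.append(tp / max(gt.sum(), 1))
    return np.array(precisions), np.array(recalls)

gt = np.array([[0, 1], [1, 1]])
sal = np.array([[0.1, 0.9], [0.8, 0.4]])
p, r = pr_curve(sal, gt)
# Recall can only shrink as the threshold rises.
assert all(r[i] >= r[i + 1] for i in range(len(r) - 1))
```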
Conclusion
The proposed saliency detection method applies the background prior more effectively, so the final detection results improve on both subjective and objective indicators. The method also performs better on images in which the salient region touches the image boundary. In addition, the comparative experiment on dynamic threshold selection shows that the dynamic selection process is effective and reliable.
References
Achanta R, Shaji A, Smith K, Lucchi A, Fua P and Süsstrunk S. 2012. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2274-2282[DOI:10.1109/TPAMI.2012.120]
Bai C, Chen J N, Huang L, Kpalma K and Chen S Y. 2018. Saliency-based multi-feature modeling for semantic image retrieval. Journal of Visual Communication and Image Representation, 50:199-204[DOI:10.1016/j.jvcir.2017.11.021]
Bi H, Tang H, Yang G Y, Shu H Z and Dillenseger J L. 2018. Accurate image segmentation using Gaussian mixture model with saliency map. Pattern Analysis and Applications, 21(3):869-878[DOI:10.1007/s10044-017-0672-1]
Borji A, Cheng M M, Jiang H Z and Li J. 2015. Salient object detection:a benchmark. IEEE Transactions on Image Processing, 24(12):5706-5722[DOI:10.1109/TIP.2015.2487833]
Feng S H, Lang C Y and Xu D. 2011. Combining graph learning and region saliency analysis for content-based image retrieval. Acta Electronica Sinica, 39(10):2288-2294
Itti L, Koch C and Niebur E. 1998. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254-1259[DOI:10.1109/34.730558]
Jiang P, Vasconcelos N and Peng J L. 2015. Generic promotion of diffusion-based salient object detection//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE: 217-225[DOI:10.1109/ICCV.2015.33]
Kim J, Han D, Tai Y W and Kim J. 2016. Salient region detection via high-dimensional color transform and local spatial support. IEEE Transactions on Image Processing, 25(1):9-23[DOI:10.1109/TIP.2015.2495122]
Li H Y, Lu H C, Lin Z, Shen X H and Price B. 2015. Inner and inter label propagation:salient object detection in the wild. IEEE Transactions on Image Processing, 24(10):3176-3186[DOI:10.1109/tip.2015.2440174]
Li L, Zhou F G, Zheng Y and Bai X Z. 2018. Saliency detection based on foreground appearance and background-prior. Neurocomputing, 301:46-61[DOI:10.1016/j.neucom.2018.03.049]
Liu G H and Yang J Y. 2019. Exploiting color volume and color difference for salient region detection. IEEE Transactions on Image Processing, 28(1):6-16[DOI:10.1109/TIP.2018.2847422]
Liu S W, Li M, Hu J L and Cui Y M. 2015. Image classification method based on visual saliency detection. Journal of Computer Applications, 35(9):2629-2635[DOI:10.11772/j.issn.1001-9081.2015.09.2629]
Martin D R, Fowlkes C C and Malik J. 2004. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(5):530-549[DOI:10.1109/tpami.2004.1273918]
Peng H W, Li B, Ling H B, Hu W M, Xiong W H and Maybank SJ. 2017. Salient object detection via structured matrix decomposition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4):818-832[DOI:10.1109/TPAMI.2016.2562626]
Perazzi F, Krähenbühl P, Pritch Y and Hornung A. 2012. Saliency filters: contrast based filtering for salient region detection//Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, USA: IEEE: 733-740[DOI:10.1109/CVPR.2012.6247743]
Qin Y, Lu H C, Xu Y Q and Wang H. 2015. Saliency detection via cellular automata//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 110-119[DOI:10.1109/CVPR.2015.7298606]
Shehnaz M and Naveen N. 2015. An object recognition algorithm with structure-guided saliency detection and SVM classifier//Proceedings of 2015 International Conference on Power, Instrumentation, Control and Computing. Thrissur, India: IEEE: 1-4[DOI:10.1109/PICC.2015.7455804]
Tang J J, Ge Y and Liu Y Z. 2016. Application of visual saliency and feature extraction algorithm applied in large-scale image classification//Proceedings of 2016 International Conference on Communication and Electronics Systems. Coimbatore, India: IEEE: 1-6[DOI:10.1109/CESYS.2016.7889903]
van Rijsbergen C J. 1986. A new theoretical framework for information retrieval. ACM SIGIR Forum, 21(1/2):23-29[DOI:10.1145/24634.24635]
Wan S H, Jin P Q, Yue L H and Yan L. 2017. Image retrieval based on multi-instance saliency model//Proceedings of SPIE 10420, 9th International Conference on Digital Image Processing. Hong Kong, China: SPIE: #104201X[DOI:10.1117/12.2281919]
Wei W, Liu X H, Zhou B B, Zhao Y J, Dong L Q, Liu M, Kong L Q and Chu X H. 2016. Sea surface target detection and recognition algorithm based on local and global salient region detection//Proceedings of SPIE 9971, Applications of Digital Image Processing XXXIX. San Diego, USA: SPIE: #99712U[DOI:10.1117/12.2237103]
Yang C, Zhang L H, Lu H C, Ruan X and Yang M H. 2013. Saliency detection via graph-based manifold ranking//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, USA: IEEE: 3166-3173[DOI:10.1109/CVPR.2013.407]
Ye F, Hong S T, Chen J Z, Zheng Z H and Liu G H. 2018. Salient object detection via multi-feature diffusion-based method. Journal of Electronics and Information Technology, 40(5):1210-1218[DOI:10.11999/JEIT170827]
Zhang J J, Ding S Y, Li L B and Zhao C X. 2017. Saliency based image detection and segmentation method for unmanned vehicle. Computer Engineering and Applications, 53(22):176-179, 242[DOI:10.3778/j.issn.1002-8331.1607-0302]
Zhou L, Yang Z H, Zhou Z T and Hu D W. 2017. Salient region detection using diffusion process on a two-layer sparse graph. IEEE Transactions on Image Processing, 26(12):5882-5894[DOI:10.1109/tip.2017.2738839]
Zhu C B, Huang K and Li G. 2018. An innovative saliency guided ROI selection model for panoramic images compression//Proceedings of 2018 Data Compression Conference. Snowbird, USA: IEEE: #436[DOI:10.1109/DCC.2018.00089]