Published: 2016-10-25 | DOI: 10.11834/jig.20161014 | 2016, Vol. 21, No. 10 | Image Understanding and Computer Vision

Received: 2016-02-01; revised: 2016-06-02. Supported by: National Key Technology Research and Development Program (2015BAK24B01); Specialized Research Fund for the Doctoral Program of Higher Education (20133401110009); Natural Science Research Project of Anhui Provincial Universities (KJ2015A009). First author: Huang Zichao (1992-), male, M.S. candidate in computer application technology, College of Computer Science and Technology, Anhui University; research interest: artificial intelligence. E-mail: 1052041670@qq.com. CLC number: TP391; Document code: A; Article ID: 1006-8961(2016)10-1392-10

Feature integration and S-D probability correction based RGB-D saliency detection
Huang Zichao, Liu Zhengyi
College of Computer Science and Technology, Anhui University, Hefei 230601, China
Supported by: National Key Technology Research and Development Program of the Ministry of Science and Technology of China (2015BAK24B01)

# Abstract

Objective Saliency detection is a fundamental component of many computer vision applications. Its goal is to obtain a high-quality saliency map that highlights the pixels or regions of an image that attract human visual attention the most. Saliency detection in RGB-D images has recently become increasingly popular, and depth information has been shown to be a fundamental element of human vision. Most existing saliency detection methods concentrate on detecting salient objects in 2D images and cannot be applied directly to RGB-D images. In this paper, a new RGB-D saliency detection approach based on feature integration and saliency-depth (S-D) probability correction is proposed. The proposed method considers image features at both the 2D and RGB-D levels, extracting color and depth features that complement each other. Method First, the method extracts color and depth features and takes the four image boundaries as background seeds to compute an initial saliency map with the manifold-ranking algorithm. Second, an S-D correction probability is computed from the RGB saliency map and the depth map. Third, a depth saliency map is computed and corrected with the S-D probability. Finally, foreground seeds are selected by thresholding the corrected map, and the final saliency map is obtained by applying the manifold-ranking algorithm again. Result We evaluate the saliency detection ability of the proposed method and six state-of-the-art methods on a large, widely used RGB-D dataset of 1,000 images. Experimental results indicate that the saliency maps produced by the proposed method are much closer to the ground truth than those of the other methods. We also plot precision-recall curves, which show that the proposed method achieves higher precision than the competing methods at the same recall. In addition, we evaluate the running time of our algorithm: it processes a single image in 2.150 s, faster than most of the other methods. Conclusion We propose a novel RGB-D saliency detection approach that combines color features from the RGB image with depth features from the depth image. The depth features guide saliency ranking on the RGB image, and the RGB saliency results in turn correct the depth saliency results. Experiments demonstrate that manifold ranking with feature integration fuses depth and color features effectively, enabling the two components to complement each other, and that with S-D probability correction the RGB saliency results effectively guide depth saliency detection.

# Key words

saliency detection; S-D probability correction; feature integration; manifold ranking; RGB-D; color feature; depth feature

# 1 Manifold Ranking with Feature Integration

$f^* = \arg\min_f \frac{1}{2}\left(\sum_{i,j=1}^{n} w_{ij}\left\|\frac{f_i}{\sqrt{d_{ii}}} - \frac{f_j}{\sqrt{d_{jj}}}\right\|^2 + \mu\sum_{i=1}^{n}\|f_i - y_i\|^2\right)$ (1)

$f^* = (D - \alpha W)^{-1}y$ (2)

$f^* = (D - \alpha W_{dc})^{-1}y$ (3)
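The closed-form solution of Eq. (2) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' Matlab implementation; the toy graph `W` and query vector `y` are hypothetical stand-ins for the superpixel graph and the background/foreground indicator vector.

```python
import numpy as np

def manifold_rank(W, y, alpha=0.99):
    """Closed-form manifold ranking, Eq. (2): f* = (D - alpha*W)^(-1) y,
    where D is the degree matrix of the affinity matrix W."""
    D = np.diag(W.sum(axis=1))
    return np.linalg.solve(D - alpha * W, y)

# Toy 3-node graph: nodes 0 and 1 are strongly connected,
# node 2 only weakly; node 0 is the query (y_0 = 1).
W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
y = np.array([1.0, 0.0, 0.0])
f = manifold_rank(W, y)  # node 1 ranks above node 2
```

Nodes similar to the queries receive higher ranking scores, which is what both the background-seed and foreground-seed stages rely on.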

# 2.2 Graph Construction

$c_{ij} = \|c_i - c_j\|$ (4)

$d_{ij} = \|d_i - d_j\|$ (5)

$W_{dc_{ij}} = \begin{cases} \exp\left(-\dfrac{c_{ij} + d_{ij}}{2\sigma_w^2}\right) & j \in N_i \\ 0 & \text{otherwise} \end{cases}$ (6)

$f^* = (D - \alpha W_{dc})^{-1}y$ (7)
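Building the joint color-depth affinity matrix of Eqs. (4)-(6) can be sketched as follows, assuming mean CIELab colors and mean depths per superpixel are already available; the adjacency mask and the value of `sigma_w` are illustrative, not the paper's exact settings.

```python
import numpy as np

def affinity_dc(colors, depths, neighbors, sigma_w=5.0):
    """Joint color-depth affinity of Eq. (6):
    w_ij = exp(-(c_ij + d_ij) / (2 sigma_w^2)) if j in N_i, else 0.
    colors: (n, 3) mean CIELab per superpixel; depths: (n,) mean depth;
    neighbors: (n, n) boolean adjacency mask."""
    c = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=2)  # Eq. (4)
    d = np.abs(depths[:, None] - depths[None, :])                        # Eq. (5)
    W = np.exp(-(c + d) / (2.0 * sigma_w ** 2))
    W = W * neighbors           # keep only graph neighbours
    np.fill_diagonal(W, 0.0)    # no self-loops
    return W

# Superpixels 0 and 1 are similar in color and depth; 2 differs in both.
colors = np.array([[50.0, 0.0, 0.0], [50.0, 0.0, 1.0], [80.0, 20.0, 20.0]])
depths = np.array([0.20, 0.25, 0.90])
W = affinity_dc(colors, depths, np.ones((3, 3), dtype=bool))
```

Because color and depth distances are summed inside one exponential, two superpixels must be close in both cues to receive a large edge weight, which is how the two features complement each other.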

# 2.3 Salient Object Detection in the RGB Image

$S_{Bt}(i) = 1 - \bar{f}^*(i), \quad i = 1, 2, \ldots, N$ (8)

$S_B(i) = S_{Bt}(i) \times S_{Bb}(i) \times S_{Bl}(i) \times S_{Br}(i)$ (9)
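Combining the four boundary-query maps of Eqs. (8)-(9) can be sketched as below; the ranking scores are assumed precomputed by Eq. (7) for each boundary, and the min-max normalization stands in for the normalized $\bar{f}^*$.

```python
import numpy as np

def boundary_saliency(f_top, f_bottom, f_left, f_right):
    """Eqs. (8)-(9): complement each boundary-query ranking score
    (after min-max normalization) and multiply the four per-boundary
    maps element-wise."""
    product = np.ones_like(f_top, dtype=float)
    for f in (f_top, f_bottom, f_left, f_right):
        f = (f - f.min()) / (f.max() - f.min() + 1e-12)  # normalize to [0, 1]
        product *= 1.0 - f                               # Eq. (8) folded into Eq. (9)
    return product

# Node 0 is similar to every boundary (background-like),
# node 2 is dissimilar to all of them (object-like).
f_b = np.array([0.9, 0.5, 0.1])
S_B = boundary_saliency(f_b, f_b, f_b, f_b)
```

The product suppresses any superpixel that is background-like with respect to even one boundary, so only regions distinct from all four boundaries survive.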

# 2.4 Salient Object Detection in the Depth Map

$S_D(i) = \frac{1}{2\pi\delta_d^2} \times \exp\left(\frac{1 - d(i)}{2\delta_d^2}\right)$ (10)
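Equation (10) as printed can be sketched directly; here $d(i)$ is assumed normalized to $[0, 1]$ so that nearer superpixels (small $d$) receive larger scores, and the default `delta_d` is an illustrative value, not the paper's parameter.

```python
import numpy as np

def depth_saliency(d, delta_d=0.5):
    """Eq. (10) as printed: S_D(i) = 1/(2 pi delta_d^2) *
    exp((1 - d(i)) / (2 delta_d^2)), with d normalized to [0, 1]."""
    return np.exp((1.0 - d) / (2.0 * delta_d ** 2)) / (2.0 * np.pi * delta_d ** 2)

d = np.array([0.1, 0.9])   # near vs. far superpixel
S_D = depth_saliency(d)    # the near one gets the larger score
```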

# 2.5 S-D Correction Probability

$P_{s-d}(i) = \frac{S_B(i)}{D(i)}, \quad \text{for } D(i) \ne 0$ (11)

# 2.6 S-D Correction

$S_{DF}(i) = S_D(i) \times P_{s-d}(i)$ (12)
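Equations (11)-(12) together amount to one masked element-wise operation; a minimal sketch (array names hypothetical, with pixels of zero depth left uncorrected at zero as Eq. (11)'s domain condition implies):

```python
import numpy as np

def sd_correct(S_B, D, S_D):
    """Eqs. (11)-(12): P_sd(i) = S_B(i) / D(i) where D(i) != 0
    (0 elsewhere), then S_DF(i) = S_D(i) * P_sd(i)."""
    P = np.zeros_like(S_B, dtype=float)
    mask = D != 0
    P[mask] = S_B[mask] / D[mask]   # Eq. (11)
    return S_D * P                  # Eq. (12)

S_B = np.array([0.8, 0.4])   # RGB saliency
D   = np.array([0.5, 0.0])   # depth values; second pixel has no depth
S_D = np.array([1.0, 1.0])   # uncorrected depth saliency
S_DF = sd_correct(S_B, D, S_D)
```

The ratio boosts depth saliency where the RGB map is confident relative to the raw depth and suppresses it elsewhere, which is how the RGB result guides the depth result.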

# 2.7 Foreground-Based Optimization

$S_F(i) = \bar{f}^*(i), \quad i = 1, 2, \ldots, N$ (13)

# 2.8 Algorithm Analysis

1) Segment the input RGB image and the depth map into a set of superpixels according to the given parameters.

2) Construct a closed-loop undirected graph with the superpixels as nodes. First build the new affinity matrix $W_{dc}$ from the depth features and the CIELab color features, then compute $(D - \alpha W_{dc})^{-1}$ with $\alpha$ set to 0.99. This constitutes the feature-integrated manifold ranking method.

3) Compute the saliency map of the RGB image. Use feature-integrated manifold ranking to compute a saliency map for each boundary according to Eq. (8), then obtain the RGB saliency map $S_B$ according to Eq. (9).

4) Compute the saliency map of the (preprocessed) depth map. First compute the depth-feature saliency map $S_D$ according to Eq. (10); next compute the S-D correction probability according to Eq. (11); finally apply S-D probability correction to $S_D$ according to Eq. (12) to obtain the corrected saliency map $S_{DF}$.

5) From the foreground perspective, binarize $S_{DF}$ with an adaptive threshold. Nodes above the threshold are marked as foreground query nodes to construct the indicator vector $y$, and feature-integrated manifold ranking is applied once more for optimization, yielding the final saliency map $S_F$.
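Step 5) can be sketched as follows, assuming the affinity matrix $W$ of Eq. (6) is already built. The mean is used here as a simple stand-in for the adaptive threshold, so this is an illustration of the re-ranking step, not the authors' exact implementation.

```python
import numpy as np

def foreground_refine(S_DF, W, alpha=0.99):
    """Step 5): threshold S_DF, mark above-threshold nodes as
    foreground queries y, and re-rank via Eq. (7)."""
    y = (S_DF > S_DF.mean()).astype(float)   # foreground indicator vector
    D = np.diag(W.sum(axis=1))
    f = np.linalg.solve(D - alpha * W, y)    # Eq. (7)
    return (f - f.min()) / (f.max() - f.min() + 1e-12)  # normalize to [0, 1]

# Toy graph: nodes 0 and 1 (salient, mutually similar) vs. node 2.
W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
S_DF = np.array([0.9, 0.8, 0.1])
S_F = foreground_refine(S_DF, W)  # nodes 0 and 1 stay on top
```

Ranking from foreground queries propagates saliency to graph neighbours of the seeds, smoothing the corrected map into the final result $S_F$.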

# 3.2 Evaluation Criteria

$P = \sum\nolimits_x g_x a_x \Big/ \sum\nolimits_x a_x$ (14)

$R = \sum\nolimits_x g_x a_x \Big/ \sum\nolimits_x g_x$ (15)
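Eqs. (14)-(15) are the standard precision and recall over binarized saliency maps; a minimal sketch (array names hypothetical):

```python
import numpy as np

def precision_recall(a, g):
    """Eqs. (14)-(15): a_x is the binarized saliency at pixel x,
    g_x the ground-truth label."""
    a = a.astype(float).ravel()
    g = g.astype(float).ravel()
    tp = (g * a).sum()              # pixels both predicted and labeled salient
    return tp / a.sum(), tp / g.sum()   # (P, R)

a = np.array([1, 1, 0, 0])   # predicted salient pixels
g = np.array([1, 0, 1, 0])   # ground truth
P, R = precision_recall(a, g)
```

Sweeping the binarization threshold and recomputing (P, R) at each level yields the precision-recall curve used in the comparison.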

# 3.4 Running Time Comparison

Table 1 Comparison of average running time of different methods

| Method | MRD | WCTRD | LMH | ACSD | CDS | Ours |
|---|---|---|---|---|---|---|
| Time/(s per image) | 3.596 | 1.984 | 5.892 | 0.102 | 17.484 | 2.150 |
| Code | Matlab | Matlab | Matlab | C++ | Matlab | Matlab |

# References

• [1] Goferman S, Zelnik-Manor L, Tal A. Context-aware saliency detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(10): 1915-1926. DOI: 10.1109/TPAMI.2011.272
• [2] Rother C, Kolmogorov V, Blake A. "GrabCut": interactive foreground extraction using iterated graph cuts[J]. ACM Transactions on Graphics, 2004, 23(3): 309-314. DOI: 10.1145/1186562.1015720
• [3] Ding Y Y, Xiao J, Yu J Y. Importance filtering for image retargeting[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Providence, RI, USA: IEEE, 2011: 89-96. DOI: 10.1109/CVPR.2011.5995445
• [4] Mahadevan V, Vasconcelos N. Saliency-based discriminant tracking[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Miami, FL, USA: IEEE, 2009: 1007-1013. DOI: 10.1109/CVPRW.2009.5206573
• [5] Siagian C, Itti L. Rapid biologically-inspired scene classification using features shared with visual attention[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(2): 300-312. DOI: 10.1109/TPAMI.2007.40
• [6] Perazzi F, Krähenbühl P, Pritch Y, et al. Saliency filters: contrast based filtering for salient region detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Providence, RI, USA: IEEE, 2012: 733-740. DOI: 10.1109/CVPR.2012.6247743
• [7] Cheng M M, Mitra N J, Huang X L, et al. Global contrast based salient region detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 569-582. DOI: 10.1109/TPAMI.2014.2345401
• [8] Klein D, Frintrop S. Center-surround divergence of feature statistics for salient object detection[C]//Proceedings of the IEEE International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011: 2214-2219. DOI: 10.1109/ICCV.2011.6126499
• [9] Harel J, Koch C, Perona P. Graph-based visual saliency[C]//Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems. Vancouver, British Columbia, Canada: MIT Press, 2006: 545-552.
• [10] Wei Y C, Wen F, Zhu W J, et al. Geodesic saliency using background priors[C]//Proceedings of the 12th European Conference on Computer Vision. Berlin, Heidelberg: Springer, 2012: 29-42. DOI: 10.1007/978-3-642-33712-3_3
• [11] Yang C, Zhang L H, Lu H C, et al. Saliency detection via graph-based manifold ranking[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Portland, OR, USA: IEEE, 2013: 3166-3173. DOI: 10.1109/CVPR.2013.407
• [12] Zhu W J, Liang S, Wei Y C, et al. Saliency optimization from robust background detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Columbus, OH, USA: IEEE, 2014: 2814-2821. DOI: 10.1109/CVPR.2014.360
• [13] Peng H W, Li B, Xiong W H, et al. RGBD salient object detection: a benchmark and algorithms[C]//Proceedings of the 13th European Conference on Computer Vision. Switzerland: Springer, 2014: 92-109. DOI: 10.1007/978-3-319-10578-9_7
• [14] Guo J F, Ren T W, Bei J, et al. Salient object detection in RGB-D image based on saliency fusion and propagation[C]//Proceedings of the 7th International Conference on Internet Multimedia Computing and Service. Hunan, China: ACM, 2015: #59. DOI: 10.1145/2808492.2808551
• [15] Xue H Y, Gu Y, Li Y J, et al. RGB-D saliency detection via mutual guided manifold ranking[C]//Proceedings of the IEEE International Conference on Image Processing. Quebec City, QC, Canada: IEEE, 2015: 666-670. DOI: 10.1109/ICIP.2015.7350882
• [16] Levin A, Lischinski D, Weiss Y. Colorization using optimization[J]. ACM Transactions on Graphics, 2004, 23(3): 689-694. DOI: 10.1145/1015706.1015780
• [17] Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259. DOI: 10.1109/34.730558
• [18] Achanta R, Shaji A, Smith K, et al. SLIC superpixels[R]. EPFL Technical Report 149300, 2010.
• [19] Lang C Y, Nguyen T V, Katti H, et al. Depth matters: influence of depth cues on visual saliency[C]//Proceedings of the 12th European Conference on Computer Vision. Berlin, Heidelberg: Springer, 2012: 101-115. DOI: 10.1007/978-3-642-33709-3_8
• [20] Ohtsu N. A threshold selection method from gray-level histograms[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1979, 9(1): 62-66. DOI: 10.1109/TSMC.1979.4310076
• [21] Ren J Q, Gong X J, Yu L, et al. Exploiting global priors for RGB-D saliency detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Boston, MA, USA: IEEE, 2015: 25-32. DOI: 10.1109/CVPRW.2015.7301391
• [22] Ju R, Liu Y, Ren T W, et al. Depth-aware salient object detection using anisotropic center-surround difference[J]. Signal Processing: Image Communication, 2015, 38: 115-126. DOI: 10.1016/j.image.2015.07.002
• [23] Cheng Y P, Fu H Z, Wei X X, et al. Depth enhanced saliency detection method[C]//Proceedings of the International Conference on Internet Multimedia Computing and Service. Xiamen, China: ACM, 2014: 23. DOI: 10.1145/2632856.2632866