Saliency detection on a light field via the focusness and propagation mechanism
2020, Vol. 25, No. 12, pp. 2578-2586
Received: 2019-12-26; Revised: 2020-03-20; Accepted: 2020-03-27; Published in print: 2020-12-16
DOI: 10.11834/jig.190675
Objective
In scenes where the foreground and background have similar colors or textures, or where the background is cluttered, image saliency detection methods suffer from poorly suppressed backgrounds, incompletely detected objects, blurred edges, and blocking artifacts. Light-field images offer refocusing capability and thus provide focusness cues that effectively separate foreground from background regions, improving the precision of saliency detection. We therefore propose a light-field image saliency detection method based on focusness and a propagation mechanism.
Method
We measure the focusness of the focal-stack images with a Gaussian filter to identify the foreground and background slices. Using the focusness and spatial location of the background slice, we build a foreground/background probability function that guides the light-field image features during saliency detection and raises the accuracy of the saliency map. In addition, we exploit the spatial consistency of neighboring superpixels: a K-nearest-neighbor (K-NN) graph-based saliency propagation mechanism further refines the saliency map, uniformly highlighting the entire salient region and yielding a more precise result.
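As a rough illustration of the first step, the following Python sketch segments the all-focus image into superpixels and scores per-superpixel focusness for every focal slice. It is a minimal sketch under stated assumptions, not the paper's implementation: a Laplacian-energy response stands in for the harmonic variance focus measure the method actually uses (see Method below), and the function and parameter names are our own.

```python
# Minimal sketch: superpixel segmentation plus a per-superpixel
# focusness score for each focal slice. The Laplacian-energy response
# below is a simple stand-in for the harmonic variance measure used
# by the paper; names and defaults are illustrative only.
import numpy as np
from scipy.ndimage import laplace
from skimage.color import rgb2gray
from skimage.segmentation import slic

def superpixel_focusness(all_focus_img, focal_stack, n_segments=300):
    """Return (labels, F), where F[k, s] is the mean focusness of
    superpixel s within focal slice k."""
    labels = slic(all_focus_img, n_segments=n_segments,
                  compactness=10, start_label=0)
    n_sp = labels.max() + 1
    F = np.zeros((len(focal_stack), n_sp))
    for k, slice_img in enumerate(focal_stack):
        # High-frequency energy indicates in-focus regions.
        response = laplace(rgb2gray(slice_img)) ** 2
        for s in range(n_sp):
            F[k, s] = response[labels == s].mean()
    return labels, F
```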
Result
We conduct saliency detection experiments on a benchmark light-field image dataset and compare our method with three mainstream traditional light-field saliency detection methods and two deep-learning methods. The saliency maps generated by our method effectively suppress background regions, uniformly highlight the entire salient object, and show clearer edges, which better matches human visual perception. Our precision reaches 85.16%, higher than that of the compared methods, and our F-measure and mean absolute error (MAE) are 72.79% and 13.49%, respectively, outperforming the traditional light-field saliency detection methods.
Conclusion
The proposed light-field saliency model based on focusness and a propagation mechanism uniformly highlights the salient region and better suppresses the background in scenes with similar foreground/background appearance or cluttered backgrounds.
Objective
Saliency detection, which has extensive applications in computer vision, aims to locate the pixels or regions in a scene that attract human visual attention the most. Accurate and reliable salient-region detection benefits numerous vision and graphics tasks, such as scene analysis, object tracking, and target recognition. Traditional 2D methods rely on low-level features, including color, texture, and focus cues, to separate salient objects from the background. Although state-of-the-art 2D saliency detection methods have shown promising results, they may fail in complex scenes where the foreground and background have a similar appearance or where the background is cluttered. 3D images provide depth information that benefits saliency detection to some extent. However, most 3D saliency detection results depend heavily on the quality of the depth maps, so an inaccurate depth map can severely degrade the final result. Moreover, 3D saliency detection methods may produce inaccurate detections when the salient object cannot be distinguished at the depth level. The human visual system can distinguish regions at different depth levels by adjusting the focus of the eyes. Similarly, a light field has a refocusing capability: a stack of images, each focused at a different depth level, can be rendered from a single capture. The focus cue supplied by such a focal stack helps determine the background and foreground slice candidates, even under conditions of considerable complexity (e.g., a foreground and background with similar colors or textures). Therefore, focusness can improve the precision of saliency detection in challenging scenarios. Existing light-field saliency detection methods have verified the effectiveness of integrating light-field cues, including focusness, location, and color-contrast cues. From the above discussion, an important aim of saliency detection on a light field is to explore the interactions and complementarities among these cues.
Method
This paper builds a foreground/background probability model that highlights salient objects and suppresses the background by using location and focusness cues. A propagation mechanism is also proposed to enhance the spatial consistency of the saliency results and to refine the saliency map. The focal stack and the all-focus image are taken as the light-field input and are segmented into a set of non-overlapping superpixels via simple linear iterative clustering (SLIC). First, we detect the in-focus regions of each image in the focal stack by applying the harmonic variance measure in the frequency domain as our focus measure. Second, to determine the foreground image set and the background image, we scale the focusness of the focal stack by using a Gaussian filter. Saliency detection follows a common assumption that salient objects are more likely to lie near the central area of the image and are usually photographed in focus. Therefore, we analyze the distribution of in-focus objects with respect to this location prior by using a 1D band-pass Gaussian filter and then compute a foreground likelihood score for each focal slice. We choose the slice with the lowest foreground likelihood score as our background slice, and the slices whose scores reach at least 0.9 times the highest foreground likelihood score as our foreground slice candidates.
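To make the slice-selection rule concrete, a hedged sketch follows. It assumes a binary in-focus mask per slice and approximates the 1D band-pass Gaussian weighting with a separable Gaussian location prior peaking at the image center; the paper's actual filter design may differ, and select_slices and its threshold handling are our own naming.

```python
# Hedged sketch of foreground/background slice selection.
# Assumptions: focus_maps[k] is a binary in-focus mask for slice k
# (e.g., a thresholded focusness map); the Gaussian location prior
# below is a plausible stand-in for the paper's 1D band-pass filter.
import numpy as np

def foreground_likelihood(focus_map, sigma_ratio=0.25):
    """Score a slice by how centrally its in-focus pixels lie."""
    h, w = focus_map.shape
    ys, xs = np.nonzero(focus_map)
    if len(ys) == 0:
        return 0.0
    # Location prior peaking at the center; border locations are
    # attenuated, mimicking a band-pass weighting of positions.
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    wy = np.exp(-((ys - cy) ** 2) / (2 * (sigma_ratio * h) ** 2))
    wx = np.exp(-((xs - cx) ** 2) / (2 * (sigma_ratio * w) ** 2))
    return float(np.mean(wy * wx))

def select_slices(focus_maps):
    scores = np.array([foreground_likelihood(m) for m in focus_maps])
    background = int(np.argmin(scores))                       # least-central focus
    foreground = np.nonzero(scores >= 0.9 * scores.max())[0]  # 0.9 rule
    return foreground.tolist(), background
```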
Afterward, we construct a foreground/background probability function by combining the focusness of the background slice with the spatial location. We then compute the foreground cue from the foreground slice candidates and the color cue on the all-focus image. To enhance contrast, we use the foreground/background probability function to guide the foreground and color cues, which in turn yield the foreground and color saliency maps, respectively. In these maps, the low saliency values within a salient object are raised, while the high saliency values of background areas in complex scenarios (e.g., where the foreground and background have a similar appearance) are restrained. We combine the foreground and color saliency maps with a Bayesian fusion strategy to generate a new saliency map. We then apply a K-NN enhanced graph-based saliency propagation method, which considers the neighboring relationships in both the spatial and feature spaces, to further optimize this saliency map. Enforcing the spatial consistency of adjacent superpixels uniformly highlights the saliency of objects, and we eventually obtain a high-quality saliency map.
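The propagation step can be sketched with the standard closed-form graph diffusion used in manifold-ranking-style saliency methods; the paper's exact K-NN affinity design may differ, and k, sigma, and alpha below are illustrative defaults, not values from the paper.

```python
# Hedged sketch of K-NN graph-based saliency propagation over
# superpixels, using the standard closed-form diffusion
# S* = (D - alpha * W)^{-1} S from manifold-ranking methods.
import numpy as np

def knn_propagate(saliency, features, k=8, sigma=0.1, alpha=0.99):
    """saliency: (n,) initial per-superpixel saliency in [0, 1];
    features: (n, d) normalized descriptors (e.g., mean Lab color
    concatenated with centroid coordinates)."""
    n = len(saliency)
    # Pairwise squared distances in feature space.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # k nearest neighbors, skip self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                  # symmetrize the affinity graph
    D = np.diag(W.sum(1))
    refined = np.linalg.solve(D - alpha * W, saliency)
    refined -= refined.min()
    return refined / (refined.max() + 1e-12)
```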
Result
We compare the performance of our model with that of five state-of-the-art saliency models, including traditional approaches and deep-learning methods, on a leading light-field saliency dataset (LFSD). Our model effectively suppresses the background and evenly highlights the entire salient object, thereby obtaining sharp edges in the saliency map. We also evaluate the similarity between the predicted saliency maps and the ground truth using three quantitative evaluation metrics, namely, the canonical precision-recall curve (PRC), the F-measure, and the mean absolute error (MAE). Experimental results show that our method achieves the best precision (85.16%) among all compared methods and outperforms the traditional light-field approaches in F-measure (72.79%) and MAE (13.49%).
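For reference, the two scalar metrics reported above can be computed as in the sketch below. The beta2 = 0.3 weighting and the adaptive threshold of twice the mean saliency are common conventions in the saliency literature, assumed here because the abstract does not specify them.

```python
# Hedged sketch of the scalar metrics. beta2 = 0.3 and the 2x-mean
# adaptive threshold are conventions from the saliency literature,
# assumed rather than taken from the paper.
import numpy as np

def mae(sal, gt):
    """Mean absolute error between a saliency map and a binary
    ground-truth mask, both scaled to [0, 1]."""
    return float(np.abs(sal - gt).mean())

def f_measure(sal, gt, beta2=0.3):
    """Weighted harmonic mean of precision and recall after
    adaptive binarization of the saliency map."""
    binary = sal >= min(2 * sal.mean(), 1.0)   # adaptive threshold
    tp = float(np.logical_and(binary, gt > 0.5).sum())
    precision = tp / (binary.sum() + 1e-12)
    recall = tp / ((gt > 0.5).sum() + 1e-12)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-12)
```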
Conclusion
We propose a light-field saliency detection model that combines focusness with a propagation mechanism. Experimental results show that our saliency detection scheme works effectively in challenging scenarios, such as scenes with similar foregrounds and backgrounds or with complex background textures, and outperforms the state-of-the-art traditional approaches in terms of the precision rates of the PRC and yields lower error as measured by MAE.
References
Achanta R, Hemami S, Estrada F and Süsstrunk S. 2009. Frequency-tuned salient region detection//Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, USA: IEEE: 1597-1604[DOI: 10.1109/CVPR.2009.5206596]
Achanta R, Shaji A, Smith K, Lucchi A, Fua P and Süsstrunk S. 2012. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2274-2282[DOI: 10.1109/TPAMI.2012.120]
Borji A, Cheng M M, Hou Q B, Jiang H Z and Li J. 2019. Salient object detection: a survey. Computational Visual Media, 5(2):117-150[DOI: 10.1007/s41095-019-0149-9]
Cheng M M, Zhang G X, Mitra N J, Huang X L and Hu S M. 2011. Global contrast based salient region detection//Proceedings of CVPR 2011. Providence, USA: IEEE: 409-416[DOI: 10.1109/CVPR.2011.5995344]
Dai J F, Li Y, He K M and Sun J. 2016. R-FCN: object detection via region-based fully convolutional networks//Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: Curran Associates: 379-387[DOI: 10.5555/3157096.3157139]
Itti L, Koch C and Niebur E. 1998. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254-1259[DOI: 10.1109/34.730558]
Jiang P, Ling H B, Yu J Y and Peng J L. 2013. Salient region detection by UFO: uniqueness, focusness and objectness//Proceedings of 2013 IEEE International Conference on Computer Vision. Sydney, Australia: IEEE: 1976-1983[DOI: 10.1109/ICCV.2013.248]
Li X, Hu W M, Shen C H, Zhang Z F and Dick A. 2013a. A survey of appearance models in visual object tracking. ACM Transactions on Intelligent Systems and Technology, 4(4):#58[DOI: 10.1145/2508037.2508039]
Li F and Porikli F. 2013. Harmonic variance: a novel measure for in-focus segmentation//Proceedings of the British Machine Vision Conference. Bristol, UK: BMVA Press: 1-11[DOI: 10.5244/C.27.33]
Li X H, Lu H C, Zhang L H, Ruan X and Yang M H. 2013b. Saliency detection via dense and sparse reconstruction//Proceedings of 2013 IEEE International Conference on Computer Vision. Sydney, Australia: IEEE: 2976-2983[DOI: 10.1109/ICCV.2013.370]
Li N Y, Sun B L and Yu J Y. 2015. A weighted sparse coding framework for saliency detection//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 5216-5223[DOI: 10.1109/CVPR.2015.7299158]
Li N Y, Ye J W, Ji Y, Ling H B and Yu J Y. 2014. Saliency detection on light field//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, USA: IEEE: 2806-2813[DOI: 10.1109/CVPR.2014.359]
Li N Y, Ye J W, Ji Y, Ling H B and Yu J Y. 2017. Saliency detection on light field. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(8):1605-1616[DOI: 10.1109/TPAMI.2016.2610425]
Lu S, Mahadevan V and Vasconcelos N. 2014. Learning optimal seeds for diffusion-based salient object detection//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, USA: IEEE: 2790-2797[DOI: 10.1109/CVPR.2014.357]
Peng H W, Li B, Xiong W H, Hu W M and Ji R R. 2014. RGBD salient object detection: a benchmark and algorithms//Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer: 92-109[DOI: 10.1007/978-3-319-10578-9_7]
Piao Y R, Li X, Zhang M, Yu J Y and Lu H C. 2019. Saliency detection via depth-induced cellular automata on light field. IEEE Transactions on Image Processing, 29:1879-1889[DOI: 10.1109/TIP.2019.2942434]
Ren J Q, Gong X J, Yu L, Zhou W H and Yang M Y. 2015. Exploiting global priors for RGB-D saliency detection//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Boston, USA: IEEE: 25-32[DOI: 10.1109/CVPRW.2015.7301391]
Yuan Q, Cheng Y F and Chen X Q. 2018. Saliency detection based on multiple priorities and comprehensive contrast. Journal of Image and Graphics, 23(2):239-248[DOI: 10.11834/jig.170381]
Zeng Y, Zhuge Y Z, Lu H C, Zhang L H, Qian M Y and Yu Y Z. 2019. Multi-source weak supervision for saliency detection//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 6067-6076[DOI: 10.1109/CVPR.2019.00623]
Zhang J, Liu Y M, Zhang S P, Poppe R and Wang M. 2020. Light field saliency detection with deep convolutional networks. IEEE Transactions on Image Processing, 29:4421-4434[DOI: 10.1109/TIP.2020.2970529]
Zhang J, Wang M, Gao J, Wang Y, Zhang X D and Wu X D. 2015. Saliency detection with a deeper investigation of light field//Proceedings of the 24th International Joint Conference on Artificial Intelligence. Buenos Aires, Argentina: AAAI: 2212-2218
Zhu L, Ling H B, Wu J, Deng H P and Liu J. 2017. Saliency pattern detection by ranking structured trees//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 5468-5477[DOI: 10.1109/ICCV.2017.583]
Zhu W J, Liang S, Wei Y C and Sun J. 2014. Saliency optimization from robust background detection//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, USA: IEEE: 2814-2821[DOI: 10.1109/CVPR.2014.360]