Depth map super-resolution reconstruction based on the texture edge-guided approach
2018, Vol. 23, No. 10, Pages 1508-1517
Received: 2018-03-19; Revised: 2018-05-07; Published in print: 2018-10-16
DOI: 10.11834/jig.180127
Objective
Depth maps are a common representation of 3D scene information and are widely used in stereo vision. The Kinect depth camera can capture scene depth maps in real time, but owing to internal hardware limitations and external interference, the acquired depth maps suffer from low resolution and inaccurate edges and cannot meet the needs of practical applications. To address this, a Kinect depth map super-resolution reconstruction algorithm guided by color image edges is proposed.
Method
First, the depth map is up-sampled to obtain an initial high-resolution depth map, and the edges of this initial depth map are extracted. Next, exploiting the similarity between the high-resolution color image and the depth map, a structured-learning-based edge detection method is used to extract the correct depth edges. Finally, the unreliable regions between the erroneous edges of the initial depth map and the correct depth edges are identified, and an edge-alignment strategy is used to interpolate and fill these unreliable regions. A compact sketch of the whole pipeline is given below.
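The following minimal Python sketch illustrates the overall flow under the assumption that OpenCV with the opencv-contrib ximgproc module is available and that a pre-trained structured-edge model file is at hand. The model file name, thresholds, and filter parameters are illustrative placeholders rather than the settings used in the paper, and the helper reproduces only the general pipeline, not the exact algorithm.

```python
import cv2
import numpy as np

def texture_edge_guided_sr(lr_depth, color_hr, edge_model="model.yml.gz", scale=4):
    """Illustrative pipeline: initial up-sampling, edge extraction, edge-guided refill."""
    h, w = color_hr.shape[:2]

    # 1) Initial HR depth map by bicubic interpolation.
    depth0 = cv2.resize(lr_depth, (w, h), interpolation=cv2.INTER_CUBIC).astype(np.float32)

    # 2) Edges of the initial HR depth map (blurred / misaligned after up-sampling).
    d8 = cv2.normalize(depth0, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    init_edges = cv2.Canny(d8, 30, 90)

    # 3) "Correct" depth edges from the HR color image via structured forests.
    sed = cv2.ximgproc.createStructuredEdgeDetection(edge_model)
    rgb = cv2.cvtColor(color_hr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    prob = sed.detectEdges(rgb)
    true_edges = sed.edgesNms(prob, sed.computeOrientation(prob), 2, 0, 1, True) > 0.2

    # 4) Unreliable band around the (wrong) initial edges; refill it with a
    #    color-guided joint bilateral filter so depth edges follow texture edges.
    kernel = np.ones((2 * scale + 1, 2 * scale + 1), np.uint8)
    band = cv2.dilate(init_edges, kernel) > 0
    filtered = cv2.ximgproc.jointBilateralFilter(color_hr.astype(np.float32), depth0,
                                                 9, 25.0, 9.0)
    refined = np.where(band & ~true_edges, filtered, depth0)
    return refined
```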
Result
Experiments are conducted on the NYU2 dataset, and the proposed algorithm is compared with eight state-of-the-art depth map super-resolution reconstruction algorithms; the results are verified with the reconstructed depth maps and with the point clouds used for 3D reconstruction. The experimental results show that, while increasing the resolution of the depth map, the proposed algorithm effectively corrects the edges of the up-sampled depth map so that the depth edges are aligned with the texture edges, and it also suppresses the edge blurring introduced by up-sampling. The 3D point cloud results show that the proposed algorithm accurately separates the foreground and background of the scene and achieves better results than the other algorithms in applications such as 3D reconstruction.
Conclusion
The proposed algorithm is generally applicable to the super-resolution reconstruction of Kinect depth maps. By exploiting the similarity between the color image and the depth map of the same scene and using texture edges to guide the super-resolution reconstruction of the depth map, good reconstruction results can be obtained.
Objective
Depth maps play an increasingly important role in many computer vision applications, such as 3D reconstruction, augmented reality, and gesture recognition. A new generation of active 3D range sensors, such as the Microsoft Kinect camera, enables the acquisition of real-time and affordable depth maps. However, unlike natural images captured by RGB sensors, the depth maps captured by range sensors typically have low resolution (LR) and inaccurate edges due to intrinsic physical constraints. Given that an accurate and high-resolution (HR) depth map is required and preferable in many applications, effective depth map super-resolution (SR) techniques are desirable. Depth map SR can generally be addressed by two types of approaches, depending on the input data. For single depth map SR, the resolution of the input depth map is enhanced using information learned from a pre-collected training database. Meanwhile, depth map SR algorithms that use RGB-D data can be further classified into MRF-based and filtering-based approaches. MRF-based methods view depth map SR as an optimization problem. Filtering-based methods obtain each HR depth value as a weighted average of local depth map pixels (the standard joint bilateral upsampling weights are recalled below). These methods aim to obtain a smooth HR depth map for regions belonging to the same object. However, they have two main issues: 1) the inaccurate edges of the depth map cannot be fully refined, and 2) the edges of the HR depth map suffer from blurring. In this paper, a novel texture edge-guided depth reconstruction approach is proposed to address these issues. We pay particular attention to depth edge refinement, which is usually ignored by existing methods.
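As a recap of this filtering-based family (and of the JBU baseline compared in the experiments), the classic joint bilateral upsampling of Kopf et al. estimates each HR depth value as a color- and space-weighted average of LR depth samples; the formula below is the standard textbook form, not the specific weighting of any particular compared method:

```latex
\tilde{D}_p = \frac{1}{k_p}\sum_{q_\downarrow \in \Omega} D_{q_\downarrow}\,
              f\big(\lVert p_\downarrow - q_\downarrow\rVert\big)\,
              g\big(\lVert \tilde{I}_p - \tilde{I}_q\rVert\big),
\qquad
k_p = \sum_{q_\downarrow \in \Omega} f\big(\lVert p_\downarrow - q_\downarrow\rVert\big)\,
      g\big(\lVert \tilde{I}_p - \tilde{I}_q\rVert\big),
```

where D is the LR depth map, Ĩ the HR color image, p↓ and q↓ the LR coordinates corresponding to HR pixels p and q, and f and g spatial and range (typically Gaussian) kernels.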
Method
In the first stage, an initial HR depth map is obtained by a general up-sampling method, such as interpolation or filtering. Initial depth edges are then extracted from this initial HR depth map with a standard edge detector, such as Sobel or Canny. The edges extracted directly from the initial HR depth map are not the true edges, because the misalignment between the LR depth edges and the texture edges, together with the up-sampling operation, introduces further edge errors. Subsequently, the texture edges are extracted from the color image. Traditional edge detectors do not distinguish visually salient edges; texture edges and illusory contours are all taken as image edges. Moreover, many edges of the color image do not correspond to depth edges, such as the edges inside an object. Inspired by recent advances in edge detection, we propose a depth edge detection method based on structured forests. The edge map of the color image is first extracted with the structured learning approach; by incorporating the 3D spatial information provided by the initial HR depth map, texture edges lying inside objects are removed, which yields a clean and true depth edge map. A sketch of this edge-selection step follows.
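The sketch below assumes OpenCV's structured-forest edge detector (opencv-contrib, with a pre-trained model file) and uses a simple depth-gradient test as a stand-in for the paper's 3D-space criterion for discarding color edges that lie inside an object. The model path and both thresholds are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def true_depth_edges(color_hr, depth0, model_path="model.yml.gz",
                     edge_thresh=0.2, depth_grad_thresh=30.0):
    """Keep structured-forest color edges only where the depth also changes."""
    # Structured-forest edge probability map of the HR color image.
    sed = cv2.ximgproc.createStructuredEdgeDetection(model_path)
    rgb = cv2.cvtColor(color_hr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    prob = sed.detectEdges(rgb)
    thin = sed.edgesNms(prob, sed.computeOrientation(prob), 2, 0, 1, True)
    color_edges = thin > edge_thresh

    # Depth-gradient magnitude of the initial HR depth map: texture edges
    # inside an object show color contrast but (almost) no depth change.
    gx = cv2.Sobel(depth0.astype(np.float32), cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(depth0.astype(np.float32), cv2.CV_32F, 0, 1, ksize=3)
    depth_change = cv2.magnitude(gx, gy) > depth_grad_thresh

    # Accept a color edge as a depth edge only if some depth change occurs in
    # its neighbourhood (dilate to tolerate small misalignment).
    near_change = cv2.dilate(depth_change.astype(np.uint8), np.ones((5, 5), np.uint8)) > 0
    return color_edges & near_change
```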
Finally, the depth values on each side of each depth edge are refined so that the depth edges are aligned and the depth errors in the initial HR depth map are corrected. We detect the incorrect depth regions between the initial depth edges and the corresponding true depth edges, and then fill these regions until the depth edges are consistent with the corresponding color image. The incorrect regions of the initial HR depth map are refined by a joint bilateral filter in an outside-inward order that is regularized by the detected true depth edges, as sketched below.
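A rough sketch of this refinement step is shown below, assuming OpenCV with the ximgproc module. The function name, the band construction, and the layer-by-layer peeling are illustrative simplifications: the sketch reproduces only the outside-inward filling order, whereas the paper additionally constrains the filter support so that it never averages across the detected true edges.

```python
import cv2
import numpy as np

def refine_unreliable_band(depth0, color_hr, init_edges, true_edges,
                           band_radius=4, d=9, sigma_color=25.0, sigma_space=9.0):
    """Fill the band between wrong and true depth edges from the outside inward."""
    depth = depth0.astype(np.float32)
    joint = color_hr.astype(np.float32)

    # Unreliable region: pixels near either edge map, where the up-sampled
    # depth values cannot be trusted.
    k = np.ones((2 * band_radius + 1, 2 * band_radius + 1), np.uint8)
    both = ((init_edges > 0) | true_edges).astype(np.uint8)
    unknown = cv2.dilate(both, k) > 0

    # Peel the unreliable band layer by layer (outside-inward order); each newly
    # exposed ring is replaced by color-guided (joint bilateral) estimates.
    while unknown.any():
        inner = cv2.erode(unknown.astype(np.uint8), np.ones((3, 3), np.uint8)) > 0
        ring = unknown & ~inner
        if not ring.any():          # region has shrunk to a core; fill it all
            ring = unknown
        filtered = cv2.ximgproc.jointBilateralFilter(joint, depth, d, sigma_color, sigma_space)
        depth[ring] = filtered[ring]
        unknown &= ~ring
    return depth
```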
Result
We perform experiments on the NYU depth dataset, which offers real-world color-depth image pairs captured by a Kinect camera. To evaluate the performance of the proposed method, we compare our results with two categories of methods: 1) state-of-the-art single depth image super-resolution methods (ScSR, PB, and E.G.) and 2) state-of-the-art color-guided depth map super-resolution approaches (JBU, GIU, MRF, WMF, and JTU). Most of these methods are run with the same parameter settings as provided in the corresponding papers. We down-sample the original depth maps into LR ones and then perform SR. The proposed method is evaluated on the recovered HR depth maps and on the reconstructed point clouds. The recovered HR depth maps show that the proposed method generates more visually appealing results than the compared approaches: the boundaries in our results are generally sharper and smoother along the edge direction, whereas the compared methods suffer from blurring artifacts around the boundaries. To further demonstrate the effectiveness of the proposed approach, we provide the 3D point clouds constructed from the up-scaled depth maps of the different methods. The results indicate that the proposed method yields a relatively clean separation of foreground and background, while the competing results suffer from obvious flying pixels and aliased planes.
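Such point clouds can be obtained by back-projecting each depth pixel through the camera intrinsics. The minimal sketch below uses intrinsic values commonly assumed for Kinect-style data as placeholders for the actual calibration; the function name is illustrative.

```python
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth map (in metres) into an N x 3 array of 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx          # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]      # drop pixels with no valid depth
```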
Conclusion
We present a novel depth map SR method for Kinect depth. Experimental results demonstrate that the proposed method produces sharp and clear edges in the Kinect depth map, with the depth edges aligned to the texture edges. The proposed framework synthesizes an HR depth map from an LR depth map and the corresponding HR color image. It first estimates the initial HR depth map via a traditional up-sampling approach, then extracts the true edges of the RGB-D data and the false edges of the initial HR depth map to identify the incorrect regions between the two edge maps. The incorrect regions of the initial HR depth map are further refined by a joint bilateral filter in an outside-inward order so that the edges of the color image and the depth map become aligned. The key to our success is the RGB-D depth edge detection, which is inspired by structured forests-based edge detection. Moreover, unlike most depth enhancement methods that fill incorrect regions in raster-scan order, our method determines the filling order by considering the true edges. Thus, our HR depth map output exhibits better quality, with clear and aligned depth edges, than existing depth map SR methods. However, texture-based guidance may produce incorrect depth values on smooth object surfaces with rich color texture; suppressing such texture-copying artifacts is our next research goal.
Fossati A, Gall J, Grabner H, et al. Consumer Depth Cameras for Computer Vision: Research Topics and Applications[M]. London: Springer, 2013: 1161-1167. [DOI: 10.1007/978-1-4471-4640-7]
Kitsunezaki N, Adachi E, Masuda T, et al. KINECT applications for the physical rehabilitation[C]//2013 IEEE International Symposium on Medical Measurements and Applications. Gatineau, QC, Canada: IEEE, 2013: 294-299. [DOI: 10.1109/MeMeA.2013.6549755]
Chen G, Li J T, Wang B, et al. Reconstructing 3D human models with a kinect[J]. Computer Animation & Virtual Worlds, 2016, 27(1): 72-85. [DOI: 10.1002/cav.1632]
Pedraza-Hueso M, Martín-Calzón S, Díaz-Pernas F J, et al. Rehabilitation using kinect-based games and virtual reality[J]. Procedia Computer Science, 2015, 75: 161-168. [DOI: 10.1016/j.procs.2015.12.233]
Khoshelham K, Elberink S O. Accuracy and resolution of kinect depth data for indoor mapping applications[J]. Sensors, 2012, 12(2): 1437-1454.
Yang J C, Wright J, Huang T S, et al. Image super-resolution via sparse representation[J]. IEEE Transactions on Image Processing, 2010, 19(11): 2861-2873.
Aodha O M, Campbell N D F, Nair A, et al. Patch based synthesis for single depth image super-resolution[C]//Proceedings of the 12th European Conference on Computer Vision. Florence, Italy: Springer, 2012: 71-84. [DOI: 10.1007/978-3-642-33712-3_6]
Xie J, Feris R S, Sun M T. Edge guided single depth image super resolution[J]. IEEE Transactions on Image Processing, 2016, 25(1): 428-438. [DOI: 10.1109/TIP.2015.2501749]
Diebel J, Thrun S. An application of Markov random fields to range sensing[C]//Proceedings of the 18th International Conference on Neural Information Processing Systems. Vancouver, British Columbia, Canada: MIT Press, 2005: 291-298.
Lu J B, Min D B, Pahwa R S, et al. A revisit to MRF-based depth map super-resolution and enhancement[C]//Proceedings of 2011 IEEE International Conference on Acoustics, Speech and Signal Processing. Prague, Czech Republic: IEEE, 2011: 985-988. [DOI: 10.1109/ICASSP.2011.5946571]
Liu W, Jia S Y, Li P L, et al. An MRF-based depth upsampling: upsample the depth map with its own property[J]. IEEE Signal Processing Letters, 2015, 22(10): 1708-1712. [DOI: 10.1109/LSP.2015.2427376]
Zuo Y, Wu Q, Zhang J, et al. Minimum spanning forest with embedded edge inconsistency measurement for color-guided depth map upsampling[C]//Proceedings of 2017 IEEE International Conference on Multimedia and Expo. Hong Kong, China: IEEE, 2017: 211-216. [DOI: 10.1109/ICME.2017.8019366]
Zuo Y F, Wu Q, Zhang J, et al. Explicit edge inconsistency evaluation model for color-guided depth map enhancement[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2018, 28(2): 439-453. [DOI: 10.1109/TCSVT.2016.2609438]
Kopf J, Cohen M F, Lischinski D, et al. Joint bilateral upsampling[J]. ACM Transactions on Graphics, 2007, 26(3): #96. [DOI: 10.1145/1276377.1276497]
Chen J W, Adams A, Wadhwa N, et al. Bilateral guided upsampling[J]. ACM Transactions on Graphics, 2016, 35(6): #203. [DOI: 10.1145/2980179.2982423]
Deng H P, Wu J, Zhu L, et al. Texture edge-guided depth recovery for structured light-based depth sensor[J]. Multimedia Tools and Applications, 2017, 76(3): 4211-4226. [DOI: 10.1007/s11042-016-3340-3]
Cai J J, Chang L, Wang H B, et al. Boundary-preserving depth upsampling without texture copying artifacts and holes[C]//2017 IEEE International Symposium on Multimedia. Taichung, Taiwan, China: IEEE, 2017: 1-5. [DOI: 10.1109/ISM.2017.11]
Matsuo T, Fukushima N, Ishibashi Y. Weighted joint bilateral filter with slope depth compensation filter for depth map refinement[C]//Proceedings of 2013 International Conference on Computer Vision Theory and Applications. Barcelona, Spain: VISAPP, 2013: 300-309.
Song Y B, Gong L J. Analysis and improvement of joint bilateral upsampling for depth image super-resolution[C]//Proceedings of the 8th International Conference on Wireless Communications & Signal Processing. Hangzhou, China: IEEE, 2016: 1-5. [DOI: 10.1109/WCSP.2016.7752596]
Yuan L, Jin X, Li Y G, et al. Depth map super-resolution via low-resolution depth guided joint trilateral up-sampling[J]. Journal of Visual Communication and Image Representation, 2017, 46: 280-291. [DOI: 10.1016/j.jvcir.2017.04.012]
Lo K H, Wang Y C F, Hua K L. Edge-preserving depth map upsampling by joint trilateral filter[J]. IEEE Transactions on Cybernetics, 2018, 48(1): 371-384. [DOI: 10.1109/TCYB.2016.2637661]
He K M, Sun J, Tang X O. Guided image filtering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(6): 1397-1409. [DOI: 10.1109/TPAMI.2012.213]
Hua K L, Lo K H, Wang Y C F. Extended guided filtering for depth map upsampling[J]. IEEE Multimedia, 2016, 23(2): 72-83. [DOI: 10.1109/MMUL.2015.52]
Min D B, Lu J B, Do M N. Depth video enhancement based on weighted mode filtering[J]. IEEE Transactions on Image Processing, 2012, 21(3): 1176-1190. [DOI: 10.1109/TIP.2011.2163164]
Dollár P, Zitnick C L. Fast edge detection using structured forests[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(8): 1558-1570. [DOI: 10.1109/TPAMI.2014.2377715]
Zhao X. The research on Kinect depth image inpainting technique[D]. Dalian: Dalian University of Technology, 2013. http://cdmd.cnki.com.cn/Article/CDMD-10141-1013201414.htm