Objective Depth maps play an increasingly important role in many computer vision applications such as 3D reconstruction, augmented reality, and gesture recognition. A new generation of active 3D range sensors, such as the Microsoft Kinect camera, makes it possible to capture depth maps in real time at affordable cost. Unfortunately, compared with the natural images captured by the RGB sensor, the depth maps captured by the range sensor typically have low resolution (LR) and inaccurate edges due to intrinsic physical constraints. Since accurate high-resolution (HR) depth maps are required or preferable in many applications, effective depth map super-resolution (SR) techniques are desirable. Depth map SR can generally be addressed by two types of approaches, depending on the input data. For single depth map SR, the resolution of the input depth map is enhanced with information learned from a pre-collected training database. Depth map SR algorithms using RGB-D data, on the other hand, can be further classified into MRF-based and filtering-based approaches. MRF-based methods view depth map SR as an optimization problem. Filtering-based methods perform a weighted average of local depth map pixels for SR. These methods aim to obtain a smooth HR depth map for regions belonging to the same object. However, they have two main issues: 1) inaccurate edges in the depth map cannot be well refined; 2) the HR depth map still suffers from blurred edges. In this paper, a novel texture-edge-guided depth reconstruction approach is proposed to tackle these issues. We pay particular attention to depth edge refinement, which is usually ignored by existing methods.

Method In the first stage, an initial HR depth map is obtained by general up-sampling methods such as interpolation and filtering.
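The first stage can be illustrated with a toy sketch. This is not the paper's actual up-sampling routine (which may be bicubic interpolation or a filter-based method); it uses nearest-neighbor replication on a plain list-of-lists depth map, and the function name `upsample_nearest` is hypothetical:

```python
# Illustrative sketch only: build an initial HR depth map from an LR one
# by nearest-neighbor replication. In practice, bicubic interpolation or
# a guided/bilateral filter would typically be used instead.

def upsample_nearest(depth_lr, factor):
    """Up-sample a 2-D depth map (list of lists) by an integer factor."""
    hr = []
    for row in depth_lr:
        hr_row = []
        for d in row:
            hr_row.extend([d] * factor)   # replicate each pixel horizontally
        for _ in range(factor):           # replicate each row vertically
            hr.append(list(hr_row))
    return hr

lr = [[1.0, 2.0],
      [3.0, 4.0]]
hr = upsample_nearest(lr, 2)   # 4x4 map; each LR pixel becomes a 2x2 block
```

Such an initial estimate preserves depth values but produces the blocky, misaligned edges that the subsequent refinement stages are designed to correct.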
In addition, initial depth edges are extracted from the initial HR depth map; many edge detectors, such as Sobel and Canny, can be used for this step. Because of the misalignment between LR depth map edges and texture edges, and because the up-sampling operation introduces further edge errors, the edges extracted directly from the initial HR depth map are not the true edges. In the next step, texture edges are extracted from the color image. Traditional edge detection approaches do not consider visual saliency: texture edges and illusory contours are all taken as image edges. Moreover, many edges in the color image, such as edges inside an object, do not correspond to depth edges. Inspired by recent positive results in the vision field, we propose a depth map edge detection method based on structured forests. An edge map of the color image is first extracted using a recent structured learning approach. By incorporating the 3D spatial information provided by the initial HR depth map, texture edges lying inside objects are removed, yielding a clean and true depth edge map. Finally, the depth values on each side of each depth edge are refined to align the depth edges and correct the depth errors in the initial HR depth map. We detect the incorrect depth regions between the initial depth edges and the corresponding true depth edges, and then fill these regions until the depth edges are consistent with the corresponding color image. The incorrect regions of the initial HR depth map are refined by a joint bilateral filter in an outside-inward order regularized by the detected true depth edges.

Result We perform experiments on the NYU dataset, which offers real-world color-depth image pairs captured by a Kinect camera. To evaluate the performance of the proposed method, we compare our results with two categories of methods. 1) State-of-the-art single depth image super-resolution methods: ScSR, PB, and EG.
2) State-of-the-art color-guided depth map super-resolution approaches: JBU, GIU, MRF, WMF, and JTU. We implemented most of these methods using the parameter settings provided in the corresponding papers. We down-sample the original depth maps into LR ones and then perform SR. We evaluate the proposed method on both the recovered HR depth maps and the reconstructed point clouds. The recovered HR depth maps indicate that the proposed method generates more visually appealing results than the compared approaches: boundaries in our results are generally sharper and smoother along the edge direction, whereas the compared methods suffer from blur artifacts around boundaries. To further demonstrate the effectiveness of the proposed approach, we provide the 3D point clouds constructed from the up-scaled depth maps of the different methods. The results indicate that the proposed method yields relatively clear foreground and background, while the competing results suffer from obvious flying pixels and aliased planes.

Conclusion We presented a novel depth map SR method for Kinect depth. Experimental results demonstrate that the proposed method provides sharp and clear edges for the Kinect depth, with depth edges aligned to the texture edges. The proposed framework synthesizes an HR depth map given its LR depth map and the corresponding HR color image. Our method first estimates an initial HR depth map via traditional up-sampling approaches, then extracts the true edges of the RGB-D data and the fake edges of the initial HR depth map to identify the incorrect regions between the two sets of edges. The incorrect regions of the initial HR depth map are further refined by a joint bilateral filter in an outside-inward order so that the edges of the color image and the depth map are aligned. Key to our success is the RGB-D depth edge detection, inspired by structured-forest-based edge detection.
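The idea of removing texture edges that lie inside objects can be sketched as follows. The real system uses a structured-forest detector on the color image; here, a binary color edge map is assumed to be already given, and a simple depth-gradient test (the function name `filter_texture_edges` and the threshold value are illustrative assumptions) keeps only those color edges that coincide with an actual depth discontinuity:

```python
import math

# Illustrative sketch, not the structured-forest pipeline of the paper:
# a color edge pixel is kept as a *depth* edge only if the local depth
# gradient is large, so texture edges inside a flat object are discarded.

def filter_texture_edges(color_edges, depth, thresh=0.5):
    """color_edges: binary map (1 = edge); depth: initial HR depth map."""
    h, w = len(depth), len(depth[0])
    depth_edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not color_edges[y][x]:
                continue
            gx = depth[y][x + 1] - depth[y][x - 1]  # central differences
            gy = depth[y + 1][x] - depth[y - 1][x]
            if math.hypot(gx, gy) > thresh:         # true depth discontinuity
                depth_edges[y][x] = 1
    return depth_edges

# Flat region (depth 1.0) next to a step to depth 3.0: a texture edge at
# x=1 sits on the flat surface and is removed; the edge at x=3 straddles
# the depth step and survives.
depth = [[1.0, 1.0, 1.0, 3.0, 3.0] for _ in range(3)]
edges = [[0] * 5 for _ in range(3)]
edges[1][1] = 1   # texture edge inside the object
edges[1][3] = 1   # edge at the real depth discontinuity
true_edges = filter_texture_edges(edges, depth)
```

The real detector operates on learned structured labels rather than a hand-set gradient threshold, but the filtering principle, validating color edges against 3D information from the initial HR depth map, is the same.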
In addition, unlike most depth enhancement methods, which fill incorrect regions in raster-scan order, our method determines the filling order by taking the true edges into consideration. This is why, compared with existing depth map SR methods, our HR depth map output exhibits improved quality with clear and aligned depth edges. However, texture-based guidance may produce incorrect depth values when a smooth object surface carries rich color texture, and suppressing such texture-copying artifacts is our next research goal.
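As a rough illustration of the refinement step, the sketch below computes one joint-bilateral-filtered depth value using a grayscale guidance image: the depth of a pixel in an incorrect region is replaced by a neighborhood average weighted by spatial distance and color similarity, which pulls the refined depth edge toward the color edge. The function name `jbf_pixel` and the parameter values are assumptions, and the outside-inward traversal of the incorrect region is omitted:

```python
import math

# Minimal joint-bilateral-filter sketch (assumed parameters, not the
# paper's settings). Each neighbor's weight combines a spatial Gaussian
# with a range Gaussian on the *guidance* (color) image, so depth values
# are averaged only across pixels of similar color.

def jbf_pixel(depth, color, y, x, radius=1, sigma_s=1.0, sigma_r=0.1):
    num = den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < len(depth) and 0 <= nx < len(depth[0])):
                continue
            w_s = math.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
            diff = color[ny][nx] - color[y][x]       # grayscale guidance
            w_r = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
            num += w_s * w_r * depth[ny][nx]
            den += w_s * w_r
    return num / den

# The center pixel keeps the depth of its same-colored left neighbor;
# the differently colored right neighbor is effectively ignored.
depth = [[2.0, 2.0, 8.0]]
color = [[0.0, 0.0, 1.0]]
refined = jbf_pixel(depth, color, 0, 1)
```

In the full method, this filtering is applied only inside the detected incorrect regions, proceeding from their outer boundary inward so that already-correct depths regularized by the true depth edges propagate toward the edge.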