Published: 2018-05-16  DOI: 10.11834/jig.170375  2018 | Volume 23 | Number 5  GDC 2017 Special Issue

Received: 2017-07-11; revised: 2017-11-03. Supported by: National Natural Science Foundation of China (61350005, 51405129); Natural Science Research Foundation of Anhui Provincial Universities (20130917249). First author: Wang Yu (1991-), male, Ph.D. candidate in precision instruments and machinery, Hefei University of Technology; his research interests include geometric modeling and processing and 3D printing. E-mail: wangyu@mail.hfut.edu.cn. CLC number: TP301.6; document code: A; article ID: 1006-8961(2018)05-0740-08

# Enhancement and automatic extraction of bronze pattern features
Wang Yu, Chen Haimei, Li Weishi
School of Instrument Science and Opto-electronics Engineering, Hefei University of Technology, Hefei 230009, China
Supported by: National Natural Science Foundation of China (61350005, 51405129)

# Abstract

**Objective** Bronzes are among China's cultural treasures. However, most unearthed bronzes are broken and deformed, and restoration is needed to protect them. Digital restoration technologies for cultural relics have recently attracted considerable attention with the development of 3D laser scanning and the progress of digital geometry processing. During restoration, the patterns on adjacent pieces of a broken bronze must be aligned to ensure pattern continuity and guarantee high restoration quality. Consequently, extracting bronze patterns is a significant step in the restoration process.

**Method** Bronze patterns generally have apparent sharp edges that distinguish decorated regions from undecorated ones. We therefore propose and implement an algorithm that enhances and extracts the sharp features of bronzes with the aim of extracting pattern features. The algorithm requires no interactive parameter setting, and feature points are extracted automatically. First, a weighted projection distance is proposed to eliminate the adverse effect of mesh non-uniformity on feature extraction. The projection distance of a vertex is the absolute value of the dot product between the vertex normal and the vector from the vertex to the centroid of its one-ring neighborhood vertices. On a uniform triangular mesh, the projection distance of a vertex on a sharp edge is always larger than that of a vertex off sharp edges, so feature points are easy to distinguish; on the non-uniform meshes that are more common in practice, they are not. In the weighted projection distance, all edges in the one-ring neighborhood of a vertex are normalized before the projection distance is computed, which makes it better adapted to general triangular meshes than the traditional projection distance. Second, because feature points are not obvious in the reconstructed mesh of a real bronze owing to the digital sampling of the scanning process, reverse bilateral filtering is proposed and used to build an unsharp mask that enhances the weighted projection distance. The unsharp mask is a three-step image-enhancement process: the original image is smoothed; the smoothed image is subtracted from the original to obtain the mask; and the mask, scaled by a weight, is added back to the original image, enhancing its details. Reverse bilateral filtering smooths the weighted projection distances of the feature points to the maximum extent, the opposite of the feature-preserving behavior of standard bilateral filtering. Applying the three-step unsharp mask with this filter yields the enhanced weighted projection distance: large values become even larger and small values become even smaller. Finally, Otsu's method is applied to the histogram of the enhanced weighted projection distance to determine the optimal threshold automatically, and this threshold partitions the mesh vertices into feature and non-feature point sets.

**Result** We compare the extraction results of our algorithm, both before and after enhancement of the weighted projection distance, with those of Tran's algorithm on scanned bronze models. All experiments show that the proposed algorithm achieves better extraction results than the existing algorithms, and its identified feature points are more continuous. The time consumed on all three test models, whose vertex counts range from about 6 000 to 800 000, is less than 10 s. The proposed algorithm is thus effective, and its results benefit subsequent processing.

**Conclusion** The decoration features of bronzes can be extracted automatically and efficiently with the proposed algorithm.

# Key words

bronze restoration; feature extraction; feature enhancement; automatic segmentation

# 1.1 Weighted projection distance

The projection distance $d\left( {{v_i}} \right)$ of a vertex ${v_i}$ is the absolute value of the dot product between the vertex normal ${\mathit{\boldsymbol{n}}_{{v_i}}}$ and the vector from ${v_i}$ to the centroid ${\tilde v_i}$ of its one-ring neighborhood vertices

 $d\left( {{v_i}} \right) = \left| {\overrightarrow {{v_i}{{\tilde v}_i}} \cdot{\mathit{\boldsymbol{n}}_{{v_i}}}} \right|$ (1)

where the vertex normal is the average of the normals of the faces incident to ${v_i}$

 ${\mathit{\boldsymbol{n}}_{{v_i}}} = \frac{{\sum\limits_{j \in N\left( i \right)} {{\mathit{\boldsymbol{n}}_{{f_j}}}} }}{{\left| {N\left( i \right)} \right|}}$ (2)

To remove the bias introduced by non-uniform edge lengths, every edge in the one-ring neighborhood is first normalized to unit length. With ${\tilde v'_i}$ denoting the centroid of the normalized one-ring neighborhood, the weighted projection distance is

 $d'\left( {{v_i}} \right) = \left| {\overrightarrow {{v_i}{{\tilde v'}_i}} \cdot{\mathit{\boldsymbol{n}}_{{v_i}}}} \right|$ (3)
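The weighted projection distance of Eq. (3) can be sketched in a few lines of NumPy. This is a minimal illustration, assuming the mesh is given as a vertex array, precomputed per-vertex normals, and a one-ring adjacency dictionary; all names are illustrative, not the authors' implementation.

```python
import numpy as np

def weighted_projection_distance(vertices, normals, one_ring):
    """Weighted projection distance d'(v_i) of Eq. (3): every one-ring
    edge is normalized to unit length before taking the centroid, so a
    non-uniform triangulation does not bias the distance."""
    d = np.zeros(len(vertices))
    for i, ring in one_ring.items():
        vi = vertices[i]
        # Normalize each one-ring edge, then average the unit offsets.
        offsets = [(vertices[j] - vi) / np.linalg.norm(vertices[j] - vi)
                   for j in ring]
        centroid_dir = np.mean(offsets, axis=0)
        # |(v_i -> ~v'_i) . n_{v_i}| as in Eq. (3).
        d[i] = abs(np.dot(centroid_dir, normals[i]))
    return d
```

On a locally flat, symmetric neighborhood the normalized centroid coincides with the vertex and the distance is zero; on a crease it is strictly positive, which is what separates feature from non-feature vertices.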

# 1.2 Projection distance enhancement

1) For an original image $f\left( {x, y} \right)$, apply a smoothing filter to obtain the smoothed image $\bar f\left( {x, y} \right)$.

2) Subtract the smoothed image from the original; the difference is called the unsharp mask, i.e.

 ${g_{{\rm{mask}}}}\left( {x, y} \right) = f\left( {x, y} \right) - \mathit{\bar f}\left( {x, y} \right)$

3) Add the weighted unsharp mask to the original image to obtain

 $f'\left( {x, y} \right) = f\left( {x, y} \right) + k \times {g_{{\rm{mask}}}}\left( {x, y} \right)$
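The three steps above can be sketched on a 1D signal. This is a minimal example using a moving-average filter as a stand-in smoother; the function name and kernel width are illustrative assumptions, not part of the paper.

```python
import numpy as np

def unsharp_mask(f, k=1.0, width=3):
    # Step 1: smooth the original signal (moving average as a stand-in filter).
    kernel = np.ones(width) / width
    f_bar = np.convolve(f, kernel, mode='same')
    # Step 2: the unsharp mask is the original minus the smoothed signal.
    g_mask = f - f_bar
    # Step 3: add the scaled mask back; high-frequency detail is amplified.
    return f + k * g_mask
```

Applied to a step edge such as `[0, 0, 0, 1, 1, 1]`, the result overshoots on both sides of the step, which is exactly the contrast boost the enhancement step relies on.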

Applying the same three steps to the weighted projection distance, with the mask $g\left( {{v_i}} \right) = d'\left( {{v_i}} \right) - \bar d'\left( {{v_i}} \right)$, yields the enhanced weighted projection distance

 $d''\left( {{v_i}} \right) = d'\left( {{v_i}} \right) + k \times g\left( {{v_i}} \right)$ (4)

where $\bar d'\left( {{v_i}} \right)$ is the result of smoothing the weighted projection distances with the reverse bilateral filter

 $\begin{array}{*{20}{c}} {\bar d'\left( {{v_i}} \right) = \frac{{\sum\limits_{j \in N\left( i \right)} {{w_{\rm{c}}}\left( {{t_j}} \right){w_{\rm{s}}}\left( {{h_j}} \right)d'\left( {{v_j}} \right)} }}{{\sum\limits_{j \in N\left( i \right)} {{w_{\rm{c}}}\left( {{t_j}} \right){w_{\rm{s}}}\left( {{h_j}} \right)} }}}\\ {{h_j} = 1 - \left| {{\mathit{\boldsymbol{n}}_{{v_i}}}\cdot\frac{{\overrightarrow {{v_i}{v_j}} }}{{\left\| {\overrightarrow {{v_i}{v_j}} } \right\|}}} \right|}\\ {{t_j} = \mathop {{\rm{max}}}\limits_{k \in N\left( j \right)} \left\| {\overrightarrow {{v_i}{v_k}} } \right\| - \left\| {\overrightarrow {{v_i}{v_j}} } \right\|}\\ {{w_{\rm{c}}}\left( {{t_j}} \right) = {{\rm{e}}^{ - \frac{{t_j^2}}{{2\sigma _{\rm{c}}^2}}}}}\\ {{w_{\rm{s}}}\left( {{h_j}} \right) = {{\rm{e}}^{ - \frac{{h_j^2}}{{2\sigma _{\rm{s}}^2}}}}} \end{array}$ (5)
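Equations (4) and (5) together can be sketched as follows. This is an illustrative simplification, not the authors' code: the mesh representation, parameter defaults, and the use of the maximum edge length within the one-ring (in place of the exact neighborhood in the $t_j$ term) are all assumptions.

```python
import numpy as np

def reverse_bilateral_enhance(d, vertices, normals, one_ring,
                              k=1.0, sigma_c=0.5, sigma_s=0.5):
    """Smooth d' with a reverse bilateral filter in the spirit of Eq. (5),
    then enhance via the unsharp mask of Eq. (4):
    d'' = d' + k * (d' - smoothed)."""
    smoothed = d.copy()
    for i, ring in one_ring.items():
        vi, ni = vertices[i], normals[i]
        num = den = 0.0
        lengths = {j: np.linalg.norm(vertices[j] - vi) for j in ring}
        max_len = max(lengths.values())  # simplified stand-in for the t_j max
        for j in ring:
            e = (vertices[j] - vi) / lengths[j]
            h = 1.0 - abs(np.dot(ni, e))   # h_j: deviation from the tangent plane
            t = max_len - lengths[j]       # t_j: edge-length spread
            w = np.exp(-t * t / (2 * sigma_c ** 2)) * \
                np.exp(-h * h / (2 * sigma_s ** 2))
            num += w * d[j]
            den += w
        if den > 0:
            smoothed[i] = num / den
    return d + k * (d - smoothed)          # Eq. (4)
```

A vertex whose distance stands out from its neighborhood is pulled strongly toward the neighborhood mean by the filter, so the mask `d - smoothed` is large there and Eq. (4) amplifies it, while vertices in flat regions are left essentially unchanged.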

# 1.3 Threshold determination and feature point identification

The enhanced weighted projection distances are first normalized to the interval $[0, 1]$

 $f\left( {{v_i}} \right) = \frac{{d''\left( {{v_i}} \right) - \mathop {{\rm{min}}}\limits_{i = 0, \cdots, n} d''\left( {{v_i}} \right)}}{{\mathop {{\rm{max}}}\limits_{i = 0, \cdots, n} d''\left( {{v_i}} \right) - \mathop {{\rm{min}}}\limits_{i = 0, \cdots, n} d''\left( {{v_i}} \right)}}$

and each vertex is then labeled as a feature point (1) or a non-feature point (0) using the threshold $T$ determined automatically by Otsu's method on the histogram of the normalized values

 $f\left( {{v_i}} \right) = \left\{ {\begin{array}{*{20}{c}} 1&{f\left( {{v_i}} \right) \ge T}\\ 0&{f\left( {{v_i}} \right) < T} \end{array}} \right.$ (6)
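The normalization, Otsu thresholding, and classification step can be sketched as below. This is a minimal histogram-based implementation of Otsu's method; the bin count and function names are assumptions for illustration.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Normalize values to [0, 1], then pick the histogram threshold that
    maximizes the between-class variance (Otsu's method)."""
    f = (values - values.min()) / (values.max() - values.min())
    hist, edges = np.histogram(f, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                      # bin probabilities
    mids = (edges[:-1] + edges[1:]) / 2        # bin centers
    best_t, best_var = 0.0, -1.0
    for b in range(1, bins):
        w0, w1 = p[:b].sum(), p[b:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:b] * mids[:b]).sum() / w0    # class means
        mu1 = (p[b:] * mids[b:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var > best_var:
            best_var, best_t = var, edges[b]
    return f, best_t

def classify(values):
    """Label each vertex as feature (1) or non-feature (0) per Eq. (6)."""
    f, T = otsu_threshold(values)
    return (f >= T).astype(int)
```

Because the enhancement of section 1.2 pushes the distance distribution toward two well-separated modes, the between-class variance has a clear maximum and the threshold needs no manual tuning.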

# 2 Experimental results

Table 1 Timing of our algorithm

| Model | Vertices | Faces | Time/s |
| :-- | --: | --: | --: |
| Fandisk | 6 475 | 12 954 | 0.09 |
| Keding | 342 823 | 683 807 | 2.20 |
| Gu | 714 347 | 1 428 399 | 9.54 |

# References

• [1] Chen Y, Cheng Z Q, Li J, et al. Relief extraction and editing[J]. Computer-Aided Design, 2011, 43(12): 1674–1682. [DOI:10.1016/j.cad.2011.07.011]
• [2] Zatzarinni R, Tal A, Shamir A. Relief analysis and extraction[J]. ACM Transactions on Graphics, 2009, 28(5): #136. [DOI:10.1145/1618452.1618482]
• [3] Tran T T, Cao V T, Nguyen V T, et al. Automatic method for sharp feature extraction from 3D data of man-made objects[C]//Proceedings of 2014 International Conference on Computer Graphics Theory and Applications. Lisbon, Portugal: IEEE, 2014: 1-8.
• [4] Wang C C L. Bilateral recovering of sharp edges on feature-insensitive sampled meshes[J]. IEEE Transactions on Visualization and Computer Graphics, 2006, 12(4): 629–639. [DOI:10.1109/TVCG.2006.60]
• [5] Wang C C L. Incremental reconstruction of sharp edges on mesh surfaces[J]. Computer-Aided Design, 2006, 38(6): 689–702. [DOI:10.1016/j.cad.2006.02.009]
• [6] Attene M, Falcidieno B, Rossignac J, et al. Sharpen & Bend:recovering curved sharp edges in triangle meshes produced by feature-insensitive sampling[J]. IEEE Transactions on Visualization and Computer Graphics, 2005, 11(2): 181–192. [DOI:10.1109/TVCG.2005.34]
• [7] Kim H S, Choi H K, Lee K H. Feature detection of triangular meshes based on tensor voting theory[J]. Computer-Aided Design, 2009, 41(1): 47–58. [DOI:10.1016/j.cad.2008.12.003]
• [8] Hildebrandt K, Polthier K, Wardetzky M. Smooth feature lines on surface meshes[C]//Proceedings of the 3rd Eurographics Symposium on Geometry Processing. Switzerland: Eurographics Association, 2005: 85.
• [9] Ohtake Y, Belyaev A, Seidel H P. Ridge-valley lines on meshes via implicit surface fitting[J]. ACM Transactions on Graphics, 2004, 23(3): 609–612. [DOI:10.1145/1015706.1015768]
• [10] Vieira M, Shimada K. Surface mesh segmentation and smooth surface extraction through region growing[J]. Computer Aided Geometric Design, 2005, 22(8): 771–792. [DOI:10.1016/j.cagd.2005.03.006]
• [11] Zeng L, Liu Y J, Zhang D L. Feature-preserved contour editing for 3D printing[J]. Journal of Computer-Aided Design & Computer Graphics, 2015, 27(6): 974–983. [曾龙, 刘永进, 张东亮. 面向三维打印的特征驱动轮廓线编辑方法[J]. 计算机辅助设计与图形学学报, 2015, 27(6): 974–983. ]
• [12] Otsu N. A threshold selection method from gray-level histograms[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1979, 9(1): 62–66. [DOI:10.1109/TSMC.1979.4310076]
• [13] Jin S S, Lewis R R, West D. A comparison of algorithms for vertex normal computation[J]. The Visual Computer, 2005, 21(1-2): 71–82. [DOI:10.1007/s00371-004-0271-1]
• [14] Fleishman S, Drori I, Cohen-Or D. Bilateral mesh denoising[J]. ACM Transactions on Graphics, 2003, 22(3): 950–953. [DOI:10.1145/882262.882368]