Adaptive line drawing
2018, Vol. 23, No. 5, pp. 730-739
Received: 2017-07-11; Revised: 2017-10-12; Published in print: 2018-05-16
DOI: 10.11834/jig.170376
Objective
Generating line drawings from reference images is one of the most common applications of non-photorealistic rendering. The main goal, and challenge, of such work is to imitate an artist's style as closely as possible and to produce line drawings with well-balanced density and a sense of layering. This paper proposes an adaptive line-drawing algorithm.
Method
First, the scene image is segmented into several regions; for each region we compute the intensity variance and, for each pixel, the minimum distance to the region boundary, and we define the ratio of the region's variance to its area as the region's complexity. Then, an edge tangent flow field that reflects the salient visual features of the image is computed. Finally, a flow-based anisotropic difference-of-Gaussians (DoG) filter generates the line drawing. When constructing the edge tangent flow, the tangent vector at each position is a weighted average of the tangent vectors in its neighborhood; we add a new weight term so that a neighboring position receives a larger weight if it belongs to the same region class as the reference position. In the adaptive flow-based DoG filtering, the scale parameter of the DoG depends on the region complexity and on the distance to the region boundary: the richer the detail and the closer to the boundary, the smaller the scale parameter, which yields thinner edges and prevents adjacent fine lines from being merged into thick ones. The DoG response is then Gaussian-filtered along the flow lines; in detail-rich regions with many edges the scale parameter is small, so only short edges are linked, which reduces the chance of spurious edges.
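In symbols, consistent with the description above (the concrete form of the mapping $f$ is given in the full paper, not in this abstract):

$$
C_i = \frac{\operatorname{Var}\!\left(I(\mathbf{x}) \mid \mathbf{x} \in R_i\right)}{\lvert R_i \rvert},
\qquad
\sigma(\mathbf{x}) = f\!\left(C_{r(\mathbf{x})},\, d(\mathbf{x})\right),
$$

where $r(\mathbf{x})$ is the region containing pixel $\mathbf{x}$, $d(\mathbf{x})$ is its distance to the region boundary, and $f$ decreases as the complexity grows and as the boundary distance shrinks.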
Result
The proposed algorithm automatically generates line drawings in real time for representative images of organisms, forests, buildings, mountains, and rivers. Experimental results show that the lines produced by the algorithm vary in thickness and shade with the complexity of the local scene, giving the drawings a sense of layering. The algorithm therefore produces line drawings with distinct visual features and strong stylization, and it can handle images of various complex scenes.
Conclusion
The proposed line-drawing algorithm with adaptive parameters outperforms fixed-parameter algorithms in both parameter tuning and output quality, and it achieves good results on images of all kinds of everyday scenes.
Objective
Generating line drawings based on reference images is the most common application of non-photorealistic rendering, which is widely used in creative arts, scientific illustration, animation, video games, and print advertising. When creating a line drawing, artists outline the contours of objects, emphasizing the main structures with long, thick lines and rendering simple details with short, thin lines, while using barely any ink for visually unimportant regions. A good line drawing successfully balances density and thickness, thereby providing a sense of layering.
Method
A line-drawing algorithm typically analyzes the features and visual importance of an input image and then detects edges to generate contour lines and form a line drawing with a distinctive flavor. Owing to its simplicity, the difference of Gaussians (DoG) is widely used for edge detection as an approximation of the Marr operator. A flow-based anisotropic filtering framework improves the continuity of the detected edges: it first constructs an edge tangent flow field and then applies a flow-based difference of Gaussians (FDoG) guided by that field; finally, a hyperbolic tangent function softens the filter response and links the detected edge points into a line drawing. Different scale parameters of the DoG detect edges of different scales in the image. A DoG with a small scale parameter finds thin edges but is likely to mistake noise for edges; by contrast, a DoG with a large scale parameter finds thick edges and suppresses some noise but may treat neighboring fine edges as noise or merge them. Selecting appropriate DoG parameters is therefore important. For images that contain edges at multiple scales, an FDoG with fixed parameters cannot detect all edges well, which leads to unsatisfactory line drawings. This study presents an adaptive non-photorealistic rendering technique for stylizing a photograph in the line drawing style.
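For reference, the plain (isotropic) DoG response on which the flow-based and adaptive variants build can be sketched as follows; this is a minimal illustration, and the parameter names and defaults (sigma, k, tau) follow common FDoG/XDoG conventions rather than the paper's values.

```python
from scipy.ndimage import gaussian_filter

def dog_response(gray, sigma=1.0, k=1.6, tau=0.99):
    """Isotropic difference-of-Gaussians response of a grayscale image.

    gray  : 2-D float array with values in [0, 1]
    sigma : scale of the narrow Gaussian; k * sigma is the wide one
    tau   : weight of the wide Gaussian (values just below 1 sharpen edges)
    """
    narrow = gaussian_filter(gray, sigma)
    wide = gaussian_filter(gray, k * sigma)
    return narrow - tau * wide
```

A small sigma responds to fine edges (and noise), while a large sigma responds only to coarse structure, which is exactly the trade-off the adaptive scheme described below addresses.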
Generating the final line drawing involves three main steps. 1) We segment the reference image into regions; in each region, we calculate the intensity variance and every pixel's minimum distance to the region boundary, and we define the ratio of the intensity variance to the region area as the region's complexity. 2) We use these preprocessed results to construct a smooth, direction-enhanced edge flow field that indicates the visual significance of each region. 3) We use the flow field to guide the line drawing process with an anisotropic Gaussian filter whose parameters are determined adaptively; finally, as in the original framework, a hyperbolic tangent function softens the filter response and links the detected edge points into the line drawing. Improvements are made in each of these three steps.
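A concrete sketch of the preprocessing in step 1 is given below, assuming a label map from any off-the-shelf segmentation; the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def region_complexity_and_distance(gray, labels):
    """Per-pixel region complexity (intensity variance / region area) and
    distance to the nearest region boundary.

    gray   : 2-D float array of image intensities
    labels : 2-D int array from any segmentation, one label per region
    """
    complexity = np.zeros_like(gray, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        complexity[mask] = gray[mask].var() / mask.sum()

    # A pixel is on a boundary if a 4-neighbor carries a different label.
    boundary = np.zeros(labels.shape, dtype=bool)
    diff_v = labels[:-1, :] != labels[1:, :]
    diff_h = labels[:, :-1] != labels[:, 1:]
    boundary[:-1, :] |= diff_v
    boundary[1:, :] |= diff_v
    boundary[:, :-1] |= diff_h
    boundary[:, 1:] |= diff_h

    # distance_transform_edt gives the distance to the nearest zero entry,
    # so invert the mask to measure distance to the nearest boundary pixel.
    dist_to_boundary = distance_transform_edt(~boundary)
    return complexity, dist_to_boundary
```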
When constructing the edge flow field, the tangent vector of each pixel is the weighted mean of the tangent vectors of its neighbors. Tangent vectors from the same segmented region tend to have similar directions, whereas tangent vectors from different regions may behave quite differently, so we introduce a new weight term to balance their contributions: if a neighboring pixel and the reference pixel lie in the same segmented region, the neighbor's weight is increased.
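A per-neighbor weight of this kind might be assembled as in the sketch below; the spatial, magnitude, and direction terms follow a standard (slightly simplified) edge-tangent-flow construction, and the two region-weight constants are illustrative assumptions.

```python
import numpy as np

def etf_weight(x, y, tangent, grad_mag, labels, radius=5.0,
               same_region_weight=1.0, cross_region_weight=0.5):
    """Weight of neighbor y when re-estimating the tangent vector at x.

    x, y     : (row, col) pixel coordinates
    tangent  : H x W x 2 array of unit tangent vectors
    grad_mag : H x W array of normalized gradient magnitudes
    labels   : H x W array of region labels from the segmentation
    """
    w_spatial = 1.0 if np.hypot(x[0] - y[0], x[1] - y[1]) < radius else 0.0
    w_magnitude = 0.5 * (1.0 + np.tanh(grad_mag[y] - grad_mag[x]))
    w_direction = abs(float(np.dot(tangent[x], tangent[y])))
    # Region term: neighbors from the same segmented region count more.
    w_region = same_region_weight if labels[x] == labels[y] else cross_region_weight
    return w_spatial * w_magnitude * w_direction * w_region
```

The only change relative to the usual construction is the final region factor.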
During the DoG filtering process, the scale parameter of each pixel is based on the complexity of its region and on the precomputed minimum distance between the pixel and the region boundary. If a pixel lies in a detailed area or near the region boundary, a small scale parameter is used, so weak and thin lines are highlighted; this strategy also prevents adjacent thin curves from being merged into thick ones.
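The abstract does not spell out the exact mapping from complexity and boundary distance to the DoG scale; the sketch below shows one plausible monotone mapping with the behavior described above, and all constants are chosen purely for illustration.

```python
import numpy as np

def adaptive_sigma(complexity, dist_to_boundary,
                   sigma_min=0.6, sigma_max=2.0,
                   complexity_scale=50.0, distance_scale=10.0):
    """Per-pixel DoG scale: detailed regions and pixels near a region
    boundary get a small sigma (thin, well-separated lines); smooth regions
    far from boundaries get a large sigma (thick, salient lines)."""
    detail = 1.0 - np.exp(-complexity_scale * complexity)     # -> 1 for complex regions
    closeness = np.exp(-dist_to_boundary / distance_scale)    # -> 1 at the boundary
    shrink = np.maximum(detail, closeness)                    # either cue shrinks sigma
    return sigma_max - (sigma_max - sigma_min) * shrink
```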
During the Gaussian filtering of the DoG response along the flow, if a pixel lies in a detailed area or near the region boundary, the scale parameter is likewise small, so only short, thin lines are linked in complex areas; this reduces the chance of incorrectly highlighting long, thick lines.
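The flow-aligned smoothing and the final tanh soft threshold might look like the following sketch; the streamline marching, the fixed step count, and the parameter phi are simplifying assumptions rather than the paper's exact scheme, and the sign convention assumes edges yield negative DoG responses, as in common FDoG formulations.

```python
import numpy as np

def smooth_along_flow(dog, flow, sigma_m=3.0, steps=8):
    """1-D Gaussian smoothing of the DoG response along the flow field.

    dog  : H x W DoG response
    flow : H x W x 2 unit tangent vectors (row and column components)
    With a locally small sigma_m the accumulation stays short in detailed
    regions, so separate short edges are not fused into long ones.
    """
    dog = np.asarray(dog, dtype=float)
    h, w = dog.shape
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    acc = dog.copy()                      # s = 0 sample, Gaussian weight 1
    norm = np.ones_like(dog)
    for direction in (1.0, -1.0):         # march both ways along the streamline
        r, c = rows.copy(), cols.copy()
        for s in range(1, steps + 1):
            ri = np.clip(np.rint(r).astype(int), 0, h - 1)
            ci = np.clip(np.rint(c).astype(int), 0, w - 1)
            r = r + direction * flow[ri, ci, 0]
            c = c + direction * flow[ri, ci, 1]
            ri = np.clip(np.rint(r).astype(int), 0, h - 1)
            ci = np.clip(np.rint(c).astype(int), 0, w - 1)
            weight = np.exp(-0.5 * (s / sigma_m) ** 2)
            acc += weight * dog[ri, ci]
            norm += weight
    return acc / norm

def to_line_drawing(response, phi=2.0):
    """Soft threshold: white where the response is non-negative, dark lines
    (via tanh) where it is negative."""
    return np.where(response >= 0.0, 1.0, 1.0 + np.tanh(phi * response))
```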
Result
Experimental results show that the thickness and shade of the lines produced by our approach change with the complexity of the image. Therefore, our approach can produce attractive and impressive line illustrations from a wide variety of photographs.
Conclusion
Compared with fixed-parameter line drawing algorithms, our algorithm is more adaptive and produces better results.