Image Style Transfer via Curved Stroke Rendering

Rao Shijin, Qian Wenhua, Zhang Jiebao (School of Information Science, Yunnan University)

Abstract: Objective Existing style transfer algorithms such as GANILLA, Paint Transformer, and StrokeNet suffer from lost brush strokes in the generated images, low stroke flexibility, and long training times. To address these problems, an image style transfer algorithm based on curved stroke rendering is proposed. Method First, according to a customized number of superpixels, the image foreground is segmented into small subregions to preserve more image detail, while the background is segmented into larger subregions. Control points are then selected in each segmented subregion, and the Bezier equation is applied to the control points to generate multi-scale strokes. Finally, a style transfer algorithm transfers the style of the style image onto the rendered image. Result Compared with AST (Arbitrary Style Transfer) and the method of Kotovenko et al., the proposed method improves the deception rate by 0.13 and 0.04 and the human deception rate by 0.13 and 0.01, respectively. Conclusion Compared with stroke-based rendering algorithms such as Paint Transformer, the proposed method generates fine-grained strokes in texture-rich foreground regions and coarse-grained strokes in background regions, preserving more image detail. Compared with style transfer algorithms such as GANILLA and AdaIN, the proposed method derives stroke parameters from points selected by an image segmentation algorithm without any training, which not only improves efficiency but also produces multi-style images that retain the stroke traces of stylized paintings with vivid colors. Keywords: non-photorealistic rendering; style transfer; stroke-based rendering; Bezier curve; superpixel segmentation
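Since four control points are selected per subregion (see the Method description below), the Bezier equation referred to here is presumably the standard cubic form; this is an assumption, as the abstract does not state the degree explicitly:

\[ B(t) = (1 - t)^3 P_0 + 3(1 - t)^2 t\, P_1 + 3(1 - t) t^2\, P_2 + t^3 P_3, \quad t \in [0, 1] \]

where P_0, ..., P_3 are the control points chosen in a subregion and t sweeps the stroke from its start to its end.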
Image Style Transfer via Curved Stroke Rendering

Rao Shijin, Qian Wenhua, Zhang Jiebao (School of Information Science, Yunnan University)

Abstract: Objective The goal of an image style transfer algorithm is to render the content of one image with the style of another. Image style transfer methods can be divided into traditional style transfer methods and neural style transfer methods. Traditional style transfer methods can be broadly classified into two categories: SBR (Stroke Based Rendering) and IA (Image Analogy). SBR methods make the computer simulate human painting with strokes of different sizes. The main idea of IA is: given a pair of images A (an unprocessed source image) and A' (its processed counterpart), together with an unprocessed image B, the processed image B' is obtained by applying to B the same transformation that maps A to A'. Neural style transfer methods fall into two categories: slow image reconstruction based on online image optimization and fast image reconstruction based on offline model optimization. The first type optimizes the image in pixel space, minimizing the objective function by gradient descent: starting from a random noise image, the pixel values are iteratively updated until a target result image is found. Since each reconstruction requires many iterative optimizations in pixel space, this approach is time-consuming and demands substantial computational resources. To speed up the process, feed-forward neural networks are trained in advance on a large amount of data in a data-driven manner; the goal of training is that, given an input, the trained network needs only one forward pass to output a style-transferred image, which constitutes the second type of method. In recent years, seminal work on style transfer has focused on building neural networks that effectively extract the content features and style features of an image and then combine the two kinds of features to generate more realistic images. However, building a model for each style is labor-intensive, time-consuming, and inefficient. NST (Neural Style Transfer) algorithms, for example, transfer the texture of a style image onto a content image by optimizing a noise image at the pixel level step by step, whereas a hand-painted painting is made stroke by stroke, using brushes of different sizes from coarse to fine. Compared with human-created paintings, NST algorithms merely generate photo-realistic imagery and ignore paint strokes or stipples. Since existing style transfer algorithms such as GANILLA and Paint Transformer suffer from loss of brush strokes and poor stroke flexibility, we propose a novel style transfer algorithm that quickly recreates the content of one image with curved strokes and then transfers another style onto the re-rendered image. The images generated by our method resemble human-created paintings. Method First, we segment the content image into subregions of different scales via a content mask, according to a customized number of superpixels. Since the background attracts less attention, we segment it into larger subregions; conversely, to preserve more detail, we segment the foreground into smaller subregions. The number of foreground segments is generally twice that of the background. For each subregion, we choose four control points within its convex hull, and the Bezier equation is used to generate thick strokes in the background and thin strokes in the foreground.
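As a minimal sketch of this stroke-generation step (not the authors' released code): the snippet below assumes SLIC superpixels for the separate foreground/background segmentation, scipy's ConvexHull for each subregion, and a simple spread of four hull vertices as control points; the function names and the n_fg/n_bg defaults are illustrative only.

import numpy as np
from scipy.spatial import ConvexHull            # convex hull of each subregion
from skimage.segmentation import slic           # superpixel segmentation (mask support needs skimage >= 0.17)

def cubic_bezier(p0, p1, p2, p3, n_samples=32):
    """Sample a cubic Bezier curve defined by four (x, y) control points."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def curved_strokes(image, fg_mask, n_fg=400, n_bg=200):
    """Segment foreground and background separately (about twice as many
    superpixels in the foreground) and return one sampled stroke per subregion."""
    strokes = []
    for region_mask, n_segments in ((fg_mask, n_fg), (~fg_mask, n_bg)):
        labels = slic(image, n_segments=n_segments, mask=region_mask, start_label=1)
        for lab in range(1, int(labels.max()) + 1):
            ys, xs = np.nonzero(labels == lab)
            if len(xs) < 10:                     # skip degenerate regions
                continue
            pts = np.stack([xs, ys], axis=1).astype(float)
            hull = pts[ConvexHull(pts).vertices]
            ctrl = hull[np.linspace(0, len(hull) - 1, 4).astype(int)]  # four control points on the hull
            strokes.append(cubic_bezier(*ctrl))
    return strokes

Rendering would then draw each sampled curve with a width chosen from the region size, thicker in the background and thinner in the foreground.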
Then the stroke-rendered image is stylized with the style image by a style transfer algorithm, generating a stylized result that retains the stroke traces (see the sketch after the abstract). Result Compared with AST (Arbitrary Style Transfer) and Kotovenko's method, the deception rate is increased by 0.13 and 0.04, and the human deception rate by 0.13 and 0.01, respectively. Conclusion Compared with Paint Transformer and other stroke-based rendering algorithms, our method generates thin strokes in the texture-rich foreground region and thick strokes in the background, preserving more image details. Compared with WCT, AdaIN, and other style transfer algorithms, we use an image segmentation algorithm to generate stroke parameters without any training, which improves efficiency and produces multi-style images that preserve the stroke drawing traces of stylized paintings with more vivid colors. Key words: non-photorealistic rendering; style transfer; stroke-based rendering; Bezier curve; superpixel segmentation
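The abstract does not fix a particular transfer network for the final step, so the following is only a hedged illustration of one operation it compares against (AdaIN): the channel-wise statistics of the stroke-rendered image's encoder features are matched to those of the style image before decoding. This is the generic AdaIN operation, not the authors' implementation.

import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Adaptive instance normalization on (N, C, H, W) feature maps:
    shift/scale the content statistics to match the style statistics."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

In a full pipeline, content_feat would be encoder (e.g., VGG-style) features of the stroke-rendered image and the output would be decoded back to pixels, so the final result keeps both the transferred style and the curved-stroke traces.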
