Embroidery simulation based on multi-scale two-channel convolution neural network
2020, Vol. 25, No. 2, pp. 343-353
Received: 2019-06-21; Revised: 2019-09-09; Accepted: 2019-09-16; Published in print: 2020-02-16
DOI: 10.11834/jig.190299
Objective
To address the problems of weak needle-and-thread texture and single stitch direction in existing embroidery simulation algorithms, this paper proposes an embroidery simulation algorithm based on a multi-scale two-channel convolutional neural network.
Method
1) A multi-scale two-channel network is built. A piece of embroidery artwork is selected as the style image, and the MSCOCO (Microsoft Common Objects in Context) dataset is used as the training set; these are fed into the network to obtain the VGG (Visual Geometry Group) network loss and the Laplacian loss. 2) The total loss is passed back to the network, the network parameters are updated by gradient descent, and the updates are repeated until the specified number of training iterations completes the training. 3) A target image is selected as the content image for embroidery simulation and fed into the trained network to obtain a result image with embroidery artistic style. 4) A mask image is used to fuse the result image with an embroidery-cloth image, which completes the embroidery simulation of the target image.
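Step 4) above — compositing the stylized result over the embroidery-cloth background with a mask — can be sketched as a simple alpha blend. The following is a minimal NumPy illustration, not the authors' implementation; the array shapes and the blend formula are assumptions:

```python
import numpy as np

def fuse_with_mask(stylized, cloth, mask):
    """Keep the stylized foreground where mask == 1 and show the
    embroidery-cloth background where mask == 0.

    stylized, cloth: float arrays of shape (H, W, 3)
    mask:            float array of shape (H, W), values in [0, 1]
    """
    m = mask[..., None]               # broadcast the mask over RGB channels
    return m * stylized + (1.0 - m) * cloth

# Toy example: a white stylized image over a black cloth background
stylized = np.ones((2, 2, 3))
cloth = np.zeros((2, 2, 3))
mask = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
fused = fuse_with_mask(stylized, cloth, mask)
print(fused[0, 0])  # foreground pixel -> [1. 1. 1.]
print(fused[0, 1])  # background pixel -> [0. 0. 0.]
```

A soft (fractional) mask would blend the two layers smoothly at the foreground boundary instead of producing a hard edge.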
Result
The proposed algorithm produces a pronounced needle-and-thread texture and multi-directional stitch trajectories, enhancing the expressiveness of simulated embroidery artwork.
Conclusion
The input image is forward-propagated through the multi-scale two-channel convolutional neural network, and VGG19, VGG16, and a Laplacian module are used as the loss networks for embroidery simulation. Experimental results show that, compared with existing convolutional-neural-network style simulation algorithms, the proposed network can learn the needle-and-thread features of embroidery style images, and the resulting images are close to real embroidery artwork.
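The gradient-descent update in step 2) — minimizing the sum of the VGG loss and the Laplacian loss — can be illustrated with a toy one-parameter example. The two quadratic terms below are hypothetical stand-ins for the real loss networks; in the actual algorithm the gradient is obtained by backpropagating the total loss through the generating network:

```python
def total_loss(w, lam=0.5):
    """Toy combined objective: a stand-in 'VGG' term plus a weighted
    stand-in 'Laplacian' term (both purely illustrative quadratics)."""
    return (w - 3.0) ** 2 + lam * (w - 1.0) ** 2

def grad(w, lam=0.5):
    """Analytic gradient of total_loss with respect to the parameter w."""
    return 2.0 * (w - 3.0) + lam * 2.0 * (w - 1.0)

w, lr = 0.0, 0.1
history = [total_loss(w)]
for _ in range(200):
    w -= lr * grad(w)          # gradient-descent parameter update
    history.append(total_loss(w))

print(history[-1] < history[0])  # True: the combined loss decreased
```

With this learning rate the iteration converges to the minimizer of the combined objective (w = 7/3 here); the network's weights play the role of `w` in the real training loop.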
Objective
Embroidery, a unique handicraft technology in China, has high artistic and commodity value. Producing realistic embroidery artwork requires considerable time and strict manual skill. With the development of modern science and technology, using computer algorithms to simulate the generation of embroidery-style images is crucial for the protection and inheritance of embroidery culture. Traditional embroidery simulation algorithms have limitations and require the designer to understand embroidery stitching. A convolutional neural network can learn the characteristics of embroidery artistic style and simulate that style on different images. Therefore, to address the problem that existing embroidery style simulation algorithms produce nonevident needle characteristics and a single needle direction, this paper proposes an embroidery simulation algorithm based on a multi-scale two-channel convolution neural network.
Method
A multi-scale two-channel convolutional neural network, the VGG19 network, the VGG16 network, and a Laplacian module are used to learn and extract the texture features of embroidery-style images, and the learned texture features are reproduced on content images so that each content image acquires the characteristics of embroidery art. First, the overall structure of the convolutional neural network is determined with reference to the deep convolutional generative adversarial network, and a multi-scale two-channel convolutional neural network is constructed as the generating network. The network consists of multi-scale input, dual-channel convolution blocks that propagate simultaneously in the RGB and gray channels, and integrated convolution blocks. The VGG19 network, VGG16 network, and Laplacian module serve as the loss networks. A real image of an embroidery work is selected as the style image, and the MSCOCO image dataset is used as the training data. The training data are fed into the network, and the VGG network loss and Laplacian loss values are computed by the loss networks. The total loss is passed back to the multi-scale two-channel convolutional neural network, and gradient descent is used to update its parameters; the updates are repeated until the specified number of training iterations completes the training. Next, a target image is selected as the content image and fed into the trained network, so that the content image becomes an image with embroidery artistic characteristics; at this point, the embroidery style simulation of the content image is preliminarily complete. Finally, the mask image of the content image is used to fuse the generated image with the background image: the part of the content image to be simulated is retained as the foreground, whereas the rest is displayed as the background, completing the embroidery style simulation of the content image.
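The Laplacian loss mentioned above can be sketched as the mean squared difference between the Laplacian responses of the generated and content images. Below is a minimal NumPy version using the standard 4-neighbour kernel; the authors' exact formulation may differ:

```python
import numpy as np

LAPLACIAN_KERNEL = np.array([[0.,  1., 0.],
                             [1., -4., 1.],
                             [0.,  1., 0.]])

def laplacian(img):
    """4-neighbour Laplacian of a 2-D grayscale image (valid region only)."""
    h, w = img.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * LAPLACIAN_KERNEL)
    return out

def laplacian_loss(generated, content):
    """Mean squared difference between the two images' Laplacian maps."""
    diff = laplacian(generated) - laplacian(content)
    return float(np.mean(diff ** 2))

content = np.arange(25, dtype=float).reshape(5, 5)
print(laplacian_loss(content, content))        # 0.0 for identical images
print(laplacian_loss(content + 1.0, content))  # also 0.0: constant shifts
                                               # do not change the Laplacian
```

Because the Laplacian of a constant offset is zero, this loss penalizes differences in edges and fine detail rather than in absolute brightness, which is why it helps preserve the content image's structure during stylization.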
Result
2
The embroidery simulation result image obtained by the algorithm in this paper is compared with the embroidery simulation result image obtained by some existing convolutional neural networks. Result images obtained by the existing convolutional neural networks do not show fine needle-line characteristics and the style is further inclined to artistic painting style. Even if the obtained embroidery simulation result images show clear embroidery texture
but the needle-line direction is evidently single and the gap between the textures is large
the comparison with the real embroidery texture is insufficiently natural. The simulated image of embroidery artistic style obtained by the multi-scale and dual-channel convolution neural network presented in this paper has evident needle-line characteristics and multi-direction needle-line trajectory. This method solves the existing problems in generating the effect image of embroidery artistic style by using the existing simulation algorithm and efficiently displays the artistic characteristics of embroidery.
Conclusion
In this paper, the target image is forward-propagated through a multi-scale two-channel convolutional neural network, and the VGG19 network, VGG16 network, and Laplacian module are used as the loss networks to simulate the embroidery art style. Experimental results show that, compared with existing convolutional neural networks for style simulation, the proposed network can learn the detailed stitching features in embroidery-style art images; the final result images display a clear sense of stitching and multi-directional stitches, and their overall artistic style is close to real embroidery art. In future work, we will focus on the connection and transition of embroidery texture between different areas of the generated images, to bring the final simulated images closer to actual embroidery artwork.
Chen D D, Yuan L, Liao J, Yu N H and Hua G. 2017. StyleBank: an explicit representation for neural image style transfer//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, Hawaii, USA: IEEE: 2770-2779 [DOI: 10.1109/CVPR.2017.296]
Chen S G, Sun Z X, Xiang J H and Zhang Y. 2011. Research on the technology of computer aided irregular needling embroidery. Chinese Journal of Computers, 34(3): 526-538 [DOI: 10.3724/SP.J.1016.2011.00526]
Chen T Q and Schmidt M. 2016. Fast patch-based style transfer of arbitrary style [EB/OL]. [2019-06-04]. https://arxiv.org/pdf/1612.04337.pdf
Dumoulin V, Shlens J and Kudlur M. 2016. A learned representation for artistic style [EB/OL]. [2019-06-04]. https://arxiv.org/pdf/1610.07629.pdf
Gatys L A, Ecker A S and Bethge M. 2015. A neural algorithm of artistic style [EB/OL]. [2019-06-04]. https://arxiv.org/pdf/1508.06576.pdf
Gatys L A, Ecker A S, Bethge M, Hertzmann A and Shechtman E. 2017. Controlling perceptual factors in neural style transfer//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, Hawaii, USA: IEEE: 3985-3993 [DOI: 10.1109/CVPR.2017.397]
Huang X and Belongie S. 2017. Arbitrary style transfer in real-time with adaptive instance normalization//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 1510-1519 [DOI: 10.1109/ICCV.2017.167]
Johnson J, Alahi A and Li F F. 2016. Perceptual losses for real-time style transfer and super-resolution//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: ECCV: 694-711 [DOI: 10.1007/978-3-319-46475-6_43]
Li S H, Xu X X, Nie L Q and Chua T S. 2017a. Laplacian-steered neural style transfer//Proceedings of the 25th ACM International Conference on Multimedia. California, USA: ACM: 1716-1724 [DOI: 10.1145/3123266.3123425]
Li Y J, Fang C, Yang J M, Wang Z W, Lu X and Yang M H. 2017b. Universal style transfer via feature transforms//Proceedings of the 31st Conference on Neural Information Processing Systems. Long Beach, CA, USA: NIPS: 385-395
Long J, Shelhamer E and Darrell T. 2015. Fully convolutional networks for semantic segmentation//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 3431-3440 [DOI: 10.1109/CVPR.2015.7298965]
Lu S P and Zhang S H. 2010. Visual importance based painterly rendering for images. Journal of Computer-Aided Design and Computer Graphics, 22(7): 1120-1125
Luan F J, Paris S, Shechtman E and Bala K. 2017. Deep photo style transfer//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, Hawaii, USA: IEEE: 6997-7005 [DOI: 10.1109/CVPR.2017.740]
Qian W H, Xu D, Yue K, Guan Z and Pu Y Y. 2013. Rendering pyrography style painting based on deviation mapping. Journal of Image and Graphics, 18(7): 836-843 [DOI: 10.11834/jig.20130713]
Radford A, Metz L and Chintala S. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks [EB/OL]. [2019-06-04]. https://arxiv.org/pdf/1511.06434.pdf
Simonyan K and Zisserman A. 2014. Very deep convolutional networks for large-scale image recognition [EB/OL]. [2019-06-04]. https://arxiv.org/pdf/1409.1556.pdf
Song X D, Luo Y P, Muto Y and Komiya R. 2001. Modeling and presentation for embroidery simulation. Journal of Computer-Aided Design and Computer Graphics, 13(10): 876-880 [DOI: 10.3321/j.issn:1003-9775.2001.10.004]
Sonka M, Hlavac V and Boyle R. 2016. Image Processing, Analysis, and Machine Vision. Xing J L and Ai H Z, trans. 4th ed. Beijing: Tsinghua University Press: 99-100
Ulyanov D, Lebedev V, Vedaldi A and Lempitsky V. 2016a. Texture networks: feed-forward synthesis of textures and stylized images//Proceedings of the 33rd International Conference on Machine Learning. New York, USA: PMLR: 1349-1357
Ulyanov D, Vedaldi A and Lempitsky V. 2016b. Instance normalization: the missing ingredient for fast stylization [EB/OL]. [2019-06-04]. https://arxiv.org/pdf/1607.08022.pdf
Wang X, Oxholm G, Zhang D and Wang Y F. 2017. Multimodal transfer: a hierarchical deep convolutional neural network for fast artistic style transfer//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, Hawaii, USA: IEEE: 5239-5247 [DOI: 10.1109/CVPR.2017.759]
Xiang J H, Yang K W, Zhou J and Sun Z X. 2013. A novel image disintegration-based computerized embroidery method for random stitch. Journal of Graphics, 34(4): 16-23 [DOI: 10.3969/j.issn.2095-302X.2013.04.003]
Yang L J, Xu T C and Wu E H. 2016. Animating strokes in drawing process of Chinese ink painting. Journal of Computer-Aided Design and Computer Graphics, 28(5): 742-749 [DOI: 10.3969/j.issn.1003-9775.2016.05.006]
Yang Y B, Guo L, Chen S F and Chen Z Q. 2003. Research on automatic character embroidery technology. Journal of Computer Research and Development, 40(1): 88-93
Zheng S, Jayasumana S, Romera-Paredes B, Vineet V, Su Z Z, Du D L, Huang C and Torr P H S. 2015. Conditional random fields as recurrent neural networks//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE: 1529-1537 [DOI: 10.1109/ICCV.2015.179]
Zhou J, Sun Z X, Yang K W and Hu Z Z. 2014. Parametric generation method for irregular needling embroidery rendering. Journal of Computer-Aided Design and Computer Graphics, 26(3): 436-444