Published: 2018-08-16 DOI: 10.11834/jig.170545 2018 | Volume 23 | Number 8 Image Analysis and Recognition

Received: 2017-10-25; revised: 2018-03-07. Supported by: National Natural Science Foundation of China (61502331, 11701410). First author: Sheng Jiachuan, born in 1982, female, associate professor. She received her Ph.D. in computer application technology from Tianjin University in 2013. Her research interests include image processing and pattern recognition. E-mail: jiachuansheng@tjufe.edu.cn. Li Yuzhi, female, lecturer. Her research interests include machine learning and multimedia processing. E-mail: liyuzhi@tjufe.edu.cn. CLC number: TP301.6; document code: A; article number: 1006-8961(2018)08-1193-14


Learning artistic objects for improved classification of Chinese paintings
Sheng Jiachuan, Li Yuzhi
School of Science and Technology, Tianjin University of Finance and Economics, Tianjin 300222, China
Supported by: National Natural Science Foundation of China (61502331, 11701410)

# Abstract

Objective At present, existing research on art classification is primarily based on feature extraction and hence on feature-based classification. Although such feature-based methods reported in the literature achieve a certain level of success, a major weakness lies in the strong dependence of classification performance on how effectively the features describe the content of Chinese paintings. Given that traditional Chinese artists tend to rely on popular objects, such as figures, trees, flowers, birds, mountains, horses, and houses, to express their artistic feelings and emotions, we explore a new artistic object-based approach to classifying traditional Chinese paintings in this study. In this way, automated classification can be integrated with the perception, understanding, and interpretation of artistic expressions and emotions via the segmented artistic objects. Such an approach may also enable our proposed method to be further developed into an interactive object-based classification approach for other forms of paintings. In comparison with the existing state of the art, one advantage of our proposed approach over those based on features or content is that objects provide direct and integrated artistic expressions inside paintings. Method Our proposed method includes three stages of processing and analytics for traditional Chinese paintings: 1) interactive artistic object segmentation; 2) description and characterization of artistic objects via a convolutional neural network (CNN), the most popular deep learning unit; and 3) support vector machine (SVM)-based classification and fusion across all artistic objects. Specifically, via a simple linear iterative clustering (SLIC) algorithm, superpixels are constructed to capture the color and position differences between individual pixels.
By maximizing the similarity within the neighborhood of these superpixels, a sequence of objects can be segmented, and an interactive scheme can be designed that allows users to add, revise, and interact with the content of paintings to achieve the best possible balance between subjective demand and objective art description. Afterward, a CNN-based deep learning unit is added to describe these objects so that classification can be carried out on each individual artistic object. Finally, an SVM unit is adopted to fuse all of these classifications by considering each individual object within the given window, which is influenced and initialized through the training process. Result Extensive experiments are carried out in four phases, each considering one impact factor: the number of artists, comparison with the existing state of the art, benchmarking against content-based classification, and assessment of the contribution of the CNN alone. Experimental results show that our proposed algorithm 1) outperforms several existing representative approaches, including the MHMM and fusion-based methods; 2) achieves effective fusion of all the different object classifications through its CNN and SVM units; 3) captures artistic emotions through the segmented artistic objects; and 4) shows potential for interactive classification of Chinese paintings via segmentation of artistic objects. Conclusion This study proposes computerized classification and recognition of art styles based on the artistic objects in paintings rather than on whole paintings. Experimental results reveal that the proposed algorithm outperforms existing representative benchmarks, providing potential for developing effective digital tools for the computerized management of Chinese paintings.
In addition, this method can serve as an important tool for the computerized management of traditional Chinese paintings, providing a range of techniques for the effective and efficient digitization, manipulation, understanding, perception, and interpretation of traditional Chinese arts and their legacy.

# Key words

artistic object segmentation; classification of Chinese paintings; convolutional neural network; fusion algorithm; deep learning; superpixel segmentation

# 1 Related algorithms

The SuperLattice[12] algorithm uses a greedy strategy: at each step, it splits the image along horizontal and vertical paths where the boundary-cost map is minimal, thereby obtaining superpixels. The method preserves a regular image topology and produces a regular superpixel lattice, with good segmentation accuracy and stability, and the number of superpixels can be specified by the user. However, the quality of the resulting superpixels depends heavily on the quality of the image's boundary map.

The MeanShift[15] algorithm is a nonparametric iterative algorithm that uses a probability density function to move a center point until it converges to the point of maximum density. The superpixels it produces have regular shapes, and the method performs well in terms of stability and noise robustness. However, it is slow, the number of superpixels cannot be controlled, and it suffers from over-segmentation.
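The convergence behavior described above can be illustrated with a toy 1-D sketch of the mean-shift iteration: a point is repeatedly replaced by the kernel-weighted mean of its neighbors until it settles at a density mode. The function, its parameters, and the Gaussian kernel choice here are purely illustrative, not the segmentation pipeline of [15].

```python
import math

def mean_shift_1d(points, x, bandwidth=1.0, iters=50):
    """Shift x toward the local density mode of `points`.

    Each iteration replaces x with the Gaussian-kernel weighted
    mean of all points, which moves x uphill on the estimated
    density; a toy 1-D illustration of the mean-shift update.
    """
    for _ in range(iters):
        w = [math.exp(-((p - x) / bandwidth) ** 2) for p in points]
        x = sum(wi * p for wi, p in zip(w, points)) / sum(w)
    return x
```

Starting between two clusters, the iterate drifts into the basin of the nearer mode, which is exactly why the number of resulting clusters (and hence superpixels) cannot be fixed in advance.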

The TurboPixel[16] algorithm is a level-set method based on geometric flows. It first selects initial seed points and then grows the seed regions through a curvature-evolution model and a skeletonization process, producing grid-like superpixels. Its running time grows in proportion to image size, the number of generated superpixels can be specified by the user, the superpixels are regularly shaped and preserve the image's contour structure, and under-segmentation is alleviated. However, the shapes of the generated superpixels cannot be controlled, and for high-resolution images the method cannot deliver fast, high-quality segmentation.

The SLIC[17] algorithm is a clustering-based superpixel segmentation that generates superpixels in a five-dimensional space formed by the pixel's LAB color and its position. The method first initializes the cluster centers and moves each center to the lowest-gradient location within its neighborhood; then, within a 2S×2S neighborhood of each cluster center, pixels are assigned to their matching center, the distance between the new and previous cluster centers is computed, and a threshold decides whether the centers need to be reset. The superpixels generated by this method are uniform in size and regular in shape, preserve boundary information well, and their number can be controlled; this number is the algorithm's only input parameter.
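The five-dimensional distance that drives this clustering can be sketched as follows. This is a minimal illustration under the usual SLIC formulation, assuming (l, a, b, x, y) tuples are at hand; the compactness weight `m` and its default value are assumptions, and `S` is the grid interval between initial cluster centers.

```python
import math

def slic_distance(p, c, S, m=10.0):
    """SLIC-style distance between pixel p and cluster center c.

    p, c: (l, a, b, x, y) tuples; S: grid interval (expected
    superpixel spacing); m: compactness weight trading spatial
    proximity against color proximity.
    """
    # Color distance in LAB space.
    d_lab = math.sqrt(sum((p[i] - c[i]) ** 2 for i in range(3)))
    # Spatial distance in the image plane.
    d_xy = math.sqrt((p[3] - c[3]) ** 2 + (p[4] - c[4]) ** 2)
    # Combined distance: the spatial term is normalized by S so that
    # m alone controls how compact the superpixels are.
    return math.sqrt(d_lab ** 2 + (d_xy / S) ** 2 * m ** 2)
```

A larger `m` weights spatial proximity more heavily, yielding more compact, regular superpixels; a smaller `m` lets superpixels adhere more tightly to color boundaries.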

# 2.2 Interactive artistic object segmentation

 $\zeta (\boldsymbol{S}, \boldsymbol{T}) = \sum\limits_{\eta = 1}^{4\;096} {\sqrt {H_S^\eta \times H_T^\eta } }$ (5)
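Eq. (5) is a Bhattacharyya-style similarity between two region histograms: the larger the sum, the more similar the regions. A minimal sketch, assuming the histograms are normalized (the fixed 4 096 bins from the text are not enforced, only matching lengths):

```python
import math

def zeta(h_s, h_t):
    """Similarity of Eq. (5): sum over bins of sqrt(H_S * H_T)."""
    assert len(h_s) == len(h_t)
    return sum(math.sqrt(a * b) for a, b in zip(h_s, h_t))
```

For normalized histograms, identical distributions give ζ = 1 and disjoint distributions give ζ = 0, which is what makes ζ usable as a merge criterion in the steps below.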

The MSRMAO algorithm proceeds as follows:

1) Merge the artistic object region $\boldsymbol{F}$ of the Chinese painting:

(1) For each region $\boldsymbol{P} \in \boldsymbol{F}$, label its adjacent regions as ${\bar{\boldsymbol{R}}}_F = \{ {\boldsymbol{A}_i}\}, i = 1, 2, \cdots, p$.

(2) For each ${\boldsymbol{A}_i} \notin \boldsymbol{F}$, label its adjacent regions as ${\bar{\boldsymbol{R}}}_{A_i} = \{ \boldsymbol{R}_j^{A_i}\}, j = 1, 2, \cdots, k$. Clearly, $\boldsymbol{P} \in {\bar{\boldsymbol{R}}}_{A_i}$.

(3) Compute $\zeta ({\boldsymbol{A}_i}, \boldsymbol{R}_j^{A_i})$. If $\zeta ({\boldsymbol{A}_i}, \boldsymbol{P}) = \mathop {\max }\limits_{j = 1, 2, \cdots, k} \zeta ({\boldsymbol{A}_i}, \boldsymbol{R}_j^{A_i})$, merge $\boldsymbol{P}$ and ${\boldsymbol{A}_i}$, i.e., $\boldsymbol{P} = \boldsymbol{P} \cup {\boldsymbol{A}_i}$; otherwise, do not merge.

(4) Update $\boldsymbol{F}$ and $\boldsymbol{M}$.

(5) If no new mergeable region is found in $\boldsymbol{F}$, proceed to step 2); otherwise, return to step (1).

2) Merge the background region $\boldsymbol{B}$:

(1) For each region $\boldsymbol{Q} \in \boldsymbol{B}$, label its adjacent regions as ${\bar{\boldsymbol{R}}}_Q = \{ {\boldsymbol{N}_i}\}, i = 1, 2, \cdots, q$.

(2) For each ${\boldsymbol{N}_i} \notin \boldsymbol{B}$, label its adjacent regions as ${\bar{\boldsymbol{R}}}_{N_i} = \{ \boldsymbol{R}_j^{N_i}\}, j = 1, 2, \cdots, k$. Clearly, $\boldsymbol{Q} \in {\bar{\boldsymbol{R}}}_{N_i}$.

(3) Compute $\zeta ({\boldsymbol{N}_i}, \boldsymbol{R}_j^{N_i})$. If $\zeta ({\boldsymbol{N}_i}, \boldsymbol{Q}) = \mathop {\max }\limits_{j = 1, 2, \cdots, k} \zeta ({\boldsymbol{N}_i}, \boldsymbol{R}_j^{N_i})$, merge $\boldsymbol{Q}$ and ${\boldsymbol{N}_i}$, i.e., $\boldsymbol{Q} = \boldsymbol{Q} \cup {\boldsymbol{N}_i}$; otherwise, do not merge.

(4) Update $\boldsymbol{B}$ and $\boldsymbol{M}$.

(5) If no new mergeable region is found in $\boldsymbol{B}$, proceed to the next step; otherwise, return to step (1).

3) Merge the unlabeled region $\boldsymbol{M}$:

(1) For each unlabeled region $\boldsymbol{O} \in \boldsymbol{M}$, label its adjacent regions as ${\bar{\boldsymbol{R}}}_O = \{ {\boldsymbol{G}_i}\}, i = 1, 2, \cdots, o$.

(2) For each ${\boldsymbol{G}_i} \notin \boldsymbol{B}$ with ${\boldsymbol{G}_i} \notin \boldsymbol{F}$, label its adjacent regions as ${\bar{\boldsymbol{R}}}_{G_i} = \{ \boldsymbol{R}_j^{G_i}\}, j = 1, 2, \cdots, k$. Clearly, $\boldsymbol{O} \in {\bar{\boldsymbol{R}}}_{G_i}$.
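The merging rule shared by the steps above can be sketched as a single pass over the object set of step 1): a neighbor $\boldsymbol{A}_i$ outside $\boldsymbol{F}$ is merged into $\boldsymbol{P}$ only when $\boldsymbol{P}$ is the most similar of all of $\boldsymbol{A}_i$'s neighbors under Eq. (5). This is an illustrative sketch, not the paper's implementation: the data structures (`F`, `neighbors`, `hist`) are hypothetical, and a full implementation would also update $\boldsymbol{F}$, $\boldsymbol{M}$, and the adjacency relations after each merge, as step (4) requires.

```python
import math

def zeta(h_s, h_t):
    # Eq. (5): Bhattacharyya-style similarity of two region histograms.
    return sum(math.sqrt(a * b) for a, b in zip(h_s, h_t))

def merge_pass(F, neighbors, hist):
    """One pass of the step-1) merging rule.

    F: set of region ids in the object set; neighbors: region id ->
    list of adjacent region ids; hist: region id -> histogram.
    Returns the (P, A_i) pairs selected for merging in this pass.
    """
    merges = []
    for P in F:
        for A in neighbors[P]:
            if A in F:          # only consider neighbors outside F
                continue
            # Merge only if P is A's most similar neighbor under zeta.
            best = max(neighbors[A], key=lambda R: zeta(hist[A], hist[R]))
            if best == P:
                merges.append((P, A))
    return merges
```

The pass is repeated until no mergeable region remains, after which the same rule is applied to the background set $\boldsymbol{B}$ in step 2) and the unlabeled set $\boldsymbol{M}$ in step 3).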

# References

• [1] Qian W H, Xu D, Guan Z, et al. Simulating chalk art style painting[J]. Journal of Image and Graphics, 2017, 22(5): 622–630. [钱文华, 徐丹, 官铮, 等. 粉笔画艺术风格模拟[J]. 中国图象图形学报, 2017, 22(5): 622–630. ] [DOI:10.1142/S0218001417590261]
• [2] Sheng J C, Jiang J M. Recognition of Chinese artists via windowed and entropy balanced fusion in classification of their authored ink and wash paintings (IWPs)[J]. Pattern Recognition, 2014, 47(2): 612–622. [DOI:10.1016/j.patcog.2013.08.017]
• [3] Sheng J C. Automatic categorization of traditional Chinese paintings based on wavelet transform[J]. Computer Science, 2014, 41(2): 317–319. [盛家川. 基于小波变换的国画特征提取及分类[J]. 计算机科学, 2014, 41(2): 317–319. ] [DOI:10.3969/j.issn.1002-137X.2014.02.069]
• [4] Li J, Wang J Z. Studying digital imagery of ancient paintings by mixtures of stochastic models[J]. IEEE Transactions on Image Processing, 2004, 13(3): 340–353. [DOI:10.1109/TIP.2003.821349]
• [5] Jiang S Q, Huang Q M, Ye Q X, et al. An effective method to detect and categorize digitized traditional Chinese paintings[J]. Pattern Recognition Letters, 2006, 27(7): 734–746. [DOI:10.1016/j.patrec.2005.10.017]
• [6] Sun M J, Zhang D, Wang Z, et al. Monte Carlo convex hull model for classification of traditional Chinese paintings[J]. Neurocomputing, 2016, 171: 788–797. [DOI:10.1016/j.neucom.2015.08.013]
• [7] Wang Z, Sun M J, Han Y H, et al. Supervised heterogeneous sparse feature selection for Chinese paintings classification[J]. Journal of Computer-Aided Design & Computer Graphics, 2013, 25(12): 1848–1855. [王征, 孙美君, 韩亚洪, 等. 监督式异构稀疏特征选择的国画分类和预测[J]. 计算机辅助设计与图形学学报, 2013, 25(12): 1848–1855. ]
• [8] Guo F, Peng H, Tang J. A novel method of converting photograph into Chinese ink painting[J]. IEEJ Transactions on Electrical and Electronic Engineering, 2015, 10(3): 320–329. [DOI:10.1002/tee.22088]
• [9] Liu J A, The Palace Museum. Image Catalog of Authenticity Identification for Chinese Paintings and Calligraphy[M]. Beijing: Forbidden City Press, 2013: 1-360. [刘九庵, 故宫博物院. 中国历代书画真伪对照图录[M]. 北京: 故宫出版社, 2013: 1-360.]
• [10] Guo X G. Judging Arts by Xiaoguang[M]. Beijing: China Renmin University Press, 2014: 10-77. [ 郭晓光. 晓光鉴画[M]. 北京: 中国人民大学出版社, 2014: 10-77.]
• [11] Song X Y, Zhou L L, Li Z G, et al. Review on superpixel methods in image segmentation[J]. Journal of Image and Graphics, 2015, 20(5): 599–608. [宋熙煜, 周利莉, 李中国, 等. 图像分割中的超像素方法研究综述[J]. 中国图象图形学报, 2015, 20(5): 599–608. ] [DOI:10.11834/jig.20150502]
• [12] Moore A P, Prince S J D, Warrell J, et al. Superpixel lattices[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Anchorage, Alaska, USA: IEEE, 2008: 1-8. [DOI: 10.1109/CVPR.2008.4587471]
• [13] Felzenszwalb P F, Huttenlocher D P. Efficient graph-based image segmentation[J]. International Journal of Computer Vision, 2004, 59(2): 167–181. [DOI:10.1023/B:VISI.0000022288.19776.77]
• [14] Shi J B, Malik J. Normalized cuts and image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 888–905. [DOI:10.1109/34.868688]
• [15] Li H Y, Wu S P. Parallelization of Mean Shift image segmentation algorithm[J]. Journal of Image and Graphics, 2013, 18(12): 1610–1619. [李宏益, 吴素萍. Mean Shift图像分割算法的并行化[J]. 中国图象图形学报, 2013, 18(12): 1610–1619. ] [DOI:10.11834/jig.20131209]
• [16] Levinshtein A, Stere A, Kutulakos K N, et al. TurboPixels:fast superpixels using geometric flows[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(12): 2290–2297. [DOI:10.1109/TPAMI.2009.96]
• [17] Achanta R, Shaji A, Smith K, et al. SLIC superpixels compared to state-of-the-art superpixel methods[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(11): 2274–2282. [DOI:10.1109/TPAMI.2012.120]
• [18] Yang D X. Authenticity Identification of Chinese Paintings and Calligraphy[M]. 3rd ed. Shenyang: Liaoning People's Publishing House, 2016: 149-196. [ 杨丹霞. 中国书画真伪识别[M]. 3版. 沈阳: 辽宁人民出版社, 2016: 149-196.]
• [19] Meyer F, Beucher S. Morphological segmentation[J]. Journal of Visual Communication and Image Representation, 1990, 1(1): 21–46. [DOI:10.1016/1047-3203(90)90014-M]
• [20] Schmidhuber J. Deep learning in neural networks:an overview[J]. Neural Networks, 2015, 61: 85–117. [DOI:10.1016/j.neunet.2014.09.003]
• [21] Zhang X P, Xiong H K, Zhou W G, et al. Picking deep filter responses for fine-grained image recognition[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 1134-1142. [DOI: 10.1109/CVPR.2016.128]
• [22] Liu J, Gao C Q, Meng D Y, et al. Two-stream contextualized CNN for fine-grained image classification[C]//Proceedings of the 30th AAAI Conference on Artificial Intelligence. Phoenix, Arizona, USA: AAAI Press, 2016: 4232-4233.
• [23] He X T, Peng Y X. Fine-grained image classification via combining vision and language[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017: 7332-7340. [DOI: 10.1109/CVPR.2017.775]
• [24] Feng Y S, Wang Z L. Fine-grained image categorization with segmentation based on top-down attention map[J]. Journal of Image and Graphics, 2016, 21(9): 1147–1154. [冯语姗, 王子磊. 自上而下注意图分割的细粒度图像分类[J]. 中国图象图形学报, 2016, 21(9): 1147–1154. ] [DOI:10.11834/jig.20160904]
• [25] Chatfield K, Simonyan K, Vedaldi A, et al. Return of the devil in the details: delving deep into convolutional nets[C]//Proceedings of the British Machine Vision Conference. London, UK: BMVA Press, 2014: 1-12. [DOI: 10.5244/C.28.6]