Cloth recognition based on fabric properties and tactile sensing
2020, Vol. 25, No. 9, pp. 1800-1812
Received: 2019-11-05; Revised: 2020-03-13; Accepted: 2020-03-20; Published in print: 2020-09-16
DOI: 10.11834/jig.190525
Objective
Cloth recognition is an important computer-aided technology for improving the competitiveness of the textile industry. Compared with general-purpose images, cloth images usually show only subtle differences in texture and shape. Current cloth recognition algorithms consider only image features; they do not combine the visual and tactile characteristics of the fabric and therefore cannot reflect the properties of the fabric itself, which leads to low recognition accuracy. Taking common clothing fabrics as an example, this paper proposes a cloth image recognition algorithm that combines fabric properties and tactile sensing to address the low recognition accuracy of current fabric recognition methods.
Method
For the input fabric samples, a geometric measurement method for fabric images is established. The three key factors that affect fabric properties, namely recovery, stretching, and bending, are quantitatively analyzed and parametrically modeled to obtain geometric measures of the fabric properties. Tactile measurements of the fabric are then taken with a sensor setup, and a convolutional neural network (CNN) is used to extract low-level features from the measured tactile images. The geometric measures of the fabric properties are matched with the extracted low-level features, and a fabric recognition model is trained with the CNN to learn the different parameters of the fabric properties, so that the fabric is recognized and the result is output.
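To make the four-stage flow concrete, the following minimal Python sketch wires the stages together. All stage functions here are trivial stand-ins introduced for illustration only; they are not the authors' implementation.

```python
# Hypothetical end-to-end sketch of the four stages described above; the
# stage functions are trivial stand-ins, not the authors' actual code.
import numpy as np

def measure_fabric_properties(fabric_image):
    # Stand-in: return a 3-D vector of (recovery, stretching, bending) measures.
    return np.zeros(3)

def extract_tactile_features(tactile_image):
    # Stand-in: return a low-level tactile feature vector (e.g., from a CNN).
    return np.zeros(4096)

def recognize_fabric(fabric_image, tactile_image, classifier):
    # 1-2. Geometric measurement and tactile feature extraction.
    props = measure_fabric_properties(fabric_image)
    tactile = extract_tactile_features(tactile_image)
    # 3. Fabric learning: combine/match the two modalities.
    joint = np.concatenate([props, tactile])
    # 4. Fabric recognition: classify the joint representation.
    return classifier(joint)
```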
Result
The proposed method is validated on the constructed samples of common clothing fabrics. Compared with methods for the same task, it achieves a higher recognition rate, with an average recognition rate of 89.5%.
Conclusion
A cloth image recognition method based on fabric properties and tactile sensing is proposed. It can accurately identify common clothing fabrics, effectively improves the accuracy of cloth recognition, and meets the needs of practical applications well.
Objective
With the development of the textile industry, the manual identification of cloth can no longer meet the growing demand of production, and image recognition technologies are increasingly applied to cloth recognition. Image recognition is a technology that combines feature extraction and feature learning; it plays an important role in improving the competitiveness of the clothing industry. Compared with general-purpose images, cloth images usually show only subtle differences in texture and shape. Current cloth recognition algorithms are based on machine learning; that is, they learn the features of cloth images through machine learning and compare them with the features of known fabrics to determine the cloth category. However, these algorithms usually have low recognition rates because they consider only the visual attribute, which cannot fully describe the fabric, and they ignore the properties of the fabric itself. Touch and vision are two important sensing modalities for humans, and they offer complementary information for sensing cloth; machine learning can also benefit from such multimodal sensing ability. To solve the problem of the low recognition accuracy of common fabrics, a fabric image recognition method based on fabric properties and tactile sensing is proposed.
Method
The proposed method involves four steps: image measurement, tactile sensing, fabric learning, and fabric recognition. The main idea is to use AlexNet to adaptively extract tactile image features and to match them with the fabric properties extracted through MATLAB morphological analysis. First, a geometric measurement method is established to measure the input fabric image samples, and a parametric model is obtained after quantitatively analyzing the three key factors by testing the recovery, stretching, and bending behavior of different real cloth samples. The geometric measures of the fabric properties are obtained through this parametric modeling.
Second, fabric tactile sensing is measured through the tactile sensor setup, and the low-level features of the tactile images are extracted with a convolutional neural network (CNN). Third, the fabric identification model is trained by matching the fabric geometric measures with the extracted tactile image features and by learning the different parameters of the fabric properties through the CNN. Finally, the fabric is recognized and the results are output.
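A possible way to obtain such low-level tactile features with AlexNet is sketched below using PyTorch/torchvision. The framework, the use of ImageNet-pretrained weights, and the choice of the penultimate (fc7) layer are assumptions for illustration; the abstract does not specify them.

```python
# Sketch of AlexNet-based tactile feature extraction (assumed PyTorch setup).
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained AlexNet; features are taken from the penultimate (fc7) layer.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()
feature_extractor = torch.nn.Sequential(
    alexnet.features,
    alexnet.avgpool,
    torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:-1],  # drop the final 1000-way layer
)

preprocess = transforms.Compose([
    transforms.Resize((227, 227)),          # tactile images are 227 x 227 pixels
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_tactile_features(image_path):
    """Return a 4096-D feature vector for one tactile image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return feature_extractor(img).squeeze(0)   # shape: (4096,)
```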
In this study, cloth recognition is addressed on the basis of tactile images and vision; in this manner, missing sensory information can be avoided. Furthermore, a new fusion method named deep maximum covariance analysis (DMCA) is utilized to learn a joint latent space for sharing features between vision and tactile sensing, which can match weakly paired visual and tactile data.
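DMCA itself is a deep fusion model; the following sketch only illustrates the plain linear maximum covariance analysis that it builds on, i.e., projecting visual and tactile feature matrices onto directions that maximize their cross-covariance. It is a didactic approximation, not the paper's DMCA.

```python
# Linear maximum covariance analysis: a simplified stand-in for DMCA that
# maps weakly paired visual and tactile features into a shared latent space.
import numpy as np

def max_covariance_analysis(X, Y, k):
    """X: (n, dx) visual features, Y: (n, dy) tactile features, k: latent dim."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    C = Xc.T @ Yc / (X.shape[0] - 1)          # cross-covariance matrix (dx, dy)
    U, _, Vt = np.linalg.svd(C, full_matrices=False)
    Wx, Wy = U[:, :k], Vt[:k, :].T            # paired projection bases
    return Xc @ Wx, Yc @ Wy                   # shared k-D latent representations
```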
Considering that existing fabric datasets contain only a few fabric types, which cannot be regarded as everyday fabrics, two fabric sample datasets are constructed. The first is a fabric image dataset for fabric property measurement, containing the recovery, stretching, and bending images of 12 fabric types, such as coarse cotton, fine cotton, and canvas; each fabric type has 10 images per behavior, for a total of 360 images. The second is a fabric tactile image dataset, which covers the same 12 fabric types, each with 500 images, for a total of 6 000 images. All images are resized to 227 × 227 pixels for the convenience of the experiments.
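For reference, a minimal loading sketch for the tactile image dataset is given below, assuming one sub-folder per fabric type; the directory layout, file names, and batch size are illustrative assumptions.

```python
# Sketch of loading the tactile image dataset (12 classes, 227 x 227 images).
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

tactile_transform = transforms.Compose([
    transforms.Resize((227, 227)),
    transforms.ToTensor(),
])

# Assumed layout: tactile_dataset/coarse_cotton/*.png, tactile_dataset/canvas/*.png, ...
tactile_data = datasets.ImageFolder("tactile_dataset", transform=tactile_transform)
loader = DataLoader(tactile_data, batch_size=32, shuffle=True)
assert len(tactile_data.classes) == 12   # 12 fabric types, 500 images each
```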
Result
To verify the effectiveness of the proposed method, experiments are performed on 12 common fabric samples. The experimental results show that the average recognition accuracy reaches 89.5%. Compared with using only a single fabric attribute or only the three fabric attributes, the proposed method obtains a higher recognition rate. The proposed method also achieves better recognition than mainstream methods; for example, compared with the recognition accuracy of sparse coding (SC) combined with a support vector machine (SVM), that of the proposed method increases to 89.5%.
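For context, a hedged scikit-learn sketch of an SC + SVM baseline of this kind is shown below; the dictionary size, sparsity level, and SVM kernel are assumptions, since the baseline's exact settings are not reported in this abstract.

```python
# Assumed sparse coding (SC) + SVM baseline: learn a dictionary, encode each
# image descriptor as a sparse code, and classify the codes with a linear SVM.
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def sc_svm_baseline(X_train, y_train, X_test, y_test, n_atoms=256):
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=10,
                                       random_state=0)
    codes_train = dico.fit(X_train).transform(X_train)
    codes_test = dico.transform(X_test)

    clf = SVC(kernel="linear").fit(codes_train, y_train)
    return accuracy_score(y_test, clf.predict(codes_test))
```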
Conclusion
A fabric image recognition method combining vision and tactile sensing is proposed. The method can accurately identify clothing fabrics and improves the accuracy of fabric recognition. For the feature extraction task, the AlexNet network simplifies the high-dimensional features and can adaptively extract effective features, avoiding manual screening. Moreover, the DMCA model performs well in cross-modal matching. Compared with other cloth recognition methods, our method shows advantages in terms of accuracy without requiring expensive equipment. However, our method does not yet address the limitation on recognition accuracy caused by the small number of samples, the low dimensionality of the image measurement data, and the lack of tactile information. In the future, we will focus on further improving the recognition accuracy for various fabric types.