基于虹膜纹理深度特征和Fisher向量的人种分类
Race classification based on deep features and Fisher vectors of iris texture
2018, Vol. 23, No. 1, pp. 28-38
Received: 2017-05-22; Revised: 2017-09-21; Published in print: 2018-01-16
DOI: 10.11834/jig.170219

Objective
The iris is the annular region of the eye between the black pupil and the white sclera and carries rich texture information. Iris texture is highly discriminative and stable. Race classification is one of the main approaches to the problem of applying iris recognition to large-scale databases. Existing race classification methods for iris images mainly rely on hand-crafted features and target the basic classification of Asians versus non-Asians; they cannot handle sub-ethnic classification well. We therefore propose a race classification method based on deep features and Fisher vectors of iris texture.
Method
First, a convolutional neural network (CNN) extracts deep feature vectors from the normalized iris texture images as low-level features. A Gaussian mixture model is then used to compute Fisher vectors as the final iris feature representation. Finally, a support vector machine produces the classification result.
Results
On the Asian versus non-Asian dataset, the proposed method achieves 99.93% accuracy in the non-person-disjoint setting and 91.94% in the person-disjoint setting. On the Han-Tibetan dataset, it achieves 99.69% accuracy in the non-person-disjoint setting and 82.25% in the person-disjoint setting.
Conclusion
By learning features better suited to race classification from training data in a data-driven manner, the proposed method performs well on both basic race and sub-ethnic classification and improves classification accuracy. It also demonstrates, for the first time, the feasibility of sub-ethnic classification from iris images, further enriching and refining the theory of race classification.
Objective
The iris is the annular region between the pupil and the white sclera of the human eye and possesses rich texture information. Iris texture is highly discriminative and stable, which makes the iris an important biometric trait. Iris recognition aims to assign a unique identity label to each iris image through automatic preprocessing, feature analysis, and feature matching. As a reliable method for personal identification, iris recognition has numerous important applications in public and personal security. The rapid growth of commercial iris recognition has dramatically increased the size of iris databases, resulting in slow system responses. Race classification is a key method for solving large-scale iris classification problems: an iris is first classified by race to obtain a coarse result and is then matched only within its subclass, which effectively reduces recognition time. Race classification also has other applications; for example, if a computer can automatically detect the race of a user, it can match the interface language to the user and provide a personalized login experience. Existing approaches to iris-based race classification mainly focus on Asian versus non-Asian classification, and the features used are manually designed. Sub-ethnic classification, such as distinguishing Koreans, Japanese, and Chinese, has emerged in recent years, but no sub-ethnic classification based on iris images has been conducted. Compared with the basic races, sub-ethnic groups show far smaller differences in iris texture, and features designed manually for basic race classification are not suited to sub-ethnic classification. These problems make sub-ethnic classification on iris images highly challenging. This study proposes a novel race classification method based on deep features and Fisher vectors of iris texture, focusing on the basic classification of Asian versus non-Asian iris images and the sub-ethnic classification of Han and Tibetan iris images.
Method
The original iris image contains not only the annular iris but also the pupil, eyelids, eyelashes, and other eye regions, as well as specular highlights caused by light reflection. Therefore, the iris image must be preprocessed before feature extraction. Preprocessing mainly includes iris detection, localization, segmentation, and normalization, producing normalized, unwrapped iris images. Our method feeds the preprocessed iris images to a convolutional neural network to extract deep features as low-level features. A Gaussian mixture model clusters these features to obtain iris texture textons, and the model is then used with the Fisher vector to extract high-level features. A support vector machine performs the final classification.
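The CNN-feature, Gaussian-mixture, Fisher-vector, SVM pipeline described above can be sketched as follows. This is a minimal illustration with scikit-learn and random placeholder "deep features", not the authors' implementation; the descriptor dimension, number of mixture components, and data sizes are arbitrary assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def fisher_vector(descriptors, gmm):
    """Encode a set of local descriptors (T x D) into a Fisher vector
    from gradients w.r.t. the GMM means and variances (diagonal covariance),
    with the usual power and L2 normalization."""
    T, D = descriptors.shape
    gamma = gmm.predict_proba(descriptors)       # (T, K) soft assignments
    pi, mu = gmm.weights_, gmm.means_            # (K,), (K, D)
    sigma = np.sqrt(gmm.covariances_)            # (K, D) std devs ('diag' type)
    parts = []
    for k in range(pi.shape[0]):
        diff = (descriptors - mu[k]) / sigma[k]  # standardized residuals
        g_mu = (gamma[:, k, None] * diff).sum(0) / (T * np.sqrt(pi[k]))
        g_sig = (gamma[:, k, None] * (diff ** 2 - 1)).sum(0) / (T * np.sqrt(2 * pi[k]))
        parts.extend([g_mu, g_sig])
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))       # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)     # L2 normalization

# Toy usage: each image yields T local D-dimensional descriptors
# (standing in for CNN activations at spatial positions).
rng = np.random.default_rng(0)
train_feats = [rng.normal(size=(50, 8)) for _ in range(20)]
labels = np.array([0, 1] * 10)

gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(np.vstack(train_feats))
X = np.array([fisher_vector(f, gmm) for f in train_feats])
clf = LinearSVC().fit(X, labels)                 # final race classifier
```

With K components and D-dimensional descriptors, the Fisher vector here has 2KD dimensions (gradients with respect to means and variances), giving a fixed-length global representation that still reflects local texture statistics.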
Results
We evaluate the proposed method on two iris image databases, the CASIA multi-race iris database and the Han-Tibetan iris database, for basic race and sub-ethnic classification, respectively. No iris database dedicated to sub-ethnic classification existed previously, so we established the Han-Tibetan database to further study race classification. We evaluate under two dataset settings: non-person-disjoint and person-disjoint. In the non-person-disjoint setting, iris images are randomly selected as the training set and the remaining images form the test set, so images of the same person can appear in both sets. In the person-disjoint setting, we randomly select the iris images of some people as the training set and the images of the remaining people as the test set, which guarantees that images of the same person never appear in the training and test sets simultaneously. We compare the two settings on both databases. Experimental results show that the proposed method achieves 99.93% accuracy in the non-person-disjoint setting and 91.94% in the person-disjoint setting on the Asian versus non-Asian dataset. On the Han-Tibetan dataset, it obtains 99.69% and 82.25% accuracy, respectively.
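The two evaluation protocols above differ only in how images are assigned to the training and test sets. A hedged sketch of both splits, using scikit-learn's grouped splitter on toy data (the subject counts and feature sizes are placeholder assumptions):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Toy data: 100 iris images from 20 subjects (5 images each).
# `groups` holds the subject ID of every image.
rng = np.random.default_rng(0)
images = rng.normal(size=(100, 16))
groups = np.repeat(np.arange(20), 5)

# Person-disjoint split: every subject's images fall entirely in either
# the training set or the test set, never both.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(images, groups=groups))
assert set(groups[train_idx]).isdisjoint(groups[test_idx])

# Non-person-disjoint split: images are shuffled individually, so the
# same subject may contribute images to both sets.
perm = rng.permutation(len(images))
train_idx_np, test_idx_np = perm[:70], perm[70:]
```

The person-disjoint protocol is the harder and more realistic one, since the classifier cannot exploit subject-specific iris patterns shared between the two sets, which is consistent with the lower accuracies reported for it.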
Conclusion
This study proposes a race classification method based on deep features and Fisher vectors of iris texture. The method learns low-level visual features well suited to iris race classification directly from training data, overcoming the limitation of traditional methods, which require strong prior knowledge to design discriminative features. In this data-driven manner, the proposed method learns features suitable for both basic race classification and sub-ethnic classification, improving the accuracy of race classification. The Fisher vector encodes the low-level visual features so that the resulting representation describes the global texture of iris images while retaining local texture details, which benefits race classification. We use iris images to solve the sub-ethnic classification of Han and Tibetan people for the first time, proving the feasibility and validity of sub-ethnic classification based on iris images. We also establish a new iris image database suitable for sub-ethnic classification. The experimental results show that the differences between sub-ethnic iris images are small and that the classification remains challenging.