增强型灰度图像空间实现虹膜活体检测
Enhanced gray-level image space for iris liveness detection
2020年第25卷第7期,页码:1421-1435
收稿:2019-10-15;修回:2020-01-02;录用:2020-01-09;纸质出版:2020-07-16
DOI: 10.11834/jig.190503
目的
虹膜作为一种具有高稳定性与区分性的生物特征,使得虹膜识别在应用场景中十分普及,但很多虹膜识别系统在抵御各类演示攻击时无法保证十足的可靠性,导致虹膜识别在高级安全场景中的应用受限,使得虹膜活体检测成为生物识别技术中亟需解决的问题之一。现有的区分真实与假体虹膜最先进的算法主要依靠在原始灰度空间中提取的虹膜纹理深度特征,但这类特征差异不明显,只能辨别单源假体虹膜。为此,提出一种基于增强型灰度图像空间的虹膜活体检测方法。
方法
利用残差网络(ResNet)将原始虹膜图像映射到可分离的灰度图像空间,使真假虹膜特征具有明显的判别性;用预训练LightCNN(light convolutional neural networks)-4网络提取新空间中的虹膜纹理特征;设计三元组损失函数与softmax损失函数训练模型实现二分类任务。
结果
在两个单源假虹膜数据库上采用闭集检测方式分别取得100%和99.75%的准确率;在多源假虹膜数据库上采用开集检测方式分别取得98.94%和99.06%的准确率。
结论
本文方法通过空间映射的方式增强真假虹膜纹理之间清晰度的差异,设计三元组损失函数与softmax损失函数训练模型,既增加正负样本集之间的距离差,又提升模型收敛速度。实验结果表明,基于图像空间的分析与变换可有效解决真实虹膜与各类假体虹膜在原始灰度空间中不易区分的问题,并且使网络能够准确检测未知类型的假体虹膜样本,实现虹膜活体检测的最新性能,进一步提升了虹膜活体检测方法的泛化性。
Objective
The iris is the annular area between the black pupil and the white sclera of the human eye. It is a biological characteristic with high stability and strong discrimination ability, so iris recognition technology has already been applied in various practical scenarios. However, many iris recognition systems still fail to assure full reliability when facing different presentation attacks, which prevents their deployment in conditions with strict security requirements. Iris anti-spoofing (or iris presentation attack detection), which aims to judge whether an input iris image is captured from a living subject, is therefore a pressing problem in biometrics demanding robust solutions. This technology is important for preventing unauthorized authentication through spoofing attacks and for protecting the security and interests of users, and it needs to be applied to all authentication systems. Existing state-of-the-art algorithms mainly rely on deep features of iris texture extracted in the original gray-scale space to distinguish fake irises from genuine ones. However, these features can identify only a single fake iris pattern, and genuine and fake irises of various types overlap in the original gray-scale space. A novel iris anti-spoofing method is thus proposed in this study. A triplet network is equipped with a space transformer designed to map the original gray-scale space into a newly enhanced gray-level space, in which the features of a genuine iris and of various fake irises are highly discriminative.
Method
A raw eye image includes the sclera, pupil, iris, and periocular areas, but only the information of the iris region is needed. Consequently, the raw image must be preprocessed before image space mapping. The preprocessing mainly includes iris detection, localization, segmentation, and normalization. After this series of image-preprocessing steps, the original gray-scale normalized iris images are mapped into a newly enhanced gray-level space by a residual network (ResNet), in which the iris features are highly discriminative. The pretrained LightCNN (light convolutional neural networks)-4 network is then used to extract deep features of the iris images in the new space. Triplet and softmax losses are adopted to train the model to accomplish a binary classification task.
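The combined training objective described above can be illustrated with a short PyTorch sketch. This is a minimal, hypothetical example rather than the authors' released code; the margin value, the loss weighting, and the two-class softmax head are illustrative assumptions.

```python
import torch.nn as nn

# Minimal sketch of the combined objective: the triplet term pulls two genuine-iris
# embeddings (anchor, positive) together and pushes a fake-iris embedding (negative)
# away by a margin, while a cross-entropy (softmax) term over two classes supplies a
# global genuine/fake decision boundary. Margin and weight values are illustrative.
triplet_loss = nn.TripletMarginLoss(margin=0.2)
softmax_loss = nn.CrossEntropyLoss()

def combined_loss(anchor, positive, negative, logits, labels, weight=1.0):
    """anchor/positive: embeddings of genuine irises; negative: embedding of a fake iris;
    logits: two-class classifier outputs; labels: assumed 0 = genuine, 1 = fake."""
    return triplet_loss(anchor, positive, negative) + weight * softmax_loss(logits, labels)
```

In this sketch the triplet term plays the role of the local metric constraint discussed in the Conclusion, while the cross-entropy term acts as the global classification interface that speeds up convergence.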
Result
We evaluate the proposed method on three available iris image databases. The ND-Contact (Notre Dame cosmetic contact lenses) and CRIPAC (Center for Research on Intelligent Perception and Computing)-Printed-Iris databases contain only cosmetic contact lens and printed iris images, respectively. The CASIA (Institute of Automation, Chinese Academy of Sciences)-Iris-Fake database contains various fake iris patterns, such as cosmetic contact lenses, printed irises, plastic irises (fake iris textures printed on plastic eyeball models), and synthetic irises (generated with GAN (generative adversarial network)-based image synthesis). CRIPAC-Printed-Iris is a fake iris database newly established by us for research on anti-spoofing attack detection algorithms. We perform evaluations in two different detection settings, namely, "close set" and "open set" detection. The "close set" setup means that the training and testing sets share the same types of presentation attack. The "open set" setup refers to the condition in which presentation attacks unseen in the training set may appear in the testing set; it is considered in our experiments to provide a comprehensive analysis of the proposed method. Experimental results show that the proposed method achieves accuracies of 100% and 99.75% on the two single fake iris pattern databases in the "close set" detection setup. On the hybrid fake iris pattern database, the proposed method achieves accuracies of 98.94% and 99.06% in the "open set" detection setup. An ablation study shows that the deep features of iris images in the enhanced gray-level image space are more discriminative than those in the original gray-scale space.
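To make the two protocols concrete, the sketch below builds a "close set" split (the same attack types appear in training and testing) and an "open set" split (some attack types are held out from training entirely). The attack-type labels and the choice of held-out types are hypothetical and are not taken from the databases' actual annotation scheme.

```python
# Hypothetical illustration of the two evaluation protocols; the attack-type names
# and the held-out types are assumptions made for this sketch only.
def make_splits(samples, unseen_attacks=("plastic", "synthetic")):
    """samples: list of (image_path, attack_type) pairs, where attack_type is
    'genuine' or a fake-iris type such as 'contact_lens' or 'printed'."""
    close_train, close_test, open_test = [], [], []
    for i, (path, attack) in enumerate(samples):
        if attack in unseen_attacks:
            open_test.append((path, attack))    # "open set": never seen during training
        elif i % 2 == 0:
            close_train.append((path, attack))  # training data with the remaining attack types
        else:
            close_test.append((path, attack))   # "close set" test: same attack types as training
    return close_train, close_test, open_test
```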
Conclusion
The proposed method first enhances the difference in sharpness between genuine and fake iris textures through a space-mapping approach. The triplet loss then minimizes the intraclass distance, maximizes the interclass distance, and keeps a safe margin between genuine and fake iris samples, which improves classification accuracy. The softmax loss improves the convergence speed of the model: because the triplet loss acts only on local samples, training with it alone is unstable and converges slowly, so we combine it with a softmax loss that provides a global classification interface (acting on all samples) and accelerates network training. Experimental results demonstrate that analysis and transformation based on image space can effectively solve the difficulty of separating a genuine iris from various types of fake irises in the original gray-scale space. To the best of our knowledge, this is the first time a deep network can distinguish a genuine iris from various types of fake irises, owing to the obvious difference between genuine and fake iris images in the enhanced gray-level image space. In both the "close set" and "open set" detection settings, the trained network can accurately identify the deep features of a genuine iris image and distinguish it from various fake iris images. Therefore, the proposed method achieves state-of-the-art performance in iris anti-spoofing, which indicates its effectiveness and further improves the generalization ability of iris anti-spoofing attack detection methods.
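For reference, a standard form of the triplet constraint and its combination with the softmax term discussed above can be written as follows; the notation is generic and the weighted combination is an illustrative assumption rather than the paper's reported formulation.

```latex
L_{\mathrm{tri}} = \sum_{i} \max\!\Big( \big\| f(x_i^{a}) - f(x_i^{p}) \big\|_2^2
                 - \big\| f(x_i^{a}) - f(x_i^{n}) \big\|_2^2 + \alpha,\ 0 \Big),
\qquad
L = L_{\mathrm{tri}} + \lambda\, L_{\mathrm{softmax}}
```

Here x_i^a and x_i^p are a genuine (anchor, positive) pair, x_i^n is a fake sample, f(·) is the embedding network, α is the safety margin, and λ weights the global softmax term.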