人脸伪造及检测技术综述
A review of human face forgery and forgery-detection technologies
2022, Vol. 27, No. 4, Pages 1023-1038
Print publication date: 2022-04-16
Accepted: 2020-12-22
DOI: 10.11834/jig.200466
曹申豪, 刘晓辉, 毛秀青, 邹勤. 人脸伪造及检测技术综述[J]. 中国图象图形学报, 2022,27(4):1023-1038.
Shenhao Cao, Xiaohui Liu, Xiuqing Mao, Qin Zou. A review of human face forgery and forgery-detection technologies[J]. Journal of Image and Graphics, 2022,27(4):1023-1038.
人脸伪造技术的恶意使用,不仅损害公民的肖像权和名誉权,而且会危害国家政治和经济安全。因此,针对伪造人脸图像和视频的检测技术研究具有重要的现实意义和实践价值。本文在总结人脸伪造和伪造人脸检测的关键技术与研究进展的基础上,分析现有伪造和检测技术的局限。在人脸伪造方面,主要包括利用生成对抗技术的全新人脸生成技术和基于现有人脸的人脸编辑技术,介绍生成对抗网络在人脸图像生成的发展进程,重点介绍人脸编辑技术中的人脸交换技术和人脸重现技术,从网络结构、通用性和生成效果真实性等角度对现有的研究进展进行深入阐述。在伪造人脸检测方面,根据媒体载体的差异,分为伪造人脸图像检测和伪造人脸视频检测,首先介绍利用统计分布差异、拼接残留痕迹和局部瑕疵等特征的伪造人脸图像检测技术,然后根据提取伪造特征的差异,将伪造人脸视频检测技术分为基于帧间信息、帧内信息和生理信号的伪造视频检测技术,并从特征提取方式、网络结构设计特点和使用场景类型等方面进行详细阐述。最后,分析了当前人脸伪造技术和伪造人脸检测技术的不足,提出可行的改进意见,并对未来发展方向进行展望。
Face image synthesis is one of the most important sub-topics in image synthesis. Deep learning methods such as generative adversarial networks (GANs) and autoencoders now enable the generation of facial images that are indistinguishable to the human eye. The malicious use of face forgery technology damages citizens' portrait and reputation rights and threatens national political and economic security. Building on a summary of the key technologies and research progress in face forgery and forged-face detection, this review analyzes the limitations of current forgery and detection technologies, with the aim of providing a reference for subsequent research on fake-face detection. The analysis proceeds as follows.

1) Face forgery technologies fall into two main categories: generating entirely new faces with generative adversarial techniques, and editing existing faces. The review first traces the development of GANs and their application to face image generation, presents face images produced at different stages of that development, and shows that GANs make it possible to generate fake face images with high resolution, realistic look and feel, diverse styles, and fine detail. It then introduces face editing technologies, namely face swapping and face reenactment, together with their current open-source implementations, examined in terms of network structure, versatility, and the authenticity of the generated images. In particular, face swapping and face reenactment both decompose a face into an appearance space and an attribute space, design different network structures and loss functions to transfer the targeted features, and integrate a generative adversarial network to improve the realism of the generated results.

2) Fake-face detection technologies can be divided, by media carrier, into fake face image detection and fake face video detection. The review first details how statistical distribution differences, splicing residue, local defects, and other features are used to identify fake face images produced by straightforward GANs and face editing technologies. Next, according to the type of forgery feature extracted, fake face video detection is classified into techniques based on inter-frame information, intra-frame information, and physiological signals; the feature extraction methods, network structure designs, and usage scenarios of each are described in detail. Current fake image detection mainly uses convolutional neural networks (CNNs) to extract forgery features and can localize and detect forged regions simultaneously, while fake video detection mainly combines CNNs with recurrent neural networks to extract consistent features within and across frames. The public datasets for fake-face detection are then surveyed, and comparative results of multiple detection methods on these datasets are reported.

3) The summary and outlook analyze the weaknesses of current face forgery and forged-face detection technologies and suggest feasible directions for improvement. Current face video forgery mainly modifies the face region locally and exhibits the following defects: forgery traces remain in single video frames, such as blurred profile views and missing texture details in facial parts; the correlation between video frames is not considered, leading to inconsistencies across generated frames, such as frame jumps and large differences in key-point positions between adjacent frames; and the generated face videos lack normal biological signals, such as blinking and micro-expressions. Current forgery-detection technologies generalize poorly to real-world scenes and are not robust to image and video compression: detectors trained on high-resolution datasets are unsuitable for low-resolution images and videos, and detection methods struggle to keep pace with the continuous upgrading and evolution of forgery techniques. Several improvements are suggested. For video generation, adding facial location information to the network could improve the coherence of the generated video. For forgery detection, spatial- and frequency-domain forgery features can be fused during feature extraction, and 3D convolution and metric learning can be used to form separable feature distributions for forged and genuine faces. Face forgery is developing toward few-shot learning, strong versatility, and high fidelity, while forged-face detection aims at high versatility, strong compression resistance, few-shot learning, and efficient computation.
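As a minimal illustration of the frequency-domain clues mentioned above (an illustrative sketch only, not a method from this review: the function name, cutoff value, and thresholding choice are our own assumptions), GAN upsampling tends to leave abnormal high-frequency spectra, so the fraction of spectral energy outside a low-frequency band can serve as a simple hand-crafted forgery feature:

```python
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a central low-frequency square.

    gray: 2-D grayscale image patch.
    cutoff: half-width of the low-frequency region as a fraction of each side.
    Higher values of the returned ratio indicate more high-frequency energy,
    which some spectra of upsampled (GAN-generated) images exhibit abnormally.
    """
    # Shift the 2-D spectrum so the DC component sits at the center.
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    energy = np.abs(spectrum) ** 2
    h, w = energy.shape
    ch, cw = h // 2, w // 2
    rh, rw = max(1, int(h * cutoff)), max(1, int(w * cutoff))
    # Energy inside the central (low-frequency) window vs. total energy.
    low = energy[ch - rh:ch + rh, cw - rw:cw + rw].sum()
    total = energy.sum()
    return float((total - low) / total)
```

In a real detector such a scalar would be one of many features fused with spatial-domain CNN features; a constant patch yields a ratio near 0 (all energy at DC), while white noise spreads energy across the spectrum.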
关键词: 人脸伪造; 伪造人脸检测; 生成对抗网络(GAN); 人脸交换; 人脸重现
Keywords: face forgery; face forgery detection; generative adversarial network (GAN); face swap; face reenactment
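The inter-frame inconsistency defect discussed in the outlook (frame jumps, large key-point displacement between adjacent frames) can be sketched as a simple temporal check. This is an illustrative example under our own assumptions (the function names and the threshold are hypothetical, not from any cited detector): given facial key points tracked over a video, unusually large displacements between consecutive frames flag candidate forged transitions.

```python
import numpy as np

def landmark_jump_score(landmarks: np.ndarray) -> np.ndarray:
    """Mean per-transition displacement of facial key points.

    landmarks: array of shape (T, K, 2) -- K (x, y) key points over T frames.
    Returns an array of length T-1 holding the mean Euclidean displacement
    of the key points between each pair of consecutive frames.
    """
    # Per-point displacement vectors between consecutive frames: (T-1, K, 2).
    steps = np.diff(landmarks, axis=0)
    # Euclidean length of each displacement, then average over key points.
    return np.linalg.norm(steps, axis=2).mean(axis=1)

def flag_jumps(landmarks: np.ndarray, thresh: float = 5.0) -> np.ndarray:
    """Boolean mask of frame transitions whose displacement exceeds thresh."""
    return landmark_jump_score(landmarks) > thresh
```

A practical detector would feed such temporal statistics (or learned inter-frame features from a CNN + RNN pipeline) into a classifier rather than a fixed threshold; the sketch only shows the underlying consistency signal.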