An overview of visual DeepFake detection techniques
Vol. 27, Issue 1, Pages: 43-62 (2022)
Published: 16 January 2022
Accepted: 26 October 2021
DOI: 10.11834/jig.210410
Renying Wang, Beilin Chu, Zhen Yang, Linna Zhou. An overview of visual DeepFake detection techniques [J]. Journal of Image and Graphics, 27(1): 43-62 (2022)
With the development of generative deep learning algorithms, deep forgery (DeepFake) technology has advanced and been applied in many fields. The abuse of DeepFake technology has gradually made people aware of the threat it poses, and forgery detection techniques have emerged in response. This paper surveys research on visual deep forgery. 1) It briefly introduces the development history and technical principles of visual deep forgery, including the application of generative adversarial networks in forged media. 2) It summarizes and categorizes the existing visual deep forgery datasets. 3) It classifies current visual deep forgery detection techniques into four categories: detection based on specific artifacts, data-driven detection, detection based on inconsistent information, and other types of visual deep forgery detection. Artifact-based methods focus on pixel-level differences between forged media and real images, using machine learning to identify artificial artifact traces in DeepFake products; methods based on inconsistent information focus on information-level differences between forged media and real images or videos. Both have the advantages of high recognition efficiency and convenient training. Data-driven methods train neural networks directly on DeepFake products over large datasets and improve training efficiency by refining the network architecture; their flexible models and high accuracy have made them a popular direction in current DeepFake detection. This paper also analyzes the specific advantages and disadvantages of the four categories and further identifies the key problems and difficulties for future research on visual DeepFake detection.
The word "DeepFake" is a portmanteau of "deep learning" and "fake", and the technology it names relies mainly on artificial neural networks. This review surveys DeepFake technology from the perspective of visual deep forgery and covers the following aspects. 1) The history and technical principles of visual deep forgery, including the application of generative adversarial networks (GANs) to forged media. Current visual deep forgery methods can be roughly divided into three types: new face synthesis, face modification, and face swapping. New face synthesis uses powerful GANs to generate entirely non-existent face images; the popular datasets for this technique were generated with ProGAN and StyleGAN, and every generated image carries the specific fingerprint of the GAN that produced it. Face modification applies facial edits to a target face: it can change a person's hair or skin color, modify the person's apparent gender, or add a pair of glasses. This method also uses GANs to generate images; the latest StarGAN model can divide the face into multiple regions and modify them simultaneously. Face swapping consists of two parts. The first replaces the target person's face in a video with another person's face, the approach used by the currently most popular visual deep forgery algorithms such as DeepFake and FaceSwap. The second is facial expression transfer, also called face reenactment, which replaces one person's facial expressions with another's, for example altering Obama's expressions and movements to fabricate a "speech". At present, Face2Face and NeuralTextures are popular reenactment-based forgery methods. Meanwhile, some mobile applications can also fabricate facial information; FaceApp, which is based on StarGAN, modifies various emotional expressions. 2) The current visual deep forgery datasets are summarized and classified. These datasets evolve continuously alongside improvements in forgery and detection techniques; this review collects the datasets that have recently received widespread attention and tabulates them to reveal their advantages and disadvantages. 3) The current visual deep forgery detection techniques are categorized. Existing detection methods and models are grouped into four classes: detection based on specific artifacts, data-driven detection, detection based on inconsistent information, and other types of visual deep forgery detection, each further divided into subcategories. Artifact-based detection comprises five subcategories: fake-face blending boundaries, artifacts in the central region of the fake face, color inconsistency of forged media, light-source inconsistency, and GAN fingerprints. Data-driven detection is subdivided into methods that attempt to locate the tampered region and methods based on improved neural network architectures. Detection based on inconsistent information is divided into three parts: inconsistent biological signals, inconsistent time series, and behavior inconsistent with the real target. Among the four classes, artifact-based methods focus on pixel-level differences between forged media and real images and videos, looking for discoverable artifacts left by GANs. Methods based on information inconsistency instead focus on information-level differences between forged and real media. Both enjoy high recognition efficiency and convenient training. Data-driven methods use large DeepFake and real datasets with machine-learning training so that the neural network itself identifies forged media directly, and they improve training efficiency by refining the network architecture; their flexible models and high accuracy have made them a popular direction in current DeepFake detection. This review analyzes the specific advantages and disadvantages of the four classes of visual deep forgery detection. Its contributions are as follows: 1) it gives readers an understanding of DeepFake generation technology and emerging DeepFake detection methods; 2) it informs readers of recent developments, trends, and challenges in DeepFake research; and 3) it identifies the attacker-defender dynamics in the future development of DeepFakes and strives to give the advantage to DeepFake detection.
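Several of the artifact-based methods surveyed above exploit the periodic, high-frequency traces that GAN up-sampling layers leave in the Fourier spectrum of generated images. The following is a minimal illustrative sketch of that idea only; the function names, the 64-bin resolution, and the energy-ratio cue are illustrative choices for this sketch, not any specific published detector:

```python
import numpy as np

def radial_power_spectrum(image: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Azimuthally averaged 2-D power spectrum of a grayscale image.

    GAN up-sampling tends to leave periodic high-frequency artifacts,
    so the tail of this 1-D spectrum often behaves differently for
    generated images than for camera images.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = power.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices(power.shape)
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    # Assign each pixel to a radial-frequency bin and average the power.
    bins = np.minimum((r / (r.max() + 1e-9) * n_bins).astype(int), n_bins - 1)
    spectrum = np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return spectrum / np.maximum(counts, 1)

def high_freq_energy_ratio(image: np.ndarray) -> float:
    """Fraction of spectral energy in the upper half of the radial spectrum."""
    s = radial_power_spectrum(image)
    return float(s[len(s) // 2:].sum() / (s.sum() + 1e-12))
```

In practice such spectral features are fed to a simple classifier rather than thresholded directly, since a fixed cutoff on the energy ratio would not generalize across image resolutions and compression levels.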
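The data-driven class described above amounts to fitting a classifier to labeled real and fake examples. Below is a deliberately minimal sketch of that training loop, with synthetic Gaussian feature vectors standing in for image data and plain logistic regression standing in for the deep CNNs actually used; nothing here reproduces a specific published model:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logistic(x, y, lr=0.1, epochs=200):
    """Gradient-descent logistic regression: the smallest possible
    'data-driven' detector, a stand-in for the deep networks used in practice."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid prediction
        w -= lr * (x.T @ (p - y)) / len(y)      # cross-entropy gradient step
        b -= lr * np.mean(p - y)
    return w, b

# Toy stand-in data: "real" and "fake" feature vectors drawn from two
# shifted Gaussians (in practice these would be pixels or embeddings
# extracted from a DeepFake dataset).
real = rng.normal(0.0, 1.0, size=(200, 8))
fake = rng.normal(1.0, 1.0, size=(200, 8))
x = np.vstack([real, fake])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = train_logistic(x, y)
pred = (1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

Replacing the toy features with learned image representations and the logistic layer with a deep architecture yields the detectors surveyed in this class, whose accuracy the review attributes to flexible models and large training sets.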
Keywords: digital forensics; DeepFake; machine learning; deep learning; face manipulation