Data-free model compression for light-weight DeepFake detection
Vol. 28, Issue 3, Pages: 820-835 (2023)
Published: 16 March 2023
Accepted: 27 September 2022
DOI: 10.11834/jig.220559
Wenqi Zhuo, Dongze Li, Wei Wang, Jing Dong. Data-free model compression for light-weight DeepFake detection [J]. Journal of Image and Graphics, 28(3): 820-835 (2023)
Objective
Although existing DeepFake detection methods have shown excellent authenticity-discrimination performance on the major public datasets, deploying such models online remains a challenging task given the huge memory footprint and computational cost they incur at run time. This paper therefore explores data-free quantization as a way to develop light-weight DeepFake detectors.
Method
Under the premise of limited accuracy loss, a pre-trained high-accuracy DeepFake detection model is compressed: its weights and activations are no longer represented as 32-bit floating-point numbers but are all converted into low bit-width integers. Moreover, because facial data raises privacy concerns, all quantization operations in this paper are performed in a data-free scenario, i.e., synthetic data serve as the calibration set for determining the correct activation ranges. These data are iteratively optimized to match the statistics stored in the batch normalization layers of the pre-trained model, so their distribution closely resembles that of the original training data.
Result
On two classic face forgery datasets, FaceForensics++ and Celeb-DF v2, four pre-trained DeepFake detection models (ResNet50, Xception, EfficientNet-b3 and MobileNetV2) maintain or even surpass their original performance after being quantized and compressed by the proposed method. Even when the weights and activations are compressed to 6 bit, the lowest detection accuracy of the resulting light-weight models still reaches 81%.
Conclusion
By fully exploiting the valuable information contained in pre-trained DeepFake detection models, this paper proposes a light-weight face forgery detector based on data-free model compression. The detector identifies the authenticity of suspicious face samples accurately and efficiently while greatly reducing the resources and time required for detection.
Objective
Analyses of human facial images and videos based on deep generative models have developed rapidly in recent years. To cope with the resulting forgery problems, DeepFake detection (DFD) techniques have emerged, and multiple DFD methods now discriminate between real and fake faces with over 95% accuracy. However, deploying these detectors online is still a great challenge because of their memory and computational cost, so we bring model quantization to the DFD domain. Quantization-based model compression reduces model size by converting a model's key parameters from high-precision floating-point numbers into low-precision integers; the accompanying accuracy degradation, however, remains a challenge. Methods that address this degradation fall into two categories: 1) quantization-aware fine-tuning and 2) post-training quantization. For cost-effectiveness, the latter is chosen to develop a light-weight DFD detector. In addition, to address privacy concerns and information security, the models are quantized in a data-free scenario, i.e., without access to the original training set.
Method
The proposed framework consists of two steps: 1) quantization of the key parameters and 2) calibration of the activation ranges. First, the weights and activations of a well-trained, high-accuracy DFD model are chosen as the target parameters to be quantized. An asymmetric linear transformation converts them from 32-bit floating point into a lower bit-width representation such as INT8 or INT6.
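The asymmetric linear transformation mentioned here can be sketched as follows. This is a minimal pure-Python illustration of the standard scale/zero-point scheme, not the authors' actual implementation; the function names are our own.

```python
def quantize_asymmetric(x, bits=8):
    """Map a list of floats to unsigned integers of the given bit-width
    using an asymmetric linear transform: q = round(v / scale) + zero_point."""
    qmax = (1 << bits) - 1                      # e.g. 255 for 8 bit
    lo, hi = min(x), max(x)
    scale = (hi - lo) / qmax or 1.0             # guard against a constant tensor
    zero_point = round(-lo / scale)             # the integer that represents 0.0
    q = [max(0, min(qmax, round(v / scale) + zero_point)) for v in x]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the integer representation."""
    return [(v - zero_point) * scale for v in q]
```

Quantizing the range [-1.0, 2.0] to 8 bit gives a scale of 3/255, and dequantization recovers each value to within one quantization step.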
Next, the activation ranges are calibrated on a calibration set. In the data-free scenario, it is impossible to collect data from the original training set. Therefore, to produce effective calibration data, the batch normalization (BN) layers of the pre-trained DFD model are used to guide a generator: their stored statistics, namely the running means and variances, reflect the distribution of the training data. Input data sampled randomly from a standard Gaussian distribution are iteratively optimized under an L2-norm constraint until their statistics match those stored in the BN layers.
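The idea of synthesizing calibration data from stored BN statistics can be illustrated with a toy one-layer analogue: starting from standard-Gaussian noise, hand-derived gradient descent drives the batch's empirical mean and variance toward the target running statistics under a squared-error objective. The real method backpropagates such losses through every BN layer of the full network; the function below, with its names and hyper-parameters, is a simplified sketch of our own.

```python
import random

def match_bn_stats(target_mean, target_var, n=64, steps=500, lr=2.0, seed=0):
    """Optimize n samples, initialized from N(0, 1), so that their empirical
    mean and variance match the target BN running statistics.
    Loss: (mean - target_mean)^2 + (var - target_var)^2."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x) / n
        dm = 2.0 * (mean - target_mean)          # d loss / d mean
        dv = 2.0 * (var - target_var)            # d loss / d var
        # d mean / d x_i = 1/n ; d var / d x_i = 2 (x_i - mean) / n
        x = [v - lr * (dm / n + dv * 2.0 * (v - mean) / n) for v in x]
    return x
```

After optimization the synthetic batch reproduces, for example, a target running mean of 0.5 and running variance of 2.0 to within a small tolerance.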
Furthermore, to reduce the accuracy loss, ReLU6 is adopted as the activation function for all DFD models. ReLU6 restricts activations to the interval [0, 6], a natural range that benefits quantization. Finally, the synthesized calibration data are fed into the quantized model, and the activation ranges are calibrated during the forward inference pass.
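A minimal sketch of this calibration pass: an observer records the running min/max of a layer's activations while the synthetic data flow forward, and the recorded range then fixes that layer's activation quantization parameters. The class below is a generic illustration with our own naming, not the authors' code.

```python
class RangeObserver:
    """Track the min/max of every activation batch seen during calibration,
    then derive an asymmetric scale/zero-point for the chosen bit-width."""
    def __init__(self, bits=8):
        self.bits = bits
        self.lo = float("inf")
        self.hi = float("-inf")

    def observe(self, batch):
        # Called once per forward pass with the layer's activation values.
        self.lo = min(self.lo, min(batch))
        self.hi = max(self.hi, max(batch))

    def qparams(self):
        qmax = (1 << self.bits) - 1
        scale = (self.hi - self.lo) / qmax or 1.0
        zero_point = round(-self.lo / scale)
        return scale, zero_point
```

With ReLU6 activations the observed range can never exceed [0, 6], so the derived scale is at most 6/255 for 8-bit activations.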
Result
The proposed scheme is tested on four popular DFD models, ResNet50, Xception, EfficientNet-b3 and MobileNetV2, using the popular DeepFake datasets FaceForensics++ and Celeb-DF v2. On FaceForensics++, Xception and MobileNetV2 achieve accuracy (Acc) scores of 93.98% and 92.25% under W8A8 quantization, exceeding their full-precision baselines by 0.01% and 0.92%, respectively. ResNet50 reaches a detection accuracy of 92.56% under W6A8, while the performance of EfficientNet-b3 still needs further calibration. On Celeb-DF v2, the accuracy of MobileNetV2 under W8A8, W8A6 and W6A6 improves by 0.07%, 0.77% and 0.09%, respectively, over its full-precision baseline. For three of the four DFD models, the quantized versions retain a detection accuracy above 92% even under W6A6 quantization. Compared with DefakeHop, a related work that also builds a light-weight DFD network, the quantized DFD models obtain higher AUC (area under the ROC curve) scores on the public datasets, although their parameter counts remain unchanged and larger than DefakeHop's; in fact, the proposed scheme could also be applied to compress DefakeHop further. To evaluate the approach, a series of ablation experiments analyzes the impact of the bit-width settings of weights and activations, the type of calibration data, and the activation function.
Conclusion
Model compression methods are introduced into DFD tasks, and a data-free post-training quantization scheme is developed that converts a pre-trained DFD model into a light-weight one. Experiments are conducted on FaceForensics++ and Celeb-DF v2 with a range of typical DFD models, including ResNet50, Xception, EfficientNet-b3 and MobileNetV2. The quantized DFD models recognize fake faces accurately and efficiently. Future research may deploy DFD models online or on resource-constrained platforms such as mobile and edge devices.
DeepFake detection; fake face; model compression; low bit-width representation; data-free distillation; light-weight model
Afchar D, Nozick V, Yamagishi J and Echizen I. 2018. MesoNet: a compact facial video forgery detection network//Proceedings of 2018 IEEE International Workshop on Information Forensics and Security (WIFS). Hong Kong, China: IEEE: 1-7 [DOI: 10.1109/WIFS.2018.8630761]
Banner R, Nahshan Y, Hoffer E and Soudry D. 2019. Post-training 4-bit quantization of convolution networks for rapid-deployment [EB/OL]. [2022-05-29]. https://arxiv.org/pdf/1810.05723v3.pdf
Bhardwaj K, Suda N and Marculescu R. 2019. Dream distillation: a data-independent model compression framework [EB/OL]. [2022-05-17]. https://arxiv.org/pdf/1905.07072v1.pdf
Cai Y H, Yao Z W, Dong Z, Gholami A, Mahoney M W and Keutzer K. 2020. ZeroQ: a novel zero shot quantization framework//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE: 13166-13175 [DOI: 10.1109/CVPR42600.2020.01318]
Cao S H, Liu X H, Mao X Q and Zou Q. 2022. A review of human face forgery and forgery-detection technologies. Journal of Image and Graphics, 27(4): 1023-1038 [DOI: 10.11834/jig.200466]
Chen H S, Rouhsedaghat M, Ghani H, Hu S, You S and Kuo C C J. 2021. DefakeHop: a light-weight high-performance deepfake detector//Proceedings of 2021 IEEE International Conference on Multimedia and Expo (ICME). Shenzhen, China: IEEE: 1-6 [DOI: 10.1109/ICME51207.2021.9428361]
Cheng Y, Wang D, Zhou P and Zhang T. 2018. Model compression and acceleration for deep neural networks: the principles, progress, and challenges. IEEE Signal Processing Magazine, 35(1): 126-136 [DOI: 10.1109/MSP.2017.2765695]
Choi J, Wang Z, Venkataramani S, Chuang P I J, Srinivasan V and Gopalakrishnan K. 2018. PACT: parameterized clipping activation for quantized neural networks [EB/OL]. [2022-07-17]. https://arxiv.org/pdf/1805.06085v2.pdf
Choi Y, Choi J, El-Khamy M and Lee J. 2020. Data-free network quantization with adversarial knowledge distillation//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Seattle, USA: IEEE: 3047-3057 [DOI: 10.1109/CVPRW50498.2020.00363]
Chollet F. 2017. Xception: deep learning with depthwise separable convolutions//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE: 1800-1807 [DOI: 10.1109/CVPR.2017.195]
Choukroun Y, Kravchik E, Yang F and Kisilev P. 2019. Low-bit quantization of neural networks for efficient inference//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). Seoul, Korea (South): IEEE: 3009-3018 [DOI: 10.1109/ICCVW.2019.00363]
Ciftci U A, Demir I and Yin L J. 2020. FakeCatcher: detection of synthetic portrait videos using biological signals [EB/OL]. [2020-07-17]. https://ieeexplore.ieee.org/document/9141516
Courbariaux M, Bengio Y and David J P. 2015. BinaryConnect: training deep neural networks with binary weights during propagations//Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press: 3123-3131
Ding X Y, Raziei Z, Larson E C, Olinick E V, Krueger P and Hahsler M. 2020. Swapped face detection using deep learning and subjective assessment. EURASIP Journal on Information Security, 2020: #6 [DOI: 10.1186/s13635-020-00109-8]
Dong Z, Yao Z W, Gholami A, Mahoney M and Keutzer K. 2019. HAWQ: hessian aware quantization of neural networks with mixed-precision//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South): IEEE: 293-302 [DOI: 10.1109/ICCV.2019.00038]
Gholami A, Kwon K, Wu B C, Tai Z Z, Yue X Y, Jin P, Zhao S C and Keutzer K. 2018. SqueezeNext: hardware-aware neural network design//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City, USA: IEEE: 1719-171909 [DOI: 10.1109/CVPRW.2018.00215]
Han K, Wang Y H, Tian Q, Guo J Y, Xu C J and Xu C. 2020. GhostNet: more features from cheap operations//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE: 1577-1586 [DOI: 10.1109/CVPR42600.2020.00165]
Han S, Pool J, Tran J and Dally W J. 2015. Learning both weights and connections for efficient neural networks//Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press: 1135-1143
Haroush M, Hubara I, Hoffer E and Soudry D. 2020. The knowledge within: methods for data-free model compression//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE: 8491-8499 [DOI: 10.1109/CVPR42600.2020.00852]
He K M, Zhang X Y, Ren S Q and Sun J. 2016. Deep residual learning for image recognition//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE: 770-778 [DOI: 10.1109/CVPR.2016.90]
Hinton G, Vinyals O and Dean J. 2015. Distilling the knowledge in a neural network [EB/OL]. [2022-03-09]. https://arxiv.org/pdf/1503.02531v1.pdf
Horowitz M. 2014. 1.1 computing's energy problem (and what we can do about it)//Proceedings of 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC). San Francisco, USA: IEEE: 10-14 [DOI: 10.1109/ISSCC.2014.6757323]
Howard A, Sandler M, Chen B, Wang W J, Chen L C, Tan M X, Chu G, Vasudevan V, Zhu Y K, Pang R M, Adam H and Le Q. 2019. Searching for MobileNetV3//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South): IEEE: 1314-1324 [DOI: 10.1109/ICCV.2019.00140]
Iandola F N, Han S, Moskewicz M W, Ashraf K, Dally W J and Keutzer K. 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size [EB/OL]. [2022-06-10]. https://arxiv.org/pdf/1602.07360v4.pdf
Jacob B, Kligys S, Chen B, Zhu M L, Tang M, Howard A, Adam H and Kalenichenko D. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 2704-2713 [DOI: 10.1109/CVPR.2018.00286]
Lee J H, Ha S, Choi S, Lee W J and Lee S. 2018. Quantization for rapid deployment of deep neural networks [EB/OL]. [2022-06-10]. https://arxiv.org/pdf/1810.05488v1.pdf
Li F F, Zhang B and Liu B. 2022. Ternary weight networks [EB/OL]. [2022-11-20]. https://arxiv.org/pdf/1605.04711v3.pdf
Li H, Kadav A, Durdanovic I, Samet H and Graf H P. 2017. Pruning filters for efficient convnets [EB/OL]. [2022-03-10]. https://arxiv.org/pdf/1608.08710v3.pdf
Li L Z, Bao J M, Zhang T, Yang H, Chen D, Wen F and Guo B N. 2020a. Face X-ray for more general face forgery detection//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE: 5000-5009 [DOI: 10.1109/CVPR42600.2020.00505]
Li X L, Yu N H, Zhang X P, Zhang W M, Li B, Lu W, Wang W and Liu X L. 2021. Overview of digital media forensics technology. Journal of Image and Graphics, 26(6): 1216-1226 [DOI: 10.11834/jig.210081]
Li Y Z, Chang M C and Lyu S W. 2018. In Ictu Oculi: exposing AI created fake videos by detecting eye blinking//Proceedings of 2018 IEEE International Workshop on Information Forensics and Security (WIFS). Hong Kong, China: IEEE: 1-7 [DOI: 10.1109/WIFS.2018.8630787]
Li Y Z, Yang X, Sun P, Qi H G and Lyu S. 2020b. Celeb-DF: a large-scale challenging dataset for DeepFake forensics//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE: 3204-3213 [DOI: 10.1109/CVPR42600.2020.00327]
Lopes R G, Fenu S and Starner T. 2017. Data-free knowledge distillation for deep neural networks [EB/OL]. [2022-06-10]. https://arxiv.org/pdf/1710.07535.pdf
Mao H Z, Han S, Pool J, Li W S, Liu X Y, Wang Y and Dally W J. 2017. Exploring the regularity of sparse structure in convolutional neural networks [EB/OL]. [2022-06-05]. https://arxiv.org/pdf/1705.08922v3.pdf
Masi I, Killekar A, Mascarenhas R M, Gurudatt S P and AbdAlmageed W. 2020. Two-branch recurrent network for isolating deepfakes in videos//Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer: 667-684 [DOI: 10.1007/978-3-030-58571-6_39]
Matern F, Riess C and Stamminger M. 2019. Exploiting visual artifacts to expose deepfakes and face manipulations//Proceedings of 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW). Waikoloa, USA: IEEE: 83-92 [DOI: 10.1109/WACVW.2019.00020]
Mordvintsev A, Olah C and Tyka M D. 2015. Inceptionism: going deeper into neural networks [EB/OL]. [2022-06-10]. http://googleresearch.blogspot.it/2015/06/inceptionism-going-deeper-into-neural.html
Nagel M, van Baalen M, Blankevoort T and Welling M. 2019. Data-free quantization through weight equalization and bias correction//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South): IEEE: 1325-1334 [DOI: 10.1109/ICCV.2019.00141]
Nguyen H H, Yamagishi J and Echizen I. 2019. Capsule-forensics: using capsule networks to detect forged images and videos//Proceedings of 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Brighton, UK: IEEE: 2307-2311 [DOI: 10.1109/ICASSP.2019.8682602]
Qian Y Y, Yin G J, Sheng L, Chen Z X and Shao J. 2020. Thinking in frequency: face forgery detection by mining frequency-aware clues//Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer: 86-103 [DOI: 10.1007/978-3-030-58610-2_6]
Rahmouni N, Nozick V, Yamagishi J and Echizen I. 2017. Distinguishing computer graphics from natural images using convolution neural networks//Proceedings of 2017 IEEE Workshop on Information Forensics and Security (WIFS). Rennes, France: IEEE: 1-6 [DOI: 10.1109/WIFS.2017.8267647]
Rastegari M, Ordonez V, Redmon J and Farhadi A. 2016. XNOR-Net: ImageNet classification using binary convolutional neural networks//Proceedings of the 14th European Conference on Computer Vision (ECCV). Amsterdam, the Netherlands: Springer: 525-542 [DOI: 10.1007/978-3-319-46493-0_32]
Romero A, Ballas N, Kahou S E, Chassang A, Gatta C and Bengio Y. 2015. FitNets: hints for thin deep nets [EB/OL]. [2022-06-10]. https://arxiv.org/pdf/1412.6550v4.pdf
Rössler A, Cozzolino D, Verdoliva L, Riess C, Thies J and Nießner M. 2019. FaceForensics++: learning to detect manipulated facial images//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South): IEEE: 1-11 [DOI: 10.1109/ICCV.2019.00009]
Sandler M, Howard A, Zhu M L, Zhmoginov A and Chen L C. 2018. MobileNetV2: inverted residuals and linear bottlenecks//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE: 4510-4520 [DOI: 10.1109/CVPR.2018.00474]
Simonyan K and Zisserman A. 2015. Very deep convolutional networks for large-scale image recognition [EB/OL]. [2022-06-10]. https://arxiv.org/pdf/1409.1556v6.pdf
Sze V, Chen Y H, Yang T J and Emer J S. 2017. Efficient processing of deep neural networks: a tutorial and survey. Proceedings of the IEEE, 105(12): 2295-2329 [DOI: 10.1109/JPROC.2017.2761740]
Tan M X and Le Q. 2020. EfficientNet: rethinking model scaling for convolutional neural networks [EB/OL]. [2022-09-11]. https://arxiv.org/pdf/1905.11946v5.pdf
Wang R Y, Chu B L, Yang Z and Zhou L N. 2022. An overview of visual DeepFake detection techniques. Journal of Image and Graphics, 27(1): 43-62 [DOI: 10.11834/jig.210410]
Wu J X, Leng C, Wang Y H, Hu Q H and Cheng J. 2016. Quantized convolutional neural networks for mobile devices//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE: 4820-4828 [DOI: 10.1109/CVPR.2016.521]
Yang X, Li Y Z and Lyu S. 2019. Exposing deep fakes using inconsistent head poses//Proceedings of 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Brighton, UK: IEEE: 8261-8265 [DOI: 10.1109/ICASSP.2019.8683164]
Zhang D Q, Yang J L, Ye D Q Z and Hua G. 2018a. LQ-Nets: learned quantization for highly accurate and compact deep neural networks//Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich, Germany: Springer: 373-390 [DOI: 10.1007/978-3-030-01237-3_23]
Zhang X Y, Zhou X Y, Lin M X and Sun J. 2018b. ShuffleNet: an extremely efficient convolutional neural network for mobile devices//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 6848-6856 [DOI: 10.1109/CVPR.2018.00716]
Zhao H Q, Wei T Y, Zhou W B, Zhang W M, Chen D D and Yu N H. 2021. Multi-attentional deepfake detection//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE: 2185-2194 [DOI: 10.1109/CVPR46437.2021.00222]
Zhao R, Hu Y W, Dotzel J, De Sa C and Zhang Z R. 2019. Improving neural network quantization using outlier channel splitting [EB/OL]. [2022-03-22]. https://arxiv.org/pdf/1901.09504v1.pdf
Zhou A J, Yao A B, Guo Y W, Xu L and Chen Y R. 2017a. Incremental network quantization: towards lossless CNNs with low-precision weights [EB/OL]. [2022-08-25]. https://arxiv.org/pdf/1702.03044v2.pdf
Zhou P, Han X T, Morariu V I and Davis L S. 2017b. Two-stream neural networks for tampered face detection//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Honolulu, USA: IEEE: 1831-1839 [DOI: 10.1109/CVPRW.2017.229]
Zhou S C, Wu Y X, Ni Z K, Zhou X Y, Wen H and Zou Y H. 2018. DoReFa-Net: training low bitwidth convolutional neural networks with low bitwidth gradients [EB/OL]. [2022-02-02]. https://arxiv.org/pdf/1606.06160v3.pdf
Zhu K M, Xu W B, Lu W and Zhao X F. 2022. Deepfake video detection with feature interaction amongst key frames. Journal of Image and Graphics, 27(1): 188-202 [DOI: 10.11834/jig.210408]