面向多模态自监督特征融合的音视频对抗对比学习
Audio-visual adversarial contrastive learning-based multi-modal self-supervised feature fusion
2023年28卷第1期 页码:317-332
纸质出版日期: 2023-01-16
录用日期: 2022-09-14
DOI: 10.11834/jig.220168
盛振涛, 陈雁翔, 齐国君. 面向多模态自监督特征融合的音视频对抗对比学习[J]. 中国图象图形学报, 2023,28(1):317-332.
Zhentao Sheng, Yanxiang Chen, Guojun Qi. Audio-visual adversarial contrastive learning-based multi-modal self-supervised feature fusion[J]. Journal of Image and Graphics, 2023,28(1):317-332.
目的
同一视频中的视觉与听觉是两个共生模态,二者相辅相成,同时发生,从而形成一种自监督模式。随着对比学习在视觉领域取得很好的效果,将对比学习这一自监督表示学习范式应用于音视频多模态领域引起了研究人员的极大兴趣。本文专注于构建一个高效的音视频负样本空间,提高对比学习的音视频特征融合能力。
方法
提出了面向多模态自监督特征融合的音视频对抗对比学习方法:1)创新性地引入了视觉、听觉对抗性负样本集合来构建音视频负样本空间;2)在模态间与模态内进行对抗对比学习,使得音视频负样本空间中的视觉和听觉对抗性负样本可以不断跟踪难以区分的视听觉样本,有效地促进了音视频自监督特征融合。在上述两点基础上,进一步简化了音视频对抗对比学习框架。
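下面给出模态间对抗对比学习目标的一个简化示意,根据本文摘要的描述并参照AdCo式对抗对比学习(Hu等,2021)的一般形式整理,并非论文的原始公式;其中温度系数τ与特征归一化方式均为假设:

```latex
% 以视觉特征 v_i 为锚点, 同一视频片段的听觉特征 a_i 为正样本,
% 听觉对抗性负样本集合 {n_1, ..., n_K}(K = 65536)构成负样本空间:
\mathcal{L}_{v\rightarrow a}
  = -\log\frac{\exp\!\left(v_i^{\top} a_i/\tau\right)}
              {\exp\!\left(v_i^{\top} a_i/\tau\right)
               + \sum_{k=1}^{K}\exp\!\left(v_i^{\top} n_k/\tau\right)}
% 编码器通过梯度下降最小化该损失, 对抗性负样本则通过梯度上升最大化该损失,
% 从而持续逼近难以区分的视听觉样本:
\min_{\theta}\;\max_{\{n_k\}}\;\mathcal{L}_{v\rightarrow a}
```

模态内对抗对比学习与之对称:正样本换为同一视频不同视角下的两个视觉特征,负样本空间换为视觉对抗性负样本集合。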
结果
本文方法在Kinetics-400数据集的子集上进行训练,得到音视频特征。这一音视频特征用于指导动作识别和音频分类任务,取得了很好的效果。具体来说,在动作识别数据集UCF-101和HMDB-51(human motion database)上,本文方法相较于Cross-AVID(cross-audio visual instance discrimination)模型,视频级别的TOP1准确率分别高出了0.35%和0.83%;在环境声音数据集ESC-50上,本文方法相较于Cross-AVID模型,音频级别的TOP1准确率高出了2.88%。
结论
音视频对抗对比学习方法创新性地引入了视觉和听觉对抗性负样本集合,该方法可以很好地融合视觉特征和听觉特征,得到包含视听觉信息的音视频特征,得到的特征可以提高动作识别、音频分类任务的准确率。
Objective
Vision and audition in the same video clip are two synchronized, mutually reinforcing symbiotic modalities, and their co-occurrence forms a natural self-supervised signal. Research on human perception shows that dynamic events are understood through vision and hearing together, so the features extracted from audio-visual clips contain richer information. In recent years, contrastive learning has advanced the visual domain dramatically by predicting mutual information between pairs of samples, and applying this self-supervised representation-learning paradigm to the audio-visual multi-modal domain has attracted great interest. A key issue is how to construct the audio-visual negative sample space from which contrastive learning draws its negative samples. To improve the audio-visual feature fusion capability of contrastive learning, our research focuses on building an efficient audio-visual negative sample space.
Method
We propose an audio-visual adversarial contrastive learning method for multi-modal self-supervised feature fusion. Visual and auditory adversarial negative sample sets are initialized from a standard normal distribution and together construct the audio-visual negative sample space; to keep this space sufficiently large, the number of visual and auditory adversarial negatives is set to 65,536. Cross-modal adversarial contrastive learning proceeds as follows. 1) The paired visual and auditory features extracted from the same video clip form the positive sample, while the auditory adversarial negatives construct the negative sample space; during training, the visual feature is pulled toward its corresponding auditory positive sample and pushed away from the auditory adversarial negatives. 2) The auditory adversarial negatives are updated during cross-modal adversarial learning so that they move closer to the visual features. With cross-modal adversarial contrastive learning alone, however, the model can in effect degenerate: the visual and auditory adversarial negative sets are initialized from a standard normal distribution and at first carry no visual or auditory information, so intra-modal adversarial contrastive learning is also required. For the intra-modal branch, a pair of visual features from different views of the same clip forms the positive sample, and the negative sample space is again built from the visual adversarial negatives. 3) The resulting visual and auditory features contain both intra-modality and cross-modality information and can be used to guide downstream tasks such as action recognition and audio classification. In summary, (1) visual and auditory adversarial negative samples are introduced to construct the audio-visual negative sample space; (2) intra-modal and cross-modal adversarial contrastive learning are combined so that the adversarial negatives consistently track the indistinguishable audio and visual samples, which effectively improves audio-visual self-supervised feature fusion. On the basis of (1) and (2), the audio-visual adversarial contrastive learning framework is further simplified.
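The following is a minimal, self-contained PyTorch sketch (not the authors' released code) of one cross-modal adversarial contrastive step as described above. The bank size of 65,536 and the standard-normal initialization come from the text; the embedding dimension, temperature, and step sizes are illustrative assumptions, and the AdCo-style gradient-ascent update for the negatives (Hu et al., 2021) is one plausible reading of how the negatives keep tracking the indistinguishable samples.

```python
# Hypothetical sketch of cross-modal adversarial contrastive learning;
# dimensions, temperature, and step sizes are illustrative assumptions.
import torch
import torch.nn.functional as F

dim, bank_size, tau = 128, 65536, 0.07              # dim and tau are assumed
# Auditory adversarial negative bank, initialized from a standard normal
# distribution as described in the abstract, then treated as learnable.
audio_negatives = torch.randn(bank_size, dim)
audio_negatives = F.normalize(audio_negatives, dim=1).requires_grad_(True)

def cross_modal_loss(v, a, negatives, tau):
    """InfoNCE loss: pull each visual anchor toward its paired audio feature
    and push it away from every auditory adversarial negative."""
    v, a = F.normalize(v, dim=1), F.normalize(a, dim=1)
    pos = (v * a).sum(dim=1, keepdim=True) / tau      # (B, 1) positive logits
    neg = v @ negatives.t() / tau                     # (B, K) negative logits
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(v.size(0), dtype=torch.long) # positive sits at index 0
    return F.cross_entropy(logits, labels)

# Stand-ins for encoder outputs; in practice v and a come from the video
# and audio encoders applied to the same clip.
v = torch.randn(8, dim)
a = torch.randn(8, dim)

loss = cross_modal_loss(v, a, audio_negatives, tau)
loss.backward()

# Encoder parameters would descend this loss via their optimizer; the
# negative bank instead ascends it, so the adversarial negatives keep
# moving toward the hardest, least distinguishable samples.
with torch.no_grad():
    audio_negatives += 3.0 * audio_negatives.grad     # gradient ascent (assumed lr)
    audio_negatives.copy_(F.normalize(audio_negatives, dim=1))
    audio_negatives.grad.zero_()
```

The intra-modal branch is analogous: two augmented views of the same video replace the (v, a) pair, and the visual adversarial negative bank replaces the auditory one.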
Result
A subset of the Kinetics-400 dataset is used for pre-training to obtain audio-visual features. 1) The audio-visual features are analyzed qualitatively. The visual feature is used to guide a supervised action-recognition network; after fine-tuning the supervised network, its final convolutional layer is visualized. Compared with the Cross-AVID (cross-audio visual instance discrimination) method, our visual features make the supervised network pay more attention to the different body parts of the target person, which is an effective source of information for recognizing actions. 2) The quality of the audio-visual adversarial negative samples is analyzed qualitatively by visualizing t-SNE (t-distributed stochastic neighbor embedding) plots of the audio-visual features and the adversarial negatives. The adversarial negatives of our method form a closed, roughly elliptical distribution, whereas the negative samples of the Cross-AVID method form small clusters with gaps. This demonstrates that the proposed adversarial negatives closely track the audio-visual features during the iterative process and build a more efficient audio-visual negative sample space. The audio-visual features are also evaluated quantitatively on action recognition and audio classification. Compared with the Cross-AVID model, the video-level top-1 accuracy is 0.35% and 0.83% higher on the UCF-101 and HMDB-51 (human motion database) action recognition datasets, respectively, and the audio-level top-1 accuracy is 2.88% higher on the ESC-50 environmental sound classification dataset.
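As a side note, a t-SNE visualization of the kind used in the qualitative analysis can be produced with scikit-learn; the arrays below are hypothetical stand-ins for the extracted audio-visual features and the adversarial negative bank, not data from the paper.

```python
# Hypothetical t-SNE visualization sketch; `features` and `negatives`
# stand in for the extracted audio-visual features and a subsample of
# the adversarial negative bank (subsampling keeps the embedding tractable).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

features = np.random.randn(500, 128)    # stand-in audio-visual features
negatives = np.random.randn(500, 128)   # stand-in adversarial negatives

points = np.concatenate([features, negatives], axis=0)
embedded = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(points)

plt.scatter(embedded[:500, 0], embedded[:500, 1], s=5, label="features")
plt.scatter(embedded[500:, 0], embedded[500:, 1], s=5, label="adversarial negatives")
plt.legend()
plt.savefig("tsne_negatives.png", dpi=150)
```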
Conclusion
The proposed audio-visual adversarial contrastive learning method introduces visual and auditory adversarial negative sample sets. Qualitative and quantitative experiments show that the method fuses visual and auditory features well and yields audio-visual features that carry both visual and auditory information. These features can be applied to improve the accuracy of action recognition and audio classification tasks.
关键词：自监督特征融合；对抗对比学习；音视频多模态；视听觉对抗性负样本；预训练
Keywords: self-supervised feature fusion; adversarial contrastive learning; audio-visual cross-modality; audio-visual adversarial negative sample; pre-training
Abdi H and Williams L J. 2010. Principal component analysis. WIREs Computational Statistics, 2(4): 433-459 [DOI: 10.1002/wics.101]
Alayrac J B, Recasens A, Schneider R, Arandjelović R, Ramapuram J, De Fauw J, Smaira L, Dieleman S and Zisserman A. 2020. Self-supervised multimodal versatile networks//Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc. : #3
Arandjelović R and Zisserman A. 2017. Look, listen and learn//Proceedings of the International Conference on Computer Vision (ICCV). Venice, Italy: IEEE: 609-617 [DOI: 10.1109/ICCV.2017.73]
Arandjelović R and Zisserman A. 2018. Objects that sound//Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich, Germany: Springer: 451-466 [DOI: 10.1007/978-3-030-01246-5_27]
Asano Y M, Patrick M, Rupprecht C and Vedaldi A. 2020. Labelling unlabelled videos from scratch with multi-modal self-supervision//Advances in Neural Information Processing Systems. Online: Curran Associates, Inc. : 4660-4671
Chen T, Kornblith S, Norouzi M and Hinton G E. 2020. A simple framework for contrastive learning of visual representations//Proceedings of the 37th International Conference on Machine Learning. [s. l.]: PMLR: 1597-1607
Chen Y B, Xian Y Q, Koepke A S, Shan Y and Akata Z. 2021. Distilling audio-visual knowledge by compositional contrastive learning//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 7012-7021 [DOI: 10.1109/CVPR46437.2021.00694]
Doersch C, Gupta A and Efros A A. 2015. Unsupervised visual representation learning by context prediction//Proceedings of the International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE: 1422-1430 [DOI: 10.1109/ICCV.2015.167]
Du H Y, Zhang J and Wang W J. 2020. A deep self-supervised clustering ensemble algorithm. CAAI Transactions on Intelligent Systems, 15(6): 1113-1120
杜航原, 张晶, 王文剑. 2020. 一种深度自监督聚类集成算法. 智能系统学报, 15(6): 1113-1120 [DOI: 10.11992/tis.202006050]
Fernando B, Bilen H, Gavves E and Gould S. 2017. Self-supervised video representation learning with odd-one-out networks//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE: 5729-5738 [DOI: 10.1109/CVPR.2017.607]
Hadsell R, Chopra S and LeCun Y. 2006. Dimensionality reduction by learning an invariant mapping//Proceedings of 2006 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). New York, USA: IEEE: 1735-1742 [DOI: 10.1109/CVPR.2006.100]
Han T D, Xie W D and Zisserman A. 2019. Video representation learning by dense predictive coding//Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops. Seoul, Korea (South): IEEE: 1483-1492 [DOI: 10.1109/ICCVW.2019.00186]
He K M, Fan H Q, Wu Y X, Xie S N and Girshick R. 2020. Momentum contrast for unsupervised visual representation learning//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE: 9726-9735 [DOI: 10.1109/CVPR42600.2020.00975]
Heffner R S and Heffner H E. 1992. Evolution of sound localization in mammals//Webster D B, Popper A N and Fay R R, eds. The Evolutionary Biology of Hearing. New York, USA: Springer: 691-715 [DOI: 10.1007/978-1-4612-2784-7_43]
Hénaff O J. 2020. Data-efficient image recognition with contrastive predictive coding//Proceedings of the International Conference on Machine Learning. [s. l.]: PMLR: 4182-4192
Hinton G E and Salakhutdinov R R. 2006. Reducing the dimensionality of data with neural networks. Science, 313(5786): 504-507 [DOI: 10.1126/science.1127647]
Hjelm R D, Fedorov A, Lavoie-Marchildon S, Grewal K, Bachman P, Trischler A and Bengio Y. 2019. Learning deep representations by mutual information estimation and maximization//Proceeding of the 7th International Conference on Learning Representations. New Orleans, USA: OpenReview. net
Ho C H and Vasconcelos N. 2020. Contrastive learning with adversarial examples//Proceedings of the 34th International Conference on Neural Information Processing Systems (NeurIPS). Vancouver, Canada: Curran Associates Inc. : #1433
Hu Q J, Wang X, Hu W and Qi G J. 2021. AdCo: adversarial contrast for efficient learning of unsupervised representations from self-trained negative adversaries//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE: 1074-1083 [DOI: 10.1109/CVPR46437.2021.00113]
Huang Y, Du C Z, Xue Z H, Chen X Y, Zhao H and Huang L B. 2021. What makes multi-modal learning better than single (Provably)//Proceedings of the 35th Conference on Neural Information Processing Systems. [s. l.]: [s. n.]: 10944-10956
Ji X, Vedaldi A and Henriques J F. 2019. Invariant information clustering for unsupervised image classification and segmentation//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 9864-9873 [DOI: 10.1109/ICCV.2019.00996]
Kalantidis Y, Sariyildiz M B, Pion N, Weinzaepfel P and Larlus D. 2020. Hard negative mixing for contrastive learning//Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc. : #1829
Kay W, Carreira J, Simonyan K, Zhang B, Hillier C, Vijayanarasimhan S, Viola F, Green T, Back T, Natsev P, Suleyman M and Zisserman A. 2017. The kinetics human action video dataset [EB/OL]. [2021-02-28]. https://arxiv.org/pdf/1705.06950.pdf
Kingma D P and Welling M. 2014. Auto-encoding variational bayes//Proceedings of the 2nd International Conference on Learning Representations. Banff, Canada: OpenReview. net
Korbar B, Tran D and Torresani L. 2018. Cooperative learning of audio and video models from self-supervised synchronization//Proceedings of the 32nd International Conference on Neural Information Processing Systems (NeurIPS). Montréal, Canada: Curran Associates Inc. : 7774-7785
Kuehne H, Jhuang H, Garrote E, Poggio T and Serre T. 2011. HMDB: a large video database for human motion recognition//Proceedings of the International Conference on Computer Vision. Barcelona, Spain: IEEE: 2556-2563 [DOI: 10.1109/ICCV.2011.6126543]
Lamba J, Abhishek, Akula J, Dabral R, Jyothi P and Ramakrishnan G. 2021. Cross-modal learning for audio-visual video parsing//Proceedings of the 22nd Annual Conference of the International Speech Communication Association. Brno, Czechia: ISCA: 1937-1941
Lee H, Battle A, Raina R and Ng A Y. 2006. Efficient sparse coding algorithms//Proceedings of the 19th International Conference on Neural Information Processing Systems. Vancouver, Canada: MIT Press: 801-808
Li J N, Zhou P, Xiong C M and Hoi S C H. 2021. Prototypical contrastive learning of unsupervised representations//Proceedings of the 9th International Conference on Learning Representations. [s. l.]: OpenReview. net
Ma S, Zeng Z Y, McDuff D and Song Y L. 2021. Active contrastive learning of audio-visual video representations//Proceedings of the 9th International Conference on Learning Representations. [s. l.]: OpenReview. net
Misra I and van der Maaten L. 2020. Self-supervised learning of pretext-invariant representations//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE: 6706-6716 [DOI: 10.1109/CVPR42600.2020.00674]
Mobahi H, Collobert R and Weston J. 2009. Deep learning from temporal coherence in video//Proceedings of the 26th Annual International Conference on Machine Learning. Montréal, Canada: ACM: 737-744 [DOI: 10.1145/1553374.1553469]
Morgado P, Li Y and Vasconcelos N. 2020. Learning representations from audio-visual spatial alignment//Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc. : #397
Morgado P, Misra I and Vasconcelos N. 2021a. Robust audio-visual instance discrimination//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 12929-12940 [DOI: 10.1109/CVPR46437.2021.01274]
Morgado P, Vasconcelos N and Misra I. 2021b. Audio-visual instance discrimination with cross-modal agreement//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE: 12470-12481 [DOI: 10.1109/CVPR46437.2021.01229]
Myklebust H R. 1960. The Psychology of Deafness: Sensory Deprivation, Learning, and Adjustment. New York, USA: Grune and Stratton
Olshausen B A. 2002. Sparse coding of time-varying natural images. Journal of Vision, 2(7): #130 [DOI: 10.1167/2.7.130]
Olshausen B A and Field D J. 1996. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583): 607-609 [DOI: 10.1038/381607a0]
Owens A and Efros A A. 2018. Audio-visual scene analysis with self-supervised multisensory features//Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich, Germany: Springer: 639-658 [DOI: 10.1007/978-3-030-01231-1_39]
Owens A, Wu J J, McDermott J H, Freeman W T and Torralba A. 2016. Ambient sound provides supervision for visual learning//Proceedings of the 14th European Conference on Computer Vision (ECCV). Amsterdam, the Netherlands: Springer: 801-816 [DOI: 10.1007/978-3-319-46448-0_48]
Patrick M, Asano Y M, Kuznetsova P, Fong R, Henriques J F, Zweig G and Vedaldi A. 2021. Multi-modal self-supervision from generalized data transformations//Proceedings of the International Conference on Learning Representations. [s. l.]: OpenReview. net
Piczak K J. 2015. ESC: dataset for environmental sound classification//Proceedings of the 23rd ACM international conference on Multimedia. Brisbane, Australia: ACM: 1015-1018 [DOI: 10.1145/2733373.2806390]
Piergiovanni A J, Angelova A and Ryoo M S. 2020. Evolving losses for unsupervised video representation learning//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE: 130-139 [DOI: 10.1109/CVPR42600.2020.00021]
Näätänen R. 1992. Attention and Brain Function. London, England: Routledge [DOI: 10.4324/9780429487354]
Sanguineti V, Morerio P, Pozzetti N, Greco D, Cristani M and Murino V. 2020. Leveraging acoustic images for effective self-supervised audio representation learning//Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer: 119-135 [DOI: 10.1007/978-3-030-58542-6_8]
Shams L and Kim R. 2010. Crossmodal influences on visual perception. Physics of Life Reviews, 7(3): 269-284 [DOI: 10.1016/j.plrev.2010.04.006]
Shukla A, Petridis S and Pantic M. 2020. Learning speech representations from raw audio by joint audiovisual self-supervision//Proceedings of the 37th International Conference on Machine Learning. Vienna, Austria: OpenReview. net
Soomro K, Zamir A R and Shah M. 2012. UCF101: a dataset of 101 human actions classes from videos in the wild [EB/OL]. [2021-02-28]. https://arxiv.org/pdf/1212.0402.pdf
Stone J V. 2004. Independent Component Analysis: A Tutorial Introduction. London, England: The MIT Press [DOI: 10.7551/mitpress/3717.001.0001]
Szegedy C, Liu W, Jia Y Q, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V and Rabinovich A. 2015. Going deeper with convolutions//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 1-9 [DOI: 10.1109/CVPR.2015.7298594]
Tian Y L, Krishnan D and Isola P. 2020. Contrastive multiview coding//Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer: 776-794 [DOI: 10.1007/978-3-030-58621-8_45]
Tran D, Wang H, Torresani L, Ray J, LeCun Y and Paluri M. 2018. A closer look at spatiotemporal convolutions for action recognition//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 6450-6459 [DOI: 10.1109/CVPR.2018.00675]
van den Oord A, Kalchbrenner N and Kavukcuoglu K. 2016. Pixel recurrent neural networks//Proceedings of the 33rd International Conference on Machine Learning. New York, USA: JMLR. org: 1747-1756
van den Oord A, Li Y Z and Vinyals O. 2018. Representation learning with contrastive predictive coding [EB/OL]. [2021-02-28]. https://arxiv.org/pdf/1807.03748v1.pdf
Wu Z R, Xiong Y J, Yu S X and Lin D H. 2018. Unsupervised feature learning via non-parametric instance discrimination//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 3733-3742 [DOI: 10.1109/CVPR.2018.00393]
Zbontar J, Jing L, Misra I, LeCun Y and Deny S. 2021. Barlow twins: self-supervised learning via redundancy reduction//Proceedings of the 38th International Conference on Machine Learning. [s. l.]: PMLR: 12310-12320
Zhuang C X, Zhai A and Yamins D. 2019. Local aggregation for unsupervised learning of visual embeddings//Proceedings of 2019 IEEE/CVF Conference on Computer Vision. Seoul, Korea (South): IEEE: 6001-6011 [DOI: 10.1109/ICCV.2019.00610]