Cross-modal representation learning and generation

Liu Huafeng1, Chen Jingjing2, Li Liang3, Bao Bingkun4, Li Zechao1, Liu Jiaying5, Nie Liqiang6 (1. School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China; 2. School of Computer Science, Fudan University, Shanghai 200438, China; 3. Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; 4. College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 230001, China; 5. Wangxuan Institute of Computer Technology, Peking University, Beijing 100871, China; 6. School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen 518055, China)

Abstract
With the explosive growth of multimedia data and its multi-source, heterogeneous character, cross-modal learning has attracted increasing attention from both academia and industry. Cross-modal representation and cross-modal generation are its two fundamental problems. Cross-modal representation studies feature learning and information integration across modalities: by exploiting the complementarity among modalities and removing inter-modal redundancy, it aims to obtain more effective feature representations. Cross-modal generation studies the knowledge transfer mechanism across modalities: building on the semantic consistency among modalities, it converts data from one modality into another and thereby improves transferability between modalities. This paper critically reviews the literature on cross-modal representation and generation from four aspects: 1) traditional cross-modal representation learning, 2) large-model-based multimodal representation learning, 3) image-text cross-modal conversion, and 4) cross-modal image generation. Traditional cross-modal representation falls into two categories: joint representation and coordinated representation. Joint representation maps the information of multiple single modalities into a shared representation space, whereas coordinated representation processes each modality separately and aligns the resulting representations under similarity constraints. The self-supervised learning ability of deep neural networks (DNNs), especially Transformer-based architectures, makes it possible to exploit large-scale unlabeled data: in the pre-training paradigm, large models are first trained on massive unlabeled data and then fine-tuned with a small amount of labeled data from downstream tasks. Compared with models trained for a specific task, pre-trained models offer better generality and transferability, and fine-tuning further optimizes them for downstream tasks. The development of image and video captioning methods is summarized, covering end-to-end, semantics-based, and stylized approaches, and the current state of image-text cross-modal conversion is analyzed, including image captioning, video captioning, and visual question answering. Cross-modal generation methods are reviewed with respect to the joint representation of cross-modal information, cross-modal image generation techniques, text-to-image generation, and generation based on pre-trained models. In recent years, generative adversarial networks (GANs) and denoising diffusion probabilistic models (DDPMs) have substantially advanced cross-modal generation tasks; thanks to the strong adaptability and generative ability of DDPMs, texture-fidelity problems have been alleviated to a certain extent, and the progress of GAN-based and DDPM-based methods is further summarized and analyzed. For each of these subfields, the paper details the research challenges, compares progress at home and abroad, and traces the development trajectory and research frontiers. Finally, building on this analysis, it discusses future trends and potential breakthroughs in cross-modal representation and generation.
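
The coordinated-representation idea described above (separate encoders whose outputs are aligned under a similarity constraint) can be made concrete with a short sketch. The following is a minimal PyTorch example assuming a CLIP-style symmetric contrastive loss; the linear projections, feature dimensions, and temperature initialization are illustrative placeholders, not the design of any specific method surveyed here.

import torch
import torch.nn.functional as F


class CoordinatedEncoders(torch.nn.Module):
    """Two modality-specific projections into a shared embedding space."""

    def __init__(self, image_dim=2048, text_dim=768, embed_dim=256):
        super().__init__()
        self.image_proj = torch.nn.Linear(image_dim, embed_dim)
        self.text_proj = torch.nn.Linear(text_dim, embed_dim)
        # Learnable temperature, initialized to 0.07 as in CLIP-style training.
        self.log_temp = torch.nn.Parameter(torch.tensor(0.07).log())

    def forward(self, image_feats, text_feats):
        # L2-normalize so that dot products are cosine similarities.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        return img, txt, self.log_temp.exp()


def contrastive_loss(img, txt, temp):
    """Symmetric InfoNCE: matched image-text pairs attract, others repel."""
    logits = img @ txt.t() / temp          # (B, B) similarity matrix
    targets = torch.arange(img.size(0))    # i-th image matches i-th text
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


# Toy usage with random stand-ins for pre-extracted features.
model = CoordinatedEncoders()
img_feats, txt_feats = torch.randn(8, 2048), torch.randn(8, 768)
img, txt, temp = model(img_feats, txt_feats)
loss = contrastive_loss(img, txt, temp)
loss.backward()

In practice the linear layers would be replaced by full image and text encoders (for example a vision Transformer and a text Transformer), trained end to end under the same similarity objective.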
Keywords

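On the generation side, the abstract points to cross-modal generation with pre-trained diffusion models. Below is a minimal text-to-image usage sketch with the Hugging Face diffusers library; the checkpoint name "runwayml/stable-diffusion-v1-5", the prompt, and the sampling parameters are publicly available examples chosen for illustration, not models or settings evaluated in this survey.

import torch
from diffusers import StableDiffusionPipeline

# Load a pre-trained text-to-image diffusion pipeline.
# Assumes a CUDA-capable GPU; drop torch_dtype/.to("cuda") for CPU-only runs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The text prompt conditions the reverse (denoising) diffusion process;
# guidance_scale trades prompt fidelity against sample diversity.
image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("astronaut.png")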