An overview of research development of affective computing and understanding
2022, Vol. 27, No. 6, pp. 2008-2035
Print publication date: 2022-06-16
Accepted: 2022-04-13
DOI: 10.11834/jig.220085
Hongxun Yao, Weihong Deng, Honghai Liu, Xiaopeng Hong, Sujing Wang, Jufeng Yang, Sicheng Zhao. An overview of research development of affective computing and understanding[J]. Journal of Image and Graphics, 2022, 27(6): 2008-2035.
Humans are emotional creatures. Emotion plays a key role in a range of intelligent activities, including perception, decision-making, logical reasoning, and social interaction, and it is an important and indispensable component in the realization of human-computer interaction and machine intelligence. Recently, with the explosive growth of multimedia data and the rapid development of artificial intelligence, affective computing and understanding has attracted much research attention. It aims to establish a harmonious human-computer environment by giving computing machines the ability to recognize, understand, express, and adapt to human emotions, and to endow computers with higher and more comprehensive intelligence. Based on the input signals, such as speech, text, images, actions and gait, and physiological signals, affective computing and understanding can be divided into multiple research topics. In this paper, we comprehensively review the development of four important topics in affective computing and understanding: multi-modal emotion recognition, autism emotion recognition, affective image content analysis, and facial expression recognition.

For each topic, we first introduce the research background, problem definition, and research significance; specifically, we describe how the topic was proposed, what the corresponding tasks do, and why they are important in different applications. Second, we survey international and domestic research on emotion data annotation, feature extraction, learning algorithms, performance comparison and analysis of representative methods, and representative research teams. Emotion data annotation provides the basis for evaluating affective computing and understanding algorithms; we briefly summarize how categorical and dimensional emotion representation models from psychology are used to construct datasets, and compare these datasets. Feature extraction aims to obtain discriminative features to represent emotions; we cover both the hand-crafted features of the early years and the deep features of the deep learning era. Learning algorithms aim to learn a mapping between extracted features and emotions; we summarize and compare both traditional and deep models. For a better understanding of how existing methods work, we report the emotion recognition results of representative and influential methods on multiple datasets and give detailed analysis. To help beginners track the latest research, we briefly introduce well-known research teams along with their research focus and main contributions. After that, we systematically compare international and domestic research and analyze the advantages and disadvantages of domestic research, which we hope will motivate and guide future work by domestic researchers and engineers.

Finally, we discuss remaining challenges and promising research directions for each topic, such as 1) image content and context understanding, viewer contextual and prior knowledge modeling, group emotion clustering, viewer-image interaction, and efficient learning for affective image content analysis; and 2) data collection and annotation, real-time facial expression analysis, hybrid expression recognition, personalized emotion expression, and user privacy for facial expression recognition. Since emotion is an abstract, subjective, and complex high-level semantic concept, existing methods still have limitations, and many challenges remain unsolved. These promising research directions would help achieve the emotional intelligence needed for better human-computer interaction.
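The generic pipeline the abstract describes (extract discriminative features from an input signal, then learn a mapping from features to emotion categories) can be sketched minimally as below. This is an illustrative example, not a method from the surveyed literature: the summary-statistic features, synthetic signals, and randomly assigned labels are hypothetical stand-ins, and the SVM is just one of the traditional learning algorithms the survey mentions.

```python
# Illustrative sketch of the affective-computing pipeline:
# signal -> feature extraction -> learned mapping to emotion categories.
# All data here is synthetic; the feature set is a placeholder.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Hand-crafted features: simple summary statistics of one signal."""
    return np.array([signal.mean(), signal.std(), signal.min(), signal.max()])

# Categorical emotion representation: Ekman's six basic emotions.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Synthetic corpus: 200 one-dimensional "signals" with random placeholder labels.
signals = rng.normal(size=(200, 128))
labels = rng.choice(EMOTIONS, size=200)

# Feature extraction step.
X = np.stack([extract_features(s) for s in signals])

# Learning algorithm: a traditional model mapping features to emotion labels.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, labels)

predictions = model.predict(X[:5])
```

A deep-learning variant would replace both stages with a single network trained end to end, which is the shift from hand-crafted to deep features that the survey traces.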
Keywords: affective computing; emotion recognition; autism; image recognition; expression recognition
Ahsan U, de Choudhury M and Essa I. 2017. Towards using visual attributes to infer image sentiment of social events//Proceedings of 2017 International Joint Conference on Neural Networks. Anchorage, USA: IEEE: 1372-1379 [DOI: 10.1109/IJCNN.2017.7966013http://dx.doi.org/10.1109/IJCNN.2017.7966013]
Alameda-Pineda X, Ricci E, Yan Y and Sebe N. 2016. Recognizing emotions from abstract paintings using non-linear matrix completion//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 5240-5248 [DOI: 10.1109/CVPR.2016.566http://dx.doi.org/10.1109/CVPR.2016.566]
Balouchian P, Safaei M and Foroosh H. 2019. LUCFER: a large-scale context-sensitive image dataset for deep learning of visual emotions//Proceedings of 2019 IEEE Winter Conference on Applications of Computer Vision. Waikoloa, USA: IEEE: 1645-1654 [DOI: 10.1109/WACV.2019.00180http://dx.doi.org/10.1109/WACV.2019.00180]
Barsoum E, Zhang C,Ferrer C C and Zhang Z Y. 2016. Training deep networks for facial expression recognition with crowd-sourced label distribution//Proceedings of the 18th ACM International Conference on Multimodal Interaction. Tokyo, Japan: ACM: 279-283 [DOI: 10.1145/2993148.2993165http://dx.doi.org/10.1145/2993148.2993165]
Ben X Y, Ren Y, Zhang J P, Wang S J, Kpalma K, Meng W X and Liu Y J. 2021. Video-based facial micro-expression analysis: a survey of datasets, features and algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence: #3067464 [DOI: 10.1109/TPAMI.2021.3067464]
Benitez-Quiroz C F, Srinivasan R and Martinez A M. 2016. EmotioNet: an accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 5562-5570 [DOI: 10.1109/CVPR.2016.600http://dx.doi.org/10.1109/CVPR.2016.600]
Borth D, Ji R R, Chen T, Breuel T and Chang S F. 2013. Large-scale visual sentiment ontology and detectors using adjective noun pairs//Proceedings of the 21st ACM International Conference on Multimedia. Barcelona, Spain: ACM: 223-232 [DOI: 10.1145/2502081.2502282http://dx.doi.org/10.1145/2502081.2502282]
Burkhardt F, Paeschke A, Rolfes M, Sendlmeier W F and Weiss B. 2005. A database of German emotional speech//Proceedings of the 9th European Conference on Speech Communication and Technology. Lisbon, Portugal: ISCA: 1-4 [DOI: 10.21437/Interspeech.2005-446http://dx.doi.org/10.21437/Interspeech.2005-446]
Busso C, Bulut M, Lee C C, Kazemzadeh A, Mower E, Kim S, Chang J N, Lee S and Narayanan S S. 2008. IEMOCAP: interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4): 335-359 [DOI: 10.1007/s10579-008-9076-6]
Centers for Disease Control and Prevention. 2016. Key Findings from the ADDM Network: A Snapshot of Autism Spectrum Disorder. Community Report on Autism. Centers for Disease Control and Prevention
Chen F H, Ji R R, Su J S, Cao D L and Gao Y. 2018a. Predicting microblog sentiments via weakly supervised multimodal deep learning. IEEE Transactions on Multimedia, 20(4): 997-1007 [DOI: 10.1109/TMM.2017.2757769]
Chen J W, Konrad J and Ishwar P. 2018b. VGAN-based image representation learning for privacy-preserving facial expression recognition//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City, USA: IEEE: 1683-1692 [DOI: 10.1109/CVPRW.2018.00207http://dx.doi.org/10.1109/CVPRW.2018.00207]
Chen M, Zhang L and Allebach J P. 2015. Learning deep features for image emotion classification//Proceedings of 2015 IEEE International Conference on Image Processing. Quebec City, Canada: IEEE: 4491-4495 [DOI: 10.1109/ICIP.2015.7351656http://dx.doi.org/10.1109/ICIP.2015.7351656]
Chen T, Borth D, Darrell T and Chang S F. 2014. DeepSentiBank: visual sentiment concept classification with deep convolutional neural networks [EB/OL]. [2022-02-06].https://arxiv.org/pdf/1410.8586.pdfhttps://arxiv.org/pdf/1410.8586.pdf
Chen Y L and Joo J. 2021. Understanding and mitigating annotation bias in facial expression recognition//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 14960-14971 [DOI: 10.1109/ICCV48922.2021.01471http://dx.doi.org/10.1109/ICCV48922.2021.01471]
Conner C M, White S W, Scahill L and Mazefsky C A. 2020. The role of emotion regulation and core autism symptoms in the experience of anxiety in autism. Autism, 24(4): 931-940 [DOI: 10.1177/1362361320904217]
Dan-Glauser E S and Scherer K R. 2011. The Geneva affective picture database (GAPED): a new 730-picture database focusing on valence and normative significance. Behavior Research Methods, 43(2): 468-477 [DOI: 10.3758/s13428-011-0064-1]
Darwin C. 2015. The Expression of the Emotions in Man and Animals. Chicago: University of Chicago Press [DOI: 10.7208/9780226220802]
Davison A K, Lansley C, Costen N, Tan K and Yap M H. 2018. SAMM: a spontaneous micro-facial movement dataset. IEEE Transactions on Affective Computing, 9(1): 116-129 [DOI: 10.1109/TAFFC.2016.2573832]
Dhall A. 2019. EmotiW 2019: automatic emotion, engagement and cohesion prediction tasks//Proceedings of 2019 International Conference on Multimodal Interaction. Suzhou, China: ACM: 546-550 [DOI: 10.1145/3340555.3355710http://dx.doi.org/10.1145/3340555.3355710]
Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X H, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J and Houlsby N. 2020. An image is worth 16×16 words: transformers for image recognition at scale//Proceedings of the 9th International Conference on Learning Representations. [s.l.]: OpenReview. net
Du S C and Martinez A M. 2015. Compound facial expressions of emotion: from basic research to clinical applications. Dialogues in Clinical Neuroscience, 17(4): 443-455 [DOI: 10.31887/DCNS.2015.17.4/sdu]
Du S C, Tao Y and Martinez A M. 2014. Compound facial expressions of emotion. Proceedings of the National Academy of Sciences of the United States of America, 111(15): E1454-E1462 [DOI: 10.1073/pnas.1322355111]
Ekman P. 1965. Communication through nonverbal behavior: a source of information about an interpersonal relationship//Tomkins S S and Izard C E, eds. Affect, Cognition and Personality. New York: Springer: 390-442
Ekman P. 2003. Darwin, deception, and facial expression. Annals of the New York Academy of Sciences, 1000(1): 205-221 [DOI: 10.1196/annals.1280.010]
Ekman P and Friesen W V. 1969. Nonverbal leakage and clues to deception. Psychiatry, 32(1): 88-106 [DOI: 10.1080/00332747.1969.11023575]
Ekman P and Friesen W V. 1971. Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2): 124-129 [DOI: 10.1037/h0030377]
Ekman P and Friesen W V. 1978. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto: Consulting Psychologists Press
Ekman P, Friesen W V and Hagar J C. 2002. Facial Action Coding System—the Manual. Salt Lake City: Research Nexus Division of Network Information Research Corporation: 15-464
Ekman P, Friesen W V, O′Sullivan M, Chan A, Diacoyanni-Tarlatzis I, Heider K, Krause R, Lecompte W A, Pitcairn T, Ricci-Bitti P E, Scherer K, Tomita M and Tzavaras A. 1987. Universals and cultural differences in the judgments of facial expressions of emotion. Journal of Personality and Social Psychology, 53(4): 712-717 [DOI: 10.1037/0022-3514.53.4.712]
Emery A E H, Muntoni F and Quinlivan R. 2015. Duchenne Muscular Dystrophy. 4th ed. Oxford: Oxford University Press
Fan S J, Jiang M, Shen Z Q, Koenig B L, Kankanhalli M S and Zhao Q. 2017. The role of visual attention in sentiment prediction//Proceedings of the 25th ACM International Conference on Multimedia. Mountain View, USA: ACM: 217-225 [DOI: 10.1145/3123266.3123445http://dx.doi.org/10.1145/3123266.3123445]
Gan Y S, Liong S T, Yau W C, Huang Y C and Tan L K. 2019. Off-ApexNet on micro-expression recognition system. Signal Processing: Image Communication, 74: 129-139 [DOI: 10.1016/j.image.2019.02.005]
Goodfellow I J, Erhan D, Carrier P L, Courville A, Mirza M, Hamner B, Cukierski W, Tang Y C, Thaler D, Lee D H, Zhou Y B, Ramaiah C, Feng F X, Li R F, Wang X J, Athanasakis D, Shawe-Taylor J, Milakov M, Park J, Ionescu R, Popescu M, Grozea C, Bergstra J, Xie J J, Romaszko L, Xu B, Chuang Z and Bengio Y. 2013. Challenges in representation learning: a report on three machine learning contests//Proceedings of the 20th International Conference on Neural Information Processing. Daegu, Korea(South): Springer: 117-124 [DOI: 10.1007/978-3-642-42051-1_16http://dx.doi.org/10.1007/978-3-642-42051-1_16]
Guo W Y, Zhang Y, Cai X R, Meng L, Yang J F and Yuan X J. 2021a. LD-MAN: layout-driven multimodal attention network for online news sentiment recognition. IEEE Transactions on Multimedia, 23: 1785-1798 [DOI: 10.1109/TMM.2020.3003648]
Guo Y F, Li B, Ben XY, Ren Y, Zhang J P, Yan R and Li Y J. 2021b. A magnitude and angle combined optical flow feature for microexpression spotting. IEEE MultiMedia, 28(2): 29-39 [DOI: 10.1109/MMUL.2021.3058017]
Haggard E A and Isaacs K S. 1966. Micromomentary Facial Expressions as Indicators of Ego Mechanisms in Psychotherapy//Gottschalk L A and Auerbach A H, eds. Methods of Research in Psychotherapy. Boston: Springer: 154-165 [DOI: 10.1007/978-1-4684-6045-2_14http://dx.doi.org/10.1007/978-1-4684-6045-2_14]
Han Y H, Li B J, Lai Y K and Liu Y J. 2018. CFD: a collaborative feature difference method for spontaneous micro-expression spotting//Proceedings of the 25th IEEE International Conference on Image Processing. Athens, Greece: IEEE: 1942-1946 [DOI: 10.1109/ICIP.2018.8451065http://dx.doi.org/10.1109/ICIP.2018.8451065]
Happy S and Routray A. 2019. Fuzzy histogram of optical flow orientations for micro-expression recognition. IEEE Transactions on Affective Computing, 10(3): 394-406 [DOI: 10.1109/TAFFC.2017.2723386]
He Y H. 2021. Research on micro-expression spotting method based on optical flow features//Proceedings of the 29th ACM International Conference on Multimedia. [s.l.]: ACM: 4803-4807 [DOI: 10.1145/3474085.3479225http://dx.doi.org/10.1145/3474085.3479225]
Hong X P, Peng W, Harandi M, Zhou Z H, Pietikäinen M and Zhao G Y. 2019. Characterizing subtle facial movements via riemannian manifold. ACM Transactions on Multimedia Computing, Communications, and Applications, 15(S3): #94 [DOI: 10.1145/3342227]
Hong X P, Xu Y Y and Zhao G Y. 2016a. LBP-TOP: a tensor unfolding revisit//Proceedings of 2016 Asian Conference on Computer Vision. Taipei, China: Springer: 513-527 [DOI: 10.1007/978-3-319-54407-6_34http://dx.doi.org/10.1007/978-3-319-54407-6_34]
Hong X P, Zhao G Y, Zafeiriou S, Pantic M and Pietikäinen M. 2016b. Capturing correlations of local features for image representation. Neurocomputing, 184: 99-106 [DOI: 10.1016/j.neucom.2015.07.134]
Huang X H, Wang S J, Liu X, Zhao G Y, Feng X Y and Pietikäinen M. 2019. Discriminative spatiotemporal local binary pattern with revisited integral projection for spontaneous facial micro-expression recognition. IEEE Transactions on Affective Computing, 10(1): 32-47 [DOI: 10.1109/TAFFC.2017.2713359]
Huang X H, Wang S J, Zhao G Y and Piteikäinen M. 2015. Facial micro-expression recognition using spatiotemporal local binary pattern with integral projection//Proceedings of 2015 IEEE International Conference on Computer Vision Workshop. Santiago, Chile: IEEE: 1-9 [DOI: 10.1109/ICCVW.2015.10http://dx.doi.org/10.1109/ICCVW.2015.10]
Huang X H, Zhao G Y, Hong X P, Zheng W M and Pietikäinen M. 2016. Spontaneous facial micro-expression analysis using spatiotemporal completed local quantized patterns. Neurocomputing, 175: 564-578 [DOI: 10.1016/j.neucom.2015.10.096]
Husák P, Ǒech J and Matas J. 2017. Spotting facial micro-expressions "In the Wild"//Proceedings of the 22nd Computer Vision Winter Workshop. Retz, Austria: [s. n.]
Jacob G M and Stenger B. 2021. Facial action unit detection with transformers//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 7676-7685 [DOI: 10.1109/CVPR46437.2021.00759http://dx.doi.org/10.1109/CVPR46437.2021.00759]
Ji R R, Chen F H, Cao L J and Gao Y. 2019. Cross-modality microblog sentiment prediction via bi-layer multimodal hypergraph learning. IEEE Transactions on Multimedia, 21(4): 1062-1075 [DOI: 10.1109/TMM.2018.2867718]
Jou B, Chen T, Pappas N, Redi M, Topkara M and Chang S F. 2015. Visual affect around the world: a large-scale multilingual visual sentiment ontology//Proceedings of the 23rd ACM International Conference on Multimedia. Brisbane, Australia: ACM: 159-168 [DOI: 10.1145/2733373.2806246http://dx.doi.org/10.1145/2733373.2806246]
Kalantarian H, Jedoui K, Dunlap K, Schwartz J, Washington P, Husic A, Tariq Q, Ning M, Kline A and Wall D P. 2020. The performance of emotion classifiers for children with parent-reported autism: quantitative feasibility study. JMIR Mental Health, 7(4): #e13174 [DOI: 10.2196/13174]
Kalantarian H, Jedoui K, Washington P, Tariq Q, Dunlap K, Schwartz J and Wall D P. 2019. Labeling images with facial emotion and the potential for pediatric healthcare. Artificial Intelligence in Medicine, 98: 77-86 [DOI: 10.1016/j.artmed.2019.06.004]
Kara O, Churamani N and Gunes H. 2021. Towards fair affective robotics: continual learning for mitigating bias in facial expression and action unit recognition//Proceedings of the Workshop on Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI), 16th ACM/IEEE International Conference on Human-Robot Interaction (HRI) [EB/OL]. [2022-02-06].https://arxiv.org/pdf/2103.09233.pdfhttps://arxiv.org/pdf/2103.09233.pdf
Kim D H, Baddar W J and Ro Y M. 2016. Micro-expression recognition with expression-state constrained spatio-temporal feature representations//Proceedings of the 24th ACM International Conference on Multimedia. Amsterdam, the Netherlands: ACM: 382-386 [DOI: 10.1145/2964284.2967247http://dx.doi.org/10.1145/2964284.2967247]
Koelstra S, Muhl C, Soleymani M, Lee J S, Yazdani A, Ebrahimi T, Pun T, Nijholt A and Patras I. 2012. DEAP: a database for emotion analysis; using physiological signals. IEEE Transactions on Affective Computing, 3(1): 18-31 [DOI: 10.1109/T-AFFC.2011.15]
Kollias D and Zafeiriou S. 2019. Expression, affect, action unit recognition: aff-wild2, multi-task learning and arcface//Proceedings of the 30th British Machine Vision Conference. Cardiff, UK: BMVA Press
Kumar A J R and Bhanu B. 2021. Micro-expression classification based on landmark relations with graph attention convolutional network//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Nashville, USA: IEEE: 1511-1520 [DOI: 10.1109/CVPRW53098.2021.00167http://dx.doi.org/10.1109/CVPRW53098.2021.00167]
Lang P J, Bradley M M and Cuthbert B N. 1997. International Affective Picture System (IAPS): Technical Manual and Affective Ratings. NIMH Center for the Study of Emotion and Attention: 39-58
Lei L, Chen T, Li S G and Li J F. 2021. Micro-expression recognition based on facial graph representation learning and facial action unit fusion//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Nashville, USA: IEEE: 1571-1580 [DOI: 10.1109/CVPRW53098.2021.00173http://dx.doi.org/10.1109/CVPRW53098.2021.00173]
Lei L, Li J F, Chen T and Li S G. 2020a. A novel graph-TCN with a graph structured representation for micro-expression recognition//Proceedings of the 28th ACM International Conference on Multimedia. Seattle, USA: ACM: 2237-2245 [DOI: 10.1145/3394171.3413714http://dx.doi.org/10.1145/3394171.3413714]
Li J T, Soladie C and Seguier R. 2020a. Local temporal pattern and data augmentation for micro-expression spotting [JB/OL]. IEEE Transactions on Affective Computing, https://ieeexplore.ieee.org/document/9195783 [DOI: 10.1109/TAFFC.2020.3023821http://dx.doi.org/10.1109/TAFFC.2020.3023821]
Li J T, Wang S J, Yap M H, See J, Hong X P and Li X B. 2020b. MEGC2020 - the third facial micro-expression grand challenge//Proceedings of the 15th IEEE International Conference on Automatic Face and Gesture Recognition. Buenos Aires, Argentina: IEEE: 777-780 [DOI: 10.1109/FG47880.2020.00035http://dx.doi.org/10.1109/FG47880.2020.00035]
Li J T, Yap M H, Cheng W H, See J, Hong X P, Li X B and Wang S J. 2021d. FME'21: 1st workshop on facial micro-expression: advanced techniques for facial expressions generation and spotting//Proceedings of the 29th ACM International Conference on Multimedia. [s.l.]: ACM: 5700-5701 [DOI: 10.1145/3474085.3478579http://dx.doi.org/10.1145/3474085.3478579]
Li S and Deng W H. 2019a. Blended emotion in-the-wild: multi-label facial expression recognition using crowdsourced annotations and deep locality feature learning. International Journal of Computer Vision, 127(6): 884-906 [DOI: 10.1007/s11263-018-1131-1]
Li S and Deng W H. 2019b. Reliable crowdsourcing and deep locality-preserving learning for unconstrained facial expression recognition. IEEE Transactions on Image Processing, 28(1): 356-370 [DOI: 10.1109/TIP.2018.2868382]
Li S and Deng W H. 2020. A deeper look at facial expression dataset bias. IEEE Transactions on Affective Computing [DOI: 10.1109/TAFFC.2020.2973158]
Li S, Deng W H and Du J P. 2017. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 2584-2593 [DOI: 10.1109/CVPR.2017.277http://dx.doi.org/10.1109/CVPR.2017.277]
Li X B, Hong X P, Moilanen A, Huang X H, Pfister T, Zhao G Y and Pietikäinen M. 2015. Reading hidden emotions: spontaneous micro-expression spotting and recognition[EB/OL]. [2022-02-06].https://arxiv.org/pdf/1511.00423.pdfhttps://arxiv.org/pdf/1511.00423.pdf
Li X B, Hong X P, Moilanen A, Huang X H, Pfister T, Zhao G Y and Pietikäinen M. 2018. Towards reading hidden emotions: a comparative study of spontaneous micro-expression spotting and recognition methods. IEEE Transactions on Affective Computing, 9(4): 563-577 [DOI: 10.1109/TAFFC.2017.2667642]
Li X B, Pfister T, Huang X H, Zhao G Y and Pietikäinen M. 2013. A spontaneous micro-expression database: inducement, collection and baseline//Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Shanghai, China: IEEE: 1-6 [DOI: 10.1109/FG.2013.6553717http://dx.doi.org/10.1109/FG.2013.6553717]
Li Y, Zeng J B and Shan S G. 2022. Learning representations for facial actions from unlabeled videos. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1): 302-317 [DOI: 10.1109/TPAMI.2020.3011063]
Li Y T, Huang X H and Zhao G Y. 2021a. Joint local and global information learning with single apex frame detection for micro-expression recognition. IEEE Transactions on Image Processing, 30: 249-263 [DOI: 10.1109/TIP.2020.3035042]
Li Y T, Huang X H and Zhao G Y. 2021b. Micro-expression action unit detection with spatial and channel attention. Neurocomputing, 436: 221-231 [DOI: 10.1016/j.neucom.2021.01.032]
Li Y T, Wei J S, Liu Y, Kauttonen J and Zhao G Y. 2021c. Deep learning for micro-expression recognition: a survey [EB/OL]. [2020-12-21].https://arxiv.org/pdf/2107.02823.pdfhttps://arxiv.org/pdf/2107.02823.pdf
Liao M Y, Chen J Y, Wang G S and Peng S X. 2021. Intelligent identification of childrenwith autism spectrum disorder integrating multimodal data and its effectiveness. Chinese Science Bulletin, 66(20): 2618-2628
廖梦怡, 陈靓影, 王广帅, 彭世新. 2021. 融合多模态数据的自闭症谱系障碍儿童智能化识别及其有效性. 科学通报, 66(20): 2618-2628) [DOI: 10.1360/TB-2020-1635]
Lin C, Zhao S C, Meng L and Chua T S. 2020. Multi-source domain adaptation for visual sentiment classification//Proceedings of the 34th AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI: 2661-2668 [DOI: 10.1609/aaai.v34i03.5651http://dx.doi.org/10.1609/aaai.v34i03.5651]
Liong S T, Gan Y S, Zheng D N, Li S M, Xu H X, Zhang H Z, Lyu R K and Liu K H. 2020. Evaluation of the spatio-temporal features and GAN for micro-expression recognition system. Journal of Signal Processing Systems, 92(7): 705-725 [DOI: 10.1007/s11265-020-01523-4]
Liong S T, See J, Wong K, Le Ngo A C, Oh Y H and Phan R. 2015. Automatic apex frame spotting in micro-expression database//Proceedings of the 3rd IAPR Asian Conference on Pattern Recognition. Kuala Lumpur, Malaysia: IEEE: 665-669 [DOI: 10.1109/ACPR.2015.7486586http://dx.doi.org/10.1109/ACPR.2015.7486586]
Liu A A, Shi Y D, Jing P J, Liu J and Su Y T. 2018. Structured low-rank inverse-covariance estimation for visual sentiment distribution prediction. Signal Processing, 152: 206-216 [DOI: 10.1016/j.sigpro.2018.06.001]
Liu H Y, Xu M, Wang J Q, Rao T R and Burnett I. 2016a. Improving visual saliency computing with emotion intensity. IEEE Transactions on Neural Networks and Learning Systems, 27(6): 1201-1213 [DOI: 10.1109/tnnls.2016.2553579]
Liu J J, Wang Z Y, Xu K, Ji B, Zhang G Y, Wang Y, Deng J X, Xu Q, Xu X and Liu H H. 2020. Early screening of autism in toddlers via response-to-instructions protocol. IEEE Transactions on Cybernetics, 1-11 [DOI: 10.1109/TCYB.2020.3017866]
Liu Y J, Li B J and Lai Y K. 2021a. Sparse MDMO: learning a discriminative feature for micro-expression recognition. IEEE Transactions on Affective Computing, 12(1): 254-261 [DOI: 10.1109/TAFFC.2018.2854166]
Liu Y J, Zhang J K, Yan W J, Wang S J, Zhao GY and Fu X L. 2016b. A main directional mean optical flow feature for spontaneous micro-expression recognition. IEEE Transactions on Affective Computing, 7(4): 299-310 [DOI: 10.1109/taffc.2015.2485205]
Liu Z, Lin Y T, Cao Y, Hu H, Wei Y X, Zhang Z, Lin S and Guo B N. 2021b. Swin transformer: hierarchical vision transformer using shifted windows//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 9992-10002 [DOI: 10.1109/ICCV48922.2021.00986http://dx.doi.org/10.1109/ICCV48922.2021.00986]
Lu X, Suryanarayan P, Adams R B, Li J, Newman M G and Wang J Z. 2012. On shape and the computability of emotions//Proceedings of the 20th ACM International Conference on Multimedia. Nara, Japan: ACM: 229-238 [DOI: 10.1145/2393347.2393384http://dx.doi.org/10.1145/2393347.2393384]
Machajdik J and Hanbury A. 2010. Affective image classification using features inspired by psychology and art theory//Proceedings of the 18th ACM International Conference on Multimedia. Firenze, Italy: ACM: 83-92 [DOI: 10.1145/1873951.1873965http://dx.doi.org/10.1145/1873951.1873965]
Mai S J, Hu H F, Xu J and Xing S L. 2022. Multi-fusion residual memory network for multimodal human sentiment comprehension. IEEE Transactions on Affective Computing, 13(1): 320-334 [DOI: 10.1109/TAFFC.2020.3000510]
Martin J C, Niewiadomski R, Devillers L, Buisine S and Pelachaud C. 2006. Multimodal complex emotions: gesture expressivity and blended facial expressions. International Journal of Humanoid Robotics, 3(3): 269-291 [DOI: 10.1142/S0219843606000825]
Mehrabian A. 1996. Pleasure-arousal-dominance: a general framework for describing and measuring individual differences in temperament. Current Psychology, 14(4): 261-292 [DOI: 10.1007/BF02686918]
Mikels J A, Fredrickson B L, Larkin G R, Lindberg C M, Maglio S M and Reuter-Lorenz P A. 2005. Emotional category data on images from the international affective picture system. Behavior Research Methods, 37(4): 626-630 [DOI: 10.3758/BF03192732]
Minsky M L. 1986. The Society of Mind. New York: Simon and Schuster
Moilanen A, Zhao G Y and Pietikäinen M. 2014. Spotting rapid facial movements from videos using appearance-based feature difference analysis//Proceedings of the 22nd International Conference on Pattern Recognition. Stockholm, Sweden: IEEE: 1722-1727 [DOI: 10.1109/ICPR.2014.303http://dx.doi.org/10.1109/ICPR.2014.303]
Mollahosseini A, Hasani B and Mahoor M H. 2019. AffectNet: a database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing, 10(1): 18-31 [DOI: 10.1109/TAFFC.2017.2740923]
Morency L P, Mihalcea R and Doshi P. 2011. Towards multimodal sentiment analysis: harvesting opinions from the web//Proceedings of the 13th International Conference on Multimodal Interfaces. Alicante, Spain: ACM: 169-176 [DOI: 10.1145/2070481.2070509http://dx.doi.org/10.1145/2070481.2070509]
Nag A, Haber N, Voss C, Tamura S, Daniels J, Ma J, Chiang B, Ramachandran S, Schwartz J, Winograd T, Feinstein C and Wall D P. 2020. Toward continuous social phenotyping: analyzing gaze patterns in an emotion recognition task for children with autism through wearable smart glasses. Journal of Medical Internet Research, 22(4): #e13810 [DOI: 10.2196/13810]
Nakashima Y, Koyama T, Yokoya N and Babaguchi N. 2015. Facial expression preserving privacy protection using image melding//Proceedings of 2015 IEEE International Conference on Multimedia and Expo (ICME). Turin, Italy: IEEE: 1-6 [DOI: 10.1109/ICME.2015.7177394http://dx.doi.org/10.1109/ICME.2015.7177394]
Narain J, Johnson K T, Ferguson C, O′Brien A, Talkar T, Weninger Y Z, Wofford P, Quatieri T, Picard R and Maes P. 2020a. Personalized modeling of real-world vocalizations from nonverbal individuals//Proceedings of 2020 International Conference on Multimodal Interaction. [s.l.]: ACM: 665-669 [DOI: 10.1145/3382507.3418854http://dx.doi.org/10.1145/3382507.3418854]
Narain J, Johnson K T, O′Brien A, Wofford P, Maes P and Picard R. 2020b. Nonverbal vocalizations as speech: characterizing natural-environment audio from nonverbal individuals with autism//Proceedings of the Laughter and Other Non-Verbal Vocalisations Workshop 2020. Bielefeld, Germany: [s. n.]
Ngo A C L, Johnston A, Phan R C W and See J. 2018. Micro-expression motion magnification: Global lagrangian vs. local eulerian approaches//Proceedings of the 13th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2018). Xi′an, China: IEEE: #102 [DOI: 10.1109/FG.2018.00102http://dx.doi.org/10.1109/FG.2018.00102]
Nie X, Takalkar M A, Duan M Y, Zhang H M and Xu M. 2021. GEME: dual-stream multi-task GEnder-based micro-expression recognition. Neurocomputing, 427: 13-28 [DOI: 10.1016/j.neucom.2020.10.082]
Niu X S, Yu Z T, Han H, Li X B, Shan S G and Zhao G Y. 2020. Video-based remote physiological measurement via cross-verified feature disentangling//Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer: 295-310 [DOI: 10.1007/978-3-030-58536-5_18http://dx.doi.org/10.1007/978-3-030-58536-5_18]
Nummenmaa T. 1988. The recognition of pure and blended facial expressions of emotion from still photographs. Scandinavian Journal of Psychology, 29(1): 33-47 [DOI: 10.1111/j.1467-9450.1988.tb00773.x]
Oh T H, Jaroensri R, Kim C, Elgharib M, Durand F E, Freeman W T and Matusik W. 2018. Learning-based video motion magnification//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 663-679 [DOI: 10.1007/978-3-030-01225-0_39http://dx.doi.org/10.1007/978-3-030-01225-0_39]
Ola L and Gullon-Scott F. 2020. Facial emotion recognition in autistic adult females correlates with alexithymia, not autism. Autism, 24(8): 2021-2034 [DOI: 10.1177/1362361320932727]
Palser E R, Galvez-Pol A, Palmer C E, Hannah R, Fotopoulou A, Pellicano E and Kilner J M. 2021. Reduced differentiation of emotion-associated bodily sensations in autism. Autism, 25(5): 1321-1334 [DOI: 10.1177/1362361320987950]
Pan Y R, Cai K J, Cheng M, Zou X B and Li M. 2021. Responsive social smile: a machine learning based multimodal behavior assessment framework towards early stage autism screening//Proceedings of the 25th International Conference on Pattern Recognition (ICPR). Milan, Italy: IEEE: 2240-2247 [DOI: 10.1109/ICPR48806.2021.9412766]
Panda R, Zhang J M, Li H X, Lee J Y, Lu X and Roy-Chowdhury A K. 2018. Contemplating visual emotions: understanding and overcoming dataset bias//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 594-612 [DOI: 10.1007/978-3-030-01216-8_36]
Patel D, Hong X P and Zhao G Y. 2016. Selective deep features for micro-expression recognition//Proceedings of the 23rd International Conference on Pattern Recognition. Cancun, Mexico: IEEE: 2258-2263 [DOI: 10.1109/ICPR.2016.7899972]
Patel D, Zhao G Y and Pietikäinen M. 2015. Spatiotemporal integration of optical flow vectors for micro-expression detection//Proceedings of the 16th International Conference on Advanced Concepts for Intelligent Vision Systems. Catania, Italy: Springer: 369-380 [DOI: 10.1007/978-3-319-25903-1_32]
Peng K C, Chen T, Sadovnik A and Gallagher A. 2015. A mixed bag of emotions: model, predict, and transfer emotion distributions//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 860-868 [DOI: 10.1109/CVPR.2015.7298687]
Peng W, Hong X P, Xu Y Y and Zhao G Y. 2019. A boost in revealing subtle facial expressions: a consolidated Eulerian framework//Proceedings of the 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019). Lille, France: IEEE: 1-5 [DOI: 10.1109/FG.2019.8756541]
Pfister T, Li X B, Zhao G Y and Pietikäinen M. 2011. Recognising spontaneous facial micro-expressions//Proceedings of 2011 International Conference on Computer Vision. Barcelona, Spain: IEEE [DOI: 10.1109/ICCV.2011.6126401]
Plutchik R. 2001. The nature of emotions: human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. American Scientist, 89(4): 344-350
Poria S, Cambria E, Hazarika D, Majumder N, Zadeh A and Morency L P. 2017. Context-dependent sentiment analysis in user-generated videos//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Vancouver, Canada: Association for Computational Linguistics: 873-883 [DOI: 10.18653/v1/P17-1081]
Qu F B, Wang S J, Yan W J, Li H, Wu S H and Fu X L. 2018. CAS(ME)²: a database for spontaneous macro-expression and micro-expression spotting and recognition. IEEE Transactions on Affective Computing, 9(4): 424-436 [DOI: 10.1109/TAFFC.2017.2654440]
Rahman W, Hasan K, Lee S, Zadeh A A B, Mao C F, Morency L P and Hoque E. 2020. Integrating multimodal information in large pretrained transformers//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [s.l.]: Association for Computational Linguistics: 2359-2369 [DOI: 10.18653/v1/2020.acl-main.214]
Rahulamathavan Y and Rajarajan M. 2017. Efficient privacy-preserving facial expression classification. IEEE Transactions on Dependable and Secure Computing, 14(3): 326-338 [DOI: 10.1109/TDSC.2015.2453963]
Rao T R, Li X X and Xu M. 2020. Learning multi-level deep representations for image emotion classification. Neural Processing Letters, 51(3): 2043-2061 [DOI: 10.1007/s11063-019-10033-9]
Rao T R, Li X X, Zhang H M and Xu M. 2019. Multi-level region-based Convolutional Neural Network for image emotion classification. Neurocomputing, 333: 429-439 [DOI: 10.1016/j.neucom.2018.12.053]
Rao T R, Xu M, Liu H Y, Wang J Q and Burnett I. 2016. Multi-scale blocks based image emotion classification using multiple instance learning//Proceedings of 2016 IEEE International Conference on Image Processing. Phoenix, USA: IEEE: 634-638 [DOI: 10.1109/ICIP.2016.7532434]
Ruan D L, Yan Y, Lai S Q, Chai Z H, Shen C H and Wang H Z. 2021. Feature decomposition and reconstruction learning for effective facial expression recognition//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 7656-7665 [DOI: 10.1109/CVPR46437.2021.00757]
Rudovic O, Lee J, Dai M, Schuller B and Picard R W. 2018. Personalized machine learning for robot perception of affect and engagement in autism therapy. Science Robotics, 3(19): #6760 [DOI: 10.1126/scirobotics.aao6760]
Rui T, Cui P and Zhu W W. 2017. Joint user-interest and social-influence emotion prediction for individuals. Neurocomputing, 230: 66-76 [DOI: 10.1016/j.neucom.2016.11.054]
Sanchez E, Tellamekala M K, Valstar M and Tzimiropoulos G. 2021. Affective processes: stochastic modelling of temporal context for emotion and facial expression recognition//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 9070-9080 [DOI: 10.1109/CVPR46437.2021.00896]
Sartori A, Culibrk D, Yan Y and Sebe N. 2015. Who's Afraid of Itten: using the art theory of color combination to analyze emotions in abstract paintings//Proceedings of the 23rd ACM International Conference on Multimedia. Brisbane, Australia: ACM: 311-320 [DOI: 10.1145/2733373.2806250]
See J, Yap M H, Li J T, Hong X P and Wang S J. 2019. MEGC 2019-the second facial micro-expressions grand challenge//Proceedings of the 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019). Lille, France: IEEE: 1-5 [DOI: 10.1109/FG.2019.8756611]
She D Y, Sun M and Yang J F. 2019. Learning discriminative sentiment representation from strongly- and weakly supervised CNNs. ACM Transactions on Multimedia Computing, Communications, and Applications, 15(S3): #96 [DOI: 10.1145/3326335]
She J H, Hu Y B, Shi H L, Wang J, Shen Q and Mei T. 2021. Dive into ambiguity: latent distribution mining and pairwise uncertainty estimation for facial expression recognition//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 6244-6253 [DOI: 10.1109/CVPR46437.2021.00618]
Soleymani M, Lichtenauer J, Pun T and Pantic M. 2012. A multimodal database for affect recognition and implicit tagging. IEEE Transactions on Affective Computing, 3(1): 42-55 [DOI: 10.1109/T-AFFC.2011.25]
Song T F, Cui Z J, Wang Y R, Zheng W M and Ji Q. 2021. Dynamic probabilistic graph convolution for facial action unit intensity estimation//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 4843-4852 [DOI: 10.1109/CVPR46437.2021.00481]
Su Y T, Zhang J Q, Liu J and Zhai G T. 2021. Key facial components guided micro-expression recognition based on first and second-order motion//Proceedings of 2021 IEEE International Conference on Multimedia and Expo. Shenzhen, China: IEEE: 1-6 [DOI: 10.1109/ICME51207.2021.9428407]
Sullivan O A and Wang C Y. 2020. Autism spectrum disorder interventions in mainland China: a systematic review. Review Journal of Autism and Developmental Disorders, 7(3): 263-277 [DOI: 10.1007/s40489-019-00191-w]
Tang C G, Zheng W M, Zong Y, Qiu N, Lu C, Zhang X L, Ke X Y and Guan C T. 2020. Automatic identification of high-risk autism spectrum disorder: a feasibility study using video and audio data under the still-face paradigm. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 28(11): 2401-2410 [DOI: 10.1109/TNSRE.2020.3027756]
Theeuwes J and Van der Stigchel S. 2006. Faces capture attention: evidence from inhibition of return. Visual Cognition, 13(6): 657-665 [DOI: 10.1080/13506280500410949]
Tian Y L, Kanade T and Cohn J F. 2011. Facial expression recognition//Li S Z and Jain A K, eds. Handbook of Face Recognition. London, UK: Springer: 487-519 [DOI: 10.1007/978-0-85729-932-1_19]
Tomasello M, Carpenter M, Call J, Behne T and Moll H. 2005. Understanding and sharing intentions: the origins of cultural cognition. Behavioral and Brain Sciences, 28(5): 675-691 [DOI: 10.1017/S0140525X05000129]
Tran T K, Vo Q N, Hong X P, Li X B and Zhao G Y. 2021. Micro-expression spotting: a new benchmark. Neurocomputing, 443: 356-368 [DOI: 10.1016/j.neucom.2021.02.022]
Tsai W T, Lee I J and Chen C H. 2021. Inclusion of third-person perspective in CAVE-like immersive 3D virtual reality role-playing games for social reciprocity training of children with an autism spectrum disorder. Universal Access in the Information Society, 20(2): 375-389 [DOI: 10.1007/s10209-020-00724-9]
Ullah A, Wang J, Anwar M S, Ahmad A, Nazir S, Khan H U and Fei Z S. 2021. Fusion of machine learning and privacy preserving for secure facial expression recognition. Security and Communication Networks, 2021: #6673992 [DOI: 10.1155/2021/6673992]
Vadicamo L, Carrara F, Cimino A, Cresci S, Dell'Orletta F, Falchi F and Tesconi M. 2017. Cross-media learning for image sentiment analysis in the wild//Proceedings of 2017 IEEE International Conference on Computer Vision Workshops. Venice, Italy: IEEE: 308-317 [DOI: 10.1109/ICCVW.2017.45]
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A N, Kaiser Ł and Polosukhin I. 2017. Attention is all you need//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc.: 6000-6010
Wang J and Geng X. 2021. Label distribution learning by exploiting label distribution manifold. IEEE Transactions on Neural Networks and Learning Systems: 1-14 [DOI: 10.1109/TNNLS.2021.3103178]
Wang K, Peng X J, Yang J F, Lu S J and Qiao Y. 2020a. Suppressing uncertainties for large-scale facial expression recognition//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 6896-6905 [DOI: 10.1109/CVPR42600.2020.00693]
Wang K, Peng X J, Yang J F, Meng D B and Qiao Y. 2020b. Region attention networks for pose and occlusion robust facial expression recognition. IEEE Transactions on Image Processing, 29: 4057-4069 [DOI: 10.1109/TIP.2019.2956143]
Wang L J, Guo W Y, Yao X X, Zhang Y X and Yang J F. 2021a. Multimodal event-aware network for sentiment analysis in tourism. IEEE MultiMedia, 28(2): 49-58 [DOI: 10.1109/MMUL.2021.3079195]
Wang Q D, Lu L, Zhang Q, Fang F, Zou X B and Yi L. 2018. Eye avoidance in young children with autism spectrum disorder is modulated by emotional facial expressions. Journal of Abnormal Psychology, 127(7): 722-732 [DOI: 10.1037/abn0000372]
Wang S J, Wu S H, Qian X S, Li J X and Fu X L. 2017. A main directional maximal difference analysis for spotting facial movements from long-term videos. Neurocomputing, 230: 382-389 [DOI: 10.1016/j.neucom.2016.12.034]
Wang S J, Yan W J, Li X B, Zhao G Y, Zhou C G, Fu X L, Yang M H and Tao J H. 2015a. Micro-expression recognition using color spaces. IEEE Transactions on Image Processing, 24(12): 6034-6047 [DOI: 10.1109/TIP.2015.2496314]
Wang W N, Yu Y L and Jiang S M. 2006. Image retrieval by emotional semantics: a study of emotional space and feature extraction//Proceedings of 2006 IEEE International Conference on Systems, Man and Cybernetics. Taipei, China: IEEE: 3534-3539 [DOI: 10.1109/ICSMC.2006.384667]
Wang W N, Yu Y L and Zhang J C. 2004. Image emotional classification: static vs. dynamic//Proceedings of 2004 IEEE International Conference on Systems, Man and Cybernetics. Hague, the Netherlands: IEEE: 6407-6411 [DOI: 10.1109/ICSMC.2004.1401407]
Wang X H, Jia J, Yin J M and Cai L H. 2013. Interpretable aesthetic features for affective image classification//Proceedings of 2013 IEEE International Conference on Image Processing. Melbourne, Australia: IEEE: 3230-3234 [DOI: 10.1109/ICIP.2013.6738665]
Wang Y L, Wang S H, Tang J L, Liu H and Li B X. 2015b. Unsupervised sentiment analysis for social media images//Proceedings of the 24th International Conference on Artificial Intelligence. Buenos Aires, Argentina: AAAI Press: 2378-2379
Wang Z Y, Liu J J, He K S, Xu Q, Xu X and Liu H H. 2021b. Screening early children with autism spectrum disorder via response-to-name protocol. IEEE Transactions on Industrial Informatics, 17(1): 587-595 [DOI: 10.1109/TII.2019.2958106]
Washington P, Paskov K M, Kalantarian H, Stockham N, Voss C, Kline A, Patnaik R, Chrisman B, Varma M, Tariq Q, Dunlap K, Schwartz J, Haber N and Wall D P. 2019. Feature selection and dimension reduction of social autism data//Altman R B, Dunker A K, Hunter L, Ritchie M D, Murray T and Klein T E, eds. Pacific Symposium on Biocomputing 2020. Kohala Coast, USA: [s. n.]: 707-718 [DOI: 10.1142/9789811215636_0062]
Wöllmer M, Weninger F, Knaup T, Schuller B, Sun C K, Sagae K and Morency L P. 2013. Youtube movie reviews: sentiment analysis in an audio-visual context. IEEE Intelligent Systems, 28(3): 46-53 [DOI: 10.1109/MIS.2013.34]
World Health Organization. 2007. International Classification of Functioning, Disability and Health: Children and Youth Version: ICF-CY. World Health Organization
Wu H Y, Rubinstein M, Shih E, Guttag J, Durand F and Freeman W. 2012. Eulerian video magnification for revealing subtle changes in the world. ACM Transactions on Graphics, 31(4): #65
Wu Y, Lin Z J, Zhao Y Y, Qin B and Zhu L N. 2021. A text-centered shared-private framework via cross-modal prediction for multimodal sentiment analysis//Proceedings of the Findings of the Association for Computational Linguistics. [s.l.]: Association for Computational Linguistics [DOI: 10.18653/v1/2021.findings-acl.417]
Xia B and Wang S F. 2021. Micro-expression recognition enhanced by macro-expression from spatial-temporal domain//Proceedings of the 30th International Joint Conference on Artificial Intelligence. Montreal, Canada: ijcai.org: 1186-1193 [DOI: 10.24963/ijcai.2021/164]
Xia B, Wang W K, Wang S F and Chen E H. 2020b. Learning from macro-expression: a micro-expression recognition framework//Proceedings of the 28th ACM International Conference on Multimedia. Seattle, USA: ACM: 2936-2944 [DOI: 10.1145/3394171.3413774]
Xia Z Q, Hong X P, Gao X Y, Feng X Y and Zhao G Y. 2019a. Spatiotemporal recurrent convolutional networks for recognizing spontaneous micro-expressions. IEEE Transactions on Multimedia, 22(3): 626-640 [DOI: 10.1109/TMM.2019.2931351]
Xia Z Q, Liang H, Hong X P and Feng X Y. 2019b. Cross-database micro-expression recognition with deep convolutional networks//Proceedings of the 3rd International Conference on Biometric Engineering and Applications. Stockholm, Sweden: ACM: 56-60 [DOI: 10.1145/3345336.3345343]
Xia Z Q, Peng W, Khor H Q, Feng X Y and Zhao G Y. 2020a. Revealing the invisible with model and data shrinking for composite-database micro-expression recognition. IEEE Transactions on Image Processing, 29: 8590-8605 [DOI: 10.1109/TIP.2020.3018222]
Xie H X, Lo L, Shuai H H and Cheng W H. 2020. AU-assisted graph attention convolutional network for micro-expression recognition//Proceedings of the 28th ACM International Conference on Multimedia. Seattle, USA: ACM: 2871-2880 [DOI: 10.1145/3394171.3414012]
Xu C, Cetintas S, Lee K C and Li L J. 2014. Visual sentiment prediction with deep convolutional neural networks [EB/OL]. [2022-02-06]. https://arxiv.org/pdf/1411.5731.pdf
Xu F, Zhang J P and Wang J Z. 2017. Microexpression identification and categorization using a facial dynamics map. IEEE Transactions on Affective Computing, 8(2): 254-267 [DOI: 10.1109/TAFFC.2016.2518162]
Xu N, Mao W J and Chen G D. 2019. Multi-interactive memory network for aspect based multimodal sentiment analysis//Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI: 371-378 [DOI: 10.1609/aaai.v33i01.3301371]
Xu T, White J, Kalkan S and Gunes H. 2020. Investigating bias and fairness in facial expression recognition//Proceedings of 2020 European Conference on Computer Vision. Glasgow, UK: Springer: 506-523 [DOI: 10.1007/978-3-030-65414-6_35]
Xue F L, Wang Q C and Guo G D. 2021. TransFER: learning relation-aware facial expression representations with transformers//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 3581-3590 [DOI: 10.1109/ICCV48922.2021.00358]
Yan W J, Li S, Que C T, Pei J Q and Deng W H. 2020. RAF-AU database: in-the-wild facial expressions with subjective emotion judgement and objective AU annotations//Proceedings of the 15th Asian Conference on Computer Vision. Kyoto, Japan: Springer [DOI: 10.1007/978-3-030-69544-6_5]
Yan W J, Li X B, Wang S J, Zhao G Y, Liu Y J, Chen Y H and Fu X L. 2014a. CASME II: an improved spontaneous micro-expression database and the baseline evaluation. PLoS One, 9(1): #e86041 [DOI: 10.1371/journal.pone.0086041]
Yan W J, Wang S J, Chen Y H, Zhao G Y and Fu X L. 2014b. Quantifying micro-expressions with constraint local model and local binary pattern//Agapito L, Bronstein M M and Rother C, eds. Computer Vision - ECCV 2014 Workshops. Cham: Springer: 296-305 [DOI: 10.1007/978-3-319-16178-5_20]
Yan W J, Wu Q, Liu Y J, Wang S J and Fu X L. 2013. CASME database: a dataset of spontaneous micro-expressions collected from neutralized faces//Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Shanghai, China: IEEE: 1-7 [DOI: 10.1109/FG.2013.6553799]
Yang J F, She D Y and Sun M. 2017a. Joint image emotion classification and distribution learning via deep convolutional neural network//Proceedings of the 26th International Joint Conference on Artificial Intelligence. Melbourne, Australia: AAAI: 3266-3272
Yang J F, She D Y, Lai Y K, Rosin P L and Yang M H. 2018a. Weakly supervised coupled networks for visual sentiment analysis//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 7584-7592 [DOI: 10.1109/CVPR.2018.00791]
Yang J F, She D Y, Lai Y K and Yang M H. 2018b. Retrieving and classifying affective images via deep metric learning//Proceedings of the 32nd AAAI Conference on Artificial Intelligence and the 30th Innovative Applications of Artificial Intelligence Conference and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence. New Orleans, USA: AAAI: #61
Yang J F, She D Y, Sun M, Cheng M M, Rosin P L and Wang L. 2018c. Visual sentiment prediction based on automatic discovery of affective regions. IEEE Transactions on Multimedia, 20(9): 2513-2525 [DOI: 10.1109/TMM.2018.2803520]
Yang J F, Sun M and Sun X X. 2017b. Learning visual sentiment distributions via augmented conditional probability neural network//Proceedings of the 31st AAAI Conference on Artificial Intelligence. San Francisco, USA: AAAI: 224-230
Yang J Y, Gao X B, Li L D, Wang X M and Ding J S. 2021a. SOLVER: scene-object interrelated visual emotion reasoning network. IEEE Transactions on Image Processing, 30: 8686-8701 [DOI: 10.1109/TIP.2021.3118983]
Yang J Y, Li J, Wang X M, Ding Y X and Gao X B. 2021b. Stimuli-aware visual emotion analysis. IEEE Transactions on Image Processing, 30: 7432-7445 [DOI: 10.1109/TIP.2021.3106813]
Yang Y, Cui P, Zhu W W and Yang S Q. 2013. User interest and social influence based emotion prediction for individuals//Proceedings of the 21st ACM International Conference on Multimedia. Barcelona, Spain: ACM: 785-788 [DOI: 10.1145/2502081.2502204]
Yanulevskaya V, van Gemert J C, Roth K, Herbold A K, Sebe N and Geusebroek J M. 2008. Emotional valence categorization using holistic image features//Proceedings of the 15th IEEE International Conference on Image Processing. San Diego, USA: IEEE: 101-104 [DOI: 10.1109/ICIP.2008.4711701]
Yao X X, She D Y, Zhao S C, Liang J, Lai Y K and Yang J F. 2019. Attention-aware polarity sensitive embedding for affective image retrieval//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 1140-1150 [DOI: 10.1109/ICCV.2019.00123]
Yap M H, See J, Hong X P and Wang S J. 2018. Facial micro-expressions grand challenge 2018 summary//Proceedings of the 13th IEEE International Conference on Automatic Face and Gesture Recognition. Xi'an, China: IEEE: 675-678 [DOI: 10.1109/FG.2018.00106]
You Q Z, Jin H L and Luo J B. 2017. Visual sentiment analysis by attending on local image regions//Proceedings of the 31st AAAI Conference on Artificial Intelligence. San Francisco, USA: AAAI: 231-237
You Q Z, Luo J B, Jin H L and Yang J C. 2015. Robust image sentiment analysis using progressively trained and domain transferred deep networks//Proceedings of the 29th AAAI Conference on Artificial Intelligence. Austin, USA: AAAI: 381-388
You Q Z, Luo J B, Jin H L and Yang J C. 2016. Building a large scale dataset for image emotion recognition: the fine print and the benchmark//Proceedings of the 30th AAAI Conference on Artificial Intelligence. Phoenix, USA: AAAI: 308-314
Yu W M, Xu H, Meng F Y, Zhu Y L, Ma Y X, Wu J L, Zou J Y and Yang K C. 2020. CH-SIMS: a Chinese multimodal sentiment analysis dataset with fine-grained annotation of modality//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [s.l.]: Association for Computational Linguistics: 3718-3727 [DOI: 10.18653/v1/2020.acl-main.343]
Yu W M, Xu H, Yuan Z Q and Wu J L. 2021. Learning modality-specific representations with self-supervised multi-task learning for multimodal sentiment analysis//Proceedings of the 35th AAAI Conference on Artificial Intelligence. [s.l.]: AAAI
Yuan J B, McDonough S, You Q Z and Luo J B. 2013. Sentribute: image sentiment analysis from a mid-level perspective//Proceedings of the 2nd International Workshop on Issues of Sentiment Discovery and Opinion Mining. Chicago, USA: ACM: 10 [DOI: 10.1145/2502069.2502079]
Zadeh A, Liang P P, Mazumder N, Poria S, Cambria E and Morency L P. 2018a. Memory fusion network for multi-view sequential learning//Proceedings of the 32nd AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18). New Orleans, USA: AAAI: 5634-5641
Zadeh A, Liang P P, Poria S, Vij P, Cambria E and Morency L P. 2018b. Multi-attention recurrent network for human communication comprehension//Proceedings of the 32nd AAAI Conference on Artificial Intelligence and the 30th Innovative Applications of Artificial Intelligence Conference and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence. New Orleans, USA: AAAI: #692
Zadeh A A B, Liang P P, Poria S, Cambria E and Morency L P. 2018c. Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne, Australia: Association for Computational Linguistics: 2236-2246 [DOI: 10.18653/v1/P18-1208]
Zagury-Orly I, Kroeck M R, Soussand L and Cohen A L. 2022. Face-processing performance is an independent predictor of social affect as measured by the autism diagnostic observation schedule across large-scale datasets. Journal of Autism and Developmental Disorders, 52(2): 674-688 [DOI: 10.1007/s10803-021-04971-4]
Zhan C, She D Y, Zhao S C, Cheng M M and Yang J F. 2019. Zero-shot emotion recognition via affective structural embedding//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 1151-1160 [DOI: 10.1109/ICCV.2019.00124]
Zhang L F and Arandjelović O. 2021. Review of automatic microexpression recognition in the past decade. Machine Learning and Knowledge Extraction, 3(2): 414-434 [DOI: 10.3390/make3020021]
Zhang L F, Arandjelovic O and Hong X P. 2021d. Facial action unit detection with local key facial sub-region based multi-label classification for micro-expression analysis//Proceedings of the 1st Workshop on Facial Micro-Expression: Advanced Techniques for Facial Expressions Generation and Spotting. [s.l.]: ACM: 11-18 [DOI: 10.1145/3476100.3484462]
Zhang Q Q, Wu R J, Zhu S Y, Le J, Chen Y S, Lan C M, Yao S X, Zhao W H and Kendrick K M. 2021b. Facial emotion training as an intervention in autism spectrum disorder: a meta-analysis of randomized controlled trials. Autism Research, 14(10): 2169-2182 [DOI: 10.1002/aur.2565]
Zhang R J, Chen J Y, Wang G S, Xu R Y, Zhang K, Wang J D and Zheng W M. 2021c. Towards a computer-assisted comprehensive evaluation of visual motor integration for children with autism spectrum disorder: a pilot study. Interactive Learning Environments: 1-16 [DOI: 10.1080/10494820.2021.1952273]
Zhang T, Zong Y, Zheng W M, Chen C L P, Hong X P, Tang C G, Cui Z and Zhao G Y. 2020. Cross-database micro-expression recognition: a benchmark. IEEE Transactions on Knowledge and Data Engineering, 34(2): 544-559 [DOI: 10.1109/TKDE.2020.2985365]
Zhang W, Ji X P, Chen K Y, Ding Y and Fan C J. 2021d. Learning a facial expression embedding disentangled from identity//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 6755-6764 [DOI: 10.1109/CVPR46437.2021.00669]
Zhang Y H, Wang C R and Deng W H. 2021e. Relative uncertainty learning for facial expression recognition [EB/OL]. [2020-12-20]. https://openreview.net/forum?id=h1-ilmYbdea
Zhang Z P, Luo P, Loy C C and Tang X O. 2018. From facial expression recognition to interpersonal relation prediction. International Journal of Computer Vision, 126(5): 550-569 [DOI: 10.1007/s11263-017-1055-1]
Zhao G Y and Li X B. 2019. Automatic micro-expression analysis: open challenges. Frontiers in Psychology, 10: #1833 [DOI: 10.3389/fpsyg.2019.01833]
Zhao S C, Ding G G, Gao Y and Han J G. 2017a. Approximating discrete probability distribution of image emotions by multi-modal features fusion//Proceedings of the 26th International Joint Conference on Artificial Intelligence. Melbourne, Australia: ijcai.org: 4669-4675 [DOI: 10.24963/ijcai.2017/651]
Zhao S C, Ding G G, Gao Y and Han J G. 2017b. Learning visual emotion distributions via multi-modal features fusion//Proceedings of the 25th ACM International Conference on Multimedia. Mountain View, USA: ACM: 369-377 [DOI: 10.1145/3123266.3130858]
Zhao S C, Ding G G, Gao Y, Zhao X, Tang Y B, Han J G, Yao H X and Huang Q M. 2020a. Discrete probability distribution prediction of image emotions with shared sparse learning. IEEE Transactions on Affective Computing, 11(4): 574-587 [DOI: 10.1109/TAFFC.2018.2818685]
Zhao S C, Gao Y, Jiang X L, Yao H X, Chua T S and Sun X S. 2014. Exploring principles-of-art features for image emotion recognition//Proceedings of the 22nd ACM International Conference on Multimedia. Orlando, USA: ACM: 47-56 [DOI: 10.1145/2647868.2654930]
Zhao S C, Jia Z Z, Chen H, Li L D, Ding G G and Keutzer K. 2019a. PDANet: polarity-consistent deep attention network for fine-grained visual emotion regression//Proceedings of the 27th ACM International Conference on Multimedia. Nice, France: ACM: 192-201 [DOI: 10.1145/3343031.3351062]
Zhao S C, Lin C, Xu P F, Zhao S D, Guo Y C, Krishna R, Ding G G and Keutzer K. 2019b. CycleEmotionGAN: emotional semantic consistency preserved CycleGAN for adapting image emotions//Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI: 2620-2627 [DOI: 10.1609/aaai.v33i01.33012620]
Zhao S C, Ma Y S, Gu Y, Yang J F, Xing T F, Xu P F, Hu R B, Chai H and Keutzer K. 2020b. An end-to-end visual-audio attention network for emotion recognition in user-generated videos//Proceedings of the 34th AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI: 303-311 [DOI: 10.1609/aaai.v34i01.5364]
Zhao S C, Yao H X, Gao Y, Ding G G and Chua T S. 2018a. Predicting personalized image emotion perceptions in social networks. IEEE Transactions on Affective Computing, 9(4): 526-540 [DOI: 10.1109/TAFFC.2016.2628787]
Zhao S C, Yao H X, Gao Y, Ji R R and Ding G G. 2017c. Continuous probability distribution prediction of image emotions via multitask shared sparse regression. IEEE Transactions on Multimedia, 19(3): 632-645 [DOI: 10.1109/TMM.2016.2617741]
Zhao S C, Yao H X, Gao Y, Ji R R, Xie W L, Jiang X L and Chua T S. 2016. Predicting personalized emotion perceptions of social images//Proceedings of the 24th ACM International Conference on Multimedia. Amsterdam, the Netherlands: ACM: 1385-1394 [DOI: 10.1145/2964284.2964289]
Zhao S C, Yao H X, Jiang X L and Sun X S. 2015. Predicting discrete probability distribution of image emotions//Proceedings of 2015 IEEE International Conference on Image Processing. Quebec City, Canada: IEEE: 2459-2463 [DOI: 10.1109/ICIP.2015.7351244]
Zhao S C, Yao X X, Yang J F, Jia G J, Ding G G, Chua T S, Schuller B W and Keutzer K. 2021. Affective image content analysis: two decades review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence [DOI: 10.1109/TPAMI.2021.3094362]
Zhao S C, Zhao X, Ding G G and Keutzer K. 2018b. EmotionGAN: unsupervised domain adaptation for learning discrete probability distributions of image emotions//Proceedings of the 26th ACM International Conference on Multimedia. Seoul, Korea(South): ACM: 1319-1327 [DOI: 10.1145/3240508.3240591]
Zhou L, Shao X Y and Mao Q R. 2021. A survey of micro-expression recognition. Image and Vision Computing, 105: #104043 [DOI: 10.1016/j.imavis.2020.104043]
Zhou Z H, Hong X P, Zhao G Y and Pietikäinen M. 2014. A compact representation of visual speech data using latent variables. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(1): 181-187 [DOI: 10.1109/TPAMI.2013.173]
Zhou Z H, Zhao G Y and Pietikäinen M. 2011. Towards a practical lipreading system//Proceedings of 2011 IEEE Conference on Computer Vision and Pattern Recognition. Colorado Springs, USA: IEEE: 137-144
Zhu X E, Li L, Zhang W G, Rao T R, Xu M, Huang Q M and Xu D. 2017. Dependency exploitation: a unified CNN-RNN approach for visual emotion recognition//Proceedings of the 26th International Joint Conference on Artificial Intelligence. Melbourne, Australia: ijcai.org: 3595-3601 [DOI: 10.24963/ijcai.2017/503]
Zong Y, Huang X H, Zheng W M, Cui Z and Zhao G Y. 2018b. Learning from hierarchical spatiotemporal descriptors for micro-expression recognition. IEEE Transactions on Multimedia, 20(11): 3160-3172 [DOI: 10.1109/TMM.2018.2820321]
Zong Y, Zheng W M, Hong X P, Tang C G, Cui Z and Zhao G Y. 2019. Cross-database micro-expression recognition: a benchmark//Proceedings of 2019 on International Conference on Multimedia Retrieval. Ottawa, Canada: ACM: 354-363 [DOI: 10.1145/3323873.3326590]
Zong Y, Zheng W M, Huang X H, Shi J G, Cui Z and Zhao G Y. 2018a. Domain regeneration for cross-database micro-expression recognition. IEEE Transactions on Image Processing, 27(5): 2484-2498 [DOI: 10.1109/TIP.2018.2797479]
Zou X B. 2019. Intervention principles for children with autism and BSR model. Chinese Journal of Child Health Care, 27(1): 1-6 (邹小兵. 2019. 孤独症谱系障碍干预原则与BSR模式. 中国儿童保健杂志, 27(1): 1-6) [DOI: 10.11852/zgetbjzz2018-1611]