A Review of Depth Perception in Virtual-Real Fusion Scenes

Ping Jiamin1,2, Liu Yue1,2,3, Weng Dongdong1,2,3 (1. Beijing Engineering Research Center of Mixed Reality and Advanced Display, Beijing 100081, China; 2. School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; 3. Advanced Innovation Center for Future Visual Entertainment, Beijing Film Academy, Beijing 100088, China)

Abstract
Mixed reality systems can present virtual-real fusion scenes in which virtual information is superimposed on the real environment in real time, and they have broad application prospects in education and training, cultural heritage preservation, military simulation, equipment manufacturing, surgical medicine, and exhibitions. A mixed reality system first builds a virtual camera model from calibration data, then renders virtual content in real time according to the head-tracking results and the virtual camera position, and superimposes the content on the real environment. Users perceive depth in the fused scene through the rendered graphical cues and the characteristics of the virtual objects, but several problems remain: the visual laws and perception theories that could guide the rendering of virtual-real fusion scenes are scarce, the absolute depth information that graphical cues can provide is missing, and the rendering dimensions and characteristic indicators of virtual objects are insufficient. This paper analyzes the visual laws relevant to rendering virtual-real fusion scenes, reviews the drawing of graphical cues and the rendering of virtual objects from the perspective of user perception, and discusses the research trends and priorities of depth perception in virtual-real fusion scenes.
Keywords
Review of depth perception in virtual and real fusion environment

Ping Jiamin1,2, Liu Yue1,2,3, Weng Dongdong1,2,3 (1. Beijing Engineering Research Center of Mixed Reality and Advanced Display, Beijing 100081, China; 2. School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; 3. Advanced Innovation Center for Future Visual Entertainment, Beijing Film Academy, Beijing 100088, China)

Abstract
Mixed reality systems can provide a virtual-real fusion environment in which virtual objects are superimposed on the real world in real time. They have been widely used in education and training, heritage preservation, military simulation, equipment manufacturing, surgery, and exhibitions. A mixed reality system uses calibration data to build a virtual camera model, then draws virtual content in real time based on the head-tracking data and the position of the virtual camera; finally, the virtual content is superimposed on the real environment. The user perceives a virtual object's depth from the combination of graphical cues and the rendering characteristics of the virtual object in the virtual-real fusion environment. When the user observes the virtual-real fusion scene presented by a mixed reality system, three processes are involved. 1) Different distance information is converted into respective distance signals. The key factor in this process is the presentation of the virtual-real fusion scene through rendering technology; the user judges distance on the basis of the inherent characteristics of the virtual object. 2) The user recognizes other visual stimulus variables in the scene and converts the respective distance signals into a final distance signal. The key factor in this process is the depth cues provided in the virtual-real fusion scene; the user relies on these cues to determine the position of the object. 3) The user determines the distance relationships between the objects in the scene and converts the final distance signal into the corresponding indicated distance. The key factor in this process is the visual laws of the human eye when viewing the virtual-real scene.
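The rendering pipeline described above, in which calibration-derived intrinsics and a tracked head pose together determine where virtual content lands in the user's view, can be sketched as follows. This is a minimal pinhole-camera illustration; the focal length and principal point are illustrative values, not taken from the paper.

```python
import numpy as np

# Intrinsic matrix of the virtual camera, assumed to come from display
# calibration; focal length and principal point here are illustrative.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def view_matrix(R, t):
    """World-to-camera transform from a tracked head pose.

    R and t are the camera-to-world rotation and translation reported
    by the head tracker; the view matrix is their inverse."""
    V = np.eye(4)
    V[:3, :3] = R.T
    V[:3, 3] = -R.T @ t
    return V

def project(point_world, R, t):
    """Project a virtual 3D point into pixel coordinates for overlay."""
    p_cam = view_matrix(R, t) @ np.append(point_world, 1.0)
    uv = K @ p_cam[:3]
    return uv[:2] / uv[2]

# A virtual object 2 m straight ahead of an untranslated, unrotated
# head lands at the principal point (the image center).
print(project(np.array([0.0, 0.0, 2.0]), np.eye(3), np.zeros(3)))  # -> [320. 240.]
```

Re-evaluating `project` every frame with the latest tracker pose is what keeps the overlay registered to the real environment.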
However, several problems remain: the lack of visual principles and perception theories to guide the rendering of virtual-real fusion scenes, the lack of absolute depth information that graphical cues can provide, and the insufficiency of the rendering characteristics of virtual objects. First, studies on the visual laws and perception theories that can guide the rendering of virtual-real scenes are limited. The visual model and perception laws of the human eye when viewing virtual-real fusion scenes should be studied to form effective application guidance, so that visual laws can be applied effectively in the design and development of such scenes and the accuracy of depth perception can be increased; improving the rendering quality of mixed reality applications in turn improves their interactive efficiency and user experience. Second, the absolute depth information that graphical cues can provide in the virtual-real fusion scene is missing. Graphical cues that supply effective absolute depth information should be generated, the characteristics of different graphical cues should be extracted, and their effects on depth perception should be quantified to help users perceive the depth of the target object; this approach improves user performance in depth perception and provides a basis for rendering virtual-real scenes. Third, the rendering dimensions and characteristic indicators of virtual objects in virtual-real fusion scenes are insufficient. Reasonable parameter indicators and effective object rendering methods should be studied, interaction models among different features should be built, and the role of different rendering characteristics in depth perception should be clarified to identify the characteristics that play a major role in rendering virtual objects in virtual-real scenes and, finally, to provide a basis for rendering the fused scene.
This paper analyzes the visual principles involved in rendering virtual-real fusion environments, then reviews the drawing of graphical cues and the rendering of virtual objects in such scenes, and finally discusses the research trends of depth perception in virtual-real fusion scenes. When viewing virtual-real scenes, humans perceive the objects in the scene through the visual system. The visual function factors related to the perception mechanism and the guiding effect of visual laws on depth perception should be studied to optimize the rendering of virtual-real scenes. With the development and application of perception technology in mixed reality, many researchers have in recent years studied ground-contact theory, the anisotropy of human eye perception, and the distribution of gaze points in depth perception. The background environment and the virtual objects in the virtual-real fusion scene can provide users with depth cues. Most existing studies focus on adding various depth cues to the virtual-real fusion scene and exploring, through experiments, the relationship between the additional depth information and depth perception. With the rapid development of computer graphics, an increasing number of graphics techniques have been applied to the creation of virtual-real fusion scenes to strengthen the depth cues of virtual objects, including linear perspective, graphical techniques for indicating position information, and X-ray vision rendering techniques. The virtual objects presented by a mixed reality system are an important part of the virtual-real fusion environment.
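As a concrete example of the graphical cues discussed above, a drop shadow rendered directly beneath a floating virtual object is a common cue tied to ground-contact theory: the shadow anchors the object's position on the ground plane. A minimal sketch follows; the blur-with-height scaling is an illustrative assumption, not a result from the literature reviewed here.

```python
def drop_shadow(obj_pos, ground_y=0.0, base_radius=0.1, blur_per_meter=0.05):
    """Place a circular drop-shadow cue on the ground plane below a
    floating virtual object (positions in meters, y is up).

    The shadow's center marks where the object would touch the ground,
    giving the viewer an absolute depth reference; its blur grows with
    height to suggest separation from the ground (the blur rate here
    is an illustrative assumption)."""
    x, y, z = obj_pos
    height = max(y - ground_y, 0.0)
    return {
        "center": (x, ground_y, z),       # directly beneath the object
        "radius": base_radius,
        "blur": height * blur_per_meter,  # softer shadow when higher up
    }
```

Because the shadow lies on a real-world surface, the viewer can read the virtual object's distance from the shadow's position even when the object itself floats without other context.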
In recent years, to study the role that the inherent characteristics of virtual objects play in depth perception in virtual-real fusion scenes, researchers have carried out extensive experimental quantification of the size, color, brightness, transparency, texture, and surface lighting of virtual objects. These rendering-based characteristics of virtual objects were drawn from 17th-century painting techniques, but they differ from traditional pictorial depth cues.
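Two of the characteristics listed above lend themselves to simple depth-dependent models: on-screen size shrinks inversely with distance (the relative-size cue), and aerial perspective, borrowed from painting, attenuates brightness and contrast roughly exponentially with distance. A toy sketch, in which the focal length and falloff constant are illustrative assumptions:

```python
import math

def apparent_size_px(physical_size_m, distance_m, focal_px=800.0):
    """On-screen size of a virtual object under pinhole projection:
    size scales inversely with distance (relative-size depth cue).
    The focal length in pixels is an illustrative assumption."""
    return physical_size_m * focal_px / distance_m

def aerial_attenuation(distance_m, falloff=0.15):
    """Exponential brightness/contrast falloff with distance, a simple
    model of aerial perspective (the falloff constant is illustrative)."""
    return math.exp(-falloff * distance_m)

# Doubling the distance halves the on-screen size and further washes
# out the object's brightness and contrast.
```

Experimental studies of the kind surveyed here typically vary such parameters independently to measure each one's contribution to perceived depth.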
Keywords
