数字室内三维场景构建综述
Review of digital 3D indoor scene synthesis
2024, Vol. 29, No. 9, Pages: 2471-2493
Print publication date: 2024-09-16
DOI: 10.11834/jig.230712
岳亮, 谈皓, 黄俊凯, 张少魁. 2024. 数字室内三维场景构建综述. 中国图象图形学报, 29(09):2471-2493
Yue Liang, Tan Hao, Huang Junkai, Zhang Shaokui. 2024. Review of digital 3D indoor scene synthesis. Journal of Image and Graphics, 29(09):2471-2493
Throughout the development of computer graphics, digital 3D scenes have long played a vital role in both academia and industry. They serve to showcase rendering results, support application environments, and act as carriers for interaction. However, a 3D scene is itself a structurally complex data form with no unified data structure, so unlike image or text datasets, 3D scenes are difficult to acquire and apply at scale. Existing work has attempted to let computers synthesize scenes automatically or assist users in synthesizing them, and among the many types of scenes, indoor scenes are especially important. This paper reviews work on digital indoor 3D scene synthesis, investigating and summarizing it from three aspects: automatic scene synthesis, interactive computer-aided scene synthesis, and scene synthesis from multichannel and rich input. Automatic synthesis lets the computer produce a scene result directly from the current 3D content; interactive synthesis lets users steer the computer while it assists in building the scene; multichannel synthesis builds scenes guided by input images, text, and point clouds. Finally, this paper summarizes the application scenarios and key technologies of the surveyed work and introduces further application settings and the many challenges ahead. The prospects of digital indoor 3D scene synthesis are broad: as new algorithms are continually proposed and 3D scene datasets mature, the field will keep developing.
During the continuous development of computer graphics and human-computer interaction, digital three-dimensional (3D) scenes have played a vital role in academia and industry. 3D scenes display graphical rendering results, supply environments for applications, and provide a foundation for interaction. Among scenes, indoor scenes, despite being commonplace, are particularly important. To improve players' gaming experience, indoor game designers require all kinds of aesthetically pleasing digital 3D scenes. In online interior decoration, designers also need to preview decoration and furniture layouts by interacting with 3D scenes. In virtual reality research, virtual spaces can be synthesized from digital 3D scenes, for example to generate training data for wheelchair users. However, a number of difficulties must still be overcome to obtain ideal digital 3D scenes for these applications. First, manual synthesis of 3D scenes is time consuming and requires considerable experience: designers must add objects to a scene and adjust their locations and orientations one by one. This trivial but heavy workload makes it difficult to focus on core design ideas. Second, a digital 3D scene is an extremely complex data form, and no unified consensus on its data structure exists. Thus, compared with traditional data such as images, audio, or text, digital 3D scenes are difficult to obtain and apply in large quantities. To address these problems, existing work has attempted to let computers synthesize 3D scenes automatically or help synthesize them interactively. This survey summarizes these works, investigating and summarizing digital 3D scene synthesis methodologies from three aspects: automatic scene synthesis, scene synthesis with multichannel and rich input, and interactive scene synthesis.
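To make the data-form problem concrete, a minimal scene representation can be sketched as follows. This is a hypothetical illustration, not the schema of any actual dataset mentioned in the survey: each object carries a category, a floor-plane position, and an orientation, and the scene pairs an object list with a room contour.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    category: str            # e.g. "bed", "desk"
    position: tuple          # (x, y) floor-plane coordinates in meters
    orientation: float       # rotation about the vertical axis, in radians
    model_id: str = ""       # reference to a 3D mesh asset

@dataclass
class IndoorScene:
    room_contour: list                       # polygon vertices of the room boundary
    objects: list = field(default_factory=list)

# A toy 4 m x 3 m room with two objects.
scene = IndoorScene(room_contour=[(0, 0), (4, 0), (4, 3), (0, 3)])
scene.objects.append(SceneObject("bed", (1.0, 2.0), 0.0))
scene.objects.append(SceneObject("desk", (3.5, 1.0), 1.5708))
```

Even this toy form leaves many choices open (mesh format, semantics, hierarchy), which is exactly why no unified structure has emerged.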
Automatic synthesis allows the computer to build an indoor layout directly from few inputs, such as the room contour or a list of objects. Initially, scenes were synthesized by manually specifying rules and applying optimizers that attempt to satisfy them. However, practical scenes quickly grow too complex for all rules to be enumerated by hand. As the number of available digital indoor scenes has increased, more works have introduced machine learning methods to learn priors from 3D indoor scene datasets. Most of these works organize the furniture as a graph and apply graph algorithms to process the scene information, outperforming the earlier rule-based approaches. Researchers have also applied deep learning (DL) technology, such as convolutional neural networks and generative adversarial networks, to indoor scene synthesis, strongly improving the synthesized results. Synthesis with multichannel and rich input aims to build a digital indoor 3D scene from unformatted information, such as images, text, RGBD scans, or point clouds. These algorithms enable convenient digital copies of real-world scenes, which are mainly recorded as photos or textual descriptions. Compared with automatic synthesis, scene synthesis with multichannel and rich input does not pursue diversity or aesthetics; instead, it requires algorithms that accurately reconstruct the indoor scene in the digital world. Interactive synthesis lets users control the process of computer-aided scene synthesis. Related works can mainly be divided into two parts: active and passive interactive synthesis. Active interactive synthesis provides designers with suggestions while they synthesize a scene.
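The early rule-plus-optimizer paradigm described above can be sketched as a toy example; the rules, weights, and room size below are all hypothetical, and the optimizer is plain random search rather than any specific published method. A cost function encodes hand-written guidelines (keep furniture near walls, keep objects apart), and the search keeps the lowest-cost layout:

```python
import math
import random

ROOM_W, ROOM_H = 4.0, 3.0   # room dimensions in meters (hypothetical)

def cost(layout):
    """Hand-written rules: reward wall proximity, penalize crowding."""
    c = 0.0
    for i, (xi, yi) in enumerate(layout):
        # Rule 1: furniture should sit near a wall (distance to nearest wall).
        c += min(xi, ROOM_W - xi, yi, ROOM_H - yi)
        # Rule 2: objects closer than 1 m incur a strong penalty.
        for xj, yj in layout[i + 1:]:
            d = math.hypot(xi - xj, yi - yj)
            c += max(0.0, 1.0 - d) * 10.0
    return c

def synthesize(n_objects, iters=2000, seed=0):
    """Random search: keep the best of many uniformly sampled layouts."""
    rng = random.Random(seed)
    sample = lambda: [(rng.uniform(0, ROOM_W), rng.uniform(0, ROOM_H))
                      for _ in range(n_objects)]
    best = sample()
    for _ in range(iters):
        cand = sample()
        if cost(cand) < cost(best):
            best = cand
    return best

layout = synthesize(3)
```

As the survey notes, later works replace such enumerated rules with priors learned from indoor scene datasets, since hand-written rule sets cannot cover complex real scenes.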
If the synthesis program can analyze designers' interactions and recommend the options most likely to be chosen, considerable workload can be saved. In passive interactive synthesis, the system learns the user's personal preferences from aspects such as behavior trajectory, personal abilities, work habits, and height, and automatically synthesizes scenes that match those preferences as closely as possible. Finally, this survey summarizes the application scenarios and core technologies of the surveyed papers and introduces other typical application scenarios and future challenges. We summarized and classified recent studies on applications of digital 3D scene synthesis to form this survey. Digital 3D indoor scene synthesis has made great progress and has broad prospects. Automatic scene synthesis has largely achieved its goal, and attention should now be focused on proposing and resolving sub-problems and related issues. For scene synthesis with rich input, existing work has explored inputs such as images, RGBD scans, text, and sketches; in the future, more potential input forms, such as music and symbols, should be explored. For interactive scene synthesis, current interactions are still limited to mouse and keyboard, and methods based on immersive settings such as virtual reality, augmented reality, and mixed reality still need to be explored. Scene synthesis algorithms have continuously broadened their applications. Industry normally requires the automatic synthesis of large numbers of indoor scenes, and synthesis efficiency can be strongly increased if a computer can suggest objects and their layout. In academic studies, 3D scenes are usually used to build all kinds of datasets: by rendering a scene from various perspectives and channels, researchers can easily obtain images.
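The suggestion idea behind active interactive synthesis can be illustrated with a toy co-occurrence prior; the counts below are invented for illustration and do not come from any dataset in the survey. The system ranks candidate objects by how often they co-occur with what the designer has already placed:

```python
from collections import Counter

# Hypothetical statistics mined from an indoor scene dataset:
# cooccur[(a, b)] = number of scenes containing both object a and object b.
cooccur = Counter({
    ("bed", "nightstand"): 80, ("bed", "wardrobe"): 60,
    ("bed", "desk"): 30, ("desk", "chair"): 90,
    ("desk", "bookshelf"): 40,
})

def suggest(placed, candidates, top_k=2):
    """Rank candidates by total co-occurrence with already-placed objects."""
    def score(c):
        # Look up the pair in both orders, since the counts are symmetric.
        return sum(cooccur.get((p, c), 0) + cooccur.get((c, p), 0) for p in placed)
    return sorted(candidates, key=score, reverse=True)[:top_k]

# A designer has just placed a bed; the system suggests likely companions.
print(suggest(["bed"], ["nightstand", "wardrobe", "desk", "chair"]))
# → ['nightstand', 'wardrobe']
```

Real systems extend this idea with spatial priors, learning not only what to suggest but also where to place it.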
However, the study of indoor scene synthesis still faces a number of limitations. Dissimilar data structures make it difficult to extend the work of others, and copyright issues prevent scene datasets from being freely used by researchers and developers. In the future, indoor scene datasets with more furniture models and room contours will serve as the basis of indoor scene synthesis studies. Numerous related fields, such as style consistency and automatic photography, are also making progress.
indoor scene; 3D scene synthesis; 3D scene interaction; 3D scene intelligent editing; computer graphics