Title:
Unified Representation for XR Content and its Rendering Method
Authors:
Yongjae Lee, Changhyun Moon, Heedong Ko, Soo-Hong Lee & Byounghyun Yoo
Published:
Web3D '20: The 25th International Conference on 3D Web Technology, November 2020, Article No.: 21, Pages 1–10 [ACM Digital Library]
Abstract:
Virtual Reality (VR) and Augmented Reality (AR) have become familiar technologies, with related markets growing rapidly every year. Moreover, the idea of treating VR and AR as one eXtended Reality (XR) has blurred the border between virtual space and real space. However, there is no formal way to create such XR content other than through existing VR or AR content development platforms. These platforms require the content author to perform additional tasks, such as duplicating content for a specific user interaction environment (VR or AR) and associating the copies as one. Likewise, describing the content in an existing markup language (e.g., X3D, X3DOM, A-Frame) has the limitation that the content author must predefine the user interaction environment (i.e., either VR or AR). In this study, a unified XR representation for describing XR content is defined, and a method to render it is proposed. The unified XR representation extends HTML, so content authored with this representation can be harmoniously incorporated into existing web documents and can exploit resources on the World Wide Web. The XR renderer, which draws XR content on the screen, follows different procedures for VR and AR situations. Consequently, the XR content works in both user interaction environments (VR and AR). Hence, this study provides a straightforward XR content authoring method that users can access anywhere through a web browser, regardless of their situational context, such as VR or AR. It facilitates XR collaboration involving real objects by giving both VR and AR users access to identical content.
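The abstract does not include code, but as an illustration of the renderer's mode-dependent behavior described above, the sketch below (not from the paper; the function name and fallback policy are assumptions) shows how a web-based XR renderer could branch between VR and AR rendering paths using the standard WebXR Device API, falling back to whichever immersive mode the browser supports so the same content serves both kinds of users.

    // Illustrative sketch only (not from the paper): one way a web-based XR
    // renderer could branch between VR and AR rendering paths using the
    // standard WebXR Device API. Assumes WebXR type definitions
    // (e.g., @types/webxr) are available; startXRSession is a hypothetical name.
    type XRModePreference = 'immersive-vr' | 'immersive-ar';

    async function startXRSession(
      preferred: XRModePreference
    ): Promise<XRSession | null> {
      const xr = navigator.xr;
      if (!xr) {
        console.warn('WebXR is not available in this browser.');
        return null;
      }
      // Try the preferred mode first, then fall back to the other immersive
      // mode, so identical content can serve both VR and AR users.
      const fallback: XRModePreference =
        preferred === 'immersive-vr' ? 'immersive-ar' : 'immersive-vr';
      for (const mode of [preferred, fallback]) {
        if (await xr.isSessionSupported(mode)) {
          return xr.requestSession(mode);
        }
      }
      console.warn('Neither immersive VR nor immersive AR is supported.');
      return null;
    }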