Number of hits: 179
Created: Thu, 28 Sep 2023 23:13:54 +0200 in 0.0744 sec

De Souza Cardoso, Luís Fernando; Kimura, Bruno Yuji Lino; Zorzal, Ezequiel Roberto
Towards augmented and mixed reality on future mobile networks. - In: Multimedia tools and applications, ISSN 1573-7721, Vol. 0 (2023), no. 0, 36 pp. in total

Augmented and Mixed Reality (AR/MR) technologies enhance the human perception of the world by combining virtual and real environments. With the proliferation of mobile devices and the advent of 5G, this technology has the potential to become part of people’s lives. This article aims to evaluate the impact of 5G and beyond mobile networks on the future of AR/MR. To address this objective, we surveyed four digital libraries to identify articles and reviews concerning AR/MR use based on mobile networks. The results describe the state of the art of mobile AR/MR applications and the benefits and challenges of the technology. Finally, after the review, we propose a roadmap concerning AR/MR hardware and software development to run applications supported by future mobile networks.

Knutzen, Kathrin; Weidner, Florian; Broll, Wolfgang
The role of social identity labels in CVEs on user behavior. - In: IEEE Xplore digital library, ISSN 2473-2001, (2023), pp. 883-884

Psychological and individual factors like group identity influence social presence in collaborative virtual settings. We investigated the impact of social identity labels, which reflect a user's nation and academic affiliation, on collaborative behavior. In an experiment, N=18 dyads played puzzle games while seeing or not seeing such labels. There were no significant differences regarding their social presence, trust, group identification, or enjoyment. We argue that social identity labels in dyadic interactions do not change collaborative virtual behavior. We advance the field of sociotechnical applications by highlighting the relationship between psychological characteristics and cooperative behavior in collaborative virtual settings.

Weidner, Florian; Böttcher, Gerd; Arévalo Arboleda, Stephanie; Diao, Chenyao; Sinani, Luljeta; Kunert, Christian; Gerhardt, Christoph; Broll, Wolfgang; Raake, Alexander
A systematic review on the visualization of avatars and agents in AR & VR displayed using head-mounted displays. - In: IEEE transactions on visualization and computer graphics, ISSN 1941-0506, Vol. 29 (2023), 5, pp. 2596-2606

Augmented Reality (AR) and Virtual Reality (VR) are pushing from the labs towards consumers, especially with social applications. These applications require visual representations of humans and intelligent entities. However, displaying and animating photo-realistic models comes with a high technical cost, while low-fidelity representations may evoke eeriness and could degrade the overall experience. Thus, it is important to carefully select what kind of avatar to display. This article investigates the effects of rendering style and visible body parts in AR and VR by conducting a systematic literature review. We analyzed 72 papers that compare various avatar representations. Our analysis includes an outline of the research published between 2015 and 2022 on the topic of avatars and agents in AR and VR displayed using head-mounted displays, covering aspects like visible body parts (e.g., hands only, hands and head, full-body) and rendering style (e.g., abstract, cartoon, realistic); an overview of collected objective and subjective measures (e.g., task performance, presence, user experience, body ownership); and a classification of tasks where avatars and agents were used into task domains (physical activity, hand interaction, communication, game-like scenarios, and education/training). We discuss and synthesize our results within the context of today's AR and VR ecosystem, provide guidelines for practitioners, and finally identify and present promising research opportunities to encourage future research of avatars and agents in AR/VR environments.

Weidner, Florian; Maier, Jana E.; Broll, Wolfgang
Eating, smelling, and seeing: investigating multisensory integration and (in)congruent stimuli while eating in VR. - In: IEEE transactions on visualization and computer graphics, ISSN 1941-0506, Vol. 29 (2023), 5, pp. 2423-2433

Integrating taste in AR/VR applications has various promising use cases - from social eating to the treatment of disorders. Despite many successful AR/VR applications that alter the taste of beverages and food, the relationship between olfaction, gustation, and vision during the process of multisensory integration (MSI) has not been fully explored yet. Thus, we present the results of a study in which participants were confronted with congruent and incongruent visual and olfactory stimuli while eating a tasteless food product in VR. We were interested in (1) whether participants integrate bi-modal congruent stimuli and (2) whether vision guides MSI during congruent/incongruent conditions. Our results contain three main findings: First, and surprisingly, participants were not always able to detect congruent visual-olfactory stimuli when eating a portion of tasteless food. Second, when confronted with tri-modal incongruent cues, a majority of participants did not rely on any of the presented cues when forced to identify what they ate; this includes vision, which has previously been shown to dominate MSI. Third, although research has shown that basic taste qualities like sweetness, saltiness, or sourness can be influenced by congruent cues, doing so with more complex flavors (e.g., zucchini or carrot) proved to be harder to achieve. We discuss our results in the context of multimodal integration, and within the domain of multisensory AR/VR. Our results are a necessary building block for future human-food interaction in XR that relies on smell, taste, and vision and are foundational for applied domains such as affective AR/VR.

Dörner, Ralf; Broll, Wolfgang; Jung, Bernhard; Grimm, Paul; Göbel, Martin; Kruse, Rolf
Introduction to Virtual and Augmented Reality. - In: Virtual and augmented reality (VR/AR), (2022), pp. 1-37

What is Virtual Reality (VR)? What is Augmented Reality (AR)? What is the purpose of VR/AR? What are the basic concepts? What are the hardware and software components of VR/AR systems? How has VR/AR developed historically? The first chapter examines these questions and provides an introduction to this textbook. This chapter is fundamental for the whole book. All subsequent chapters build on it and do not depend directly on one another. Therefore, these chapters can be worked through selectively and in a sequence that suits the individual interests and needs of the readers. Corresponding tips on how this book can be used efficiently by different target groups (students, teachers, users, technology enthusiasts) are provided at the end of the chapter, as well as a summary, questions for reviewing what has been learned, recommendations for further reading, and the references used in the chapter.

Grimm, Paul; Broll, Wolfgang; Herold, Rigo; Hummel, Johannes; Kruse, Rolf
VR/AR input devices and tracking. - In: Virtual and augmented reality (VR/AR), (2022), pp. 107-148

How do Virtual Reality (VR) and Augmented Reality (AR) systems recognize the actions of users? How does a VR or AR system know where the user is? How can a system track objects in their movement? What are proven input devices for VR and AR that increase immersion in virtual or augmented worlds? What are the technical possibilities and limitations? Starting from fundamentals that explain terms like degrees of freedom, accuracy, update rates, latency, and calibration, the chapter considers methods used for the continuous tracking and monitoring of objects. Frequently used input devices are presented and discussed. Finally, examples of special methods such as finger and eye tracking are discussed.
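The degrees-of-freedom concept from the chapter summary can be made concrete with a small sketch. This is an illustrative example, not material from the book: the `Pose` class and its quaternion layout are my own assumptions. A tracked rigid object has six degrees of freedom, three translational and three rotational, and a tracking system reports them as a pose that maps points from the object's local frame into world space.

```python
from dataclasses import dataclass
import math

@dataclass
class Pose:
    """One 6-DoF tracking sample: 3 translational DoF (position) plus
    3 rotational DoF (orientation, stored as a unit quaternion w, x, y, z)."""
    position: tuple      # (x, y, z) in metres
    orientation: tuple   # (w, x, y, z), unit quaternion

    def transform(self, point):
        """Map a point from the tracked object's local frame into world space:
        rotate by the quaternion, then translate."""
        w, x, y, z = self.orientation
        px, py, pz = point
        # Expanded form of the quaternion rotation p' = q * p * q^-1
        rx = (1 - 2*(y*y + z*z))*px + 2*(x*y - w*z)*py + 2*(x*z + w*y)*pz
        ry = 2*(x*y + w*z)*px + (1 - 2*(x*x + z*z))*py + 2*(y*z - w*x)*pz
        rz = 2*(x*z - w*y)*px + 2*(y*z + w*x)*py + (1 - 2*(x*x + y*y))*pz
        tx, ty, tz = self.position
        return (rx + tx, ry + ty, rz + tz)

# Example: an object rotated 90 degrees about the vertical (y) axis,
# sitting one metre in front of the origin.
half = math.sqrt(0.5)
pose = Pose(position=(0.0, 0.0, -1.0), orientation=(half, 0.0, half, 0.0))
# A point one unit along the object's local x axis lands at roughly (0, 0, -2).
world_point = pose.transform((1.0, 0.0, 0.0))
```

A real tracker delivers such poses at a fixed update rate; latency is the delay between the physical motion and the pose sample reaching the application.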

Broll, Wolfgang; Grimm, Paul; Herold, Rigo; Reiners, Dirk; Cruz-Neira, Carolina
VR/AR output devices. - In: Virtual and augmented reality (VR/AR), (2022), pp. 149-200

This chapter discusses output devices and technologies for Virtual Reality (VR) and Augmented Reality (AR). The goal of using output devices is to enable the user to dive into the virtual world or to perceive the augmented world. Devices for visual output play a crucial role here; they are of central importance for the use of VR and AR. First and foremost, Head-Mounted Displays (HMDs) must be mentioned, the different types of which are discussed in detail here. However, VR also uses different forms of stationary displays, which are another major topic of this chapter. Finally, output devices for other senses are reviewed, namely acoustic and haptic outputs.

Broll, Wolfgang
Augmented reality. - In: Virtual and augmented reality (VR/AR), (2022), pp. 291-329

This chapter covers specific topics of Augmented Reality (AR). After an introduction to the basic components and a review of the different types of AR, the following sections explain the individual components in more detail, insofar as they were not already covered in previous chapters. This includes in particular the different manifestations of registration, since these are of central importance for an AR experience. Furthermore, special AR techniques and interaction types are introduced before discussing individual application areas of AR. Then, Diminished Reality (DR), the opposite of AR, is discussed, namely the removal of real content. Finally, Mediated Reality, which allows for altering reality in any form, including the combination of AR and DR, will be discussed.

Broll, Wolfgang; Weidner, Florian; Schwandt, Tobias; Weber, Kai; Dörner, Ralf
Authoring of VR/AR applications. - In: Virtual and augmented reality (VR/AR), (2022), pp. 371-400

This chapter deals with the authoring of VR and AR applications. The focus here is on the use of authoring tools in the form of software development kits (SDKs) or game engines. First, the actual authoring process will be briefly discussed before selected authoring tools for VR and AR are reviewed. Subsequently, the authoring process and the use of the tools will be illustrated through typical case studies. The other chapters of this book deal with the fundamentals and methodologies of VR and AR. These are generally applicable over a longer period. In contrast, this chapter looks at some very specific authoring tools and the authoring process based on them, which can inevitably only represent a snapshot in time. Features, releases, and availability of these tools can change at short notice, so that individual sections may no longer be up to date by the time this book goes to print. To take this aspect into account, the case studies listed here are stored in an online repository, where they are regularly updated to reflect the latest versions of the authoring tools and runtime environments.

Makled, Elhassan; Weidner, Florian; Broll, Wolfgang
Investigating user embodiment of inverse-kinematic avatars in smartphone Augmented Reality. - In: 2022 IEEE International Symposium on Mixed and Augmented Reality, (2022), pp. 666-675

Smartphone Augmented Reality (AR) has already provided us with a plethora of social applications such as Pokémon Go or Harry Potter: Wizards Unite. However, to enable smartphone AR for social applications similar to VRChat or AltspaceVR, proper user tracking is necessary to accurately animate the avatars. In Virtual Reality (VR), avatar tracking is rather easy due to the availability of hand tracking, controllers, and HMDs, whereas smartphone AR has only the back (and front) camera and IMUs available for this task. In this paper, we propose ARIKA, a tracking solution for avatars in smartphone AR. ARIKA uses tracking information from ARCore to track the user's hand position and to calculate a pose using Inverse Kinematics (IK). We compare the accuracy of our system against a commercial motion tracking system and compare both systems with respect to sense of agency, self-location, and body ownership. For this, 20 participants observed their avatars in an augmented virtual mirror and executed a navigation task and a pointing task. Our results show that participants felt a higher sense of agency and self-location when using the full-body tracked avatar as opposed to IK avatars. Interestingly, and in favor of ARIKA, there were no significant differences in body ownership between our solution and the full-body tracked avatars. Thus, ARIKA and its single-camera approach is a valid solution for smartphone AR applications where body ownership is essential.
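The core idea of computing an arm pose from a single tracked hand position can be sketched with a minimal two-bone IK solver based on the law of cosines. This is a 2-D toy example under my own assumptions (the function name, the planar geometry, and the elbow-bend direction are all illustrative); it is not the solver used by ARIKA, which works in 3-D from ARCore tracking data.

```python
import math

def two_bone_ik(shoulder, target, upper_len, lower_len):
    """Planar two-bone IK: given a fixed shoulder anchor and a tracked hand
    target, return plausible elbow and wrist positions for an arm made of an
    upper segment (upper_len) and a lower segment (lower_len)."""
    dx, dy = target[0] - shoulder[0], target[1] - shoulder[1]
    dist = math.hypot(dx, dy)
    # Clamp the target into the arm's reachable annulus.
    dist = max(abs(upper_len - lower_len), min(dist, upper_len + lower_len))
    # Law of cosines: angle at the shoulder between the shoulder-to-target
    # line and the upper arm segment.
    cos_a = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    a = math.acos(max(-1.0, min(1.0, cos_a)))
    base = math.atan2(dy, dx)
    # Bend the elbow to one side of the shoulder-target line (a convention;
    # real systems pick the bend direction from a pole target or heuristics).
    elbow = (shoulder[0] + upper_len * math.cos(base + a),
             shoulder[1] + upper_len * math.sin(base + a))
    # The wrist sits at the (possibly clamped) target.
    wrist = (shoulder[0] + dist * math.cos(base),
             shoulder[1] + dist * math.sin(base))
    return elbow, wrist

# Example: a 0.3 m upper arm and 0.3 m forearm reaching for a hand tracked
# 0.5 m to the side of the shoulder.
elbow, wrist = two_bone_ik((0.0, 0.0), (0.5, 0.0), upper_len=0.3, lower_len=0.3)
```

Solving this per frame from the tracked hand position yields an avatar arm pose without any body-worn sensors, which is the kind of estimation a single-camera smartphone setup has to rely on.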