UniteXR: joint exploration of a real-world museum and its digital twin. - In: VRST 2023, (2023), 25, 10 pp. in total
The combination of smartphone Augmented Reality (AR) and Virtual Reality (VR) makes it possible for on-site and remote users to simultaneously explore a physical space and its digital twin through an asymmetric Collaborative Virtual Environment (CVE). In this paper, we investigate two spatial awareness visualizations to enable joint exploration of a space for dyads consisting of a smartphone AR user and a head-mounted display VR user. Our study revealed that both a mini-map-based method and an egocentric compass method with path visualization enabled the on-site visitors to locate and follow a virtual companion reliably and quickly. Furthermore, the embodiment of the AR user by an inverse kinematics avatar allowed the use of natural gestures such as pointing and waving, which the participants of our study preferred over text messages. In an expert review in a museum and its digital twin, we observed an overall high social presence for on-site AR and remote VR visitors and found that the visualizations and the avatar embodiment successfully facilitated their communication and collaboration.
Cross-timescale experience evaluation framework for productive teaming. - In: Engineering for a Changing World, (2023), 5.4.129, 6 pp. in total
Colloquium: 60th ISC, Ilmenau Scientific Colloquium, Ilmenau, 04.-08.09.2023
This paper presents the initial concept for an evaluation framework to systematically evaluate productive teaming (PT). We consider PT as adaptive human-machine interactions between human users and augmented technical production systems. In addition, human-to-human communication within a hybrid team with multiple human actors is considered, as well as human-human and human-machine communication for remote and mixed remote- and co-located teams. The evaluation comprises objective, performance-related success indicators, behavioral metadata, and measures of human experience. In particular, it considers the affective, attentional, and intentional states of human team members and their influence on interaction dynamics in the team, and investigates appropriate strategies for satisfactorily adjusting dysfunctional dynamics using concepts of companion technology. The timescales under consideration span from seconds to several minutes, with selected studies targeting hour-long interactions and longer-term effects such as effort and fatigue. Two example PT scenarios are discussed in more detail. To enable generalization and a systematic evaluation, the scenarios' use cases are decomposed into more general modules of interaction.
Experiences integrating the autograding system CodeOcean into university programming education. - In: Proceedings of the Sixth Workshop "Automatische Bewertung von Programmieraufgaben" (ABP 2023), (2023), pp. 67-74
Effective and efficient university programming education increasingly requires the use of automated assessment systems. Within the examING project, the AutoPING subproject is piloting the open-source autograding system CodeOcean for cross-curricular courses and examinations at TU Ilmenau, with the goal of enabling and promoting self-directed and competence-oriented learning. This contribution gives an overview of initial project experiences in adapting didactic scenarios in programming education towards test-driven software development and the generation of feedback. Key insights from the perspectives of students and teachers are discussed, challenges and solution approaches for integrating and extending CodeOcean for new fields of application are examined, and future perspectives are outlined.
Towards augmented and mixed reality on future mobile networks. - In: Multimedia tools and applications, ISSN 1573-7721, Vol. 0 (2023), 0, 36 pp. in total
Augmented and Mixed Reality (AR/MR) technologies enhance the human perception of the world by combining virtual and real environments. With the proliferation of mobile devices and the advent of 5G, this technology has the potential to become part of people's lives. This article aims to evaluate the impact of 5G and beyond mobile networks on the future of AR/MR. To address this objective, we surveyed four digital libraries to identify articles and reviews concerning AR/MR use based on mobile networks. The results describe the state of the art of mobile AR/MR applications and the benefits and challenges of the technology. Finally, based on the review, we propose a roadmap for AR/MR hardware and software development to run applications supported by future mobile networks.
The role of social identity labels in CVEs on user behavior. - In: 2023 IEEE Conference on Virtual Reality and 3D User Interfaces abstracts and workshops, (2023), S. 883-884
Psychological and individual factors such as group identity influence social presence in collaborative virtual settings. We investigated the impact of social identity labels, which reflect a user's nation and academic affiliation, on collaborative behavior. In an experiment, N=18 dyads played puzzle games while seeing or not seeing such labels. There were no significant differences regarding their social presence, trust, group identification, or enjoyment. We argue that social identity labels in dyadic interactions do not change collaborative virtual behavior. We advance the field of sociotechnical applications by highlighting the relationship between psychological characteristics and cooperative behavior in collaborative virtual settings.
A systematic review on the visualization of avatars and agents in AR & VR displayed using head-mounted displays. - In: IEEE transactions on visualization and computer graphics, ISSN 1941-0506, Bd. 29 (2023), 5, S. 2596-2606
Augmented Reality (AR) and Virtual Reality (VR) are moving from research labs towards consumers, especially with social applications. These applications require visual representations of humans and intelligent entities. However, displaying and animating photo-realistic models comes at a high technical cost, while low-fidelity representations may evoke eeriness and could degrade the overall experience. Thus, it is important to carefully select what kind of avatar to display. This article investigates the effects of rendering style and visible body parts in AR and VR through a systematic literature review. We analyzed 72 papers that compare various avatar representations. Our analysis includes an outline of the research published between 2015 and 2022 on the topic of avatars and agents in AR and VR displayed using head-mounted displays, covering aspects like visible body parts (e.g., hands only, hands and head, full-body) and rendering style (e.g., abstract, cartoon, realistic); an overview of collected objective and subjective measures (e.g., task performance, presence, user experience, body ownership); and a classification of the tasks where avatars and agents were used into task domains (physical activity, hand interaction, communication, game-like scenarios, and education/training). We discuss and synthesize our results within the context of today's AR and VR ecosystem, provide guidelines for practitioners, and finally identify and present promising research opportunities to encourage future research on avatars and agents in AR/VR environments.
Eating, smelling, and seeing: investigating multisensory integration and (in)congruent stimuli while eating in VR. - In: IEEE transactions on visualization and computer graphics, ISSN 1941-0506, Bd. 29 (2023), 5, S. 2423-2433
Integrating taste in AR/VR applications has various promising use cases - from social eating to the treatment of disorders. Despite many successful AR/VR applications that alter the taste of beverages and food, the relationship between olfaction, gustation, and vision during the process of multisensory integration (MSI) has not been fully explored yet. Thus, we present the results of a study in which participants were confronted with congruent and incongruent visual and olfactory stimuli while eating a tasteless food product in VR. We were interested in (1) whether participants integrate bi-modal congruent stimuli and (2) whether vision guides MSI during congruent/incongruent conditions. Our results contain three main findings: First, and surprisingly, participants were not always able to detect congruent visual-olfactory stimuli when eating a portion of tasteless food. Second, when confronted with tri-modal incongruent cues, a majority of participants did not rely on any of the presented cues when forced to identify what they were eating; this includes vision, which has previously been shown to dominate MSI. Third, although research has shown that basic taste qualities like sweetness, saltiness, or sourness can be influenced by congruent cues, doing so with more complex flavors (e.g., zucchini or carrot) proved harder to achieve. We discuss our results in the context of multimodal integration and within the domain of multisensory AR/VR. Our results are a necessary building block for future human-food interaction in XR that relies on smell, taste, and vision, and are foundational for applied domains such as affective AR/VR.
Introduction to Virtual and Augmented Reality. - In: Virtual and augmented reality (VR/AR), (2022), S. 1-37
What is Virtual Reality (VR)? What is Augmented Reality (AR)? What is the purpose of VR/AR? What are the basic concepts? What are the hardware and software components of VR/AR systems? How has VR/AR developed historically? The first chapter examines these questions and provides an introduction to this textbook. This chapter is fundamental for the whole book. All subsequent chapters build on it but do not depend directly on one another. Therefore, these chapters can be worked through selectively and in a sequence that suits the individual interests and needs of the readers. Corresponding tips on how this book can be used efficiently by different target groups (students, teachers, users, technology enthusiasts) are provided at the end of the chapter, as well as a summary, questions for reviewing what has been learned, recommendations for further reading, and the references used in the chapter.
VR/AR input devices and tracking. - In: Virtual and augmented reality (VR/AR), (2022), S. 107-148
How do Virtual Reality (VR) and Augmented Reality (AR) systems recognize the actions of users? How does a VR or AR system know where the user is? How can a system track objects in their movement? What are proven input devices for VR and AR that increase immersion in virtual or augmented worlds? What are the technical possibilities and limitations? Building on fundamentals that explain terms like degrees of freedom, accuracy, update rates, latency, and calibration, methods for the continuous tracking and monitoring of objects are considered. Frequently used input devices are then presented and discussed. Finally, examples of special methods such as finger and eye tracking are covered.
VR/AR output devices. - In: Virtual and augmented reality (VR/AR), (2022), S. 149-200
This chapter discusses output devices and technologies for Virtual Reality (VR) and Augmented Reality (AR). The goal of using output devices is to enable the user to dive into the virtual world or to perceive the augmented world. Devices for visual output play a crucial role here; they are of central importance for the use of VR and AR. First and foremost, Head-Mounted Displays (HMDs) must be mentioned, the different types of which are discussed in detail here. However, VR also uses various forms of stationary displays, which are another major topic of this chapter. Finally, output devices for other senses are reviewed, namely acoustic and haptic outputs.