Number of hits: 189
Created: Tue, 23 Apr 2024 23:15:51 +0200 in 0.0787 sec


Raake, Alexander; Broll, Wolfgang; Chuang, Lewis L.; Domahidi, Emese; Wendemuth, Andreas
Cross-timescale experience evaluation framework for productive teaming. - In: Engineering for a changing world, (2023), 5.4.129, S. 1-6

This paper presents the initial concept for an evaluation framework to systematically evaluate productive teaming (PT). We consider PT as adaptive human-machine interactions between human users and augmented technical production systems. Human-to-human communication within hybrid teams comprising multiple human actors is also considered, as is human-human and human-machine communication in remote and mixed remote/co-located teams. The evaluation comprises objective, performance-related success indicators, behavioral metadata, and measures of human experience. In particular, it considers affective, attentional, and intentional states of human team members and their influence on interaction dynamics in the team, and it investigates appropriate strategies for satisfactorily adjusting dysfunctional dynamics using concepts of companion technology. The timescales under consideration span from seconds to several minutes, with selected studies targeting hour-long interactions and longer-term effects such as effort and fatigue. Two example PT scenarios are discussed in more detail. To enable generalization and a systematic evaluation, the scenarios' use cases are decomposed into more general modules of interaction.



https://doi.org/10.22032/dbt.58930
Amthor, Peter; Döring, Ulf; Fischer, Daniel; Genath, Jonas; Kreuzberger, Gunther
Erfahrungen bei der Integration des Autograding-Systems CodeOcean in die universitäre Programmierausbildung. - In: Proceedings of the Sixth Workshop "Automatische Bewertung von Programmieraufgaben" (ABP 2023), (2023), S. 67-74

Effective and efficient university-level programming education increasingly requires the use of automated assessment systems. Within the examING project, the subproject AutoPING is trialing the use of the open-source autograding system CodeOcean for cross-curricular courses and examinations at TU Ilmenau, with the goal of enabling and promoting self-directed and competency-oriented learning. This contribution gives an overview of initial project experiences in adapting didactic scenarios in programming education toward test-driven software development and the generation of feedback. Key insights from the perspectives of students and teachers are examined, challenges and approaches to integrating and extending CodeOcean for new fields of application are discussed, and future perspectives are outlined.



https://doi.org/10.18420/abp2023-9
Knutzen, Kathrin; Weidner, Florian; Broll, Wolfgang
The role of social identity labels in CVEs on user behavior. - In: 2023 IEEE Conference on Virtual Reality and 3D User Interfaces abstracts and workshops, (2023), S. 883-884

Psychological and individual factors such as group identity influence social presence in collaborative virtual settings. We investigated the impact of social identity labels, which reflect a user's nation and academic affiliation, on collaborative behavior. In an experiment, N=18 dyads played puzzle games while either seeing or not seeing such labels. There were no significant differences regarding social presence, trust, group identification, or enjoyment. We argue that social identity labels in dyadic interactions do not change collaborative virtual behavior. We advance the field of sociotechnical applications by highlighting the relationship between psychological characteristics and cooperative behavior in collaborative virtual settings.



https://doi.org/10.1109/VRW58643.2023.00284
Weidner, Florian; Böttcher, Gerd; Arévalo Arboleda, Stephanie; Diao, Chenyao; Sinani, Luljeta; Kunert, Christian; Gerhardt, Christoph; Broll, Wolfgang; Raake, Alexander
A systematic review on the visualization of avatars and agents in AR & VR displayed using head-mounted displays. - In: IEEE transactions on visualization and computer graphics, ISSN 1941-0506, Bd. 29 (2023), 5, S. 2596-2606

Augmented Reality (AR) and Virtual Reality (VR) are moving from the lab to consumers, especially with social applications. These applications require visual representations of humans and intelligent entities. However, displaying and animating photo-realistic models comes at a high technical cost, while low-fidelity representations may evoke eeriness and can degrade the overall experience. Thus, it is important to carefully select what kind of avatar to display. This article investigates the effects of rendering style and visible body parts in AR and VR through a systematic literature review. We analyzed 72 papers that compare various avatar representations. Our analysis includes an outline of the research published between 2015 and 2022 on avatars and agents in AR and VR displayed using head-mounted displays, covering aspects such as visible body parts (e.g., hands only, hands and head, full body) and rendering style (e.g., abstract, cartoon, realistic); an overview of collected objective and subjective measures (e.g., task performance, presence, user experience, body ownership); and a classification of the tasks in which avatars and agents were used into task domains (physical activity, hand interaction, communication, game-like scenarios, and education/training). We discuss and synthesize our results within the context of today's AR and VR ecosystem, provide guidelines for practitioners, and identify promising research opportunities to encourage future research on avatars and agents in AR/VR environments.



https://doi.org/10.1109/TVCG.2023.3247072
Weidner, Florian; Maier, Jana E.; Broll, Wolfgang
Eating, smelling, and seeing: investigating multisensory integration and (in)congruent stimuli while eating in VR. - In: IEEE transactions on visualization and computer graphics, ISSN 1941-0506, Bd. 29 (2023), 5, S. 2423-2433

Integrating taste into AR/VR applications has various promising use cases, from social eating to the treatment of disorders. Despite many successful AR/VR applications that alter the taste of beverages and food, the relationship between olfaction, gustation, and vision during multisensory integration (MSI) has not yet been fully explored. Thus, we present the results of a study in which participants were confronted with congruent and incongruent visual and olfactory stimuli while eating a tasteless food product in VR. We were interested in (1) whether participants integrate bi-modal congruent stimuli and (2) whether vision guides MSI under congruent/incongruent conditions. Our results contain three main findings: First, and surprisingly, participants were not always able to detect congruent visual-olfactory stimuli when eating a portion of tasteless food. Second, when confronted with tri-modal incongruent cues, a majority of participants did not rely on any of the presented cues when forced to identify what they were eating; this includes vision, which has previously been shown to dominate MSI. Third, although research has shown that basic taste qualities like sweetness, saltiness, or sourness can be influenced by congruent cues, doing so with more complex flavors (e.g., zucchini or carrot) proved harder to achieve. We discuss our results in the context of multimodal integration and within the domain of multisensory AR/VR. Our results are a necessary building block for future human-food interaction in XR that relies on smell, taste, and vision, and they are foundational for applications such as affective AR/VR.



https://doi.org/10.1109/TVCG.2023.3247099
Dörner, Ralf; Broll, Wolfgang; Jung, Bernhard; Grimm, Paul; Göbel, Martin; Kruse, Rolf
Introduction to Virtual and Augmented Reality. - In: Virtual and augmented reality (VR/AR), (2022), S. 1-37

What is Virtual Reality (VR)? What is Augmented Reality (AR)? What is the purpose of VR/AR? What are the basic concepts? What are the hardware and software components of VR/AR systems? How has VR/AR developed historically? The first chapter examines these questions and provides an introduction to this textbook. This chapter is fundamental for the whole book. All subsequent chapters build on it but do not depend directly on one another. Therefore, these chapters can be worked through selectively and in a sequence that suits the individual interests and needs of the readers. Corresponding tips on how this book can be used efficiently by different target groups (students, teachers, users, technology enthusiasts) are provided at the end of the chapter, as well as a summary, questions for reviewing what has been learned, recommendations for further reading, and the references used in the chapter.



Grimm, Paul; Broll, Wolfgang; Herold, Rigo; Hummel, Johannes; Kruse, Rolf
VR/AR input devices and tracking. - In: Virtual and augmented reality (VR/AR), (2022), S. 107-148

How do Virtual Reality (VR) and Augmented Reality (AR) systems recognize the actions of users? How does a VR or AR system know where the user is? How can a system track objects as they move? What are proven input devices for VR and AR that increase immersion in virtual or augmented worlds? What are the technical possibilities and limitations? Building on fundamentals that explain terms such as degrees of freedom, accuracy, update rates, latency, and calibration, the chapter considers methods used for the continuous tracking and monitoring of objects. Frequently used input devices are presented and discussed. Finally, examples of special methods such as finger and eye tracking are examined.



Broll, Wolfgang; Grimm, Paul; Herold, Rigo; Reiners, Dirk; Cruz-Neira, Carolina
VR/AR output devices. - In: Virtual and augmented reality (VR/AR), (2022), S. 149-200

This chapter discusses output devices and technologies for Virtual Reality (VR) and Augmented Reality (AR). The goal of using output devices is to enable the user to dive into the virtual world or to perceive the augmented world. Devices for visual output play a crucial role here, as they are of central importance for the use of VR and AR. First and foremost, Head-Mounted Displays (HMDs) must be mentioned; their different types are discussed in detail here. However, VR also uses various forms of stationary displays, which are another major topic of this chapter. Finally, output devices for other senses are reviewed, namely acoustic and haptic outputs.



Broll, Wolfgang
Augmented reality. - In: Virtual and augmented reality (VR/AR), (2022), S. 291-329

This chapter covers specific topics of Augmented Reality (AR). After an introduction to the basic components and a review of the different types of AR, the following sections explain the individual components in more detail, insofar as they were not already covered in previous chapters. This includes in particular the different forms of registration, since these are of central importance for an AR experience. Furthermore, special AR techniques and interaction types are introduced before individual application areas of AR are discussed. Then Diminished Reality (DR) is addressed: the opposite of AR, namely the removal of real content. Finally, Mediated Reality, which allows reality to be altered in any form, including combinations of AR and DR, is discussed.



Broll, Wolfgang; Weidner, Florian; Schwandt, Tobias; Weber, Kai; Dörner, Ralf
Authoring of VR/AR applications. - In: Virtual and augmented reality (VR/AR), (2022), S. 371-400

This chapter deals with the authoring of VR and AR applications. The focus is on the use of authoring tools in the form of software development kits (SDKs) or game engines. First, the actual authoring process is briefly discussed before selected authoring tools for VR and AR are reviewed. Subsequently, the authoring process and the use of the tools are illustrated through typical case studies. The other chapters of this book deal with the fundamentals and methodologies of VR and AR, which remain generally applicable over a longer period. In contrast, this chapter looks at some very specific authoring tools and the authoring process based on them, which can inevitably only represent a snapshot in time. Features, releases, and availability of these tools may change at short notice, so individual sections may no longer be up to date by the time this book goes to press. To account for this, the case studies listed here are stored in an online repository, where they are regularly updated to reflect the latest versions of the authoring tools and runtime environments.