Number of hits: 191
Created: Thu, 09 May 2024 23:17:32 +0200 in 0.0764 sec


Gerhardt, Christoph; Weidner, Florian; Broll, Wolfgang
SkyCloud: neural network-based sky and cloud segmentation from natural images. - In: 2023 8th International Conference on Image, Vision and Computing (ICIVC 2023), (2023), pp. 343-351

The comprehensive understanding of outdoor scenes is a necessary requirement for a wide variety of applications. For example, semantic segmentation enables applications such as outdoor robot navigation, image stylization, weather forecasting, or climate monitoring. However, existing outdoor scene understanding models are often less reliable in challenging situations such as changing weather conditions or low light. Additionally, current approaches mainly focus on sky and ground separation and do not incorporate valuable information provided by weather conditions and cloud coverage. To overcome these challenges, we present SkyCloudNet, a multitask neural network architecture that extracts high-level attributes from the input image and utilizes them to improve the robustness of the network to environmental influences. Furthermore, it enables the segmentation of clouds in natural outdoor images. While existing cloud segmentation approaches are limited to cropped sky-only images, our model enables segmentation from entire landscape images with arbitrary resolution. In addition, SkyCloudNet achieves state-of-the-art performance in environmental attribute estimation and sky segmentation. As cloud segmentation from natural images has not been addressed in previous literature, we also release the SkyCloud data set consisting of 350 high-resolution outdoor images with dense labels of sky and cloud segments.
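Since the abstract only sketches the architecture, the following minimal PyTorch sketch illustrates the general multitask pattern it describes: a shared encoder feeding both a dense segmentation head and a global attribute head. All names, layer sizes, and class counts are illustrative assumptions, and the paper's actual mechanism for feeding attributes back into the segmentation path is not reproduced here.

```python
# Hypothetical sketch of a multitask segmentation network in the spirit of
# SkyCloudNet: a shared encoder feeds (a) a pixel-wise segmentation head for
# background/sky/cloud and (b) a global head for environmental attributes.
# All names, layer sizes, and class counts are illustrative assumptions.
import torch
import torch.nn as nn

class MultitaskSkySegNet(nn.Module):
    def __init__(self, num_classes=3, num_attributes=4):
        super().__init__()
        # Shared convolutional encoder (toy depth, for illustration only).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Attribute head: global pooling + linear classifier
        # (e.g., weather condition / cloud coverage estimates).
        self.attr_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_attributes),
        )
        # Segmentation head: 1x1 conv to class logits, upsampled back
        # to the input resolution (encoder downsampled by 4x).
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, num_classes, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.attr_head(feats)

model = MultitaskSkySegNet()
seg_logits, attr_logits = model(torch.randn(1, 3, 256, 256))
print(seg_logits.shape, attr_logits.shape)  # (1, 3, 256, 256), (1, 4)
```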



https://doi.org/10.1109/ICIVC58118.2023.10270450
Schott, Ephraim; Makled, Elhassan; Zöppig, Tony Jan; Mühlhaus, Sebastian; Weidner, Florian; Broll, Wolfgang; Fröhlich, Bernd
UniteXR: joint exploration of a real-world museum and its digital twin. - In: VRST 2023, (2023), article 25, 10 pp. in total

The combination of smartphone Augmented Reality (AR) and Virtual Reality (VR) makes it possible for on-site and remote users to simultaneously explore a physical space and its digital twin through an asymmetric Collaborative Virtual Environment (CVE). In this paper, we investigate two spatial awareness visualizations to enable joint exploration of a space for dyads consisting of a smartphone AR user and a head-mounted display VR user. Our study revealed that both a mini-map-based method and an egocentric compass method with a path visualization enabled the on-site visitors to locate and follow a virtual companion reliably and quickly. Furthermore, the embodiment of the AR user by an inverse kinematics avatar allowed the use of natural gestures such as pointing and waving, which was preferred over text messages by the participants of our study. In an expert review in a museum and its digital twin, we observed an overall high social presence for on-site AR and remote VR visitors and found that the visualizations and the avatar embodiment successfully facilitated their communication and collaboration.
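The egocentric compass cue boils down to a small piece of geometry: the angle between the on-site user's viewing direction and the direction toward the remote companion. The sketch below shows this computation under assumed conventions (a shared 2D floor-plane frame, heading measured counter-clockwise from +x); it is an illustration, not the UniteXR implementation.

```python
# Hypothetical sketch of the geometry behind an egocentric compass cue:
# given the AR user's position and heading and the VR companion's position
# (both in a shared floor-plane coordinate frame), compute the angle at
# which a compass arrow should point. Frames and names are assumptions.
import math

def compass_bearing(user_xy, user_heading_rad, companion_xy):
    """Angle (radians, in [-pi, pi]) from the user's view direction to the
    companion; 0 means 'straight ahead', positive means 'to the left'."""
    dx = companion_xy[0] - user_xy[0]
    dy = companion_xy[1] - user_xy[1]
    world_angle = math.atan2(dy, dx)            # direction to companion
    rel = world_angle - user_heading_rad        # make it egocentric
    return math.atan2(math.sin(rel), math.cos(rel))  # wrap to [-pi, pi]

# User at the origin facing +x; companion directly to the left (+y).
print(math.degrees(compass_bearing((0, 0), 0.0, (0, 5))))  # 90.0
```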



https://doi.org/10.1145/3611659.3615708
Raake, Alexander; Broll, Wolfgang; Chuang, Lewis L.; Domahidi, Emese; Wendemuth, Andreas
Cross-timescale experience evaluation framework for productive teaming. - In: Engineering for a changing world, (2023), 5.4.129, pp. 1-6

This paper presents the initial concept for an evaluation framework to systematically evaluate productive teaming (PT). We consider PT as adaptive human-machine interactions between human users and augmented technical production systems. We also consider human-to-human communication as part of a hybrid team with multiple human actors, as well as human-human and human-machine communication for remote and mixed remote and co-located teams. The evaluation comprises objective, performance-related success indicators, behavioral metadata, and measures of human experience. In particular, it considers affective, attentional, and intentional states of human team members and their influence on interaction dynamics in the team, and it investigates appropriate strategies for satisfactorily adjusting dysfunctional dynamics using concepts of companion technology. The timescales under consideration span from seconds to several minutes, with selected studies targeting hour-long interactions and longer-term effects such as effort and fatigue. Two example PT scenarios are discussed in more detail. To enable generalization and a systematic evaluation, the scenarios' use cases are decomposed into more general modules of interaction.



https://doi.org/10.22032/dbt.58930
Amthor, Peter; Döring, Ulf; Fischer, Daniel; Genath, Jonas; Kreuzberger, Gunther
Erfahrungen bei der Integration des Autograding-Systems CodeOcean in die universitäre Programmierausbildung [Experiences integrating the autograding system CodeOcean into university programming education]. - In: Proceedings of the Sixth Workshop "Automatische Bewertung von Programmieraufgaben" (ABP 2023), (2023), pp. 67-74

Effective and efficient university-level programming education increasingly requires the use of automated assessment systems. Within the examING project, the subproject AutoPING is piloting the open-source autograding system CodeOcean for cross-cutting courses and examinations at TU Ilmenau, with the goal of enabling and fostering self-directed and competence-oriented learning. This contribution gives an overview of initial project experiences in adapting didactic scenarios in programming education towards test-driven software development and the generation of feedback. Key findings from the perspectives of students and teachers are discussed, challenges and solution approaches for integrating and extending CodeOcean for new fields of application are examined, and future perspectives are outlined.
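To make the test-driven autograding idea concrete, here is a toy sketch of how a submission can be scored against instructor-defined test cases. It is purely illustrative and does not reflect CodeOcean's actual API or sandboxing.

```python
# Hypothetical sketch of the core idea behind unit-test-based autograding
# as used by systems like CodeOcean: run a student submission against
# instructor-defined test cases and aggregate the results into a score
# plus human-readable feedback. An illustrative toy, not CodeOcean code.

def grade(student_fn, test_cases):
    """test_cases: list of (args, expected) pairs; returns (score, feedback)."""
    feedback = []
    passed = 0
    for args, expected in test_cases:
        try:
            result = student_fn(*args)
            if result == expected:
                passed += 1
            else:
                feedback.append(f"{args}: expected {expected!r}, got {result!r}")
        except Exception as exc:  # submissions may crash; report, don't abort
            feedback.append(f"{args}: raised {type(exc).__name__}: {exc}")
    return passed / len(test_cases), feedback

# Example: grading a buggy student implementation of absolute value.
student_abs = lambda x: x if x > 0 else x  # bug: forgets to negate
score, notes = grade(student_abs, [((3,), 3), ((-4,), 4), ((0,), 0)])
print(f"score: {score:.0%}", notes)  # score: 67% ['(-4,): expected 4, got -4']
```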



https://doi.org/10.18420/abp2023-9
Knutzen, Kathrin; Weidner, Florian; Broll, Wolfgang
The role of social identity labels in CVEs on user behavior. - In: 2023 IEEE Conference on Virtual Reality and 3D User Interfaces abstracts and workshops, (2023), pp. 883-884

Psychological and individual factors like group identity influence social presence in collaborative virtual settings. We investigated the impact of social identity labels, which reflect a user's nation and academic affiliation, on collaborative behavior. In an experiment, N=18 dyads played puzzle games while seeing or not seeing such labels. There were no significant differences regarding their social presence, trust, group identification, or enjoyment. We argue that social identity labels in dyadic interactions do not change collaborative virtual behavior. We advance the field of sociotechnical applications by highlighting the relationship between psychological characteristics and cooperative behavior in collaborative virtual settings.



https://doi.org/10.1109/VRW58643.2023.00284
Weidner, Florian; Böttcher, Gerd; Arévalo Arboleda, Stephanie; Diao, Chenyao; Sinani, Luljeta; Kunert, Christian; Gerhardt, Christoph; Broll, Wolfgang; Raake, Alexander
A systematic review on the visualization of avatars and agents in AR & VR displayed using head-mounted displays. - In: IEEE transactions on visualization and computer graphics, ISSN 1941-0506, Vol. 29 (2023), 5, pp. 2596-2606

Augmented Reality (AR) and Virtual Reality (VR) are pushing from the labs towards consumers, especially with social applications. These applications require visual representations of humans and intelligent entities. However, displaying and animating photo-realistic models comes with a high technical cost, while low-fidelity representations may evoke eeriness and degrade the overall experience. Thus, it is important to carefully select what kind of avatar to display. This article investigates the effects of rendering style and visible body parts in AR and VR by means of a systematic literature review. We analyzed 72 papers that compare various avatar representations. Our analysis includes an outline of the research published between 2015 and 2022 on the topic of avatars and agents in AR and VR displayed using head-mounted displays, covering aspects like visible body parts (e.g., hands only, hands and head, full-body) and rendering style (e.g., abstract, cartoon, realistic); an overview of collected objective and subjective measures (e.g., task performance, presence, user experience, body ownership); and a classification of tasks where avatars and agents were used into task domains (physical activity, hand interaction, communication, game-like scenarios, and education/training). We discuss and synthesize our results within the context of today's AR and VR ecosystem, provide guidelines for practitioners, and finally identify and present promising research opportunities to encourage future research on avatars and agents in AR/VR environments.



https://doi.org/10.1109/TVCG.2023.3247072
Weidner, Florian; Maier, Jana E.; Broll, Wolfgang
Eating, smelling, and seeing: investigating multisensory integration and (in)congruent stimuli while eating in VR. - In: IEEE transactions on visualization and computer graphics, ISSN 1941-0506, Vol. 29 (2023), 5, pp. 2423-2433

Integrating taste in AR/VR applications has various promising use cases - from social eating to the treatment of disorders. Despite many successful AR/VR applications that alter the taste of beverages and food, the relationship between olfaction, gustation, and vision during the process of multisensory integration (MSI) has not been fully explored yet. Thus, we present the results of a study in which participants were confronted with congruent and incongruent visual and olfactory stimuli while eating a tasteless food product in VR. We were interested in (1) whether participants integrate bi-modal congruent stimuli and (2) whether vision guides MSI during congruent/incongruent conditions. Our results contain three main findings: First, and surprisingly, participants were not always able to detect congruent visual-olfactory stimuli when eating a portion of tasteless food. Second, when confronted with tri-modal incongruent cues, a majority of participants did not rely on any of the presented cues when forced to identify what they were eating; this includes vision, which has previously been shown to dominate MSI. Third, although research has shown that basic taste qualities like sweetness, saltiness, or sourness can be influenced by congruent cues, doing so with more complex flavors (e.g., zucchini or carrot) proved to be harder to achieve. We discuss our results in the context of multimodal integration, and within the domain of multisensory AR/VR. Our results are a necessary building block for future human-food interaction in XR that relies on smell, taste, and vision and are foundational for applications such as affective AR/VR.



https://doi.org/10.1109/TVCG.2023.3247099
Dörner, Ralf; Broll, Wolfgang; Jung, Bernhard; Grimm, Paul; Göbel, Martin; Kruse, Rolf
Introduction to Virtual and Augmented Reality. - In: Virtual and augmented reality (VR/AR), (2022), pp. 1-37

What is Virtual Reality (VR)? What is Augmented Reality (AR)? What is the purpose of VR/AR? What are the basic concepts? What are the hardware and software components of VR/AR systems? How has VR/AR developed historically? The first chapter examines these questions and provides an introduction to this textbook. This chapter is fundamental for the whole book. All subsequent chapters build on it and do not depend directly on one another. Therefore, these chapters can be worked through selectively and in a sequence that suits the individual interests and needs of the readers. Corresponding tips on how this book can be used efficiently by different target groups (students, teachers, users, technology enthusiasts) are provided at the end of the chapter, as well as a summary, questions for reviewing what has been learned, recommendations for further reading, and the references used in the chapter.



Grimm, Paul; Broll, Wolfgang; Herold, Rigo; Hummel, Johannes; Kruse, Rolf
VR/AR input devices and tracking. - In: Virtual and augmented reality (VR/AR), (2022), pp. 107-148

How do Virtual Reality (VR) and Augmented Reality (AR) systems recognize the actions of users? How does a VR or AR system know where the user is? How can a system track objects as they move? What are proven input devices for VR and AR that increase immersion in virtual or augmented worlds? What are the technical possibilities and limitations? Starting from fundamentals that explain terms such as degrees of freedom, accuracy, update rates, latency, and calibration, the chapter considers methods used for the continuous tracking or monitoring of objects. Frequently used input devices are presented and discussed. Finally, special methods such as finger and eye tracking are examined.
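One of the fundamentals the chapter names, the interplay of accuracy, update rate, and latency, can be illustrated with a classic filtering trade-off: smoothing noisy tracker samples reduces jitter but adds lag. The sketch below uses a simple exponential moving average; the parameter value is an assumption for illustration, not taken from the book.

```python
# Hypothetical sketch of the jitter-vs-latency trade-off when filtering
# noisy 3D tracker samples with an exponential moving average.
# alpha is an assumed tuning parameter, not a value from the chapter.

def smooth_positions(samples, alpha=0.3):
    """Exponentially smooth a stream of (x, y, z) tracker samples.
    Small alpha -> less jitter but more latency; large alpha -> the reverse."""
    smoothed = []
    state = samples[0]  # initialize the filter with the first sample
    for x, y, z in samples:
        state = tuple(alpha * new + (1 - alpha) * old
                      for new, old in zip((x, y, z), state))
        smoothed.append(state)
    return smoothed

noisy = [(0.00, 1.02, 0.0), (0.01, 0.98, 0.0), (0.50, 1.01, 0.0)]
print(smooth_positions(noisy)[-1])  # lags behind the jump to x=0.50
```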



Broll, Wolfgang; Grimm, Paul; Herold, Rigo; Reiners, Dirk; Cruz-Neira, Carolina
VR/AR output devices. - In: Virtual and augmented reality (VR/AR), (2022), pp. 149-200

This chapter discusses output devices and technologies for Virtual Reality (VR) and Augmented Reality (AR). The goal of using output devices is to enable the user to immerse themselves in the virtual world or to perceive the augmented world. Devices for visual output play a crucial role here, as they are of central importance for the use of VR and AR. First and foremost are Head-Mounted Displays (HMDs), whose different types are discussed in detail. However, VR also uses different forms of stationary displays, which are another major topic of this chapter. Finally, output devices for other senses are reviewed, namely acoustic and haptic outputs.