Number of hits: 189
Created: Wed, 24 Apr 2024 23:15:44 +0200 in 0.0792 sec


De Souza Cardoso, Luís Fernando; Kimura, Bruno Yuji Lino; Zorzal, Ezequiel Roberto
Towards augmented and mixed reality on future mobile networks. - In: Multimedia tools and applications, ISSN 1573-7721, Vol. 83 (2024), 3, pp. 9067-9102

Augmented and Mixed Reality (AR/MR) technologies enhance human perception of the world by combining virtual and real environments. With the proliferation of mobile devices and the advent of 5G, this technology has the potential to become part of people's lives. This article aims to evaluate the impact of 5G and beyond-5G mobile networks on the future of AR/MR. To address this objective, we surveyed four digital libraries to identify articles and reviews concerning AR/MR use based on mobile networks. The results describe the state of the art of mobile AR/MR applications as well as the benefits and challenges of the technology. Finally, based on the review, we propose a roadmap for AR/MR hardware and software development to run applications supported by future mobile networks.



https://doi.org/10.1007/s11042-023-15301-4
Döring, Nicola; Mikhailova, Veronika; Brandenburg, Karlheinz; Broll, Wolfgang; Groß, Horst-Michael; Werner, Stephan; Raake, Alexander
Digital media in intergenerational communication: status quo and future scenarios for the grandparent-grandchild relationship. - In: Universal access in the information society, ISSN 1615-5297, Vol. 23 (2024), 1, pp. 379-394

Communication technologies play an important role in maintaining the grandparent-grandchild (GP-GC) relationship. Based on Media Richness Theory, this study investigates the frequency of use (RQ1) and perceived quality (RQ2) of established media as well as the potential use of selected innovative media (RQ3) in GP-GC relationships, with a particular focus on digital media. A cross-sectional online survey and vignette experiment were conducted in February 2021 among N = 286 university students in Germany (mean age 23 years, 57% female) who reported on their direct and mediated communication with their grandparents. In addition to face-to-face interactions, non-digital and digital established media (such as telephone, texting, and video conferencing) and innovative digital media, namely augmented reality (AR)-based and social robot-based communication technologies, were covered. Face-to-face and phone communication occurred most frequently in GP-GC relationships: 85% of participants reported them taking place at least a few times per year (RQ1). Non-digital established media were associated with higher perceived communication quality than digital established media (RQ2). Innovative digital media received less favorable quality evaluations than established media. Participants expressed doubts regarding the technological competence of their grandparents, but nevertheless had high expectations of innovative media regarding improved communication quality (RQ3). Richer media, such as video conferencing or AR, do not automatically lead to better perceived communication quality, while leaner media, such as letters or text messages, can provide rich communication experiences. More research is needed to fully understand and systematically improve the utility, usability, and joy of use of the different digital communication technologies employed in GP-GC relationships.



https://doi.org/10.1007/s10209-022-00957-w
Fischedick, Söhnke B.; Richter, Kay; Wengefeld, Tim; Seichter, Daniel; Scheidig, Andrea; Döring, Nicola; Broll, Wolfgang; Werner, Stephan; Raake, Alexander; Groß, Horst-Michael
Bridging distance with a collaborative telepresence robot for older adults - report on progress in the CO-HUMANICS project. - In: ISR Europe 2023: 56th International Symposium on Robotics, (2023), pp. 346-353

In an aging society, the social needs of older adults, such as regular interactions and independent living, are crucial for their quality of life. However, spatial separation from family and friends makes it difficult to maintain social relationships. Our multidisciplinary project, CO-HUMANICS, aims to meet these needs, even over long distances, through innovative technologies, including a robot-based system. This paper presents the first prototype of our system, designed to connect an older adult with family members or friends who are virtually present through a mobile robot. The system incorporates bi-directional video telephony, remote control capabilities, and enhanced visualization methods. A comparison is made with other state-of-the-art robotic approaches, focusing on remote control capabilities. We provide details about the hardware and software components, e.g., a projector-based pointing unit for collaborative telepresence to assist in everyday tasks. We discuss our comprehensive scene representation, which utilizes 3D NDT maps and enables advanced remote navigation features such as autonomously driving to a specific object. Finally, we provide insights from past evaluations and concepts for future ones to assess the developed system.
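The abstract only names 3D NDT (Normal Distributions Transform) maps without implementation details, so the following is a minimal, hypothetical sketch of the underlying idea: the point cloud is divided into voxels and each voxel is summarized by a Gaussian (mean and covariance). Cell size, thresholds, and data layout are illustrative assumptions, not the CO-HUMANICS code.

```python
# Minimal sketch of a 3D NDT map: voxelize a point cloud and fit one Gaussian per voxel.
import numpy as np
from collections import defaultdict

def build_ndt_map(points: np.ndarray, cell_size: float = 0.5):
    """points: (N, 3) array of 3D points; returns {voxel_index: (mean, covariance)}."""
    cells = defaultdict(list)
    for p in points:
        idx = tuple(np.floor(p / cell_size).astype(int))   # voxel index of the point
        cells[idx].append(p)

    ndt = {}
    for idx, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 5:            # too few points for a stable covariance estimate
            continue
        mean = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False)
        ndt[idx] = (mean, cov)      # compact per-voxel Gaussian summary
    return ndt
```

A map of this form lets a robot score how well newly observed points fit the stored Gaussians, which is what makes NDT maps useful for localization and for navigating to a specified object.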



https://ieeexplore.ieee.org/document/10363093
Hartbrich, Jakob; Weidner, Florian; Kunert, Christian; Arévalo Arboleda, Stephanie; Raake, Alexander; Broll, Wolfgang
Eye and face tracking in VR: avatar embodiment and enfacement with realistic and cartoon avatars. - In: MUM 2023, (2023), pp. 270-278

Previous studies have explored the perception of various types of embodied avatars in immersive environments. However, the impact of eye and face tracking with personalized avatars is yet to be explored. In this paper, we investigate the impact of eye and face tracking on embodiment, enfacement, and the uncanny valley with four types of avatars using a VR-based mirroring task. We conducted a study (N=12) and created self-avatars with two rendering styles: a cartoon avatar (created in an avatar generator using a picture of the user's face) and a photorealistic scanned avatar (created using a 3D scanner), each with and without eye and face tracking and the corresponding adaptation of the mirror image. Our results indicate that adding eye and face tracking can be beneficial for certain enfacement scales (belonged), and we confirm that, compared to a cartoon avatar, a scanned realistic avatar results in higher body ownership and increased enfacement (own face, belonging, mirror), regardless of eye and face tracking. We critically discuss our experiences and outline the limitations of the applied hardware and software with respect to the provided level of control and the applicability to complex tasks such as displaying emotions. We synthesize these findings into a discussion of potential improvements for facial animation in VR and highlight the need for a better level of control, the integration of additional sensing and processing technologies, and an objective metric for comparing facial animation systems.



https://doi.org/10.1145/3626705.3627793
Kumari, Gunjan; Knutzen, Kathrin; Schuldt, Jacqueline
Exploring the use of social virtual reality conferences in higher education. - In: 2023 IEEE 2nd German Education Conference (GeCon), (2023), 6 pages

Sparked by the recent growth of online and remote teaching formats, social virtual reality (Social VR) applications are being employed in higher education teaching. This eliminates the need for physical presence in a shared classroom and increases the accessibility of classes. Social VR applications such as Mozilla Hubs (MH) make more engaging remote virtual classrooms and extracurricular activities possible. We conducted a virtual conference in web-based MH as a cross-university collaboration for two game development courses for undergraduate and graduate students. We report on our organizational strategy and a subsequent online survey evaluation of N = 29 attendees. We present solutions to problems that are frequently encountered when organizing virtual conferences, specifically in MH.



https://doi.org/10.1109/GECon58119.2023.10295104
Andrich, Aliya; Weidner, Florian; Broll, Wolfgang
Zeitgebers, time judgments, and VR: a constructive replication study. - In: 2023 IEEE International Symposium on Mixed and Augmented Reality adjunct, (2023), pp. 1-2

Previous research has attempted to understand the influence of virtual reality (VR) on human time perception, but neither a comprehensive understanding nor conclusive results have been achieved. To extend and continue research on this topic, we closely replicated a previous study and included new elements in a constructive replication. To do so, we replicated the original setup and investigated the influence of workload and sun speed on time production. Contrary to previous findings, we did not find a significant effect of virtual sun movement on time judgments. However, consistent with the original study, time perception in VR was affected by cognitive workload. In addition, we found that immersion in the virtual environment influenced time perception after VR. The contrasting results highlight the need for further research into the factors contributing to time perception in and after VR to fully explain the phenomenon of altered perception.



https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00007
Kunert, Christian; Schwandt, Tobias; Broll, Wolfgang
Cube-SSIM: a metric for evaluating 360-degree images as cube maps. - In: IEEE Xplore digital library, ISSN 2473-2001, (2023), pp. 248-251

360-degree images are a crucial asset in graphics applications, where they are typically used for lighting purposes. Fields like mixed reality generally rely on lighting estimation techniques to approximate the 360-degree environment. To evaluate such approaches, accurate image assessment in this domain is important. However, traditional image evaluation metrics like SSIM, PSNR, and IMED are problematic when analyzing 360-degree image data, as they require two-dimensional representations such as equirectangular panoramas or unfolded cubes that introduce image distortions. In this paper, we address this problem by presenting Cube-SSIM, a variant of SSIM designed specifically for cube maps. For this, we modify SSIM to take the solid angles of cube map pixels into account, which gives more consistent results than applying SSIM to the individual cube faces. The computations can run efficiently on graphics hardware due to its native support for cube maps, and no further image conversions are required. We show that our approach allows for more accurate results than other comparison metrics that largely depend on 2D images. While SSIM is especially important due to its wide usage, the modification can also be applied to other image metrics, for which we include IMED as an example.
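The abstract does not give the exact formulation, so the following is a hedged sketch of the core idea only: a per-pixel SSIM map is aggregated over all six cube faces with per-texel solid-angle weights instead of a plain mean. The weight formula is the standard texel solid-angle approximation for cube maps; the published Cube-SSIM may differ in detail.

```python
# Sketch: solid-angle-weighted aggregation of per-pixel SSIM over six cube faces.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def texel_solid_angle_weights(size: int) -> np.ndarray:
    """Approximate solid angle of each texel on one cube face of resolution size x size."""
    t = (np.arange(size) + 0.5) / size * 2.0 - 1.0          # texel centers in [-1, 1]
    u, v = np.meshgrid(t, t)
    return (2.0 / size) ** 2 / (u * u + v * v + 1.0) ** 1.5  # dA / (u^2 + v^2 + 1)^(3/2)

def cube_weighted_ssim(faces_a, faces_b):
    """faces_a, faces_b: lists of six grayscale float faces in [0, 1], same square size."""
    num, den = 0.0, 0.0
    for fa, fb in zip(faces_a, faces_b):
        # full=True returns the per-pixel SSIM map alongside the scalar score
        _, ssim_map = ssim(fa, fb, data_range=1.0, full=True)
        w = texel_solid_angle_weights(fa.shape[0])
        num += (w * ssim_map).sum()
        den += w.sum()
    return num / den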



https://doi.org/10.1109/CW58918.2023.00043
Kunert, Christian; Schwandt, Tobias; Broll, Wolfgang
Evaluating light probe estimation techniques for mobile augmented reality. - In: IEEE Xplore digital library, ISSN 2473-2001, (2023), pp. 141-148

Realistic lighting approaches typically rely on physically based rendering, which in turn often makes use of image-based lighting. Enabling these techniques in augmented reality on mobile devices requires unique approaches to estimating light probes, given the limited camera and sensor data available. In this paper, we evaluate different time-dependent and time-independent techniques for light probe estimation in augmented reality applications that predict the environment lighting from single images or video streams in combination with inpainting techniques. We simulate real-world applications using an evaluation framework in which a simulated mobile device captures camera streams from different scenarios while following a pre-defined path. The resulting camera streams are fed to a total of six estimation techniques to create light probes, which are then used to render virtual objects with various materials. By comparing the rendered images as well as the light probe estimations, we perform a quantitative evaluation. We show how approaches that are able to process continuous video streams can provide more plausible results when sufficient camera movement is present. Additionally, we investigate the visual impression of different types of materials, showing that rough surfaces with distinct colors are less likely to produce divergent estimation results.
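The abstract describes the evaluation only at a high level, so here is an illustrative sketch of the kind of per-frame comparison it implies: each estimated light probe (or an image rendered with it) is scored against a ground-truth reference with standard image metrics. The function name and data layout are assumptions, not the paper's framework.

```python
# Sketch: score one estimation technique over a whole camera path with PSNR and SSIM.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_technique(estimated_images, reference_images):
    """Both arguments: lists of HxWx3 float images in [0, 1], one per frame."""
    psnr_scores, ssim_scores = [], []
    for est, ref in zip(estimated_images, reference_images):
        psnr_scores.append(peak_signal_noise_ratio(ref, est, data_range=1.0))
        ssim_scores.append(structural_similarity(ref, est, channel_axis=-1, data_range=1.0))
    # Per-technique summary: mean score over all frames of the pre-defined path
    return float(np.mean(psnr_scores)), float(np.mean(ssim_scores))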



https://doi.org/10.1109/CW58918.2023.00029
Gerhardt, Christoph; Weidner, Florian; Broll, Wolfgang
SkyCloud: neural network-based sky and cloud segmentation from natural images. - In: 2023 8th International Conference on Image, Vision and Computing (ICIVC 2023), (2023), pp. 343-351

A comprehensive understanding of outdoor scenes is a necessary requirement for a wide variety of applications. For example, semantic segmentation enables applications such as outdoor robot navigation, image stylization, weather forecasting, or climate monitoring. However, existing outdoor scene understanding models are often less reliable in challenging situations such as changing weather conditions or low light. Additionally, current approaches mainly focus on separating sky and ground and do not incorporate the valuable information provided by weather conditions and cloud coverage. To overcome these challenges, we present SkyCloudNet, a multitask neural network architecture that extracts high-level attributes from the input image and utilizes them to improve the robustness of the network to environmental influences. Furthermore, it allows for the segmentation of cloud regions in natural outdoor images. While existing cloud segmentation approaches are limited to cropped sky-only images, our model enables segmentation from entire landscape images of arbitrary resolution. In addition, SkyCloudNet achieves state-of-the-art performance in environmental attribute estimation and sky segmentation. As cloud segmentation from natural images has not been addressed in previous literature, we also release the SkyCloud data set, consisting of 350 high-resolution outdoor images with dense labels of sky and cloud segments.
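The abstract names the multitask design but not its architecture, so here is a minimal, hypothetical sketch of the general pattern it describes: a shared encoder, an attribute head for high-level scene properties, and a segmentation head conditioned on the predicted attributes. Layer sizes and the numbers of attributes and classes are illustrative assumptions, not the published SkyCloudNet.

```python
# Sketch of a multitask segmentation network: shared encoder, attribute head,
# and a segmentation decoder conditioned on the predicted attributes.
import torch
import torch.nn as nn

class MultiTaskSkySegNet(nn.Module):
    def __init__(self, num_attributes: int = 4, num_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(            # tiny stand-in for a real backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attr_head = nn.Sequential(          # global scene attributes (e.g. weather)
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_attributes)
        )
        self.decoder = nn.Sequential(            # segmentation head, attribute-conditioned
            nn.Conv2d(64 + num_attributes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        attrs = self.attr_head(feats)
        # broadcast attributes over the spatial feature map and concatenate
        attr_map = attrs[:, :, None, None].expand(-1, -1, *feats.shape[2:])
        logits = self.decoder(torch.cat([feats, attr_map], dim=1))
        # upsample logits back to the input resolution
        logits = nn.functional.interpolate(
            logits, size=x.shape[2:], mode="bilinear", align_corners=False
        )
        return attrs, logits
```

Conditioning the decoder on the predicted attributes is one straightforward way to realize "utilizing high-level attributes to improve robustness"; the actual model may couple the tasks differently.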



https://doi.org/10.1109/ICIVC58118.2023.10270450
Schott, Ephraim; Makled, Elhassan; Zöppig, Tony Jan; Mühlhaus, Sebastian; Weidner, Florian; Broll, Wolfgang; Fröhlich, Bernd
UniteXR: joint exploration of a real-world museum and its digital twin. - In: VRST 2023, (2023), 25, 10 pages

The combination of smartphone Augmented Reality (AR) and Virtual Reality (VR) makes it possible for on-site and remote users to simultaneously explore a physical space and its digital twin through an asymmetric Collaborative Virtual Environment (CVE). In this paper, we investigate two spatial awareness visualizations that enable the joint exploration of a space by dyads consisting of a smartphone AR user and a head-mounted display VR user. Our study revealed that both a mini-map-based method and an egocentric compass method with a path visualization enabled the on-site visitors to locate and follow a virtual companion reliably and quickly. Furthermore, the embodiment of the AR user by an inverse kinematics avatar allowed the use of natural gestures such as pointing and waving, which the participants of our study preferred over text messages. In an expert review in a museum and its digital twin, we observed an overall high social presence for on-site AR and remote VR visitors and found that the visualizations and the avatar embodiment successfully facilitated their communication and collaboration.
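The egocentric compass mentioned above can be illustrated with a minimal sketch, assuming a shared 2D ground-plane coordinate frame: the indicator is driven by the signed angle between the AR user's viewing direction and the direction towards the companion's avatar. The coordinate conventions and function name are assumptions for illustration, not taken from the UniteXR implementation.

```python
# Sketch: signed egocentric angle from the AR user towards the remote companion.
import math

def compass_angle(user_pos, user_heading_rad, companion_pos):
    """2D ground-plane positions (x, z); heading measured counter-clockwise from +x."""
    dx = companion_pos[0] - user_pos[0]
    dz = companion_pos[1] - user_pos[1]
    bearing = math.atan2(dz, dx)                          # world-space direction to companion
    angle = bearing - user_heading_rad                    # make it egocentric
    return math.atan2(math.sin(angle), math.cos(angle))   # wrap to [-pi, pi]

# An angle near 0 means "straight ahead"; the on-screen compass needle is
# rotated by this value each frame.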



https://doi.org/10.1145/3611659.3615708