Publications of the Department of Audiovisual Technology

The following list (automatically generated by the University Library) contains the publications from 2016 onwards. The publications up to 2015 can be found on a separate page.

Note: To search through all the publications, select "Show All"; you can then use the browser's search function (Ctrl+F).

Results: 165


Saboor, Qasim; Mehfooz-Khan, Hamd; Raake, Alexander; Arévalo Arboleda, Stephanie
A virtual gardening experience: evaluating the effect of haptic feedback on spatial presence, perceptual realism, mental immersion, and user experience. - In: MUM 2023, (2023), pp. 520-522

Virtual nature settings have been shown to provide benefits to mental well-being. However, most studies have focused on providing only audiovisual stimuli. We aim to evaluate the use of haptic feedback to simulate touching elements in nature-inspired settings. In this paper, we designed a VR gardening environment to investigate the impact of haptic feedback on spatial presence, perceptual realism, mental immersion, user experience, and task performance while interacting with gardening objects in a study (N=18, 9 female and 9 male). Our results suggest that haptic feedback can increase spatial presence, and they point to gender differences in the chosen VR experience: female participants reported higher scores for spatial presence and perceptual realism. Although our main goal was to evaluate the role of haptics in a virtual garden, our findings highlight the importance of investigating and identifying factors that could lead to gender differences in VR experiences.



https://doi.org/10.1145/3626705.3631794
Hartbrich, Jakob; Weidner, Florian; Kunert, Christian; Arévalo Arboleda, Stephanie; Raake, Alexander; Broll, Wolfgang
Eye and face tracking in VR: avatar embodiment and enfacement with realistic and cartoon avatars. - In: MUM 2023, (2023), pp. 270-278

Previous studies have explored the perception of various types of embodied avatars in immersive environments. However, the impact of eye and face tracking with personalized avatars is yet to be explored. In this paper, we investigate the impact of eye and face tracking on embodiment, enfacement, and the uncanny valley with four types of avatars using a VR-based mirroring task. We conducted a study (N=12) and created self-avatars with two rendering styles: a cartoon avatar (created in an avatar generator using a picture of the user’s face) and a photorealistic scanned avatar (created using a 3D scanner), each with and without eye and face tracking and respective adaptation of the mirror image. Our results indicate that adding eye and face tracking can be beneficial for certain enfacement scales (belonged), and we confirm that compared to a cartoon avatar, a scanned realistic avatar results in higher body ownership and increased enfacement (own face, belonging, mirror) - regardless of eye and face tracking. We critically discuss our experiences and outline the limitations of the applied hardware and software with respect to the provided level of control and the applicability for complex tasks such as displaying emotions. We synthesize these findings into a discussion about potential improvements for facial animation in VR and highlight the need for a better level of control, the integration of additional sensing and processing technologies, and an objective metric for comparing facial animation systems.



https://doi.org/10.1145/3626705.3627793
Friese, Ingo; Galkow-Schneider, Mandy; Bassbouss, Louay; Zoubarev, Alexander; Neparidze, Andy; Melnyk, Sergiy; Zhou, Qiuheng; Schotten, Hans D.; Pfandzelter, Tobias; Bermbach, David; Kritzner, Arndt; Zschau, Enrico; Dhara, Prasenjit; Göring, Steve; Menz, William; Raake, Alexander; Rüther-Kindel, Wolfgang; Quaeck, Fabian; Stuckert, Nick; Vilter, Robert
True 3D holography: a communication service of tomorrow and its requirements for a new converged cloud and network architecture on the path to 6G. - In: International Conference on 6G Networking, October 18 - 20, 2023, (2023), 8 pp. in total

The research project 6G NeXt considers true 3D holography as a use case that sets requirements on both the communication and the computing infrastructure. In a future holographic communication service, clients are widely spread across the network and cooperatively interact with each other. Holographic communication in particular also requires high processing power. This makes a high-speed distributed backbone computing infrastructure that realizes the concept of split computing indispensable. Furthermore, tight integration between processing facilities and wireless networks is required in order to provide an immersive user experience. This paper illustrates true 3D holographic communication and its requirements. Afterward, an appropriate solution approach is elaborated, and novel technological approaches are discussed on the basis of a proposed overall communication and computing architecture.



https://doi.org/10.1109/6GNet58894.2023.10317647
Diao, Chenyao; Sinani, Luljeta; Ramachandra Rao, Rakesh Rao; Raake, Alexander
Revisiting videoconferencing QoE: impact of network delay and resolution as factors for social cue perceptibility. - In: 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), (2023), pp. 240-243

Previous research from well before the Covid-19 pandemic had indicated little effect of delay on integral quality but a measurable one on user behavior, and a significant effect of resolution on quality but not on behavior in a two-party communication scenario. In this paper, we re-investigate the topic after the Covid-19 pandemic, with its frequent and widespread videoconferencing usage. To this aim, we conducted a subjective test involving 23 pairs of participants, employing the Celebrity Name Guessing task. The focus was on impairments that may affect social cues (resolution) and communication cues (delay). Subjective data in the form of overall conversational quality and task performance satisfaction, as well as objective data in the form of task correctness, user motion, and facial expressions, were collected in the test. The analysis of the subjective data indicates that perceived conversational quality and performance satisfaction were mainly affected by video resolution, while delay (up to 1000 ms) had no significant impact. Furthermore, the analysis of the objective data shows that there is no impact of resolution and delay on user performance and behavior, in contrast to earlier findings.



https://doi.org/10.1109/QoMEX58391.2023.10178483
Singla, Ashutosh; Robotham, Thomas; Bhattacharya, Abhinav; Menz, William; Habets, Emanuel A.P.; Raake, Alexander
Saliency of omnidirectional videos with different audio presentations: analyses and dataset. - In: 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), (2023), pp. 264-269

There is an increased interest in understanding users' behavior when exploring omnidirectional (360°) videos, especially in the presence of spatial audio. Several studies demonstrate the effect of no, mono, or spatial audio on visual saliency. However, no studies investigate the influence of higher-order (i.e., 4th-order) Ambisonics on subjective exploration in virtual reality settings. In this work, a between-subjects test design is employed to collect users' exploration data of 360° videos in a free-form viewing scenario using the Varjo XR-3 Head Mounted Display, in the presence of no, mono, and 4th-order Ambisonics audio. Saliency information was captured as head-saliency in terms of the center of a viewport at 50 Hz. For each item, subjects were asked to describe the scene in a short free-verbalization task. Moreover, cybersickness was assessed using the simulator sickness questionnaire at the beginning and at the end of the test. The head-saliency results over time show that in the presence of higher-order Ambisonics audio, subjects concentrate more on the directions the sound is coming from. No influence of the audio scenario on cybersickness scores was observed. The analysis of the verbal scene descriptions showed that users were attentive to the omnidirectional video but provided only minute, insignificant details of the scene objects in the 'no audio' scenario. The audiovisual saliency dataset is made available following the open-science approach already used for the audiovisual scene recordings we previously published. The data is intended to enable the training of visual and audiovisual saliency prediction models for interactive experiences.
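
As an illustration of the head-saliency representation described above (viewport centers sampled at 50 Hz), the following minimal Python sketch accumulates yaw/pitch samples into an equirectangular map. The map resolution and function names are illustrative assumptions, not taken from the paper or the published dataset.

    import numpy as np

    W, H = 512, 256  # equirectangular map size (an assumption)

    def head_saliency_map(yaw_deg, pitch_deg):
        # yaw in [-180, 180], pitch in [-90, 90]; one sample per 20 ms (50 Hz)
        yaw = np.asarray(yaw_deg, dtype=float)
        pitch = np.asarray(pitch_deg, dtype=float)
        x = np.clip(((yaw + 180.0) / 360.0 * (W - 1)).astype(int), 0, W - 1)
        y = np.clip(((90.0 - pitch) / 180.0 * (H - 1)).astype(int), 0, H - 1)
        sal = np.zeros((H, W))
        np.add.at(sal, (y, x), 1.0)  # count viewport-center hits per cell
        return sal / sal.sum()       # normalize to a probability map

    # Example: 10 s of samples from a viewer looking slightly left of center
    rng = np.random.default_rng(0)
    sal = head_saliency_map(rng.normal(-20, 10, 500), rng.normal(0, 5, 500))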



https://doi.org/10.1109/QoMEX58391.2023.10178588
Ramachandra Rao, Rakesh Rao; Borer, Silvio; Lindero, David; Göring, Steve; Raake, Alexander
PNATS-UHD-1-Long: an open video quality dataset for long sequences for HTTP-based Adaptive Streaming QoE assessment. - In: 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), (2023), pp. 252-257

The P.NATS Phase 2 competition in ITU-T Study Group 12 resulted in both the ITU-T Rec. P.1204 series of recommendations and a large dataset for HTTP-based adaptive streaming QoE assessment that is now made openly available as part of this paper. The presented dataset consists of 3 subjective databases targeting overall quality assessment of typical HTTP-based adaptive streaming sessions, covering degradations such as quality switching, initial loading delay, and stalling events, and using audiovisual content between 2 and 5 minutes in duration. In addition, subject bias and consistency in the quality assessment of such longer-duration audiovisual contents with multiple degradations are investigated using a subject behaviour model. As part of this paper, the overall test design, subjective test results, sources, encoded audiovisual contents, and a set of analysis plots are made publicly available for further research.



https://doi.org/10.1109/QoMEX58391.2023.10178493
Braun, Florian; Ramachandra Rao, Rakesh Rao; Robitza, Werner; Raake, Alexander
Automatic audiovisual asynchrony measurement for quality assessment of videoconferencing. - In: 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), (2023), pp. 248-251

Audiovisual asynchrony is a significant factor impacting the Quality of Experience (QoE), especially for interactive communication like video conferencing. In this paper, we propose a client-side approach to predict the delay between an audio and a video signal, using only the media signals from both streams. Features are extracted from the video and audio streams, respectively, and analyzed using a cross-correlation approach to determine the actual delay. Our approach predicts the delay with an accuracy of over 80% within a time frame of ±1 s. We further highlight the potential drawbacks of using a cross-correlation-based analysis and propose different solutions for practical implementations of a delay-based QoE metric.
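
The cross-correlation analysis described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature vectors passed in are placeholders (e.g., per-frame motion energy and an audio loudness envelope resampled to the video frame rate), as the paper's exact features are not reproduced here.

    import numpy as np

    def estimate_delay(video_feat, audio_feat, fps=25.0, max_lag_s=1.0):
        # Returns the estimated delay in seconds (positive: audio lags video).
        v = (video_feat - video_feat.mean()) / (video_feat.std() + 1e-9)
        a = (audio_feat - audio_feat.mean()) / (audio_feat.std() + 1e-9)
        max_lag = int(max_lag_s * fps)
        lags = np.arange(-max_lag, max_lag + 1)
        corr = [np.dot(v[max(0, -l):len(v) - max(0, l)],
                       a[max(0, l):len(a) - max(0, -l)]) for l in lags]
        return lags[int(np.argmax(corr))] / fps

    # Example: an audio feature trailing the video feature by 8 frames
    sig = np.random.default_rng(1).random(500)
    print(estimate_delay(sig, np.roll(sig, 8)))  # ~0.32 s at 25 fps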



https://doi.org/10.1109/QoMEX58391.2023.10178438
Keller, Dominik; Hagen, Felix; Prenzel, Julius; Strama, Kay; Ramachandra Rao, Rakesh Rao; Raake, Alexander
Influence of viewing distances on 8K HDR video quality perception. - In: 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), (2023), pp. 209-212

The benefits of high resolutions in displays, such as 8K (UHD-2), have been the subject of ongoing research in the field of display technology and human perception in recent years. Among the several factors influencing users' perception of video quality, viewing distance is one of the key aspects. Hence, this study uses a subjective test to investigate the perceptual advantages of 8K over 4K (UHD-1) resolution for HDR videos at 7 different viewing distances, ranging from 0.5 H to 2 H (with H denoting picture height). The results indicate that, on average, for HDR content the 8K resolution can improve the video quality at all tested distances. Our study shows that although the 8K resolution is slightly better than 4K at close distances, the extent of these benefits is highly dependent on factors such as the pixel-related complexity of the content and the visual acuity of the viewers.
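
Why viewing distance matters can be illustrated with a back-of-the-envelope pixels-per-degree (ppd) calculation; the commonly cited ~60 ppd acuity threshold and the numbers below are a rule-of-thumb sketch, not values from the study.

    import math

    def ppd(vertical_res, distance_in_h):
        # One pixel spans H/vertical_res, viewed from distance_in_h * H.
        pixel_angle_deg = math.degrees(math.atan((1.0 / vertical_res) / distance_in_h))
        return 1.0 / pixel_angle_deg

    for d in (0.5, 1.0, 1.5, 2.0):
        print(f"{d:3.1f} H: 4K = {ppd(2160, d):6.1f} ppd, 8K = {ppd(4320, d):6.1f} ppd")
    # 4K alone exceeds the ~60 ppd rule of thumb beyond roughly 1.6 H, which is
    # consistent with 8K showing its largest benefits at close viewing distances.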



https://doi.org/10.1109/QoMEX58391.2023.10178602
Herglotz, Christian; Robitza, Werner; Raake, Alexander; Hoßfeld, Tobias; Kaup, André
Power reduction opportunities on end-user devices in quality-steady video streaming. - In: 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), (2023), pp. 79-82

This paper uses a crowdsourced dataset of online video streaming sessions to investigate opportunities to reduce power consumption while considering QoE. For this, we base our work on prior studies that model both the end user's QoE and the end-user device's power consumption with the help of high-level video features such as the bitrate, the frame rate, and the resolution. Going beyond existing research, which focused on reducing power consumption at unchanged QoE by optimizing video parameters, we investigate potential power savings by other means, such as using a different playback device, a different codec, or a predefined maximum quality level. We find that, based on the power consumption of the streaming sessions from the crowdsourcing dataset, devices could save more than 55% of power if all participants adhered to low-power settings.
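
A feature-based power model of the kind this work builds on can be sketched as a simple linear function of bitrate, frame rate, and pixel count. The functional form and all coefficients below are illustrative assumptions, not values from the cited studies.

    def playback_power_watts(bitrate_mbps, fps, width, height,
                             base=2.0, k_br=0.05, k_fps=0.01, k_px=1e-7):
        # base: idle/display share; k_*: assumed per-feature slopes
        return base + k_br * bitrate_mbps + k_fps * fps + k_px * width * height

    # Example: a high-quality setting vs. a "low-power" setting
    hi = playback_power_watts(15.0, 60, 3840, 2160)
    lo = playback_power_watts(4.0, 30, 1920, 1080)
    print(f"high: {hi:.2f} W, low: {lo:.2f} W, saving: {100 * (1 - lo / hi):.0f}%")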



https://doi.org/10.1109/QoMEX58391.2023.10178450
Göring, Steve; Ramachandra Rao, Rakesh Rao; Merten, Rasmus; Raake, Alexander
Appeal and quality assessment for AI-generated images. - In: 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), (2023), pp. 115-118

Recently, AI-generated images have gained in popularity. A critical aspect of images generated with, e.g., DALL-E-2 or Midjourney is that they may look artificial, be of low quality, or have low appeal in contrast to real images, depending on the text prompt and the AI generator. For this reason, we evaluate the quality and appeal of AI-generated images in a crowdsourcing test, as an extension of our recently published AVT-AI-Image-Dataset. This dataset consists of a total of 135 images generated with five different AI text-to-image generators. Based on the subjective ratings collected in the crowdsourcing test, we evaluate the AI generators used in terms of the image quality and appeal of the generated images. We also link image quality and image appeal with state-of-the-art (SoA) objective models. The extension will be made publicly available for reproducibility.
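
Linking subjective ratings to objective model predictions, as mentioned above, typically reduces to computing linear and rank-order correlations. A minimal sketch with placeholder data (not from the AVT-AI-Image-Dataset):

    from scipy.stats import pearsonr, spearmanr

    mos = [4.1, 3.2, 2.5, 4.6, 3.8, 1.9]         # subjective ratings (placeholder)
    model_pred = [3.9, 3.4, 2.2, 4.4, 3.5, 2.3]  # objective model scores (placeholder)

    plcc, _ = pearsonr(mos, model_pred)    # Pearson linear correlation
    srocc, _ = spearmanr(mos, model_pred)  # Spearman rank-order correlation
    print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")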



https://doi.org/10.1109/QoMEX58391.2023.10178486