Publications of the Audiovisual Technology Group

The following list (generated automatically by the university library) contains the publications from 2016 onwards. Publications up to and including 2015 can be found on a separate page.

Note: To search all publications, select "Show all" and then use the browser search with Ctrl+F.

Number of hits: 161
Generated: Thu, 22 Feb 2024 23:21:49 +0100

Singla, Ashutosh; Wang, Shuang; Göring, Steve; Ramachandra Rao, Rakesh Rao; Viola, Irene; Cesar, Pablo; Raake, Alexander
Subjective quality evaluation of point clouds using remote testing. - In: IXR '23, (2023), pp. 21-28

Subjective quality assessment serves as a method to evaluate the perceptual quality of 3D point clouds. These evaluations can be conducted as lab-based, remote, or crowdsourcing tests. Lab-based tests are time-consuming and costly; remote or crowd tests offer a time- and cost-friendly alternative and enable larger and more diverse participant pools. However, the variability in participants' display devices and viewing environments raises the question of whether remote testing is applicable to the evaluation of point clouds. In this paper, the focus is on investigating the applicability of remote testing by using the Absolute Category Rating (ACR) test method for assessing the subjective quality of point clouds in different tests. We compare the results of lab and remote tests by replicating lab-based tests. In the first test, we assess the subjective quality of a static point cloud geometry for two types of geometrical degradation, namely Gaussian noise and octree pruning. In the second test, we compare the performance of two compression methods (G-PCC and V-PCC) to assess the subjective quality of coloured point cloud videos. Based on the results obtained using correlation and Standard deviation of Opinion Scores (SOS) analysis, the remote testing paradigm can be used for evaluating point clouds.
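
The MOS/SOS analysis referenced above can be illustrated with a short sketch. This is not the authors' code; the fit follows the commonly used SOS hypothesis, SOS² = a·(-MOS² + (1+K)·MOS - K) for a K-point ACR scale, and all function names are illustrative:

```python
import numpy as np

def mos_sos(ratings):
    """Per-stimulus Mean Opinion Score and Standard deviation of Opinion Scores."""
    r = np.asarray(ratings, dtype=float)
    return r.mean(), r.std(ddof=1)

def fit_sos_parameter(mos, sos, scale_max=5.0):
    """Least-squares fit of the SOS hypothesis SOS^2 = a * x, where
    x = -MOS^2 + (1 + scale_max) * MOS - scale_max is zero at both scale ends."""
    mos = np.asarray(mos, dtype=float)
    sos = np.asarray(sos, dtype=float)
    x = -mos**2 + (1.0 + scale_max) * mos - scale_max
    # closed-form slope of a line through the origin
    return float(np.dot(x, sos**2) / np.dot(x, x))
```

A small SOS parameter `a` indicates high rater agreement across the quality range; comparing `a` between the lab and remote tests is one way to check whether the two paradigms behave alike.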
Breuer, Carolin; Leist, Larissa; Fremerey, Stephan; Raake, Alexander; Klatte, Maria; Fels, Janina
Towards investigating listening comprehension in virtual reality. - Aachen : Universitätsbibliothek der RWTH Aachen. - 1 online resource (7 pages)

The investigation of listening comprehension in auditorily and visually complex classroom settings is a promising method to evaluate children's cognitive performance in a realistic setting. Many studies have shown that children are more susceptible to noise than adults. However, it has recently been suggested that established monaural listening situations may overestimate the influence of noise on children's task performance. Therefore, new, close-to-real-life scenarios need to be introduced to investigate cognitive performance in everyday situations rather than artificial laboratory settings. This study aimed to extend a validated paper-and-pencil test towards a virtual reality setting. To gain first insights into different interaction methods, a pilot study with adult participants was conducted. In contrast to other recent studies, the virtual environment had little influence on this listening comprehension paradigm, since comparable results were obtained in the paper-and-pencil test and in the virtual reality variants for all user interfaces. Thus, the presented paradigm has proven to be robust and can be used to further investigate the use of virtual reality for evaluating children's cognitive performance.
Ramachandra Rao, Rakesh Rao; Göring, Steve; Elmeligy, Bassem; Raake, Alexander
AVT-VQDB-UHD-1-Appeal: a UHD-1/4K open dataset for video quality and appeal assessment using modern video codecs. - In: IEEE Xplore digital library, ISSN 2473-2001, (2023), 6 pages

A number of factors play an important role in the perception of video quality for streaming and other services, key among them being encoding-related degradations. Hence, newer codecs are developed with the goal of optimizing video quality for a given encoding setting. Subjective studies are an efficient method to evaluate the performance of such newer codecs. Furthermore, contextual factors, e.g., the appeal of the content itself, impact the perception of video quality. To this end, this paper presents a subjective study targeting both quality and appeal assessment of videos, conducted in three parts. First, participants were asked to rate the appeal of the uncompressed UHD-1/4K source contents, each 8-10 s long. Next, they rated the video quality of these source videos individually encoded with the HEVC/H.265, AV1, or VVC/H.266 video codec. A wide range of encoding conditions in terms of resolution (360p to 2160p) and bitrate (100 kbps to 15 Mbps) was used to encode the videos, so as to make the data applicable to real-world settings. In the last part, subjects were again asked to rate the appeal of the uncompressed source content. The results are analyzed to assess the impact of the different encoding conditions on perceived video quality; in addition, the impact of appeal on video quality and vice versa is investigated. Furthermore, an objective quality assessment with different state-of-the-art full-reference, bitstream-based, and hybrid models, including the newer codecs AV1 and VVC, is presented. The subjective dataset, including test design, subjective results, sources, and encoded audiovisual contents, is made publicly available following an open science approach.
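
Full-reference objective metrics of the kind mentioned above compare each encoded frame against its pristine source. As a minimal, generic illustration (plain PSNR, not one of the specific models evaluated in the paper):

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Full-reference PSNR between two same-sized frames (8-bit range by default)."""
    ref = np.asarray(reference, dtype=float)
    dis = np.asarray(distorted, dtype=float)
    mse = np.mean((ref - dis) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * np.log10(peak**2 / mse)
```

Per-frame scores are typically averaged over a sequence and then correlated with the subjective MOS values to benchmark a metric.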
Viola, Irene; Amirpour, Hadi; Arévalo Arboleda, Stephanie; Torres Vega, Maria
IXR '23: 2nd International Workshop on Interactive eXtended Reality. - In: MM '23, (2023), pp. 9728-9730

Despite remarkable advances, current Extended Reality (XR) applications are mostly local and individual experiences. A plethora of interactive applications, such as teleconferencing, telesurgery, interconnection in new building project chains, cultural heritage, and museum content communication, are well on their way to integrating immersive technologies. However, interconnected and interactive XR, where participants can virtually interact across vast distances, remains a distant dream. In fact, three great barriers stand between current technology and remote immersive interactive life-like experiences, namely (i) content realism, (ii) motion-to-photon latency, and (iii) accurate human-centric quality assessment and control. Overcoming these barriers will require novel solutions in all elements of the end-to-end transmission chain. This workshop focuses on the challenges, applications, and major advancements in multimedia, networks, and end-user infrastructures needed to enable the next generation of interactive XR applications and services.
Fischedick, Söhnke B.; Richter, Kay; Wengefeld, Tim; Seichter, Daniel; Scheidig, Andrea; Döring, Nicola; Broll, Wolfgang; Werner, Stephan; Raake, Alexander; Groß, Horst-Michael
Bridging distance with a collaborative telepresence robot for older adults - report on progress in the CO-HUMANICS project. - In: IEEE Xplore digital library, ISSN 2473-2001, (2023), pp. 346-353

In an aging society, the social needs of older adults, such as regular interactions and independent living, are crucial for their quality of life. However, spatial separation from family and friends makes it difficult to maintain social relationships. Our multidisciplinary project, CO-HUMANICS, aims to meet these needs, even over long distances, through the use of innovative technologies, including a robot-based system. This paper presents the first prototype of our system, designed to connect family members or friends, virtually present through a mobile robot, with an older adult. The system incorporates bi-directional video telephony, remote-control capabilities, and enhanced visualization methods. A comparison is made with other state-of-the-art robotic approaches, focusing on remote-control capabilities. We provide details about the hardware and software components, e.g., a projector-based pointing unit for collaborative telepresence to assist in everyday tasks. Our comprehensive scene representation is discussed, which utilizes 3D NDT maps to enable advanced remote-navigation features such as autonomously driving to a specific object. Finally, insights from past evaluations and concepts for future ones are provided to assess the developed system.
Saboor, Qasim; Mehfooz-Khan, Hamd; Raake, Alexander; Arévalo Arboleda, Stephanie
A virtual gardening experience: evaluating the effect of haptic feedback on spatial presence, perceptual realism, mental immersion, and user experience. - In: MUM 2023, (2023), pp. 520-522

Virtual nature settings have been shown to benefit mental well-being. However, most studies have focused on providing only audiovisual stimuli. We aim to evaluate the use of haptic feedback to simulate touching elements in nature-inspired settings. In this paper, we designed a VR gardening environment to investigate the impact of haptic feedback on spatial presence, perceptual realism, mental immersion, user experience, and task performance while interacting with gardening objects in a study (N=18, 9 female and 9 male). Our results suggest that haptic feedback can increase spatial presence, and they point to gender differences in the chosen VR experience, i.e., female participants reported higher scores in spatial presence and perceptual realism. Although our main goal was to evaluate the role of haptics in a virtual garden, our findings highlight the importance of investigating and identifying factors that could lead to gender differences in VR experiences.
Hartbrich, Jakob; Weidner, Florian; Kunert, Christian; Arévalo Arboleda, Stephanie; Raake, Alexander; Broll, Wolfgang
Eye and face tracking in VR: avatar embodiment and enfacement with realistic and cartoon avatars. - In: MUM 2023, (2023), pp. 270-278

Previous studies have explored the perception of various types of embodied avatars in immersive environments. However, the impact of eye and face tracking with personalized avatars is yet to be explored. In this paper, we investigate the impact of eye and face tracking on embodiment, enfacement, and the uncanny valley with four types of avatars using a VR-based mirroring task. We conducted a study (N=12) with self-avatars in two rendering styles: a cartoon avatar (created in an avatar generator from a picture of the user's face) and a photorealistic scanned avatar (created using a 3D scanner), each with and without eye and face tracking and the respective adaptation of the mirror image. Our results indicate that adding eye and face tracking can be beneficial for certain enfacement scales (belonging), and we confirm that, compared to a cartoon avatar, a scanned realistic avatar results in higher body ownership and increased enfacement (own face, belonging, mirror), regardless of eye and face tracking. We critically discuss our experiences and outline the limitations of the applied hardware and software with respect to the provided level of control and the applicability to complex tasks such as displaying emotions. We synthesize these findings into a discussion of potential improvements for facial animation in VR and highlight the need for a better level of control, the integration of additional sensing and processing technologies, and an objective metric for comparing facial animation systems.
Friese, Ingo; Galkow-Schneider, Mandy; Bassbouss, Louay; Zoubarev, Alexander; Neparidze, Andy; Melnyk, Sergiy; Zhou, Qiuheng; Schotten, Hans D.; Pfandzelter, Tobias; Bermbach, David; Kritzner, Arndt; Zschau, Enrico; Dhara, Prasenjit; Göring, Steve; Menz, William; Raake, Alexander; Rüther-Kindel, Wolfgang; Quaeck, Fabian; Stuckert, Nick; Vilter, Robert
True 3D holography: a communication service of tomorrow and its requirements for a new converged cloud and network architecture on the path to 6G. - In: IEEE Xplore digital library, ISSN 2473-2001, (2023), 8 pages
ISBN 979-8-3503-0673-6

The research project 6G NeXt considers true 3D holography as a use case that sets requirements on both the communication and the computing infrastructure. In a future holographic communication service, clients are widely spread across the network and cooperatively interact with each other. Holographic communication in particular also requires high processing power. This makes a high-speed distributed backbone computing infrastructure, realizing the concept of split computing, essential. Furthermore, tight integration between processing facilities and wireless networks is required in order to provide an immersive user experience. This paper illustrates true 3D holographic communication and its requirements. An appropriate solution approach is then elaborated, and novel technological approaches are discussed based on a proposed overall communication and computing architecture.
Diao, Chenyao; Sinani, Luljeta; Ramachandra Rao, Rakesh Rao; Raake, Alexander
Revisiting videoconferencing QoE: impact of network delay and resolution as factors for social cue perceptibility. - In: 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), (2023), pp. 240-243

Previous research from well before the Covid-19 pandemic had indicated little effect of delay on integral quality but a measurable one on user behavior, and a significant effect of resolution on quality but not on behavior in a two-party communication scenario. In this paper, we re-investigate the topic after the Covid-19 pandemic and its frequent and widespread videoconferencing usage. To this aim, we conducted a subjective test involving 23 pairs of participants, employing the Celebrity Name Guessing task. The focus was on impairments that may affect social cues (resolution) and communication cues (delay). Subjective data in the form of overall conversational quality and task-performance satisfaction, as well as objective data in the form of task correctness, user motion, and facial expressions, were collected in the test. The analysis of the subjective data indicates that perceived conversational quality and performance satisfaction were mainly affected by video resolution, while delay (up to 1000 ms) had no significant impact. Furthermore, the analysis of the objective data shows no impact of resolution or delay on user performance and behavior, in contrast to earlier findings.
Singla, Ashutosh; Robotham, Thomas; Bhattacharya, Abhinav; Menz, William; Habets, Emanuel A.P.; Raake, Alexander
Saliency of omnidirectional videos with different audio presentations: analyses and dataset. - In: 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), (2023), pp. 264-269

There is an increased interest in understanding users' behavior when exploring omnidirectional (360˚) videos, especially in the presence of spatial audio. Several studies demonstrate the effect of no, mono, or spatial audio on visual saliency. However, no studies investigate the influence of higher-order (i.e., 4th-order) Ambisonics on subjective exploration in virtual reality settings. In this work, a between-subjects test design is employed to collect users' exploration data of 360˚ videos in a free-form viewing scenario using the Varjo XR-3 Head Mounted Display, in the presence of no, mono, and 4th-order Ambisonics audio. Saliency information was captured as head-saliency in terms of the center of the viewport at 50 Hz. For each item, subjects were asked to describe the scene in a short free-verbalization task. Moreover, cybersickness was assessed using the simulator sickness questionnaire at the beginning and at the end of the test. The head-saliency results over time show that, in the presence of higher-order Ambisonics audio, subjects concentrate more on the directions the sound is coming from. No influence of the audio scenario on cybersickness scores was observed. The analysis of the verbal scene descriptions showed that users were attentive to the omnidirectional video but, in the 'no audio' scenario, provided only minute and insignificant details of the scene objects. The audiovisual saliency dataset is made available following the open science approach already used for the audiovisual scene recordings we previously published. The data is sought to enable training of visual and audiovisual saliency prediction models for interactive experiences.
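
Head-saliency data of the kind described (viewport centers sampled at 50 Hz) is commonly aggregated into an equirectangular map. A minimal sketch under that assumption (not the authors' pipeline; function and parameter names are illustrative):

```python
import numpy as np

def head_saliency_map(yaw_deg, pitch_deg, width=64, height=32):
    """Accumulate viewport-center directions into an equirectangular histogram.
    yaw in [-180, 180), pitch in [-90, 90]; returns a map normalized to sum 1."""
    yaw = np.asarray(yaw_deg, dtype=float)
    pitch = np.asarray(pitch_deg, dtype=float)
    # map yaw to columns (left-to-right) and pitch to rows (top = +90 deg)
    cols = ((yaw + 180.0) / 360.0 * width).astype(int) % width
    rows = ((90.0 - pitch) / 180.0 * height).astype(int).clip(0, height - 1)
    sal = np.zeros((height, width))
    np.add.at(sal, (rows, cols), 1.0)  # unbuffered accumulation of repeated bins
    total = sal.sum()
    return sal / total if total > 0 else sal
```

Such per-condition maps (no, mono, 4th-order Ambisonics) can then be compared with standard saliency similarity measures, or used as training targets for saliency prediction models.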