M. Sc. Christian Schneiderwind

Research Associate and Doctoral Student

Phone: 03677 69-2671 | Fax: 03677 69-1255 | Room: H 3527 | christian.schneiderwind@tu-ilmenau.de



Number of results: 13
Created: Thu, 23 May 2024

Arévalo Arboleda, Stephanie; Kunert, Christian; Hartbrich, Jakob; Schneiderwind, Christian; Diao, Chenyao; Gerhardt, Christoph; Surdu, Tatiana; Weidner, Florian; Broll, Wolfgang; Stephan, Werner; Raake, Alexander
Beyond looks: a study on agent movement and audiovisual spatial coherence in augmented reality. - In: IEEE Xplore digital library, ISSN 2473-2001, (2024), pp. 502-512

The appearance of virtual humans (avatars and agents) has been widely explored in immersive environments. However, virtual humans’ movements and associated sounds in real-world interactions, particularly in Augmented Reality (AR), are yet to be explored. In this paper, we investigate the influence of three distinct movement patterns (circle, side-to-side, and standing), two rendering styles (realistic and cartoon), and two types of audio (spatial and non-spatial) on emotional responses, social presence, appearance and behavior plausibility, audiovisual coherence, and auditory plausibility. To this end, we conducted a study (N=36) in which participants observed an agent reciting a short fictional story. Our results indicate an effect of rendering style and type of movement on the subjective perception of agents behaving in an AR environment. Participants reported higher levels of excitement when they observed the realistic agent moving in a circle compared to the cartoon agent or the other two movement patterns. Moreover, we found an influence of the agent’s movement pattern on social presence, and higher appearance and behavior plausibility for the realistic rendering style. Regarding audiovisual spatial coherence, we found an influence of rendering style and type of audio only for the cartoon agent. Additionally, spatial audio was perceived as more plausible than non-spatial audio. Our findings suggest that aligning realistic rendering styles with realistic auditory experiences may not be necessary for one-to-one listening experiences with moving sources. However, the movement patterns of agents influence excitement and social presence in passive unidirectional communication scenarios.

Klein, Florian; Treybig, Lukas; Schneiderwind, Christian; Werner, Stephan; Sporer, Thomas
Just noticeable reverberation difference at varying loudness levels. - In: AES Europe 2023, (2023), pp. 361-368

In order to successfully fuse virtual sound sources with the real acoustic environment, the acoustic properties of the real environment must be estimated and utilized for the synthesis of the virtual sound sources. Often, just noticeable differences (JNDs) of room acoustic parameters are used to predict a good match between virtual and real acoustics. However, several studies in this domain have shown that existing JND values of room acoustic parameters are often unable to predict the perception of the listeners. This can have several reasons: differences in first reflection patterns are barely measurable with classical acoustic parameters; even if acoustic differences are above the JND, a plausible reproduction might still be possible; and JNDs depend on various factors (such as the sound signal) that existing studies do not fully cover. The last factor is addressed in this paper. A three-alternative forced-choice (3AFC) test was conducted at four loudness levels (75 dB(A), 65 dB(A), 55 dB(A), and 45 dB(A)) in a reverberation time range from 0.5 s to 0.8 s. A dependency of the detectability of reverberation differences on loudness level was found for the randomly interleaved presentation of loudness levels, but not for sequential presentation. Individual hearing thresholds as well as expertise level significantly influence the JND of reverberation time.
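The 3AFC procedure described above can be sketched in a few lines. The function name, the interval representation, and the "ideal listener" model below are illustrative assumptions, not the paper's actual test implementation:

```python
import random

def run_3afc_trial(reference_rt, target_rt, respond):
    """One three-alternative forced-choice (3AFC) trial: two intervals carry
    the reference reverberation time, one carries the modified target, in
    random order. The listener must pick the odd one out; chance level is 1/3."""
    odd = random.randrange(3)          # random position of the target interval
    intervals = [reference_rt] * 3
    intervals[odd] = target_rt
    return respond(intervals) == odd   # True if the listener found the target

# Hypothetical ideal listener that always detects the longer reverberation:
ideal = lambda intervals: intervals.index(max(intervals))
hits = sum(run_3afc_trial(0.5, 0.8, ideal) for _ in range(100))  # hits == 100
```

A real threshold estimate would adapt the target reverberation time across trials (e.g. with a staircase) until the proportion correct converges near the chance-corrected detection point.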

Schneiderwind, Christian; Richter, Maike; Merten, Nils; Neidhardt, Annika
Effects of modified late reverberation on audio-visual plausibility and externalization in AR. - In: 2023 Immersive and 3D Audio: from Architecture to Automotive (I3DA), (2023), 9 pp. in total

Binaural synthesis systems can create virtual sound sources that are indistinguishable from reality. In Augmented Reality (AR) applications, virtual sound sources need to blend in with the real environment to create plausible illusions. However, in some scenarios it may be desirable to enhance the natural acoustic properties of the virtual content to improve speech intelligibility, alleviate listener fatigue, or achieve a specific artistic effect. Previous research has shown that deviating from the original room acoustics can degrade the quality of the auditory illusion, often referred to as the room divergence effect. This study investigates whether it is possible to modify the auditory aesthetics of a room environment without compromising the plausibility of a sound event in AR. To accomplish this, the length of the reverberation tails of measured binaural room impulse responses is modified after the mixing time to change reverberance. A listening test was conducted to evaluate the externalization and audio-visual plausibility of an exemplary AR scene for different degrees of reverberation modification. The results indicate that externalization is unaffected even by extreme modifications (such as a stretch ratio of 1.8). However, audio-visual plausibility is only maintained for moderate modifications (such as stretch ratios of 0.8 and 1.2).
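A minimal sketch of the kind of tail modification described above: keep the direct sound and early reflections up to the mixing time, then time-stretch only the late tail. The function name and the plain linear-interpolation stretch are assumptions for illustration; naive resampling also shifts the tail's spectral content, which the actual processing in the paper may compensate for:

```python
import numpy as np

def stretch_late_reverb(brir, fs, mixing_time_s, stretch_ratio):
    """Stretch the late reverberation tail of a (mono) room impulse
    response after the mixing time, leaving the early part untouched."""
    split = int(mixing_time_s * fs)            # sample index of the mixing time
    early, late = brir[:split], brir[split:]
    n_out = int(len(late) * stretch_ratio)     # e.g. 1.8 lengthens the tail
    # Linear-interpolation resampling of the tail (illustrative only):
    stretched = np.interp(np.linspace(0.0, len(late) - 1, n_out),
                          np.arange(len(late)), late)
    return np.concatenate([early, stretched])
```

For example, with fs = 48 kHz, a mixing time of 80 ms, and a stretch ratio of 1.8, a 1 s impulse response grows to roughly 1.74 s while its first 3840 samples stay identical.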

Schneiderwind, Christian; Neidhardt, Annika
Discriminability of concurrent virtual and real sound sources in an augmented audio scenario. - In: AES Europe Spring 2022, (2022), pp. 521-529

This exploratory study investigates people’s ability to discriminate between real and virtual sound sources in a position-dynamic, headphone-based augmented audio scene. For this purpose, an acoustic scene was created consisting of two loudspeakers at different positions in a small seminar room. To account for the presence of headphones, non-individualized BRIRs measured along a line with a dummy head wearing AKG K1000 headphones were used, allowing for head rotation and translation. In a psychoacoustic experiment, participants had to explore the acoustic scene and indicate which sound source they believed to be real or virtual. The test cases included a dialog scenario, stereo pop music, and one person speaking while the other loudspeaker played mono music simultaneously. The results show that the participants tended to be able to identify individual virtual sources as such. However, for the cases in which both sound sources reproduced sound simultaneously, lower discrimination rates were observed.

Gupta, Rishabh; He, Jianjun; Ranjan, Rishabh; Gan, Woon Seng; Klein, Florian; Schneiderwind, Christian; Neidhardt, Annika; Brandenburg, Karlheinz; Välimäki, Vesa
Augmented/mixed reality audio for hearables: sensing, control, and rendering. - In: IEEE signal processing magazine, ISSN 1558-0792, vol. 39 (2022), no. 3, pp. 63-89

Augmented or mixed reality (AR/MR) is emerging as one of the key technologies in the future of computing. Audio cues are critical for maintaining a high degree of realism, social connection, and spatial awareness in various AR/MR applications, such as education and training, gaming, remote work, and virtual social gatherings that transport the user to an alternate world, the so-called metaverse. Motivated by the wide variety of AR/MR listening experiences delivered over hearables, this article systematically reviews the integration of fundamental and advanced signal processing techniques for AR/MR audio to equip researchers and engineers in the signal processing community for the next wave of AR/MR.

Neidhardt, Annika; Schneiderwind, Christian; Klein, Florian
Perceptual matching of room acoustics for auditory augmented reality in small rooms - literature review and theoretical framework. - In: Trends in hearing, ISSN 2331-2165, vol. 26 (2022), pp. 1-22

For the realization of auditory augmented reality (AAR), it is important that the room acoustical properties of the virtual elements are perceived in agreement with the acoustics of the actual environment. This perceptual matching of room acoustics is the subject reviewed in this paper. Realizations of AAR that fulfill the listeners’ expectations were achieved based on pre-characterization of the room acoustics, for example, by measuring acoustic impulse responses or creating detailed room models for acoustic simulations. For future applications, the goal is to realize an online adaptation in (close to) real-time. Perfect physical matching is hard to achieve with these practical constraints. For this reason, an understanding of the essential psychoacoustic cues is of interest and will help to explore options for simplifications. This paper reviews a broad selection of previous studies and derives a theoretical framework to examine possibilities for psychoacoustical optimization of room acoustical matching.

Schneiderwind, Christian; Neidhardt, Annika; Meyer, Dominik
Comparing the effect of different open headphone models on the perception of a real sound source. - In: 150th Audio Engineering Society Convention 2021, (2021), pp. 389-398

Werner, Stephan; Klein, Florian; Neidhardt, Annika; Sloma, Ulrike; Schneiderwind, Christian; Brandenburg, Karlheinz
Creation of auditory augmented reality using a position-dynamic binaural synthesis system - technical components, psychoacoustic needs, and perceptual evaluation. - In: Applied Sciences, ISSN 2076-3417, vol. 11 (2021), no. 3, art. 1150, pp. 1-20

For spatial audio reproduction in the context of augmented reality, a position-dynamic binaural synthesis system can be used to synthesize the ear signals for a moving listener. The goal is the fusion of the auditory perception of the virtual audio objects with the real listening environment. Such a system has several components, each of which helps to enable a plausible auditory simulation. For each possible position of the listener in the room, a set of binaural room impulse responses (BRIRs) congruent with the expected auditory environment is required to avoid room divergence effects. Adequate and efficient methods are needed to synthesize new BRIRs from very few measurements of the listening room. The required spatial resolution of the BRIR positions can be estimated from spatial auditory perception thresholds. Retrieving and processing the tracking data of the listener’s head pose and position, as well as convolving BRIRs with an audio signal, needs to be done in real time. This contribution presents the authors’ work on several technical components of such a system in detail. It shows how the individual components are affected by psychoacoustics. Furthermore, the paper discusses the perceptual effects by means of listening tests demonstrating the appropriateness of the approaches.
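The core rendering step of such a system — selecting the BRIR pair measured nearest to the tracked listener position and convolving it with the dry source signal — can be sketched as below. The array shapes, names, and simple nearest-neighbour selection are illustrative assumptions; a real-time system would use partitioned (block) convolution and crossfade between BRIR positions to avoid audible switching artifacts:

```python
import numpy as np

def render_binaural_block(dry_block, listener_pos, brirs, brir_positions):
    """brirs: (P, 2, L) array of left/right BRIRs measured at P listener
    positions; brir_positions: (P, 3) coordinates of those positions.
    Returns a (2, N+L-1) binaural block for the nearest measured position."""
    # Nearest-neighbour lookup of the measured BRIR grid:
    idx = int(np.argmin(np.linalg.norm(brir_positions - listener_pos, axis=1)))
    left = np.convolve(dry_block, brirs[idx, 0])
    right = np.convolve(dry_block, brirs[idx, 1])
    return np.stack([left, right])
```

In an actual application this function would be called once per audio block, with `listener_pos` updated from the head tracker each time.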

Neidhardt, Annika; Schneiderwind, Christian
Physical and perceptual differences of selected approaches to realize an echolocation scenario in room acoustical auralizations. - In: Proceedings of the International Symposium on Room Acoustics, (2019), p. 237

Schneiderwind, Christian; Neidhardt, Annika
Perceptual differences of position dependent room acoustics in a small conference room. - In: Proceedings of the International Symposium on Room Acoustics, (2019), pp. 499-506