M. Sc. Christian Schneiderwind

Research Associate and Doctoral Candidate

Phone: 03677 69-2671
Fax: 03677 69-1255
Room H 3527




Number of results: 12

Klein, Florian; Treybig, Lukas; Schneiderwind, Christian; Werner, Stephan; Sporer, Thomas
Just noticeable reverberation difference at varying loudness levels. - In: AES Europe 2023, (2023), pp. 361-368

In order to successfully fuse virtual sound sources with the real acoustic environment, the acoustic properties of the real environment must be estimated and used for the synthesis of the virtual sound sources. Often, just noticeable differences (JNDs) of room acoustic parameters are used to predict a good match between virtual and real acoustics. However, several studies in this domain have shown that existing JND values of room acoustic parameters often fail to predict the listeners' perception. This can have various reasons: differences in first reflection patterns are barely measurable with classical acoustic parameters; even if acoustic differences exceed the JND, a plausible reproduction might still be possible; and JNDs depend on various factors (such as the sound signal) that existing studies do not fully cover. The last factor is addressed in this research paper. A three-alternative forced-choice (3AFC) test was conducted at four loudness levels (75 dB(A), 65 dB(A), 55 dB(A), and 45 dB(A)) in a reverberation time range from 0.5 s to 0.8 s. A dependency of the detectability of reverberation differences on loudness was found for the randomly interleaved presentation of loudness levels, but not for sequential presentation. Individual hearing thresholds as well as expertise level significantly influence the JND of reverberation time.
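As an illustrative aside (not taken from the paper), the chance level that a 3AFC procedure has to beat can be simulated in a few lines. The function names and the simple high-threshold guessing model are assumptions for this sketch:

```python
import random

def trial_3afc(p_detect):
    """Simulate one 3AFC trial: the listener truly detects the target
    interval with probability p_detect, otherwise guesses among the
    three intervals (hypothetical model, for illustration only)."""
    if random.random() < p_detect:
        return True
    return random.random() < 1 / 3  # pure guessing among three intervals

def proportion_correct(p_detect, n_trials=20000):
    """Proportion of correct responses over a simulated test run."""
    return sum(trial_3afc(p_detect) for _ in range(n_trials)) / n_trials
```

With `p_detect = 0`, the simulated proportion correct converges to the 3AFC chance level of 1/3, which is why thresholds in such tests are defined well above 33.3 % correct.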

Schneiderwind, Christian; Richter, Maike; Merten, Nils; Neidhardt, Annika
Effects of modified late reverberation on audio-visual plausibility and externalization in AR. - In: 2023 Immersive and 3D Audio: from Architecture to Automotive (I3DA), (2023), 9 pp.

Binaural synthesis systems can create virtual sound sources that are indistinguishable from reality. In Augmented Reality (AR) applications, virtual sound sources need to blend in with the real environment to create plausible illusions. However, in some scenarios, it may be desirable to enhance the natural acoustic properties of the virtual content to improve speech intelligibility, alleviate listener fatigue, or achieve a specific artistic effect. Previous research has shown that deviating from the original room acoustics can degrade the quality of the auditory illusion, often referred to as the room divergence effect. This study investigates whether it is possible to modify the auditory aesthetics of a room environment without compromising the plausibility of a sound event in AR. To accomplish this, the reverberation tails of measured binaural room impulse responses are modified in length after the mixing time to change reverberance. A listening test was conducted to evaluate the externalization and audio-visual plausibility of an exemplary AR scene for different degrees of reverberation modification. The results indicate that externalization is unaffected even by extreme modifications (such as a stretch ratio of 1.8). However, audio-visual plausibility is only maintained for moderate modifications (such as stretch ratios of 0.8 and 1.2).
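A minimal sketch of the kind of tail modification described above, assuming a plain linear-interpolation time-stretch of the impulse-response samples after the mixing time. The paper does not specify its stretching algorithm; the function name and parameters here are hypothetical:

```python
import numpy as np

def stretch_late_reverb(brir, mixing_time_s, stretch, fs=48000):
    """Time-stretch the late tail of a (B)RIR after the mixing time.
    stretch > 1 lengthens the tail, stretch < 1 shortens it.
    Crude sample-domain resampling; a real system would likely use a
    phase-aware or envelope-based method."""
    split = int(mixing_time_s * fs)
    early, late = brir[:split], brir[split:]
    n_out = int(len(late) * stretch)
    # Linear interpolation as a simple time-stretch of the decay
    t_in = np.arange(len(late))
    t_out = np.linspace(0, len(late) - 1, n_out)
    late_stretched = np.interp(t_out, t_in, late)
    return np.concatenate([early, late_stretched])
```

The early part (direct sound and early reflections) is left untouched, mirroring the paper's choice to modify only the reverberation after the mixing time.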

Schneiderwind, Christian; Neidhardt, Annika
Discriminability of concurrent virtual and real sound sources in an augmented audio scenario. - In: AES Europe Spring 2022, (2022), pp. 521-529

This exploratory study investigates people's ability to discriminate between real and virtual sound sources in a position-dynamic, headphone-based augmented audio scene. For this purpose, an acoustic scene was created consisting of two loudspeakers at different positions in a small seminar room. To account for the presence of headphones, non-individualized BRIRs measured along a line with a dummy head wearing AKG K1000 headphones were used to allow for head rotation and translation. In a psychoacoustic experiment, participants had to explore the acoustic scene and state which sound source they believed to be real or virtual. The test cases included a dialog scenario, stereo pop music, and one person speaking while the other loudspeaker simultaneously played mono music. Results show that the participants tended to be able to identify individual virtual sources as such. However, for the cases where both sound sources reproduced sound simultaneously, lower discrimination rates were observed.

Gupta, Rishabh; He, Jianjun; Ranjan, Rishabh; Gan, Woon Seng; Klein, Florian; Schneiderwind, Christian; Neidhardt, Annika; Brandenburg, Karlheinz; Välimäki, Vesa
Augmented/mixed reality audio for hearables: sensing, control, and rendering. - In: IEEE signal processing magazine, ISSN 1558-0792, vol. 39 (2022), 3, pp. 63-89

Augmented or mixed reality (AR/MR) is emerging as one of the key technologies in the future of computing. Audio cues are critical for maintaining a high degree of realism, social connection, and spatial awareness for various AR/MR applications, such as education and training, gaming, remote work, and virtual social gatherings that transport the user to an alternate world called the metaverse. Motivated by a wide variety of AR/MR listening experiences delivered over hearables, this article systematically reviews the integration of fundamental and advanced signal processing techniques for AR/MR audio to equip researchers and engineers in the signal processing community for the next wave of AR/MR.

Neidhardt, Annika; Schneiderwind, Christian; Klein, Florian
Perceptual matching of room acoustics for auditory augmented reality in small rooms - literature review and theoretical framework. - In: Trends in hearing, ISSN 2331-2165, vol. 26 (2022), pp. 1-22

For the realization of auditory augmented reality (AAR), it is important that the room acoustical properties of the virtual elements are perceived in agreement with the acoustics of the actual environment. This perceptual matching of room acoustics is the subject reviewed in this paper. Realizations of AAR that fulfill the listeners' expectations were achieved based on pre-characterization of the room acoustics, for example, by measuring acoustic impulse responses or creating detailed room models for acoustic simulations. For future applications, the goal is to realize an online adaptation in (close to) real-time. Perfect physical matching is hard to achieve with these practical constraints. For this reason, an understanding of the essential psychoacoustic cues is of interest and will help to explore options for simplifications. This paper reviews a broad selection of previous studies and derives a theoretical framework to examine possibilities for psychoacoustical optimization of room acoustical matching.
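As a concrete example of pre-characterizing a room from a measured impulse response, the reverberation time can be estimated via Schroeder backward integration. This is the standard textbook method, shown here as a sketch rather than as the procedure of any particular reviewed study; function names are hypothetical:

```python
import numpy as np

def schroeder_decay_db(ir):
    """Energy decay curve (EDC) in dB via Schroeder backward integration:
    remaining energy from each sample onward, normalized to 0 dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10 * np.log10(energy / energy[0])

def estimate_rt60(ir, fs):
    """Estimate RT60 from the -5 dB to -25 dB span of the EDC (T20),
    extrapolated to a 60 dB decay."""
    edc = schroeder_decay_db(ir)
    t = np.arange(len(ir)) / fs
    i5 = np.argmax(edc <= -5)
    i25 = np.argmax(edc <= -25)
    slope = (edc[i25] - edc[i5]) / (t[i25] - t[i5])  # dB per second
    return -60 / slope
```

For a synthetic exponential decay the estimate recovers the nominal reverberation time almost exactly; for real measured responses, noise-floor handling would be needed before the integration.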

Schneiderwind, Christian; Neidhardt, Annika; Meyer, Dominik
Comparing the effect of different open headphone models on the perception of a real sound source. - In: 150th Audio Engineering Society Convention 2021, (2021), pp. 389-398

Werner, Stephan; Klein, Florian; Neidhardt, Annika; Sloma, Ulrike; Schneiderwind, Christian; Brandenburg, Karlheinz
Creation of auditory augmented reality using a position-dynamic binaural synthesis system - technical components, psychoacoustic needs, and perceptual evaluation. - In: Applied Sciences, ISSN 2076-3417, vol. 11 (2021), 3, 1150, pp. 1-20

For spatial audio reproduction in the context of augmented reality, a position-dynamic binaural synthesis system can be used to synthesize the ear signals for a moving listener. The goal is the fusion of the auditory perception of the virtual audio objects with the real listening environment. Such a system has several components, each of which helps to enable a plausible auditory simulation. For each possible position of the listener in the room, a set of binaural room impulse responses (BRIRs) congruent with the expected auditory environment is required to avoid room divergence effects. Adequate and efficient methods are needed to synthesize new BRIRs from very few measurements of the listening room. The required spatial resolution of the BRIR positions can be estimated from spatial auditory perception thresholds. Retrieving and processing the tracking data of the listener's head pose and position, as well as convolving BRIRs with an audio signal, needs to be done in real time. This contribution presents the authors' work on several technical components of such a system in detail. It shows how the individual components are shaped by psychoacoustics. Furthermore, the paper discusses the perceptual effects by means of listening tests demonstrating the appropriateness of the approaches.
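A much-simplified sketch of the rendering core described above, assuming nearest-neighbour selection from the measured BRIR grid followed by plain convolution. A real-time system would instead use partitioned convolution, crossfade between neighbouring BRIRs, and render both ears; all names here are hypothetical:

```python
import numpy as np

def render_block(audio_block, listener_pos, brir_positions, brirs):
    """Pick the BRIR measured closest to the tracked listener position
    and convolve the dry signal block with it (one ear shown).
    Returns the rendered block and the chosen BRIR index."""
    dists = np.linalg.norm(np.asarray(brir_positions)
                           - np.asarray(listener_pos), axis=1)
    nearest = int(np.argmin(dists))
    return np.convolve(audio_block, brirs[nearest]), nearest
```

Switching hard between BRIRs as the listener moves causes audible discontinuities, which is one reason the spatial resolution of the measured positions (and the perception thresholds mentioned above) matters.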

Neidhardt, Annika; Schneiderwind, Christian
Physical and perceptual differences of selected approaches to realize an echolocation scenario in room acoustical auralizations. - In: Proceedings of the International Symposium on Room Acoustics, (2019), p. 237

Schneiderwind, Christian; Neidhardt, Annika
Perceptual differences of position dependent room acoustics in a small conference room. - In: Proceedings of the International Symposium on Room Acoustics, (2019), pp. 499-506

Brandenburg, Karlheinz; Fiedler, Bernhard; Fischer, Georg; Klein, Florian; Neidhardt, Annika; Schneiderwind, Christian; Sloma, Ulrike; Stirnat, Claudia; Werner, Stephan
Perceptual aspects in spatial audio processing. - In: Proceedings of the 23rd International Congress on Acoustics, (2019), pp. 3354-3360

Spatial audio processing includes recording, modification, and rendering of multichannel audio. In all of these fields, there is a choice between a physical representation and perceptual approaches that try to achieve a target perceived audio quality. Classical microphone techniques on the one hand, and wave field synthesis, higher-order Ambisonics, or certain methods of binaural rendering for headphone reproduction on the other hand, target a good physical representation of sound. As is known today, especially in the case of sound reproduction, a faithful physical recreation of the sound wave forms ("correct signal at the ear drums") is neither necessary nor does it guarantee a fully authentic or even plausible reproduction of sound. Twenty years ago, MPEG-4 standardized different modes for perception-based versus physics-based reproduction (called "Perceptual approach to modify natural source" and "Acoustic properties for physical based audio rendering"). In spatial rendering today, state-of-the-art systems increasingly use the perceptual approach. We give some examples of such rendering. The same distinction between physics-based and psychoacoustics-based (including cognitive effects) rendering is used today for room simulation and artificial reverb systems. Perceptual aspects are at the heart of audio signal processing today.