
Audiovisual Technology Group

The Audiovisual Technology Group (AVT) deals with the function, application and perception of audio and video equipment. An essential focus of the research is on the relationship between the technical characteristics of audio, video and audiovisual systems and human perception and experience (“Quality of Experience”, QoE).

Further information on the group

News

The prize winners Dominik Keller and Anton Schubert with the chairman of the Förderverein, Prof. Seitz.

Prizes for Graduates of the AVT Group

For the second time, the Förderverein Elektrotechnik und Informationstechnik e. V. Ilmenau (Association for the Promotion of Electrical Engineering and Information Technology Ilmenau), together with the Department of Electrical Engineering and Information Technology of the TU Ilmenau, presented its awards for outstanding theses. The three endowed prizes, honoring the students' achievements, were presented during the exmatriculation ceremony at the end of June. We are pleased that two master's theses of the AVT group, both carried out with industrial partners, were recognized as outstanding for their high degree of interdisciplinarity, their scientific character, and their execution.

We congratulate the award winners Anton Schubert, who worked on the implementation of a compressed broadband audio codec for driver communication in motor sports, and Dominik Keller, who worked on the identification and analysis of texture dimensions in motion pictures using sensory evaluation techniques.

The youngest participant watching a roller coaster in VR during the Lange Nacht der Technik 2019 event.

Best Paper Award

Dominik Keller (AVT Group), Tamara Seybold (ARRI Munich), Janto Skowronek (former AVT Group) and Alexander Raake (AVT Group) received the Best Paper Award at the 11th International Conference on Quality of Multimedia Experience (QoMEX 2019) in Berlin.

You can find the abstract of the article below.

Thesis topics in the AVT Lab

You can now find information about the range of topics for bachelor's and master's theses as well as media projects directly on our website.

Take a look at the Theses section!

Recent publications from the group

Dominik Keller, Tamara Seybold, Janto Skowronek, and Alexander Raake
Assessing Texture Dimensions and Video Quality in Motion Pictures using Sensory Evaluation Techniques

The paper, resulting from the cooperation of members of the Audiovisual Technology Group and Scientific and Engineering Academy Award winner ARRI (Arnold & Richter Cine Technik), received the Best Paper Award at this year's 11th International Conference on Quality of Multimedia Experience (QoMEX 2019).

The quality of images and videos is usually examined with well-established subjective tests or instrumental models. These often target content transmitted over the internet, such as streaming or videoconferences, and address the human preferential experience. In the area of high-quality motion pictures, however, other factors are relevant. These are mostly not error-related but aimed at the creative image design, which has gained comparatively little attention in image and video quality research. To determine the perceptual dimensions underlying movie-type video quality, we combine sensory evaluation techniques extensively used in food assessment – the Degree of Difference test and Free Choice Profiling – with more classical video quality tests. The main goal of this research is to analyze the suitability of sensory evaluation methods for high-quality video assessment. To understand which features in motion pictures are recognizable and critical to quality, we address the example of image texture properties, measuring human perception and preferences with a panel of image-quality experts. To this aim, different capture settings were simulated by applying sharpening filters as well as digital and analog noise to exemplary source sequences. The evaluation, involving Multidimensional Scaling, Generalized Procrustes Analysis as well as Internal and External Preference Mapping, identified two separate perceptual dimensions. We conclude that Free Choice Profiling combined with a quality test offers the highest level of insight relative to the needed effort. The combination enables a quantitative quality measurement including an analysis of the underlying perceptual reasons.

External Preference Mapping results: Best ratings for stimuli of low noise and medium-high sharpness (Landscape scene)
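As a rough illustration of how such sensory data can be analyzed, the following sketch applies Multidimensional Scaling to a small degree-of-difference (dissimilarity) matrix using scikit-learn. The stimulus names and dissimilarity values are made-up placeholders, not data from the paper.

```python
# Minimal sketch: locating stimuli in a perceptual space from a
# degree-of-difference matrix via Multidimensional Scaling (MDS).
# The matrix and stimulus names are illustrative, not data from the paper.
import numpy as np
from sklearn.manifold import MDS

# Symmetric dissimilarity matrix for four hypothetical stimuli
# (e.g., combinations of sharpening and noise levels).
dissimilarity = np.array([
    [0.0, 2.1, 3.4, 4.0],
    [2.1, 0.0, 1.8, 3.2],
    [3.4, 1.8, 0.0, 1.5],
    [4.0, 3.2, 1.5, 0.0],
])

# Two components, matching the two perceptual dimensions found in the study.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

for name, (x, y) in zip(["sharp+", "sharp", "noise", "noise+"], coords):
    print(f"{name}: dim1 = {x:.2f}, dim2 = {y:.2f}")
```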

In a study presented at the QoMEX 2019 conference, we compare the impact of various motion interpolation (MI) algorithms on 360° video Quality of Experience (QoE). To do so, we conducted a subjective test with 12 video expert viewers, using a pair comparison test method. We interpolated four different 20 s long 30 fps 360° source contents to the native 90 Hz refresh rate of popular Head-Mounted Displays using three different MI algorithms. Subsequently, we compared these 90 fps videos against each other to investigate the influence on QoE. Regarding the algorithms, we found that ffmpeg blend does not lead to a significant improvement of QoE, while MCI and butterflow do. Additionally, we concluded that for 360° videos containing fast and sudden movements, MCI should be preferred over butterflow, while butterflow is more suitable for slow- and medium-motion videos. Comparing the time needed to render the 90 fps interpolated videos, ffmpeg blend is by far the fastest, while MCI and butterflow take considerably longer.
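Two of the compared conditions (frame blending and motion-compensated interpolation) can be reproduced with ffmpeg's standard minterpolate filter. The sketch below shows plausible invocations; the file names are placeholders, the exact parameters used in the study are not stated here, and butterflow, a separate tool, is omitted.

```python
# Minimal sketch: generating 90 fps versions of a 30 fps source with
# ffmpeg's minterpolate filter. File names are placeholders; the study's
# exact settings may differ.
import subprocess

SRC = "source_30fps.mp4"  # hypothetical 30 fps 360-degree source

# Frame blending: cheap, simply averages neighbouring frames.
subprocess.run([
    "ffmpeg", "-y", "-i", SRC,
    "-vf", "minterpolate=fps=90:mi_mode=blend",
    "blend_90fps.mp4",
], check=True)

# Motion-compensated interpolation (MCI): estimates motion vectors and
# synthesizes intermediate frames; much slower, but higher quality.
subprocess.run([
    "ffmpeg", "-y", "-i", SRC,
    "-vf", "minterpolate=fps=90:mi_mode=mci",
    "mci_90fps.mp4",
], check=True)
```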

Published at the 26th IEEE Conference on Virtual Reality and 3D User Interfaces, March 2019, Osaka, Japan

A. Singla, R. R. R. Rao, S. Göring and A. Raake: Assessing Media QoE, Simulator Sickness and Presence for Omnidirectional Videos with Different Test Protocols

QoE for omnidirectional videos comprises additional components such as simulator sickness and presence. In this paper, a series of tests is presented that compares different test protocols for assessing integral quality, simulator sickness and presence for omnidirectional videos in one test run, using the HTC Vive Pro as the head-mounted display. For the quality ratings, the five-point ACR scale was used. In addition, the well-established Simulator Sickness Questionnaire and Presence Questionnaire methods were used, once in their full versions and once with only a single integral scale, to analyze how well presence and simulator sickness can be captured using only a single scale.
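As a small illustration of how such integral quality ratings are typically summarized, the sketch below computes a mean opinion score (MOS) with a 95% confidence interval from hypothetical five-point ACR ratings; the numbers are made up, and the paper's actual analysis may differ.

```python
# Minimal sketch: mean opinion score (MOS) and 95% confidence interval
# from 5-point ACR ratings. The ratings are made-up placeholders.
import numpy as np
from scipy import stats

# Hypothetical ratings of one stimulus by a small subject panel (1..5).
ratings = np.array([4, 5, 3, 4, 4, 5, 3, 4, 2, 4, 5, 4])

mos = ratings.mean()
# t-based confidence interval, the usual choice for small panels.
ci95 = stats.t.ppf(0.975, df=len(ratings) - 1) * stats.sem(ratings)

print(f"MOS = {mos:.2f} +/- {ci95:.2f}")
```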

 

Ashutosh Singla presenting his poster at the IEEE VR conference in Japan


11th International Conference on Quality of Multimedia Experience (QoMEX 2019), Berlin, Germany, June 2019

Steve Göring, Rakesh Rao Ramachandra Rao, Alexander Raake

nofu - A Lightweight No-Reference Pixel Based Video Quality Model for Gaming Content

The popularity of streaming services for gaming videos, e.g. Twitch and YouTube Gaming, has increased tremendously over the last years. Compared to classical video streaming applications, gaming videos have additional requirements. For example, it is important that videos are streamed live with only a small delay. In addition, users expect little stalling, short waiting times and in general high video quality during streaming, e.g. using HTTP-based adaptive streaming. These requirements lead to different challenges for quality prediction in the case of streamed gaming videos. We describe newly developed features and a no-reference video quality machine learning model that uses only the recorded video to predict video quality scores. In different evaluation experiments, we compare our proposed model nofu with state-of-the-art reduced-reference and full-reference models and metrics. In addition, we trained a no-reference baseline model using brisque+niqe features. We show that our model has similar or better performance than other models. Furthermore, nofu outperforms VMAF for subjective gaming QoE prediction, even though nofu does not require any reference video.
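nofu's actual features are described in the paper itself; the sketch below only illustrates the general pattern of a pixel-based no-reference model: compute per-frame features from the decoded video, pool them over time, and feed them to a regressor. The features, file names and labels are illustrative placeholders.

```python
# Minimal sketch of a pixel-based no-reference quality pipeline:
# per-frame features -> temporal pooling -> random forest regression.
# Features, file names and MOS labels are placeholders, not nofu's.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def frame_features(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return [
        gray.std(),                             # contrast proxy
        cv2.Laplacian(gray, cv2.CV_64F).var(),  # sharpness proxy
    ]

def video_features(path):
    cap = cv2.VideoCapture(path)
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        feats.append(frame_features(frame))
    cap.release()
    feats = np.array(feats)
    # Temporal pooling: mean and standard deviation of each feature.
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

# Hypothetical training videos with known subjective scores.
paths = ["train_a.mp4", "train_b.mp4", "train_c.mp4"]
y = np.array([4.2, 2.9, 3.6])  # made-up MOS labels

X = np.array([video_features(p) for p in paths])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([video_features("test.mp4")]))
```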

 

Scatter plot of MOS versus nofu predictions: results for the gaming dataset and subjective score prediction


 

7th European Workshop on Visual Information Processing (EUVIP), Tampere (Finland), 26 - 28 November 2018 (http://www.tut.fi/euvip2018/)

Steve Göring, Alexander Raake

deimeq – A Deep Neural Network Based Hybrid No-reference Image Quality Model

Current no-reference image quality assessment models are mostly based on hand-crafted features (signal-based, computer vision, ...) or deep neural networks (DNNs). Using DNNs for image quality prediction leads to several problems: e.g., the input size is restricted, and higher resolutions increase processing time and memory consumption. Large inputs are handled by image patching and aggregating a quality score, but in a pure patching approach the connections between the sub-images are lost. Also, a huge dataset is required for training a DNN from scratch, though only small datasets with annotations are available.

We provide a hybrid solution (deimeq) to predict image quality using DNN feature extraction combined with random forest models. Firstly, deimeq uses a pre-trained DNN for feature extraction in a hierarchical sub-image approach; this avoids the need for a huge training dataset. Further, our proposed sub-image approach circumvents pure patching, because of hierarchical connections between the sub-images. Secondly, deimeq can be extended using signal-based features from state-of-the-art models. To evaluate our approach, we chose a strict cross-dataset evaluation with the Live-2 and TID2013 datasets and several pre-trained DNNs. Finally, we show that deimeq and variants of it perform better than or similar to other methods.

Picture: General approach for classification into HD and UHD
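To make the hybrid idea more concrete, here is a minimal sketch in the spirit of the description above: features from a pre-trained DNN computed on a grid of sub-images, pooled and fed to a random forest. The backbone choice (ResNet-18), grid size and pooling are assumptions for illustration, not deimeq's actual configuration.

```python
# Minimal sketch: pre-trained DNN features on sub-images + random forest.
# Backbone, grid size, pooling, file names and scores are illustrative
# assumptions, not deimeq's actual configuration.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.ensemble import RandomForestRegressor

# Pre-trained backbone used as a fixed feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def subimage_features(img, grid=2):
    """Average DNN features over a grid of sub-images, keeping some
    spatial context instead of scoring isolated patches."""
    w, h = img.size
    feats = []
    with torch.no_grad():
        for i in range(grid):
            for j in range(grid):
                box = (i * w // grid, j * h // grid,
                       (i + 1) * w // grid, (j + 1) * h // grid)
                x = preprocess(img.crop(box)).unsqueeze(0)
                feats.append(backbone(x).squeeze(0).numpy())
    return np.mean(feats, axis=0)

# Hypothetical training images with known quality scores.
paths, scores = ["img_a.png", "img_b.png"], [3.8, 2.1]
X = np.array([subimage_features(Image.open(p).convert("RGB")) for p in paths])
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, scores)
```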


 

Human Vision and Electronic Imaging 2019, Burlingame (California, USA), 13 - 17 January 2019 (http://www.imaging.org/site/IST/IST/Conferences/EI/Symposium_Overview.aspx)

Steve Göring, Julian Zebelein, Simon Wedel, Dominik Keller, Alexander Raake

Analyze And Predict the Perceptibility of UHD Video Contents

720p, Full-HD, 4K, 8K, ...: display resolutions have been increasing heavily over the past years. However, many video streaming providers currently stream videos with a maximum of 4K/UHD-1 resolution. Considering that typical viewers enjoy their videos in living rooms, where viewing distances are quite large, the question arises whether the additional resolution is even recognizable. In the following paper, we analyze the problem of UHD perceptibility in comparison with lower resolutions. As a first step, we conducted a subjective video test that focuses on short uncompressed video sequences and compares two different testing methods for pairwise discrimination of two representations of the same source video in different resolutions.

We selected an extended stripe method and a temporal switching method. We found that temporal switching is more suitable for recognizing UHD video content. Furthermore, we developed features that can be used in a machine learning system to predict whether there is a benefit in showing a given video in UHD or not.

Evaluating different models based on these features for predicting perceivable differences shows good performance on the available test data. Our implemented system can be used to verify UHD source video material or to optimize streaming applications.
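The actual features are described in the paper; as an illustrative stand-in, the sketch below measures how much high-frequency detail of a UHD frame survives a downscale/upscale round trip, one plausible kind of input for such a prediction system. The file name is a placeholder.

```python
# Minimal sketch: a plausible stand-in feature for "is there a visible
# benefit in UHD?". It measures the detail lost when a frame is
# downscaled and re-upscaled; the paper's actual features differ.
import cv2
import numpy as np

def highfreq_loss(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
    h, w = gray.shape
    # Simulate a half-resolution rendition: downscale, then upscale back.
    down = cv2.resize(gray, (w // 2, h // 2), interpolation=cv2.INTER_AREA)
    up = cv2.resize(down, (w, h), interpolation=cv2.INTER_CUBIC)
    # Energy of the detail that did not survive the round trip.
    return float(np.mean((gray - up) ** 2))

cap = cv2.VideoCapture("uhd_source.mp4")  # placeholder file name
ok, frame = cap.read()
cap.release()
if ok:
    # Larger values suggest detail that only the higher resolution conveys.
    print(f"high-frequency loss: {highfreq_loss(frame):.2f}")
```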

Older News

Older news from the AVT lab can be found on this website.