
Audiovisual Technology Group

The Audiovisual Technology Group (AVT) deals with the function, application and perception of audio and video equipment. An essential focus of the research is on the relationship between the technical characteristics of audio, video and audiovisual systems and human perception and experience (“Quality of Experience”, QoE).

further information on the group

Twitter

You can also find news about the lab on our Twitter channel:

https://twitter.com/avt_imt

News

AVT members win DASH Industry Forum Excellence Award in collaboration with TU Berlin, NTNU and TU Munich

Award certificate

This year's DASH Industry Forum Excellence in DASH Awards were presented at ACM MMSys 2020. The prizes were awarded for "practical enhancements and developments which can sustain future commercial usefulness of DASH". The paper "Comparing Fixed and Variable Segment Durations for Adaptive Video Streaming – A Holistic Analysis" was written by Susanna Schwarzmann (TU Berlin), Nick Hainke (TU Berlin), Thomas Zinner (NTNU Norway), and Christian Sieber (TU Munich) together with Werner Robitza and Alexander Raake from the AVT group. It won first prize at the ceremony.

More info about the awards can be found here (https://multimediacommunication.blogspot.com/2020/06/dash-if-awarded-excellence-in-dash.html). The paper is available here (https://dl.acm.org/doi/abs/10.1145/3339825.3391858).

New article: Bitstream-based Model Standard for 4K/UHD: ITU-T P.1204.3 – Model Details, Evaluation, Analysis and Open Source Implementation

Twelfth International Conference on Quality of Multimedia Experience (QoMEX). Athlone, Ireland. May 2020

Rakesh Rao Ramachandra Rao, Steve Göring, Werner Robitza, Alexander Raake, Bernhard Feiten, Peter List, and Ulf Wüstenhagen

With users increasingly wanting to view high-quality videos over constrained bandwidth, typically realized using HTTP-based adaptive streaming, it becomes more and more important to accurately determine the quality of the encoded videos, in order to assess and possibly optimize the overall streaming quality.
In this paper, we describe a bitstream-based no-reference video quality model developed as part of the latest model-development competition conducted by ITU-T Study Group 12 and the Video Quality Experts Group (VQEG), "P.NATS Phase 2". It is now part of the new P.1204 series of Recommendations as P.1204.3.

It can be applied to bitstreams encoded with H.264/AVC, HEVC and VP9, using various encoding options, including resolution, bitrate, framerate and typical encoder settings such as number of passes, rate control variants and speeds.

The proposed model follows an ensemble-modelling-inspired approach with weighted parametric and machine-learning parts to efficiently leverage the performance of both approaches. The paper provides details about the general approach to modelling, the features used and the final feature aggregation.

The model creates per-segment and per-second video quality scores on the 5-point Absolute Category Rating scale, and is applicable to segments of 5 to 10 seconds duration.

It covers both PC/TV and mobile/tablet viewing scenarios. We outline the databases on which the model was trained and validated as part of the competition, and perform an additional evaluation using a total of four independently created databases, where resolutions varied from 360p to 2160p, and frame rates from 15 to 60 fps, using realistic coding and bitrate settings.

We found that the model performs well on the independent dataset, with a Pearson correlation of 0.942 and an RMSE of 0.42. We also provide an open-source reference implementation of the described P.1204.3 model, as well as the multi-codec bitstream parser required to extract the input data, which is not part of the standard.
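As an illustration of how such model outputs are typically evaluated, the following minimal Python sketch combines a hypothetical parametric and a machine-learning estimate with a fixed weight and computes the Pearson correlation and RMSE against subjective MOS values. This is not the standardized P.1204.3 code (see the open-source implementation mentioned above); all values, function names and the 0.5 weight are illustrative assumptions.

```python
# Minimal sketch (not the standardized P.1204.3 code): combining a parametric
# and a machine-learning quality estimate with a fixed weight, then computing
# the evaluation metrics reported in the paper (Pearson correlation, RMSE).
# All inputs below are hypothetical placeholder values.
import numpy as np
from scipy import stats

def combine_scores(parametric_mos, ml_mos, weight=0.5):
    """Weighted ensemble of two per-segment MOS estimates (weight is illustrative)."""
    parametric_mos = np.asarray(parametric_mos, dtype=float)
    ml_mos = np.asarray(ml_mos, dtype=float)
    return weight * parametric_mos + (1.0 - weight) * ml_mos

def evaluate(predicted_mos, subjective_mos):
    """Pearson correlation and RMSE between predictions and subjective MOS."""
    predicted_mos = np.asarray(predicted_mos, dtype=float)
    subjective_mos = np.asarray(subjective_mos, dtype=float)
    pearson, _ = stats.pearsonr(predicted_mos, subjective_mos)
    rmse = float(np.sqrt(np.mean((predicted_mos - subjective_mos) ** 2)))
    return pearson, rmse

if __name__ == "__main__":
    # Hypothetical per-segment scores on the 5-point ACR scale
    parametric = [3.8, 2.1, 4.5, 3.0]
    ml_based = [4.0, 2.4, 4.3, 3.2]
    subjective = [3.9, 2.2, 4.6, 3.1]
    predicted = combine_scores(parametric, ml_based)
    pearson, rmse = evaluate(predicted, subjective)
    print(f"Pearson: {pearson:.3f}, RMSE: {rmse:.3f}")
```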

New article: Are you still watching? Streaming Video Quality and Engagement Assessment in the Crowd

Twelfth International Conference on Quality of Multimedia Experience (QoMEX), May 26 - 28, 2020

Werner Robitza, Alexander M. Dethof, Steve Göring, Alexander Raake, André Beyer, Tim Polzehl

We present first results from a large-scale crowdsourcing study in which three major video streaming OTTs were compared across five major national ISPs in Germany. We look not only at streaming performance in terms of loading times and stalling, but also at customer behavior (e.g., user engagement) and Quality of Experience based on the ITU-T P.1203 QoE model. We used a browser extension to evaluate the streaming quality and to passively collect anonymous OTT usage information based on explicit user consent. Our data comprises over 400,000 video playbacks from more than 2,000 users, collected throughout the entire year of 2019.

The results show differences in how customers use the video services, how the content is watched, how the network influences video streaming QoE, and how user engagement varies by service. Hence, the crowdsourcing paradigm is a viable approach for third parties to obtain streaming QoE insights from OTTs.
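To illustrate the kind of streaming performance indicators such a study builds on, the following minimal Python sketch derives startup delay, stalling ratio and a simple engagement measure from a hypothetical per-playback log. The data structure is an assumption for illustration and does not reflect the actual schema used by the browser extension or by the ITU-T P.1203 model.

```python
# Minimal sketch with a hypothetical playback-log format (not the actual data
# schema of the browser extension used in the study): deriving basic streaming
# KPIs such as startup delay, stalling ratio and engagement per playback.
from dataclasses import dataclass, field

@dataclass
class Playback:
    startup_delay_s: float                  # time from request to first frame
    stalling_events_s: list = field(default_factory=list)  # rebuffering durations
    watch_time_s: float = 0.0               # how long the user actually watched
    content_duration_s: float = 1.0         # total duration of the requested content

def kpis(p: Playback) -> dict:
    total_stalling = sum(p.stalling_events_s)
    return {
        "startup_delay_s": p.startup_delay_s,
        "stalling_ratio": total_stalling / max(p.watch_time_s, 1e-9),
        "engagement": min(p.watch_time_s / p.content_duration_s, 1.0),
    }

example = Playback(startup_delay_s=1.8,
                   stalling_events_s=[2.5],
                   watch_time_s=310.0,
                   content_duration_s=600.0)
print(kpis(example))
```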

The paper was written together with the TU Ilmenau spin-off AVEQ GmbH and the Berlin-based company Crowdee GmbH, and it can be downloaded here (https://aveq.info/resources/).

New article: Prenc - Predict Number Of Video Encoding Passes With Machine Learning

Twelfth International Conference on Quality of Multimedia Experience (QoMEX). Athlone, Ireland. May 2020

Steve Göring, Rakesh Rao Ramachandra Rao and Alexander Raake

Video streaming providers spend huge amounts of processing time to obtain a quality-optimized encoding.
While the quality-related impact of the encoding may be known to the service provider, the resulting video quality is hard to assess when no reference is available.

Here, bitstream-based video quality models may be applicable, delivering estimates that include encoding-specific settings. Such models typically use several input parameters, e.g. bitrate, framerate, resolution, video codec, QP values and more.

However, determining which encoding parameters were selected for a given bitstream, e.g. the number of encoding passes, is not a trivial task.

This leads to our following research question: Given an unknown video bitstream, which encoding settings have been used? To tackle this reverse engineering problem, we introduce a system called prenc.
Besides the use in video-quality estimation, such algorithms may also be used in other applications such as video forensics. We prove our concept by applying prenc to distinguish between one- and two-pass encoding.

Modeling the problem as a classification task based on bitstream-level features, we describe a machine learning approach with feature selection to automatically predict the number of encoding passes for a given video bitstream.

Our large-scale evaluation consists of 16 short movie-type 4K videos that were segmented and encoded with different settings (resolutions, codecs, bitrates), so that in total we analyzed 131,976 DASH video segments.

We further show that our system is robust, based on a 50% train and 50% validation split without source video overlap, where we achieve a classification performance of 65% F1 score. Moreover, we describe the bitstream-based features used in detail as well as the feature pooling strategy, and include other machine learning algorithms in our evaluation.
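The following minimal Python sketch illustrates the general approach: classifying one- vs. two-pass encoding from pooled bitstream features, with feature selection and a 50/50 split that keeps source contents disjoint. The synthetic data, the feature-selection method and the random-forest classifier are illustrative assumptions and not the exact prenc pipeline.

```python
# Minimal sketch of the general approach (not the exact prenc pipeline):
# classify one- vs. two-pass encoding from per-segment bitstream features,
# with feature selection and a 50/50 split that keeps source contents disjoint.
# Feature names and the choice of random forest are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import GroupShuffleSplit
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical data: rows = DASH segments, columns = pooled bitstream features
# (e.g. mean/std of QP, frame sizes, motion statistics), plus the source id.
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)            # 0 = one-pass, 1 = two-pass
source_ids = rng.integers(0, 16, size=1000)  # 16 source contents

# 50% train / 50% validation without source-video overlap
splitter = GroupShuffleSplit(n_splits=1, train_size=0.5, random_state=0)
train_idx, val_idx = next(splitter.split(X, y, groups=source_ids))

model = make_pipeline(SelectKBest(f_classif, k=10),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X[train_idx], y[train_idx])
print("F1 score:", f1_score(y[val_idx], model.predict(X[val_idx])))
```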

New article: Development and evaluation of a test setup to investigate distance differences in immersive virtual environments

2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), May 26 - 28, 2020

Stephan Fremerey, Muhammad Sami Suleman, Abdul Haq Azeem Paracha and Alexander Raake

Nowadays, with recent advances in virtual reality technology, it is easily possible to integrate real objects into virtual environments by creating an exact virtual replica and enabling interaction with them by mapping the tracking data obtained from the real objects onto their virtual counterparts.

The primary goal of our study is to develop a system to investigate distance differences for near-field interaction in immersive virtual environments. In this context, the term distance difference refers to the shift between a real object and its same-sized replica in the virtual environment. Such a shift could occur for a number of reasons, e.g. due to errors in motion tracking or mistakes in designing the virtual environment. Our virtual environment is developed using the Unity3D game engine, while the immersive contents were displayed on an HTC Vive Pro head-mounted display. The virtual room shown to the user includes a replica of the real testing lab environment, while one of the two real objects is tracked and mirrored to the virtual world using an HTC Vive Tracker.

Both objects are present in the real as well as in the virtual world. To find perceivable distance differences in the near-field, the actual task in the subjective test was to pick up one object and place it into another object.

The position of the static object in the virtual world is shifted by values between 0 and 4 cm, while the position of the real object is kept constant. The system is evaluated by conducting a subjective proof-of-concept test with 18 test subjects.

The distance difference is evaluated by the subjects through estimating perceived confusion on a modified 5-point absolute category rating scale. The study provides quantitative insights into allowable real-world vs. virtual-world mismatch boundaries for near-field interactions, with a threshold value of around 1 cm.

Link to the repository: https://github.com/Telecommunication-Telemedia-Assessment/distance_differences_ives

New article: Let the Music Play: An Automated Test Setup for Blind Subjective Evaluation of Music Playback on Mobile Devices

Twelfth International Conference on Quality of Multimedia Experience (QoMEX), May 2020

Keller, D.; Raake, A.; Vaalgamaa, M.; Paajanen, E.

Several methods for the subjective evaluation of audio and speech have been standardized over the past years. However, with the advancement of mobile devices such as smartphones and Bluetooth speakers, people listen to music even outside their home environment, when traveling and in social situations. Conventional comparative methodologies are difficult to use for sound-quality evaluation of such devices, since subjects are likely to include other factors such as brand or design. Hence, we propose an automated test setup to evaluate music and audio playback of portable devices with subjects without revealing the devices or interfering with the tests. Furthermore, an identical placement of the devices in front of the listener is crucial to accommodate the individual acoustic directivity of the device. For this purpose, we use a large motorized turntable on which the devices are mounted, so that the playback device is automatically moved to the defined position in advance. An enhanced version of the rating software avrateNG enables the automatic playout of musical pieces and appropriate turning of the devices to face the listeners. Devices that can automatically be tested using our setup include Android and iOS smartphones, as well as Bluetooth and wired portable speakers. Preliminary user tests were conducted to verify the practical applicability and stability of the proposed setup.

SiSiMo: Towards Simulator Sickness Modeling for 360° Videos Viewed with an HMD

27th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2020, Atlanta, USA

A. Raake, A. Singla, R. R. R. Rao, W. Robitza and F. Hofmeyer

Users may experience symptoms of simulator sickness while watching 360°/VR videos with Head-Mounted Displays (HMDs). At present, practically no solution exists that can efficiently eradicate the symptoms of simulator sickness from virtual environments. Therefore, in the absence of a solution, it is required to at least quantify the amount of sickness. In this paper, we present initial work on our Simulator Sickness Model SiSiMo, including a first component to predict simulator sickness scores over time. Using linear regression of short-term scores already shows promising performance for predicting the scores collected from a number of user tests.
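As a rough illustration of the regression idea, the following minimal Python sketch fits a linear model to hypothetical short-term sickness ratings collected over time and extrapolates the trajectory. The values and the rating scale are purely illustrative and not taken from SiSiMo.

```python
# Minimal sketch of the general idea (not the actual SiSiMo model): fit a
# linear regression to short-term simulator sickness ratings collected over
# time and extrapolate the score trajectory. All values are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical short-term ratings, e.g. one rating per minute of viewing
time_min = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)
sickness = np.array([0.4, 0.7, 1.1, 1.3, 1.8])  # illustrative scale

model = LinearRegression().fit(time_min, sickness)
future = np.array([[6], [7], [8]])
print("Predicted scores:", model.predict(future))
print("Slope per minute:", model.coef_[0])
```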

Project CYTEMEX

The project is a scientific cooperation between the labs of Audiovisual Technology, Virtual Worlds and Digital Games (Prof. Wolfgang Broll, Faculty of Economics and Media) and Electronic Media Technology (Prof. Karlheinz Brandenburg, Faculty of Electrical Engineering and Information Technology).

The project, funded by the Free State of Thuringia, was co-financed by the European Union within the European Regional Development Fund (ERDF).

Project Website

ITU-T standard P.1204 for predicting video quality developed with significant participation of the AVT group

ITU-T recently consented the P.1204 series of Recommendations titled “Video quality assessment of streaming services over reliable transport for resolutions up to 4K”. This work was jointly conducted by Question 14 of Study Group 12 (SG12/Q14) of the ITU-T and the Video Quality Experts Group (VQEG). Overall 9 companies and universities were part of this competition-based development, with the best set of models recommended as standards.

The official ITU-T SG12 communication reads:

"The P.1204 Recommendation series describes a set of objective video quality models. These can be used standalone for assessing video quality for 5-10 sec long video sequences, providing a 5-point ACR-type Mean Opinion Score (MOS) output. In addition, they deliver per-1-second MOS-scores that together with audio information and stalling / initial loading data can be used to form a complete model to predict the impact of audio and video media encodings and observed IP network impairments on quality experienced by the end-user in multimedia streaming applications. The addressed streaming techniques comprise progressive download as well as adaptive streaming, for both mobile and fixed network streaming applications."

To date, the P.1204 series of Recommendations comprises four sub-recommendations, namely P.1204 (an introductory document for the whole P.1204 series), P.1204.3 (bitstream-based model with full access to the bitstream), P.1204.4 (reference-/pixel-based model) and P.1204.5 (hybrid bitstream- and pixel-based no-reference model), with two more sub-recommendations, P.1204.1 (meta-data-based) and P.1204.2 (meta-data- and video-frame-information-based), planned to be consented by April 2020.

The AVT group of TU Ilmenau, in collaboration with Deutsche Telekom, was the sole winner in the category which resulted in Recommendation P.1204.3, and is a co-winner in the category which is planned to result in Recommendations P.1204.1 and P.1204.2 by April 2020.

The official ITU-T SG12 communication further states:

"The consent of the P.1204 model standards marks the first time that video-quality models of all relevant types have been developed and validated within the same standardization campaign. The respective “P.NATS Phase 2” model competition used a total of 13 video-quality test databases for training, and another 13 video-quality test databases for validation. With this comparatively high number of data (more than 5000 video sequences), the resulting standards deliver class-leading video-quality prediction performance."

The published ITU standards:

P.1204: https://www.itu.int/rec/T-REC-P.1204-202001-P/en

P.1204.3: https://www.itu.int/rec/T-REC-P.1204.3-202001-P/en

The building blocks of the consented Recommendation

cencro – Speedup of Video Quality Calculation using Center Cropping

21st IEEE International Symposium on Multimedia (2019 IEEE ISM), Dec 9 - 11, 2019, San Diego, USA

Steve Göring, Christopher Krämmer, Alexander Raake

Today's video streaming providers, e.g. YouTube, Netflix or Amazon Prime, are able to deliver high-resolution and high-quality content to end users. To optimize video quality and to reduce transmission bandwidth, new encoders and smarter encoding schemes are required. Encoding optimization forms an important part of this effort in reducing bandwidth and results in saving a considerable amount of bitrate. For such optimization, accurate and computationally fast video quality models are required, e.g. Netflix's VMAF. However, VMAF is a full-reference (FR) metric, and the calculation of such metrics tends to be slower in comparison to other metrics, due to the amount of data that needs to be processed, especially for high resolutions of 4K and beyond.

We introduce an approach to speed up video quality metric calculations in general. We use VMAF as an example, with a video database containing videos of up to 4K resolution, to show that our approach works well.
Our main idea is to reduce each frame of the reference and distorted video to a center crop of the frame, assuming that the most important visual information is presented in the middle of most typical videos. In total, we analyze 18 different crop settings and compare our results with uncropped VMAF values and subjective scores. We show that this approach – named cencro – is able to save up to 95% computation time, with just an overall error of 4% for a 360p center crop.

Furthermore, we checked other full-reference metrics and show that cencro performs similarly well. As a last evaluation, we apply our approach to full-HD gaming videos; also in this scenario, cencro can be successfully applied.

The idea behind cencro is not restricted to full-reference models and can also be applied to other types of video quality models or datasets, or even to higher-resolution videos such as 8K.
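The core center-cropping step can be illustrated with a few lines of Python. The sketch below is not the released cencro implementation (linked below) and uses an illustrative 640x360 crop size:

```python
# Minimal sketch of the center-cropping idea behind cencro (not the released
# implementation, which is linked below): take a fixed-size center crop from
# each reference and distorted frame before running a full-reference metric
# such as VMAF, so far less pixel data needs to be processed.
import numpy as np

def center_crop(frame: np.ndarray, crop_w: int = 640, crop_h: int = 360) -> np.ndarray:
    """Return the center crop of a frame given as an (H, W, C) array."""
    h, w = frame.shape[:2]
    top = max((h - crop_h) // 2, 0)
    left = max((w - crop_w) // 2, 0)
    return frame[top:top + crop_h, left:left + crop_w]

# Hypothetical 2160p frame: cropping to 640x360 keeps roughly 2.8% of the
# pixels, which is where the large speedup of the metric calculation comes from.
frame_4k = np.zeros((2160, 3840, 3), dtype=np.uint8)
print(center_crop(frame_4k).shape)  # (360, 640, 3)
```

The same crop could equivalently be applied with ffmpeg's crop filter (e.g. crop=640:360:(iw-640)/2:(ih-360)/2) before running the metric.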

Link to the source code: https://git.io/JeR5q

AVT-VQDB-UHD-1: A Large Scale Video Quality Database for UHD-1

21st IEEE International Symposium on Multimedia (2019 IEEE ISM), Dec 9 - 11, 2019, San Diego, USA

Rakesh Rao Ramachandra Rao, Steve Göring, Werner Robitza, Bernhard Feiten, Alexander Raake

4K television screens, or even screens with higher resolutions, are currently available on the market. Moreover, video streaming providers are able to stream videos in 4K resolution and beyond. Therefore, it becomes increasingly important to have a proper understanding of video quality, especially in the case of 4K videos. To this effect, in this paper we present a study of subjective and objective quality assessment of 4K ultra-high-definition videos of short duration, similar to DASH segment lengths.

As a first step, we conducted four subjective quality evaluation tests for compressed versions of the 4K videos. The videos were encoded using three different video codecs, namely H.264, HEVC, and VP9. The resolutions of the compressed videos ranged from 360p to 2160p, with frame rates varying from 15 fps to 60 fps. All source 4K contents used were at 60 fps. We included low-quality conditions in terms of bitrate, resolution and frame rate to ensure that the tests cover a wide range of conditions, and that, e.g., possible models trained on this data are more general and applicable to a wider range of real-world applications. The results of the subjective quality evaluation are analyzed to assess the impact of different factors such as bitrate, resolution, frame rate, and content.
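To give an idea of how such a grid of test conditions can be produced, the following Python sketch drives ffmpeg over combinations of codecs, resolutions, bitrates and frame rates. The exact encoder settings of AVT-VQDB-UHD-1 are not reproduced here; the source file name and all parameter values are illustrative assumptions.

```python
# Minimal sketch of how test conditions like those described could be produced
# with ffmpeg (the exact encoder settings of AVT-VQDB-UHD-1 are not reproduced
# here; codecs, resolutions, bitrates and frame rates below are illustrative).
import itertools
import subprocess

CODECS = {"h264": "libx264", "hevc": "libx265", "vp9": "libvpx-vp9"}
RESOLUTIONS = ["640x360", "1280x720", "1920x1080", "3840x2160"]
BITRATES_KBPS = [500, 2000, 8000]
FRAMERATES = [15, 30, 60]

def encode(src: str, codec: str, resolution: str, bitrate_kbps: int, fps: int) -> None:
    """Encode one condition; the output container (MKV) holds all three codecs."""
    out = f"{src}_{codec}_{resolution}_{bitrate_kbps}k_{fps}fps.mkv"
    cmd = ["ffmpeg", "-y", "-i", src,
           "-c:v", CODECS[codec],
           "-b:v", f"{bitrate_kbps}k",
           "-vf", f"scale={resolution.replace('x', ':')}",
           "-r", str(fps),
           out]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for codec, res, br, fps in itertools.product(CODECS, RESOLUTIONS, BITRATES_KBPS, FRAMERATES):
        encode("source_4k.mp4", codec, res, br, fps)  # hypothetical source file
```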

In a second step, different state-of-the-art objective quality models, e.g. Netflix's VMAF, were applied to all videos and their performance was analyzed in comparison with the subjective ratings. The videos, the subjective scores (both MOS and confidence interval per sequence) and the objective scores are made public for use by the community for further research.

Link to the videos:

Older News

Older news from the AVT lab can be found on this website.

Offers for theses in the AVT Lab

You can now directly inform yourself about the range of topics for bachelor's and master's theses as well as for media projects on our website.

Take a look at the section Theses!