
Univ.-Prof. Dr.-Ing. Sattler, Kai-Uwe
Fachgebietsleiter
Technische Universität Ilmenau
Fakultät für Informatik und Automatisierung
Institut für Praktische Informatik und Medieninformatik
Fachgebiet Datenbanken und Informationssysteme
Helmholtzplatz 5
98693 Ilmenau
Zusebau, Raum 3025
Tel.: +49 3677 69-4579

The goals of the DFG priority program on Scalable Data Management for Future Hardware (SPP 2037) are based on the observation that data management architectures will undergo a radical shift in the coming years. This shift is driven by two developments: on the one hand, the range of applications that must handle large data sets has broadened significantly; on the other hand, new trends in hardware as well as at the operating system level offer great opportunities for rethinking current system architectures. The priority program is coordinated by Kai-Uwe Sattler (TU Ilmenau), Alfons Kemper and Thomas Neumann (TU Munich), and Jens Teubner (TU Dortmund).
Funding: DFG as part of SPP 2037
Today’s enterprise computing architectures are characterized by a complex memory hierarchy: applications differ in their requirements for latency, bandwidth, persistence, and access patterns, and the characteristics of the available memory and storage technologies make it necessary to combine several of them.
Building highly efficient data management and analytics solutions that meet the challenges of modern applications requires utilizing this memory hierarchy, e.g., through caching strategies that take the specific characteristics of each technology into account and keep data objects at the optimal level. In this project, we plan to exploit modern memory hierarchies to support hybrid transactional/analytical processing (HTAP) on graph data.
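The idea of keeping data objects at the optimal level can be illustrated with a small sketch. The tier names (DRAM, NVM, SSD) and the access-frequency thresholds below are invented for illustration only and are not the project's actual placement policy:

```python
# Hypothetical sketch: placing graph-data objects into memory tiers by
# observed access frequency. Tiers and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class DataObject:
    name: str
    accesses_per_sec: float  # observed access frequency

def choose_tier(obj: DataObject) -> str:
    """Keep hot objects in fast, scarce memory; demote cold ones."""
    if obj.accesses_per_sec > 1000:
        return "DRAM"   # lowest latency, smallest capacity
    if obj.accesses_per_sec > 10:
        return "NVM"    # persistent, byte-addressable middle tier
    return "SSD"        # large, slower bulk storage

objects = [DataObject("hot_vertex_index", 5000.0),
           DataObject("edge_properties", 50.0),
           DataObject("archived_snapshots", 0.1)]
placement = {o.name: choose_tier(o) for o in objects}
print(placement)
```

A real HTAP system would base such decisions on richer statistics (recency, access pattern, object size) and migrate objects between tiers at runtime.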

In energy management systems, the reliability of information is important because it is needed for optimal energy supply. The main goal is therefore to implement a distributed data processing pipeline that continuously adapts its behaviour to changing environmental conditions caused by threats and error conditions. In the context of energy management, fail-safe and resilient IoT platforms are necessary for distributed energy management. Several approaches to handling errors in a data processing pipeline already exist; however, they impose additional costs, such as runtime and overhead, during development and operation. Therefore, a secure, cost- and runtime-efficient implementation must be defined.
The project reDesigN is a joint project with the partners TU Ilmenau, Fraunhofer IOSB-AST, CUCULUS GmbH, and HKW Elektronik GmbH. It mainly consists of the following tasks:
This project is funded by the BMBF.
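The kind of adaptive, fail-safe error handling described above can be sketched as a pipeline stage that retries a failing step and then switches to a degraded fallback. The stage, retry policy, and fallback below are hypothetical examples, not the reDesigN implementation:

```python
# Illustrative sketch (not the reDesigN implementation): a pipeline stage
# that retries a failing processing step and falls back to a degraded
# mode so the pipeline keeps running under error conditions.
import time

def resilient_stage(process, fallback, data, retries=3, delay=0.0):
    """Try `process` up to `retries` times; on repeated failure, use `fallback`."""
    for _ in range(retries):
        try:
            return process(data)
        except Exception:
            time.sleep(delay)  # back off before retrying
    return fallback(data)      # degraded but fail-safe result

def parse_reading(raw):
    return float(raw)          # may raise on corrupted input

def last_known_good(raw):
    return 0.0                 # placeholder fallback value

print(resilient_stage(parse_reading, last_known_good, "42.5"))    # 42.5
print(resilient_stage(parse_reading, last_known_good, "garbled")) # 0.0
```

The trade-off the project targets is visible even here: every retry and fallback path adds runtime and development overhead, which is why a cost-efficient design matters.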
In the project "Learning Products", TU Ilmenau and FSU Jena are developing methods for intelligent suggestion and decision systems that support and monitor the operation of medical technology devices and the evaluation of their measurement results. LearningProducts is a joint project of the Software Engineering for Safety-Critical Systems Group, the Biomedical Engineering Group, and the Databases and Information Systems Group of TU Ilmenau as well as the Computer Vision Group of FSU Jena.
For the integration of machine learning methods into medical technology products, the project focuses on the following tasks:

E4SM is a joint research project of seven groups at TU Ilmenau whose goal is to research innovative methods for developing and operating machine-learning-based assistance systems for smart manufacturing in industrial settings.
The project focuses on integrated engineering methods built from the partial solutions of the participating groups, with the core areas assistance robotics, management and analysis of heterogeneous data, and IT security and safety.
The Databases and Information Systems group works on processing data streams with spatial/temporal parameters as well as developing efficient operators to support computation-intensive machine learning tasks.
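As a minimal illustration of such a stream operator, the following sketch computes a tumbling-window average over (timestamp, x, y, value) tuples. The tuple schema and window size are invented examples, not the project's actual operators:

```python
# Hypothetical sketch: a tumbling-window operator over a stream of
# (timestamp, x, y, value) tuples, illustrating a simple spatio-temporal
# stream operator. Schema and window size are invented for illustration.
from collections import defaultdict

def tumbling_window_avg(stream, window_size):
    """Group readings into fixed, non-overlapping time windows and average each."""
    windows = defaultdict(list)
    for ts, x, y, value in stream:
        windows[ts // window_size].append(value)
    return {w: sum(vs) / len(vs) for w, vs in sorted(windows.items())}

stream = [(0, 1.0, 2.0, 10.0), (3, 1.1, 2.1, 20.0),
          (5, 0.9, 1.9, 30.0), (9, 1.0, 2.0, 50.0)]
print(tumbling_window_avg(stream, window_size=5))  # {0: 15.0, 1: 40.0}
```

Production stream operators would additionally handle out-of-order arrival, incremental aggregation, and spatial predicates on the coordinates; this sketch only shows the windowing idea.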
"Deciphering the pandemic public sphere" is a multi-disciplinary research project. It follows three key research questions:
1. What explanations and messages about Covid-19 as well as related protective actions did governments and health institutions in Europe and the USA provide to the public and media?
2. How did legacy media across the aforementioned countries cover and frame risk messages about Covid-19 disseminated by governments and health institutions?
3. How did citizens in these countries perceive and respond to the pandemic and risk messages about Covid-19 disseminated by governments, health institutions, and legacy media?
Countries included: Germany, Spain, Italy, Netherlands, Sweden, UK, USA
Methods: Qualitative Interview, Computational Methods, Online Survey, Secondary Data Analysis, Content Analysis


As part of the thurAI initiative, this project was funded by the Free State of Thuringia as a pilot project in the Smart City area. Together with the CCS group (Prof. Emese Domahidi), two subprojects both determined the information needs and developed a prototype natural-language interface for querying a wide range of information. The information to be queried is stored and provided in a knowledge graph, so that semantic information retrieval along the graph's links is also possible, for example to related queries or further information.
More information can be found here.
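The idea of retrieval along graph links can be sketched as a small traversal over a dictionary-based graph. All node and edge names below are invented examples, not the project's actual knowledge graph:

```python
# Illustrative sketch: a tiny knowledge graph whose "related" edges are
# followed to surface further information for a query. All node and edge
# names are invented examples.
graph = {
    "waste_collection": {"related": ["recycling_center"]},
    "recycling_center": {"related": ["opening_hours"]},
    "opening_hours": {},
}

def related_info(node, depth=2):
    """Collect nodes reachable via 'related' edges within `depth` hops."""
    found, frontier = [], [node]
    for _ in range(depth):
        nxt = []
        for n in frontier:
            for m in graph.get(n, {}).get("related", []):
                if m not in found:
                    found.append(m)
                    nxt.append(m)
        frontier = nxt
    return found

print(related_info("waste_collection"))  # ['recycling_center', 'opening_hours']
```

In the actual system, a natural-language question would first be mapped to a start node; the traversal then supplies related and follow-up information along the graph's links.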
Software and systems engineering projects accumulate a mass of data in the form of domain documents, requirements, safety analyses, designs, code, test cases, simulations, version control data, fault logs, model checkers, project plans, and so on. When combined with the power of software analytics, this data can deliver precise answers to the questions stakeholders ask. In particular, it can support decision making, process improvement, safety analysis, and a myriad of other software engineering tasks.
The project is a joint project with Actian, which develops the distributed database management system Actian Avalanche alongside various solutions for data management, data integration, and data analysis. Avalanche relies on VectorH as its underlying distributed data technology, which was originally developed for on-premise cluster environments. The main goal of the project is to optimize Avalanche for cloud environments, which involves two major aspects. On the one hand, approaches to adapting the system's architecture to the new hardware environment are investigated in order to exploit the opportunities the cloud offers. On the other hand, the system is moved toward a self-managing database to increase its ease of use and reduce the effort of administering and running it.