Trusted AI applications

Tu. 15.02.2022

Target group: Anyone interested in AI

Artificial intelligence (AI) solutions are already finding their way into numerous business areas. In quality assurance (QA) and predictive maintenance in particular, many AI applications have proven highly effective and efficient and have established themselves accordingly. But what consequences, legal and otherwise, do I face if an AI application makes an incorrect statement? In quality assurance especially, documenting and tracing an AI's decisions is important. Techniques for making AI comprehensible (explainable AI, or XAI) are a major topic of AI research, with practical solutions already available for enterprise use. On this theme day, we will therefore present AI applications in QA, discuss the legal consequences of a misjudgement, and show ways to make an AI's results comprehensible. Let's open the AI black box together.

What can you expect?

  1. Presentation of AI applications from business practice
  2. Presentation of an AI demonstrator for camera-based quality assurance
  3. Illumination and discussion of legal implications when using AI
  4. Explanation of the technical possibilities of XAI: how can the behaviour of machine learning systems be explained and made understandable?
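As a small taste of the XAI topic in item 4, the sketch below shows one common, generic technique: permutation feature importance, which measures how much a model's accuracy drops when each input feature is shuffled. This is an illustrative example on synthetic data using scikit-learn, not the demonstrator or methods presented at the event.

```python
# Illustrative XAI sketch: permutation feature importance with scikit-learn.
# Synthetic data stands in for a QA dataset; this is NOT the event demonstrator.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic binary "pass/fail" classification task with 5 features,
# of which only 2 actually carry signal.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy;
# a larger drop means the model relies more on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Techniques like this give a first, model-agnostic answer to "which inputs drove this decision?", which is exactly the kind of traceability QA use cases demand.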

Participation is free of charge. We will send the access data to the e-mail address you provided shortly before the event.


Information & Contact

All information about the event is available here as a PDF file.

Your questions will be answered by:

Michael Rätze
+49 371 531-35860