Artificial intelligence (AI) solutions are already finding their way into numerous business areas. In quality assurance (QA) and predictive maintenance in particular, many AI applications have proven highly effective and efficient and have established themselves accordingly. But what consequences, legal and otherwise, do I face if an AI application makes an incorrect statement? In quality assurance especially, documenting and tracing an AI's decisions is essential. Techniques for making AI decisions comprehensible (explainable AI, or XAI) are a major topic of AI research, with practical solutions already available for enterprise use. At this theme day, we will therefore present AI applications in QA, the legal consequences that arise from a misjudgement, and ways in which an AI's results can be made comprehensible. Let's open the AI black box together.
Participation is free of charge. We will send the access data to the e-mail address you provided shortly before the event.
All information about the event is available here as a PDF file.
Your questions will be answered by:
Michael Rätze
+49 371 531-35860
michael.raetze@digitalzentrum-chemnitz.de