Literature list

Number of hits: 160
Created: Fri, 26 Apr 2024 23:17:36 +0200 in 2.5628 sec


Knauf, Rainer; Tsuruta, Setsuo;
Towards modeling human expertise : an empirical case study. - Ilmenau : Univ.-Bibliothek. - 6 pp. = 145.4 KB, text. - Print ed.: Recent advances in artificial intelligence : proceedings of the eighteenth International Florida Artificial Intelligence Research Society Conference (FLAIRS '05) : [May 15 - 17, 2005, Clearwater Beach, Florida] / ed. by Ingrid Russell ... - Menlo Park, Calif. : AAAI Press, 2005. - ISBN 1-57735-234-3. - pp. 232-237

The success of Turing Test technologies for system validation depends on the quality of the human expertise behind the system. The authors developed models of collective and individual human expertise, which are briefly outlined here. The focus of the paper is experimental work aimed at determining the quality of these models. The models were used both to solve problem cases and to rate (other agents') solutions to these cases. By comparing the models' solutions and ratings with those of their human originals, we derived assessments of their quality. The analysis revealed both the general usefulness of the models and some particular weaknesses.



http://www.db-thueringen.de/servlets/DocumentServlet?id=4540
Knauf, Rainer; Jantke, Klaus P.;
Towards an evaluation of (e-)learning systems. - 7 pp. = 233.7 KB, text. - Print ed.: Recent advances in artificial intelligence : proceedings of the eighteenth International Florida Artificial Intelligence Research Society Conference (FLAIRS '05) : [May 15 - 17, 2005, Clearwater Beach, Florida] / ed. by Ingrid Russell ... - Menlo Park, Calif. : AAAI Press, 2005. - ISBN 1-57735-234-3. - pp. 226-231

The paper introduces an evaluation approach for learning systems that is applicable to, but not limited to, e-learning systems. A discussion of current customs in evaluating learning processes reveals weaknesses of current (not only e-) learning systems that make them unsuitable subjects for sophisticated evaluation technologies. To overcome these weaknesses, the authors introduce a storyboard concept to represent a learning system's didactic design. This way the subject of evaluation becomes explicit and thus accessible to validation technologies. The evaluation approach based on the storyboard concept allows both the communication of general assessments of the system's validity and the indication of particular weaknesses in the system.



http://www.db-thueringen.de/servlets/DocumentServlet?id=4539
Knauf, Rainer; Tsuruta, Setsuo;
Overcoming human weaknesses in validation of knowledge-based systems. - In: Marktplatz Internet: von e-Learning bis e-Payment, (2005), pp. 254-263

Human experts employed in validation exercises for knowledge-based systems often have limited time and availability. They often have opinions that differ from each other's as well as from their own over time. We address this situation by reusing validation knowledge from prior validation exercises for the same knowledge-based system. We present a Validation Knowledge Base (VKB) that captures the collective best experience of several human expert validators. We also present the concept of Validation Expert Software Agents (VESA), which represent a particular expert's knowledge. A VESA is a software agent corresponding to a specific human expert validator. It models the validation knowledge and behavior of its human counterpart by analyzing similarities with the responses of other experts. We also describe experiments with a small prototype system to evaluate the usefulness of these concepts.
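
(A minimal sketch of the VESA idea as summarized above, under our own assumptions: the class, method, and rating names are hypothetical, and the agreement measure is a simple stand-in, not the authors' implementation.)

```python
# Hypothetical sketch of the VESA idea: an agent stands in for one human
# validator by reusing the rating of whichever other expert has agreed
# with that validator most often in past sessions.

class ValidationExpertAgent:
    def __init__(self, expert_id, history):
        # history: {case_id: {expert_id: rating}} from earlier validation sessions
        self.expert_id = expert_id
        self.history = history

    def _agreement(self, other_id):
        """Fraction of past cases on which the modeled expert and other_id agreed."""
        shared = [c for c, r in self.history.items()
                  if self.expert_id in r and other_id in r]
        if not shared:
            return 0.0
        hits = sum(self.history[c][self.expert_id] == self.history[c][other_id]
                   for c in shared)
        return hits / len(shared)

    def rate(self, current_ratings):
        """Proxy rating for a new test case, given the other experts' ratings."""
        best = max(current_ratings, key=self._agreement)
        return current_ratings[best]

history = {
    "case1": {"expert_A": "correct", "expert_B": "correct", "expert_C": "wrong"},
    "case2": {"expert_A": "wrong",   "expert_B": "wrong",   "expert_C": "wrong"},
}
vesa_for_A = ValidationExpertAgent("expert_A", history)
print(vesa_for_A.rate({"expert_B": "correct", "expert_C": "wrong"}))  # -> "correct"
```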



Knauf, Rainer; Tsuruta, Setsuo;
Towards reducing human involvement in validation of knowledge-based systems. - In: Pozyskiwanie wiedzy i zarządzanie wiedzą, (2005), pp. 108-120

Human experts employed in validation exercises for knowledge-based systems (expert validators) often have limited time and availability. Furthermore, they often have opinions that differ from each other's as well as from their own over time. We address this situation by using validation knowledge from prior validation exercises for the same system. We present a Validation Knowledge Base (VKB) that holds the persistent collective best experience of several human expert validators. Its primary benefits include more reliable validation results and a decreased workload for the expert validators. We also present the concept of Validation Expert Software Agents (VESA), which represent a particular expert's knowledge. After a learning period, a VESA can temporarily substitute for its corresponding human expert validator. This can help reduce the need for human expert validators or maintain the required number when one becomes unavailable. We describe experiments with a small prototype system to evaluate these concepts.
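
(A small illustrative sketch of the persistence aspect described above: a VKB that keeps the best-rated solution per test case across sessions. The file name, rating scale, and scoring rule are assumptions for illustration only, not the paper's design.)

```python
# Minimal sketch, under our own assumptions, of a Validation Knowledge Base
# that persists the best-rated solution per test case across sessions, so a
# repeated case need not be put to the full expert panel again.

import json
from pathlib import Path

class ValidationKnowledgeBase:
    def __init__(self, path="vkb.json"):
        self.path = Path(path)
        self.entries = json.loads(self.path.read_text()) if self.path.exists() else {}

    def record(self, case_id, solution, ratings):
        """Keep a solution only if it was rated better than what is stored."""
        score = sum(ratings) / len(ratings)          # ratings: list of values in 0..1
        stored = self.entries.get(case_id)
        if stored is None or score > stored["score"]:
            self.entries[case_id] = {"solution": solution, "score": score}
        self.path.write_text(json.dumps(self.entries, indent=2))

    def lookup(self, case_id):
        """Return the best known solution for a case, or None if unseen."""
        entry = self.entries.get(case_id)
        return entry["solution"] if entry else None

vkb = ValidationKnowledgeBase()
vkb.record("case_42", "solution_x", [0.8, 0.9, 0.7])
print(vkb.lookup("case_42"))  # -> "solution_x"
```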



Jantke, Klaus P.; Knauf, Rainer
Didactic design through storyboarding : standard concepts for standard tools. - In: Proceedings of the 4th International Symposium on Information and Communication Technologies, (2005), S. 20-25

The current state of affairs in e-learning world-wide shows a reluctance towards didactic design. Learners frequently complain about, and scientists discuss, the insufficient adaptivity of e-learning offers to the learners' needs. Didactics is badly underestimated. High-quality didactic design is seen as a crucial aspect of dissemination: e-learning content and services need to reach their audience properly. Learners with different prerequisites, different needs, different expectations, and under varying context conditions have to be addressed appropriately. Didactic design is thus seen as an issue of quality assurance in e-learning. As is well known from quality management, high quality requirements and the related measures of quality assurance may turn out to be obstacles to dissemination, because quality may turn out to be expensive. The usual answer is solutions frequently called "quick and dirty"; this applies to e-learning as well. The authors' own storyboard concept is introduced. Its reach goes far beyond the limits of current practice in e-learning systems and service development. The modeling concepts required are standard: annotated graphs. The software in use is standard as well: Visio. Emphasis is put on investigating how a suitable usage of these concepts allows for an expressive didactic design. To sum up, the authors' intended contribution is twofold. First, they want to encourage didactic design through storyboarding in e-learning; concepts are introduced and applications are demonstrated. Second, with the dissemination problem in mind, they want to show that the concepts are crucial, not the tools: one can exploit advanced concepts towards sophisticated didactic design without an urgent need for costly software.
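
(The annotated-graph view of a storyboard can be illustrated in a few lines of code; the scene names, annotations, and learner profile below are invented examples, not taken from the paper or from the Visio models it describes.)

```python
# Illustrative sketch of the storyboarding idea: a storyboard as an annotated
# graph whose nodes are scenes/episodes and whose edges carry the didactic
# conditions for moving on to the next scene.

from dataclasses import dataclass, field

@dataclass
class Scene:
    name: str
    annotations: dict = field(default_factory=dict)   # e.g. {"medium": "video"}
    successors: list = field(default_factory=list)    # (next Scene, condition) pairs

def next_scenes(scene, learner_profile):
    """Return the successor scenes whose edge condition fits the learner."""
    return [nxt for nxt, cond in scene.successors if cond(learner_profile)]

intro = Scene("Introduction", {"medium": "video"})
basics = Scene("Basics", {"medium": "text"})
advanced = Scene("Advanced exercise", {"medium": "interactive"})
intro.successors = [
    (basics,   lambda p: p["level"] == "beginner"),
    (advanced, lambda p: p["level"] == "expert"),
]
print([s.name for s in next_scenes(intro, {"level": "expert"})])  # ['Advanced exercise']
```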



Knauf, Rainer; Spreeuwenberg, Silvie; Gerrits, Rik; Jendreck, Martin
A step out of the ivory tower : experiences with adapting a test case generation idea to business rules. - 7 pp. = 140.3 KB, text. - Print ed.: Proceedings of the seventeenth International Florida Artificial Intelligence Research Society conference : [Miami Beach, Florida, May 17 - 19, 2004] / ed. by Valerie Barr ... - Menlo Park, Calif. : AAAI Press, 2004, pp. 343-349

One author developed a validation technology for rule bases that yields several validity statements and a refined rule base. Two other authors are experienced developers of rule bases for commercial and administrative applications and are engaged in the Business Rule (BR) research community. To ensure the required performance of BR, they developed a verification tool for BR. To reach this objective completely, one gap needs to be bridged: the application of validation technologies to BR. Applying the validation technology's first step, test case generation, revealed basic insights into the different viewpoints and terminologies of foundation- and logic-oriented AI research on the one hand and application-oriented knowledge and software engineering on the other. The experiences gained in realizing test case generation for a BR language are reported. In particular, the paper covers (1) the trade-offs between logic approaches, commercial needs, and the desired involvement of other (non-AI) software technologies, as well as (2) the refinements of the theoretical approach derived from them.
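
(A rough sketch of the test case generation idea mentioned above: enumerate combinations of the atomic conditions occurring in a rule base and observe which rules fire. The toy rule format and attribute names are assumptions; the actual BR language and algorithm of the paper differ.)

```python
# Hedged illustration: candidate test cases are generated by combining the
# truth values of the attributes mentioned in the rule conditions; cases that
# fire no rule or several conflicting rules hint at gaps or ambiguities.

from itertools import product

rules = [
    ({"is_customer": True,  "amount_over_1000": True},  "manual_approval"),
    ({"is_customer": True,  "amount_over_1000": False}, "auto_approval"),
    ({"is_customer": False},                            "reject"),
]

# Every attribute mentioned in any rule condition spans the test input space.
attributes = sorted({a for cond, _ in rules for a in cond})

def fired_outcomes(case):
    """Outcomes of all rules whose condition is satisfied by the test case."""
    return [out for cond, out in rules
            if all(case[a] == v for a, v in cond.items())]

for values in product([True, False], repeat=len(attributes)):
    case = dict(zip(attributes, values))
    print(case, "->", fired_outcomes(case))
```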



http://www.db-thueringen.de/servlets/DocumentServlet?id=4538
Knauf, Rainer; Tsuruta, Setsuo; Uehara, Kenichi; Onoyama, Takashi; Kurbad, Torsten
The power of experience : on the usefulness of validation knowledge. - 7 pp. = 108.0 KB, text. - Print ed.: Proceedings of the seventeenth International Florida Artificial Intelligence Research Society conference : [Miami Beach, Florida, May 17 - 19, 2004] / ed. by Valerie Barr ... - Menlo Park, Calif. : AAAI Press, 2004, pp. 337-342

Turing Test technologies are a promising way to validate AI systems that may have no alternative way to indicate their validity. Human experts (validators), however, are often too expensive to involve. Furthermore, they often have opinions that differ from each other's and from their own over time. One way out of this situation is to employ a Validation Knowledge Base (VKB), which can be considered the collective experience of human expert panels. The VKB is constructed and maintained across various validation sessions. Its primary benefits are (1) decreasing the validators' workload and (2) refining the methodology itself. Additionally, there are side effects that (1) improve the selection of an appropriate expert panel and (2) improve the identification of an optimal solution to a test case. Furthermore, Validation Expert Software Agents (VESA) are introduced as models of a particular expert's knowledge. A VESA is a software agent corresponding to a human validator. It systematically models the validation knowledge and behavior of its human origin. After a learning period, it can be used to substitute for the human expert.
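
(One way to read side effect (2), identifying an optimal solution to a test case, is as a weighted aggregation of panel ratings; the following sketch rests on that assumption, with invented validator weights and scores rather than the paper's actual procedure.)

```python
# Assumed illustration: each candidate solution receives the sum of the
# ratings it got from the panel, weighted by each validator's reliability,
# and the highest-scoring solution is taken as the "optimal" one.

def optimal_solution(ratings, reliability):
    """ratings: {solution: {validator: score in 0..1}}; reliability: {validator: weight}."""
    def weighted(solution):
        return sum(score * reliability.get(validator, 1.0)
                   for validator, score in ratings[solution].items())
    return max(ratings, key=weighted)

ratings = {
    "solution_1": {"val_A": 0.9, "val_B": 0.4},
    "solution_2": {"val_A": 0.6, "val_B": 0.8},
}
print(optimal_solution(ratings, {"val_A": 1.0, "val_B": 0.5}))  # -> "solution_1"
```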



http://www.db-thueringen.de/servlets/DocumentServlet?id=4537
Knauf, Rainer; Spreeuwenberg, Silvie; Jendreck, Martin
A step out of the ivory tower : experiences with adapting a test case generation idea to business rules. - In: BNAIC '04, (2004), pp. 315-316

Knauf, Rainer; Tsuruta, Setsuo; Ihara, Hirokazu; Gonzalez, Avelino J.; Kurbad, Torsten
Improving AI systems' dependability by utilizing historical knowledge. - In: Proceedings, (2004), pp. 343-352
Also published in print

A Turing Test is a promising way to validate AI systems, whose correctness usually cannot be proven. However, human experts (validators) are often too busy to participate in it, and their opinions sometimes differ from person to person as well as from one validation session to the next. To cope with this and increase the dependability of validation, a Validation Knowledge Base (VKB) for Turing Test-like validation is proposed. The VKB is constructed and maintained across various validation sessions. Primary benefits are (1) decreasing the validators' workload, (2) refining the methodology itself, e.g. selecting dependable validators using the VKB, and (3) increasing AI systems' dependability through dependable validation, e.g. support for identifying optimal solutions. Finally, Validation Expert Software Agents (VESA) are introduced to further overcome the limitations of human validators' dependability. Each VESA is a software agent corresponding to a particular human validator. This suggests the ability to systematically "construct" human-like validators by keeping personal validation knowledge per corresponding validator, and it points towards a new dimension of dependable AI systems.



http://dx.doi.org/10.1109/PRDC.2004.1276590
Knauf, Rainer; Tsuruta, Setsuo; Uehara, Keiichi; Gonzalez, Avelino J.
Validation knowledge bases and validation expert software agents : models of collective and individual human expertise. - Ilmenau : Univ.-Bibliothek, 2004. - 94 pp. = 1.25 MB, text. - First published as: Technical Report # 01/04. Tokyo Denki University, School of Information Environment, Chiba, Japan, 2004

http://www.db-thueringen.de/servlets/DocumentServlet?id=4285