CURDEX - Curiosity-driven exploration using abstract knowledge

One of the big tasks in Artificial Intelligence is the acquisition and refinement of knowledge, also known as Knowledge Representation and Reasoning. Recent progress has also been made in the semantic web domain by knowledge graphs that process huge amounts of unstructured data. The most famous ones are probably Google's Knowledge Graph and IBM's Watson; others include Cyc, DBpedia and Yago, to name just a few.

In contrast, the robotics domain is still far from using such big data for everyday activities, because relating internet knowledge to real-world instances remains a challenging and unsolved task. Even more far-fetched is generalizing from such knowledge and inferring common concepts to achieve intelligent behaviour.

First steps in this direction model a detailed environment for the robot, giving it the ability to transfer knowledge into actions and vice versa. In the long term, the knowledge representation of a robot could be enriched using big, unstructured data from the internet. This research topic addresses the second part mentioned above, that is, enriching everyday activities by making use of semantic web knowledge.

Impressive progress has also been made in game engines, where the rendering of environments has become photorealistic, for example in Unreal Engine 4. Such Virtual Reality (VR) environments have recently been used as an interface between robot and real world. VR is a further tool for learning, testing and exploring the everyday environment before moving to the real world. In turn, knowledge gathered from the real world could be represented in VR; in other words, the inner state of a knowledge base could be visualized in VR.

The combination of advances in the semantic web, game engines and robotics points in a promising direction: reaching a level of autonomy for robots that is more general and flexible than classical AI approaches.

Bridging the gap between high-level symbolic and low-level subsymbolic approaches is a significant challenge and a topic of active research. A system that operates in both the symbolic and the subsymbolic domain could potentially leverage the strengths of both, such as the associative power of subsymbolic, distributed representations, and the powerful inference methods and knowledge bases available in symbolic AI. However, directly specifying a system that covers the entire stack from perception to high-level symbolic AI is non-trivial, and potentially doomed to fail from the start.

Here, we take the perspective of an agent that gradually builds a distributed representation by interacting with its environment, and explore how explicit knowledge accessible in symbolic form can be used to improve learning performance, e.g. by guiding exploration. The agent can perform a range of discrete actions in an interactive environment (a reinforcement learning setting). Generally speaking, we are interested in an agent which can, after some training time, fulfill a number of tasks or quickly learn new tasks that are not known at training time. The goal is therefore to build an agent that explores the environment and its own actions during a training phase, and builds a representation that is useful for learning and performing new tasks afterwards.
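To make this train-then-transfer setting concrete, the following sketch outlines the two-phase protocol in Python. All names here are hypothetical (the project does not prescribe an agent architecture), and the environment is assumed to follow a Gym-style reset/step interface with discrete actions:

```python
# Minimal sketch of the train-then-transfer setting (hypothetical names,
# assuming a Gym-style environment with discrete actions).
import random

class Agent:
    """Placeholder agent that builds a representation during exploration."""
    def __init__(self, n_actions):
        self.n_actions = n_actions

    def act(self, observation):
        # Stand-in policy; a real agent would use its learned representation.
        return random.randrange(self.n_actions)

    def update(self, observation, action, reward, next_observation):
        pass  # representation/policy learning goes here

def explore(agent, env, steps, intrinsic_reward):
    """Training phase: no task reward, only an intrinsic (curiosity) signal."""
    obs = env.reset()
    for _ in range(steps):
        action = agent.act(obs)
        next_obs, _, done, _ = env.step(action)
        r_int = intrinsic_reward(next_obs)  # e.g. derived from symbolic knowledge
        agent.update(obs, action, r_int, next_obs)
        obs = env.reset() if done else next_obs

def evaluate(agent, env, episodes):
    """Transfer phase: a new task with extrinsic reward, unknown at training time."""
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(agent.act(obs))
            total += reward
        returns.append(total)
    return returns
```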

The main idea is to explore how to build a representation and algorithmic framework which can make use of symbolic information during training time, e.g. to guide exploration, and to assess how this information supports both the learning process and the performance on new tasks. This could, for example, be done by using the symbolic information to dynamically construct intrinsic reward functions, or to choose appropriate goal states.
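As one hypothetical illustration of the first variant, the sketch below pays a count-based novelty bonus in concept space: observations are grounded into symbolic concepts, expanded via a knowledge graph, and rewarded inversely to how often those concepts have been encountered. Both `detect_concepts` and the knowledge-graph lookup are assumed helpers, not components specified by the project:

```python
from collections import Counter

class SymbolicCuriosity:
    """Count-based intrinsic reward over abstract concepts rather than raw states.

    `detect_concepts(obs)` (assumed) grounds an observation in symbolic concepts;
    `knowledge_graph[c]` (assumed) maps a concept to related concepts, so the
    bonus also rewards reaching the neighbourhood of unexplored abstract knowledge.
    """
    def __init__(self, knowledge_graph, detect_concepts, scale=1.0):
        self.kg = knowledge_graph      # e.g. dict: concept -> related concepts
        self.detect = detect_concepts
        self.scale = scale
        self.visits = Counter()

    def __call__(self, observation):
        concepts = self.detect(observation)
        # Expand the grounded concepts with their knowledge-graph neighbours.
        expanded = set(concepts)
        for c in concepts:
            expanded.update(self.kg.get(c, ()))
        bonus = 0.0
        for c in expanded:
            self.visits[c] += 1
            bonus += self.scale / (self.visits[c] ** 0.5)  # decays with familiarity
        return bonus
```

An instance of `SymbolicCuriosity` could serve as the `intrinsic_reward` argument in the exploration loop sketched earlier. Counting in concept space rather than raw state space means that two visually distinct states grounding to the same abstract concept share novelty statistics, which is exactly where the symbolic knowledge can steer exploration toward semantically new situations.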