RESINA - Spatiotemporal representation spaces for interaction analysis

Situation awareness is a necessary enabler for efficient human-machine collaboration scenarios in both the robotics and the automotive domains. According to Endsley [1], the term situation awareness refers to a continuous process of perceiving the environment with its objects and subjects, understanding the environment, and predicting the state of the environment with its objects and subjects over an adequate time interval. Given these capabilities, an intelligent system can make reasonable decisions and optimally plan actions with respect to its own goals and the environment. A machine features situation awareness if it is able to recognize interactions within the environment, anticipate human behavior, and appreciate human intentions.

A variety of approaches is available for perceiving static and dynamic environments, such as SLAM, object localization, and human pose estimation. These representations are primarily used to predict object positions and plan trajectories. However, the general and robust fusion of static and dynamic components in a complex environment, incorporating the interaction as well as the individual goals and intentions of the involved humans and machines, remains an open research issue. The outcomes have implications for many interesting application scenarios, such as industrial or household human-machine interaction and manipulation. In particular, the question of how to deal systematically with partially observable and probabilistic information is only partially solved. This includes hidden intentions as well as the uncertain awareness of the involved humans.

In this collaboration project, we aim to refine this dynamic environment interpretation by conducting research on spatiotemporal representation spaces for interaction analysis during human-system interaction. Here, we consider interaction patterns with both the machine and manipulable objects. In an industrial context this may, for example, be applied to a workbench or assembly-line scenario, where human-object or human-environment interaction is monitored via (mostly) visual sensors (possibly complemented by simple wearable sensors, such as accelerometers) and appropriate machine-learning and computer-vision techniques. We will investigate a computer-based system that detects and tracks human motion in the scene, records the measurements in a common representational basis, and then analyzes simple activities. These activities may indicate interaction with particular points or objects in the system environment and allow the system to gradually attain situation awareness. We thus address an important step towards efficient and safe human-machine collaboration.
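As a minimal illustration of the envisioned pipeline, the following sketch labels tracked hand positions with nearby workbench objects. It assumes pose keypoints are already provided by an upstream estimator; the object names, coordinates, and the distance threshold are hypothetical placeholders, not part of the project specification.

```python
from dataclasses import dataclass
import math

# Hypothetical workbench layout: object name -> (x, y) in normalized image
# coordinates. In a real system these would come from object localization.
WORKBENCH_OBJECTS = {"wrench": (0.8, 0.2), "bolt_box": (0.1, 0.9)}
INTERACTION_RADIUS = 0.15  # assumed proximity threshold for "interaction"

@dataclass
class PoseSample:
    t: float            # timestamp in seconds
    wrist: tuple        # (x, y) of the tracked hand, e.g. from a pose estimator

def nearest_object(wrist):
    """Return (name, distance) of the workbench object closest to the wrist."""
    name, pos = min(WORKBENCH_OBJECTS.items(),
                    key=lambda kv: math.dist(wrist, kv[1]))
    return name, math.dist(wrist, pos)

def detect_interactions(track):
    """Label each pose sample with the object it interacts with, if any."""
    events = []
    for sample in track:
        name, dist = nearest_object(sample.wrist)
        if dist < INTERACTION_RADIUS:
            events.append((sample.t, name))
    return events

# Toy track: the hand moves towards the "wrench" over one second.
track = [PoseSample(0.0, (0.5, 0.5)),
         PoseSample(0.5, (0.75, 0.25)),
         PoseSample(1.0, (0.8, 0.2))]
print(detect_interactions(track))  # -> [(0.5, 'wrench'), (1.0, 'wrench')]
```

In the full system, such proximity events would feed into probabilistic activity models over the common spatiotemporal representation rather than a fixed threshold.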