Overview

Our research focuses on two long-term objectives. The aim of our neuroinformatics research is to develop machine intelligence by understanding and emulating neural information processing and learning, while our work on cognitive robotics aims at the emergence of behavioral intelligence in actively learning, mobile, and interactive service and assistive robots. Both areas complement each other methodologically as well as in their applications and are pursued in parallel and in close interconnection.

The application scenarios for such interactive mobile assistive robots that we are working on include:

  • Assistive, guidance, and transport robots for publicly accessible environments (home improvement stores, supermarkets, shopping malls, office buildings, clinics, care facilities, companies, etc.)
  • User-adaptive social robots (Companions) for home assistance in the context of Ambient Assisted Living (AAL)
  • Rehabilitation robots for the cognitive and physical mobilization of patients through actively corrective movement and gait training in clinics, rehabilitation centers, and care facilities
  • Collaborative assistive robots (cobots) that provide situation-specific, proactive user support in industrial or manual assembly scenarios.
     

These application scenarios require robust, everyday basic skills in user-centered and socially acceptable robot navigation and manipulation, as well as human-robot interaction (HRI).

Basic skills we have developed in the area of human-robot interaction include:

  • Multimodal person perception (detection, tracking); see the simplified tracking sketch after this list
  • Biometric and soft-biometric person re-identification (face- and view-based)
  • Non-verbal recognition of age, gender, facial expression and interaction interest
  • Recognition of everyday activities and actions of users in the application context (pointing poses, gestures, gait and movement patterns, daily activities, movement and assembly sequences, handover/takeover readiness)
  • Contacting helpers and integrating human assistance for using elevators and passing through closed doors
  • Adaptive and user-specific robot-to-human communication via voice output, facial expressions, gaze direction, display output, and laser projections into the environment, as well as via the robot's body language and movement patterns
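
As a rough illustration of the detection-and-tracking building block mentioned above, the following sketch assigns persistent IDs to person detections by greedy IoU matching between consecutive frames. It is a minimal sketch, not our actual perception stack, and it assumes that person bounding boxes are provided per frame by an external detector.

```python
# Minimal sketch of frame-to-frame person tracking by greedy IoU association.
# Illustrative simplification only; assumes person bounding boxes (x1, y1, x2, y2)
# are produced per frame by an external detector (e.g. a mobile deep learning model).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


class SimpleTracker:
    """Assigns persistent IDs to detections by matching them to the previous frame."""

    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}   # track id -> last known box
        self.next_id = 0

    def update(self, detections):
        assignments = {}
        unmatched = list(self.tracks.items())
        for box in detections:
            # Greedily pick the existing track with the highest overlap.
            best = max(unmatched, key=lambda t: iou(t[1], box), default=None)
            if best is not None and iou(best[1], box) >= self.iou_threshold:
                track_id = best[0]
                unmatched.remove(best)
            else:
                track_id = self.next_id
                self.next_id += 1
            self.tracks[track_id] = box
            assignments[track_id] = box
        # Drop tracks that received no detection in this frame.
        for track_id, _ in unmatched:
            del self.tracks[track_id]
        return assignments


if __name__ == "__main__":
    tracker = SimpleTracker()
    print(tracker.update([(10, 10, 50, 100)]))                        # {0: ...}
    print(tracker.update([(12, 11, 52, 102), (200, 20, 240, 110)]))   # {0: ..., 1: ...}
```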
     

Basic robot navigation and manipulation skills we have developed include:

  • Mapping and modeling of the operational environment (2D, 3D) with semantic labeling using SLAM techniques
  • Self-localization of the robot and its manipulators in space based on accurate environment modeling
  • Adaptive methods for path planning and motion control, including obstacle avoidance (a simplified grid-planning sketch follows after this list)
  • Situation analysis and detection of the need for assistance in front of closed doors and elevators
  • Recognition of objects to be picked up or handed over, including determination of their position and grasp points
  • Control of grasping movements for object takeover or handover
  • Person-safe, situation-appropriate and polite navigation and manipulation behavior (approaching, following, waiting, handing over/taking over objects from the user's hand, ...)
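
As a rough illustration of grid-based path planning with obstacle avoidance, the following sketch runs A* search on a 2D occupancy grid. It is a minimal, assumption-laden example (static binary grid, 4-connected motion, unit step costs), not the adaptive planners used on our robots, which additionally handle robot kinematics, costmaps, dynamic obstacles, and replanning.

```python
# Minimal sketch of grid-based path planning with obstacle avoidance:
# A* search on a 2D occupancy grid (0 = free cell, 1 = occupied cell).
import heapq


def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance heuristic (admissible for 4-connected grids).
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, current, parent = heapq.heappop(open_set)
        if current in came_from:      # already expanded with a cheaper cost
            continue
        came_from[current] = parent
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g + 1
                if new_g < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = new_g
                    heapq.heappush(open_set, (new_g + h((nr, nc)), new_g, (nr, nc), current))
    return None


if __name__ == "__main__":
    occupancy = [
        [0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
    ]
    print(astar(occupancy, (0, 0), (2, 0)))
```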
     

These basic skills build on modern machine learning and probabilistic information processing techniques, with the following methodological focus areas:

  • Modern deep learning architectures for analysis of images and image sequences (RGB and RGB-D) with a focus on mobile deep learning architectures for real-time classification, detection, or instance segmentation
  • Techniques for semantic scene analysis using deep learning methods
  • Multi-task deep learning
  • Learning of feature embeddings for object, person, and action recognition in deep learning applications
  • Deep learning-based 3D object reconstruction, affordance modeling, and grasp pose determination
  • Modeling uncertainty in convolutional neural networks for more interpretable inference and for detection of out-of-distribution data (a minimal Monte Carlo dropout sketch follows after this list)
  • Scalable annotation and learning with few or only coarsely labeled training examples, with human-in-the-loop selective labeling
  • Federated deep learning techniques as a form of distributed learning
  • Use of simulations to generate labeled or annotated data
  • Transfer learning between different application domains
  • Reinforcement learning, including deep reinforcement learning, for the acquisition and coordination of sensorimotor behaviors
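
As a rough illustration of uncertainty modeling in convolutional neural networks, the following sketch estimates predictive uncertainty with Monte Carlo dropout: several stochastic forward passes are averaged, and the entropy of the mean prediction serves as an uncertainty score that can help flag out-of-distribution inputs. The network architecture and hyperparameters are placeholders, not the specific uncertainty models used in our research.

```python
# Minimal sketch of predictive-uncertainty estimation via Monte Carlo dropout.
# Toy architecture and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """A toy image classifier with a dropout layer kept active at inference time."""

    def __init__(self, num_classes=10, dropout_p=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=dropout_p),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


@torch.no_grad()
def mc_dropout_predict(model, x, num_samples=20):
    """Run several stochastic forward passes; return mean class probabilities and entropy."""
    model.train()  # keep dropout active; BatchNorm layers would need extra care
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(num_samples)]
    )
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * (mean_probs + 1e-12).log()).sum(dim=-1)
    return mean_probs, entropy  # high entropy suggests an uncertain, possibly OOD input


if __name__ == "__main__":
    model = SmallCNN()
    images = torch.randn(4, 3, 64, 64)  # dummy batch of RGB images
    mean_probs, entropy = mc_dropout_predict(model, images)
    print(mean_probs.shape, entropy)
```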
 

Important research highlights of our institute are the development of

- the world's first mobile shopping assistance robot TOOMAS for everyday use (2007)

- the mobile home-based assistive robots (Companions) for elderly people (CompanionAble 2012, ALIAS 2013, SERROGA 2015, SYMPARTNER 2018)

- and the mobile robotic rehabilitation assistants ROREAS (2015) and ROGER (2019) for gait training in rehabilitation after stroke and orthopedic surgery.

 

Our robotic systems:

In total, our lab has 14 mobile assistive robots:

  • 3 mobile interactive assistive robots HERA, ZEUS and RINGO based on the SCITOS G5 platform from MetraLabs GmbH (https://www.metralabs.com/mobiler-roboter-scitos-g5/) equipped with multimodal sensor technology (color and depth cameras, laser scanner) and on-board deep learning computing technology (Jetson Xavier)
  • 1 mobile collaborative assistive robot TIAGO (https://pal-robotics.com/robots/tiago/) from PAL Robotics enhanced with multimodal sensor technology (color and depth cameras) and on-board deep learning computing technology (Jetson Xavier) in collaboration with ThZM (https://www.maschinenbau-thueringen.de/)
  • 1 mobile collaborative assistive robot DURGA consisting of the Scitos X3 platform from MetraLabs GmbH (https://www.metralabs.com/team/scitos-x3/) and two robotic arms JACO 2 from Kinova (https://www.kinovarobotics.com/de/assistive-technologien/column-a1/kinova-jaco-roboterarm) in collaboration with ThZM (https://www.maschinenbau-thueringen.de/)
  • 2 mobile home robot companions TWEETY and MAX and 1 mobile home robot companion SYMPARTNER based on the SCITOS G3 platform from MetraLabs GmbH equipped with multimodal sensor technology (color and depth cameras, laser scanner) and on-board deep learning computing technology (Jetson Xavier)
  • 2 mobile, interactive guidance robots KONRAD and SUSE based on the SCITOS A5 platform from MetraLabs GmbH (https://www.metralabs.com/service-roboter-scitos-a5/) equipped with multimodal sensor technology (color cameras, laser scanner)
  • 1 mobile interaction robot PERSES based on a B21 from Real World Interface, Inc. (RWI)
  • 1 mobile robot MILVA for outdoor use from MECOS Robotics AG 
  • 2 experimental robots HOROS based on a Pioneer II from Real World Interface, Inc. (RWI)