Core Robotic Functionality

Research methods and implement software modules needed to endow the humanoid robot platforms with core functionalities, focusing mainly on:

  1. Motion planning and control for synthesis of expressive and communicative gestures.
  2. Dynamic control for the execution of tasks that involve physical or non-physical interaction with the environment, including visual tracking, grasping, and kinaesthetic interaction.
  3. Learning by human demonstration, aiming to develop motion and interaction control skills that will mimic human behavior.
  4. Design and implementation of a behavior-based robot control architecture that will incorporate adaptation and developmental learning modules on different levels of abstraction, including both symbolic and sub-symbolic layers, enabling the system to optimize control policies on the fly.

Description of Work

  • Gestural Kinematics: The aim of this task will be to design and implement the motion planning and control algorithms that are needed to endow different humanoid robot platforms with meaningful gesticulation functionalities. Various classifications of gestures will be researched and a taxonomy of gestural primitives will be designed, incorporating manipulative, expressive, and communicative gestures. The kinematics of hand and arm gestural activities will be analyzed, followed by the design and implementation of motion planning and control primitives.
  • Environment Interaction Skills: This task will provide robot control modules needed to endow different humanoid platforms with interaction skills including: 1) motion and activity tracking; 2) planning and execution of grasping actions; 3) planning and execution of manipulative tasks; 4) dynamic and adaptive control of collaborative interactive tasks; 5) control of actions that involve direct haptic interaction with a human.
  • Robot Learning: The aim of this task is to develop and implement imitation learning algorithms that will enable the humanoid robot to acquire new skills from human demonstration and guidance. We will investigate new imitation learning paradigms, based on inverse reinforcement learning and structured classification, capable of generalizing to unseen situations using data collected during actual human-robot interactions. We will also explore various policy encoding schemes that support the development of multi-layer learning strategies.
  • Behavior-based Control Architecture: The goal of this task will be to develop a behavior-based control architecture for the humanoid robot platforms, which will incorporate different levels of abstraction. This architecture will support different adaptation and developmental learning schemes, enabling the system to optimize control policies on the fly based on cognitive feedback provided by human action and intention recognition modules.
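To make the notion of a gestural motion primitive in the Gestural Kinematics task concrete, the sketch below generates a minimum-jerk joint-space trajectory, a widely used model of smooth, human-like point-to-point motion. The function name and interface are illustrative assumptions, not part of the planned implementation.

```python
import numpy as np

def minimum_jerk(q0, q1, T, n=100):
    """Minimum-jerk trajectory from joint configuration q0 to q1
    over duration T, sampled at n points.

    Illustrative primitive only: a fifth-order polynomial time law
    with zero boundary velocity and acceleration, producing smooth,
    human-like point-to-point motion.
    """
    t = np.linspace(0.0, T, n)
    s = t / T
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5  # scalar time law in [0, 1]
    q0 = np.asarray(q0, dtype=float)
    q1 = np.asarray(q1, dtype=float)
    return t, q0 + np.outer(shape, q1 - q0)
```

Expressive variants of such a primitive could modulate the time law or superimpose oscillatory terms; designing that repertoire is part of the research described above.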
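For the tracking component of the Environment Interaction Skills task, a minimal proportional image-based control step might look as follows; the gain and velocity limit are hypothetical placeholder values, not tuned parameters of the planned modules.

```python
import numpy as np

def visual_servo_step(err_px, gain=0.5, max_vel=0.3):
    """One step of a proportional tracking law that drives the pixel
    error of a tracked feature toward the image centre.

    Returns a clamped 2-D velocity command (illustrative units);
    gain and max_vel are placeholder values.
    """
    v = -gain * np.asarray(err_px, dtype=float)
    speed = np.linalg.norm(v)
    if speed > max_vel:
        v *= max_vel / speed  # saturate to respect actuator limits
    return v
```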
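As a deliberately minimal stand-in for the inverse-reinforcement-learning and structured-classification schemes proposed in the Robot Learning task, the sketch below fits a linear policy to demonstrated state-action pairs by ridge regression; the linear policy class and all names are simplifying assumptions.

```python
import numpy as np

def fit_linear_policy(states, actions, reg=1e-3):
    """Ridge-regression fit of a linear policy a = s @ W from
    demonstrated (state, action) pairs.

    A minimal imitation-learning baseline only; the richer paradigms
    discussed above would replace this with inverse RL or structured
    classifiers.
    """
    S = np.asarray(states, dtype=float)   # (N, state_dim)
    A = np.asarray(actions, dtype=float)  # (N, action_dim)
    gram = S.T @ S + reg * np.eye(S.shape[1])
    return np.linalg.solve(gram, S.T @ A)  # apply as: action = state @ W
```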
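At its lowest level, the behavior-based architecture described above could arbitrate between competing behaviors by fixed priority, as in the subsumption-style sketch below; the behavior names and the arbitration rule are illustrative assumptions, and the planned architecture would add adaptation and developmental learning layers on top.

```python
def arbitrate(behaviors, state):
    """Fixed-priority behavior arbitration (subsumption-style sketch).

    behaviors: list of (name, is_active, command) triples ordered from
    highest to lowest priority; the first active behavior takes control
    of the actuators. Higher layers would adapt this ordering online.
    """
    for name, is_active, command in behaviors:
        if is_active(state):
            return name, command(state)
    return "idle", None  # no behavior triggered
```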