BabyRobot Concept Demo Video
This video showcases the core functionalities developed within the BabyRobot project: a) audio-visual processing, b) behavioural analysis, and c) interactive robot learning and control. These technologies are developed for child-robot interaction scenarios and applied in relevant use cases aiming to enhance children's social skills. The video demonstrates a multi-party game that is introduced to the child by the Furhat robot (named Anna), which orchestrates a game also involving the Zeno robot (named Nikos) and the Nao robot (named Paris).
The robots and the child participate jointly in a game called “Guess the object or the motion”. The child and the robots take turns playing: in each round, either the child or one of the robots makes a guess. The concept video demonstrates functionalities such as visual localization, gesture recognition, object recognition, distant speech recognition and semantic similarity, action recognition, visual emotion recognition, reinforcement learning, and robot behaviour adaptation.
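To illustrate how the semantic-similarity step of such a guessing game might work in principle, the minimal sketch below matches a recognised spoken guess against a set of candidate object labels by cosine similarity of embedding vectors. The labels and embeddings are toy placeholders and the function names are hypothetical; this is not the BabyRobot pipeline itself, only an assumed, simplified stand-in.

```python
# Minimal sketch: match a recognised spoken guess to a candidate object label
# via cosine similarity of word embeddings. Embeddings and labels are toy
# placeholders, not the project's actual models or data.
import math

# Hypothetical 3-dimensional "embeddings" for a few candidate objects.
EMBEDDINGS = {
    "ball":  [0.9, 0.1, 0.2],
    "cup":   [0.1, 0.8, 0.3],
    "teddy": [0.2, 0.3, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def best_match(guess_vector, candidates):
    """Return the candidate label whose embedding is closest to the guess."""
    return max(candidates, key=lambda label: cosine(guess_vector, candidates[label]))

if __name__ == "__main__":
    # Pretend the distant speech recogniser produced a guess whose embedding
    # happens to lie closest to "ball".
    recognised_guess = [0.85, 0.15, 0.25]
    print(best_match(recognised_guess, EMBEDDINGS))  # -> "ball"
```

In a real system the toy vectors would be replaced by embeddings from a trained language model and the recogniser output would come from the distant speech recognition module; the matching logic, however, follows the same pattern.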
Joint Attention Concept Demo Video
This video showcases the concept of a so-called “Joint Attention” game. In this child-robot interaction game, a robot agent seeks the child's help using motor (i.e. non-verbal) actions, in particular by performing a range of pointing gestures that involve arm and gaze movements.
The aim of this scenario is to assess how joint attention between child and robot can be achieved and to what extent a robot can engage a child in a simple collaborative activity such as an object handover. By evaluating how a robot can motivate a child to offer such help, different aspects of the child-robot social interaction framework can be analyzed (attention detection and manipulation, engagement estimation, intention understanding, and social coordination), and potential effects on specific social responses of the child can be studied.