UTAIL

Year: 2010-
Members:
Anna Gruebler
Vincent Berenz
Masakazu Hirokawa
Kenji Suzuki
Partners:
NASA JPL (USA)
Tags:
- Cognitive Robotics
- Cybernics
- Augmented Human

 
Coaching Robot Behavior
Motor guidance using continuous physiological affective feedback

 

In this work we present a new approach to human-robot interaction in which a robot receives physiological affective feedback on its actions from a human trainer and learns from it. We capture the trainer's facial expressions with a wearable device that records distal electromyographic (EMG) signals and applies signal processing and pattern recognition in real time. We show how a robot can be coached to perform a certain action when confronted with an object by using continuous physiological affective feedback from the human trainer. We also show that the robot quickly learns the appropriate actions for different situations, in a manner modeled on the way children learn from their parents' encouragement or reproach. This approach to coaching a robot with affective feedback has the advantage of working under varied lighting conditions and camera angles, and it does not increase the trainer's cognitive load. Our method has applications in social robotics because it shows that humans and robots can interact through continuous non-verbal social cues, which are characteristic of human-human interaction.
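To illustrate the idea of turning wearable EMG signals into a continuous affective feedback value, here is a minimal sketch. It is not the actual pipeline used in this work: the two-channel layout (a smile-related site and a frown-related site), the RMS feature, and the linear valence mapping are all illustrative assumptions.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one EMG sample window."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def affective_valence(smile_win, frown_win, gain=1.0):
    """Map two EMG channel windows to a continuous valence in [-1, 1].

    Hypothetical mapping for illustration: activity on the
    smile-related channel pushes valence positive, activity on the
    frown-related channel pushes it negative.
    """
    v = gain * (rms(smile_win) - rms(frown_win))
    return max(-1.0, min(1.0, v))
```

In a real system the pattern-recognition stage would be trained per wearer; this sketch only shows how a continuous, signed feedback signal could be derived from raw muscle activity windows.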

As robots become progressively more integrated into human environments, the ability to behave correctly in varied and complicated situations and to interact naturally with humans becomes necessary. Previous research has shown that robots that can learn to perform actions to human satisfaction from simple human feedback have a good chance of success in social situations. An appropriate model for such learning is the natural way children learn from their parents. Instead of being explicitly told how to perform a task, children explore their environment and try different actions. Initially children have no judgment about their actions; their parents then show them which actions are worth pursuing in certain situations. In the face of parental blame or reproach, children learn that a goal was not a good one to pursue. On the other hand, when faced with parental praise and encouragement, children are more inclined to perform the action again. This model of positive and negative feedback can be used to coach a robot.
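The praise/reproach model described above can be sketched as a simple preference update: the robot tries actions in a situation, and positive feedback strengthens an action's preference while negative feedback weakens it. This is a toy illustration, not the algorithm used in this work; the class name, learning rate, and exploration scheme are assumptions.

```python
import random
from collections import defaultdict

class CoachedPolicy:
    """Toy praise/reproach coaching: preferences per (situation, action)
    are nudged toward the valence of the trainer's feedback."""

    def __init__(self, actions, lr=0.3):
        self.actions = list(actions)
        self.lr = lr                       # how strongly feedback shifts preference
        self.value = defaultdict(float)    # (situation, action) -> preference

    def choose(self, situation, explore=0.1, rng=random):
        # Occasionally explore, like a child trying new actions.
        if rng.random() < explore:
            return rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[(situation, a)])

    def feedback(self, situation, action, valence):
        """valence in [-1, 1]: praise (>0) reinforces, reproach (<0) discourages."""
        key = (situation, action)
        self.value[key] += self.lr * (valence - self.value[key])
```

For example, repeatedly praising "kick" and reproaching "ignore" when the robot faces a ball makes the policy settle on kicking in that situation.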

In this work we present a new method for training a robot using positive and negative feedback obtained directly from the facial expressions of the human trainer, who interacts with the robot in a natural manner while wearing a device that captures EMG signals. Just as children learn the appropriate behavior for a given situation from their parents' affective cues, such as facial expressions, the robot can be coached through the trainer's facial expressions to achieve the appropriate pattern of behavior for a given situation.


 


This study was supported in part by the Global COE Program on "Cybernics: fusion of human, machine, and information systems."

     
Publications
  • Gruebler, A., Berenz, V., and Suzuki, K., "Emotionally Assisted Human-Robot Interaction Using a Wearable Device for Reading Facial Expressions," Advanced Robotics, 26(10):1143-1159, 2012.
  • Gruebler, A., Berenz, V., and Suzuki, K., "Coaching Robot Behavior Using Continuous Physiological Affective Feedback," Proc. of IEEE-RAS International Conference on Humanoid Robots, pp. 466-471, 2011.
   
     

  © 2005-2011 Artificial Intelligence Laboratory, University of Tsukuba, Japan