Abstract
In this paper we present a socially interactive multi-modal robotic head, ERWIN (Emotional Robot With Intelligent Networks), capable of emotion expression and of interaction via speech and vision. The model presented shows how a robot can learn to attend to the voice of a specific speaker, providing a relevant, emotionally expressive response based on previous interactions. We show three aspects of the system: first, the learning phase, which allows the robot to learn faces and voices through interaction; second, recognition of the learnt faces and voices; and third, the emotion-expression aspect of the system. We demonstrate this from the perspective of an adult and a child interacting and playing a small game, much like an infant-caregiver situation. We also discuss the importance of speaker recognition for human-robot interaction and emotion, showing how the interaction process between a participant and ERWIN can lead the robot to preferentially attend to that person.
| Original language | English |
|---|---|
| Title of host publication | 2009 IEEE Symposium on Artificial Life, ALIFE 2009 - Proceedings |
| Publisher | IEEE |
| Pages | 77-84 |
| Number of pages | 8 |
| ISBN (Print) | 9781424427635 |
| Publication status | Published - 15 May 2009 |
| Externally published | Yes |
| Event | 2009 IEEE Symposium on Artificial Life - Nashville, TN, United States. Duration: 30 Mar 2009 → 2 Apr 2009. https://ieeexplore.ieee.org/xpl/conhome/4911409/proceeding |
Publication series
| Name | IEEE Symposium on Artificial Life - Proceedings |
|---|---|
| Publisher | IEEE |
| ISSN (Print) | 2160-6374 |
| ISSN (Electronic) | 2160-6382 |
Conference
| Conference | 2009 IEEE Symposium on Artificial Life |
|---|---|
| Abbreviated title | ALIFE 2009 |
| Country/Territory | United States |
| City | Nashville, TN |
| Period | 30/03/09 → 02/04/09 |
| Internet address | https://ieeexplore.ieee.org/xpl/conhome/4911409/proceeding |