University of Lincoln

Facial communicative signal interpretation in human-robot interaction by discriminative video subsequence selection

conference contribution
posted on 2024-02-09, 19:04 authored by Christian Lang, Heiko Wersing, Sven Wachsmuth, Marc Hanheide

Facial communicative signals (FCSs) such as head gestures, eye gaze, and facial expressions can provide useful feedback in conversations between people and also in human-robot interaction. This paper presents a pattern recognition approach for the interpretation of FCSs in terms of valence, based on the selection of discriminative subsequences in video data. These subsequences capture important temporal dynamics and are used as prototypical reference subsequences in a classification procedure based on dynamic time warping and feature extraction with active appearance models. Using this valence classification, the robot can discriminate positive from negative interaction situations and react accordingly. The approach is evaluated on a database containing videos of people interacting with a robot by teaching it the names of several objects. The verbal answer of the robot is expected to elicit the display of spontaneous FCSs by the human tutor, which were classified in this work. The achieved classification accuracies are comparable to the average human recognition performance and outperform our previous results on this task.
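To make the classification procedure described in the abstract concrete, below is a minimal Python sketch of dynamic time warping used for nearest-prototype valence classification. It assumes each video (sub)sequence is represented as an array of per-frame feature vectors (e.g. active appearance model parameters); the function names, the Euclidean per-frame distance, and the simple nearest-prototype decision rule are illustrative assumptions, not the paper's exact procedure or its discriminative subsequence selection method.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two feature sequences.

    Each sequence is an array of shape (T, D): T video frames, each
    described by a D-dimensional feature vector (assumed here to be
    active appearance model parameters).
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # step in seq_a
                                 cost[i, j - 1],      # step in seq_b
                                 cost[i - 1, j - 1])  # step in both
    return cost[n, m]

def classify_valence(query, prototypes):
    """Assign the valence label of the closest reference subsequence.

    `prototypes` is a list of (subsequence, label) pairs, where the
    subsequences stand in for the discriminative reference
    subsequences and labels are e.g. "positive" or "negative".
    """
    best_label, best_dist = None, np.inf
    for proto, label in prototypes:
        d = dtw_distance(query, proto)
        if d < best_dist:
            best_dist, best_label = d, label
    return best_label

if __name__ == "__main__":
    # Toy usage with random 10-dimensional per-frame features.
    rng = np.random.default_rng(0)
    prototypes = [(rng.normal(size=(20, 10)), "positive"),
                  (rng.normal(size=(25, 10)), "negative")]
    query = rng.normal(size=(18, 10))
    print(classify_valence(query, prototypes))
```

Because DTW aligns sequences of different lengths, the query video need not match the temporal extent of the reference subsequences; the warping path absorbs differences in the timing of the displayed facial signal.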

History

School affiliated with

  • School of Computer Science (Research Outputs)

Publisher

IEEE

ISSN

1050-4729

ISBN

9781467356411

Date Submitted

2013-03-07

Date Accepted

2013-05-01

Date of First Publication

2013-05-01

Date of Final Publication

2013-05-01

Event Name

IEEE International Conference on Robotics and Automation (ICRA)

Event Dates

May 6-10, 2013

Date Document First Uploaded

2013-03-13

ePrints ID

7880
