IEEE Robotics and Automation Letters, Volume 3, Issue 4, October 2018, Article number 8423498, Pages 4132-4139

Action Anticipation: Reading the Intentions of Humans and Robots (Article) (Open Access)

  • (a) Vislab, Institute for Systems and Robotics, Instituto Superior Tecnico, Universidade de Lisboa, Lisbon, 1649-004, Portugal
  • (b) Faculty of Technical Sciences, University of Novi Sad, Novi Sad, 21000, Serbia
  • (c) Department of Psychology, Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, EH8 9JZ, United Kingdom
  • (d) Learning Algorithms and Systems Laboratory, School of Engineering, Ecole Polytechnique Federale de Lausanne, Lausanne, 1015, Switzerland

Abstract

Humans have the fascinating capacity to process nonverbal visual cues in order to understand and anticipate the actions of other humans. This 'intention reading' ability is underpinned by shared motor repertoires and action models, which we use to interpret the intentions of others as if they were our own. We investigate how different cues contribute to the legibility of human actions during interpersonal interactions. Our first contribution is a publicly available dataset of human body motion and eye gaze recordings, acquired in an experimental scenario in which an actor interacts with three subjects. From these data, we conducted a human study to analyze the importance of different nonverbal cues for action perception. As our second contribution, we used the motion/gaze recordings to build a computational model describing the interaction between two persons. As a third contribution, we embedded this model in the controller of an iCub humanoid robot and conducted a second human study, in the same scenario with the robot as the actor, to validate the model's 'intention reading' capability. Our results show that it is possible to model the (nonverbal) signals exchanged by humans during interaction, and to incorporate such a mechanism in robotic systems with the twin goals of being able to 'read' human action intentions and of acting in a way that is legible to humans. © 2018 IEEE.
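The abstract describes fusing nonverbal cues (eye gaze and body motion) to anticipate action intentions before a movement completes. As a purely illustrative sketch — not the authors' actual model, whose form this record does not specify — the following Python snippet shows one way such cues could be blended; the target positions, the inverse-distance scoring, and the gaze weighting are all hypothetical assumptions:

```python
# Hypothetical sketch (NOT the paper's model): fusing gaze and body-motion
# cues to anticipate which of several targets an actor intends to reach.
# Targets, scoring rule, and weights below are illustrative assumptions.
import numpy as np

# Two candidate reach targets in a 2-D workspace (assumed coordinates).
TARGETS = {"left": np.array([-0.3, 0.4]), "right": np.array([0.3, 0.4])}

def cue_scores(point: np.ndarray) -> np.ndarray:
    """Score each target by inverse distance to a cue point (gaze or wrist)."""
    d = np.array([np.linalg.norm(point - t) for t in TARGETS.values()])
    s = 1.0 / (d + 1e-6)
    return s / s.sum()  # normalize to a probability-like score

def anticipate(gaze_xy: np.ndarray, wrist_xy: np.ndarray, w_gaze: float = 0.7):
    """Blend gaze and wrist evidence. Gaze is weighted higher here because it
    typically precedes the hand — consistent with the paper's premise that
    gaze is an early, informative nonverbal cue (the 0.7 weight is arbitrary)."""
    fused = w_gaze * cue_scores(gaze_xy) + (1.0 - w_gaze) * cue_scores(wrist_xy)
    labels = list(TARGETS)
    return labels[int(np.argmax(fused))], fused

# Example: gaze has already landed near the left target while the hand
# is still near the body midline, so the early prediction is 'left'.
intent, scores = anticipate(np.array([-0.28, 0.38]), np.array([0.0, 0.1]))
print(intent, scores)
```

In this toy setup the gaze cue dominates the early prediction, which mirrors the paper's motivation for using gaze as an anticipatory signal; the actual model in the article is built from the recorded human motion/gaze data rather than hand-set weights.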

Author keywords

humanoid robots; sensor fusion; social human-robot interaction

Indexed keywords

Engineering controlled terms: Anthropomorphic robots; Visual servoing
Engineering uncontrolled terms: Action anticipations; Computational model; Human body motion; Humanoid robot; Intention readings; Robotic systems; Sensor fusion; Social human-robot interactions
Engineering main heading: Human robot interaction

Funding details

Funding sponsor: Horizon 2020 Framework Programme (H2020)
Funding number: 752611
  • ISSN: 2377-3766
  • Source Type: Journal
  • Original language: English
  • DOI: 10.1109/LRA.2018.2861569
  • Document Type: Article
  • Publisher: Institute of Electrical and Electronics Engineers Inc.

  Corresponding author: Duarte, N.F.; Vislab, Institute for Systems and Robotics, Instituto Superior Tecnico, Universidade de Lisboa, Lisbon, Portugal

Cited by 61 documents

Plizzari, C., Goletto, G., Furnari, A.
An Outlook into the Future of Egocentric Vision
(2024) International Journal of Computer Vision
Belardinelli, A.
Gaze-Based Intention Estimation: Principles, Methodologies, and Applications in HRI
(2024) ACM Transactions on Human-Robot Interaction
Baptista, J., Castro, A., Gomes, M.
Human–Robot Collaborative Manufacturing Cell with Learning-Based Interaction Abilities
(2024) Robotics
{"topic":{"name":"Human Robot Interaction; Intelligent Robots; Motion Planning","id":66421,"uri":"Topic/66421","prominencePercentile":86.36112,"prominencePercentileString":"86.361","overallScholarlyOutput":0},"dig":"a6b8c6a77085152267290d42db9126778328af1d7a5255de89123e00bf25821d"}

SciVal Topic Prominence

Topic:
Prominence percentile: