A Multimodal Path Planning Approach to Human Robot Interaction Based on Integrating Action Modeling

Pagello E.
2020

Abstract

To complete a task consisting of a series of actions that involve human-robot interaction, it is necessary to plan a motion that considers each action both individually and in relation to the action that follows. We focus on the specific action of “approaching a group of people” in order to obtain accurate human data, which in turn makes tasks involving interaction with multiple people proceed more smoothly. The movement depends on the characteristics of the sensors that are important for the task and on the placement of people at and around the destination. Because of the variety of tasks and the placement of people, destinations and paths are difficult to pre-compute. This paper therefore presents a navigation system that obtains accurate human data based on sensor characteristics, task content, and real-time sensor data for processes involving human-robot interaction (HRI); the method does not navigate toward a previously determined static point. Our goal was achieved through multimodal path planning based on integrated action modeling that considers both voice and image sensing of the interacting people as well as obstacle avoidance. We experimentally verified our method using a robot in a coffee-shop environment.
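
The abstract describes selecting the approach destination from sensor characteristics and real-time data rather than driving to a fixed, pre-computed point. As a rough illustration of that idea only (the paper's actual method is not available from this record), the following Python sketch scores hypothetical candidate approach points around a group by assumed image and voice sensing quality and by obstacle clearance; every function, weight, and range here is an invented assumption, not the authors' model.

```python
# Hypothetical sketch (not the paper's implementation): pick an approach point
# toward a group of people by trading off assumed sensing quality (image and
# voice) against obstacle clearance, instead of using a fixed goal.
import math
from dataclasses import dataclass


@dataclass
class Candidate:
    x: float
    y: float


def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def image_score(p, group_center, ideal_range=1.5):
    """Assumed camera model: quality peaks near an ideal viewing distance."""
    return math.exp(-abs(dist((p.x, p.y), group_center) - ideal_range))


def voice_score(p, group_center, max_range=3.0):
    """Assumed microphone model: quality decays linearly with distance."""
    d = dist((p.x, p.y), group_center)
    return max(0.0, 1.0 - d / max_range)


def clearance_score(p, obstacles, safe_radius=0.6):
    """Penalize candidates closer than a safety radius to any obstacle."""
    d_min = min((dist((p.x, p.y), o) for o in obstacles), default=float("inf"))
    return 1.0 if d_min >= safe_radius else d_min / safe_radius


def select_destination(candidates, group_center, obstacles,
                       w_img=1.0, w_voice=1.0, w_clear=2.0):
    """Return the candidate that maximizes the weighted multimodal score."""
    def score(p):
        return (w_img * image_score(p, group_center)
                + w_voice * voice_score(p, group_center)
                + w_clear * clearance_score(p, obstacles))
    return max(candidates, key=score)


if __name__ == "__main__":
    group = (4.0, 4.0)
    obstacles = [(3.0, 3.0), (4.5, 2.5)]
    # Candidate approach points on a ring around the group.
    candidates = [Candidate(4.0 + 2.0 * math.cos(t), 4.0 + 2.0 * math.sin(t))
                  for t in (k * math.pi / 8 for k in range(16))]
    goal = select_destination(candidates, group, obstacles)
    print(f"approach point: ({goal.x:.2f}, {goal.y:.2f})")
```

In this toy formulation the destination emerges from the weighted score at run time, so changing the sensor models, the task weights, or the positions of people and obstacles changes where the robot ends up, which mirrors the abstract's point that destinations cannot be pre-calculated.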

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3361565
Citations
  • Scopus: 5
  • Web of Science (ISI): 2