Decision making for human-robot collaboration

The field of Human-Robot Collaboration has seen considerable progress in recent years. Although genuinely collaborative platforms are far from being deployed in real-world scenarios, advances in control and perception algorithms have progressively popularized robots in manufacturing settings, where they work side by side with human peers to achieve shared tasks. The work presented here aims at developing systems that are proactive in their collaboration and autonomously take care of some of the chores that make up most collaborative tasks.

This work explains how, starting from high-level models of a joint task, to derive partially observable Markov decision processes (POMDPs) that model the low-level decisions of a collaborative robot.

Figure: a POMDP is built from an abstract task model.
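To make the idea concrete, below is a minimal sketch, in Python, of how a POMDP skeleton could be derived from an abstract hierarchical task model. It is not the implementation from the papers: the names (TaskNode, pomdp_from_task) and the particular choice of states, actions, and observations are illustrative assumptions.

```python
# Hypothetical sketch: deriving a POMDP skeleton <S, A, O> from a hierarchical
# task model. Transition and observation probabilities, which the real system
# would fill in from the task semantics, are omitted here.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class TaskNode:
    """A node of the hierarchical task model: a leaf subtask or a
    combination (sequence / parallel / alternative) of child nodes."""
    name: str
    combination: str = "leaf"
    children: List["TaskNode"] = field(default_factory=list)


def leaf_subtasks(node: TaskNode) -> List[str]:
    """Collect the leaf subtasks, the units the robot reasons over."""
    if node.combination == "leaf":
        return [node.name]
    return [leaf for child in node.children for leaf in leaf_subtasks(child)]


def _all_subsets(items: List[str]) -> List[Tuple[str, ...]]:
    subsets: List[Tuple[str, ...]] = [()]
    for item in items:
        subsets += [s + (item,) for s in subsets]
    return subsets


def pomdp_from_task(root: TaskNode) -> Dict[str, list]:
    """Derive POMDP states, actions, and observations from the task model.

    - States track which subtasks are already done (hidden, since the robot
      only partially observes the human's progress).
    - Actions are 'do(<subtask>)' plus waiting and communication actions.
    - Observations are noisy cues about subtask completion.
    """
    subtasks = leaf_subtasks(root)
    return {
        "states": _all_subsets(subtasks),
        "actions": [f"do({s})" for s in subtasks] + ["wait", "ask-human"],
        "observations": [f"saw-done({s})" for s in subtasks] + ["none"],
    }


# Example: a two-step assembly task.
task = TaskNode("assemble-stool", "sequence",
                [TaskNode("attach-leg"), TaskNode("fasten-screw")])
print(pomdp_from_task(task)["actions"])
```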

Role assignment

The first set of experiments demonstrates how such a controller provides basic reasoning capabilities for role assignment and task allocation. In particular, it interfaces with the human partner at the level of abstraction they are most comfortable with. The system is readily usable by non-expert users and programmable with high-level commands through an intuitive interface. The results from the paper show an overall improvement in completion time, as well as a reduced cognitive load for the human partner.
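As an illustration of the kind of task allocation involved, the following sketch splits the subtasks of a shared plan between robot and human according to a simple capability table. The capability verbs and subtask names are assumptions made for the example, not the allocation mechanism of the actual controller.

```python
# Hypothetical capability-based allocation: the robot claims the subtasks
# whose action verb it can perform and leaves the rest to the human.

ROBOT_CAPABILITIES = {"hold", "bring", "give"}   # verbs the robot can perform


def allocate(subtasks):
    """Split subtasks between robot and human based on the action verb."""
    robot_plan, human_plan = [], []
    for subtask in subtasks:
        verb = subtask.split()[0]
        (robot_plan if verb in ROBOT_CAPABILITIES else human_plan).append(subtask)
    return robot_plan, human_plan


robot_plan, human_plan = allocate(
    ["bring screwdriver", "hold leg", "screw leg", "snap top"])
print("robot:", robot_plan)   # -> ['bring screwdriver', 'hold leg']
print("human:", human_plan)   # -> ['screw leg', 'snap top']
```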

Supportive behaviors with user preferences

Another set of experiments (see the paper) presents a collaborative system capable of assisting the human partner with a variety of supportive behaviors, in spite of its limited perceptual and manipulation capabilities and its incomplete model of the task. In these experiments, the framework leverages information from a high-level, hierarchical model of the task. The model, which is shared between the human and the robot, enables transparent synchronization between the peers and mutual understanding of each other's plan. The online solver computes a robot policy that is robust to unexpected observations, such as perception inaccuracies and failures in object manipulation, and that discovers hidden user preferences. The experiments demonstrate that the system robustly provides support to the human in a furniture construction task.
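The discovery of hidden preferences can be illustrated with a small Bayesian belief update over a single binary preference, namely whether the user wants the robot to hold parts. The preference, the observation model, and its probabilities are assumptions made for the sake of the example; the actual system reasons over the full task model with an online POMDP solver.

```python
# Hypothetical belief update over a hidden user preference, driven by
# whether the user accepted the robot's supportive action.

def update_belief(belief_hold: float, accepted: bool,
                  p_accept_if_hold: float = 0.9,
                  p_accept_if_not: float = 0.2) -> float:
    """Bayes update of P(user prefers the robot to hold the part)."""
    if accepted:
        likelihood_hold, likelihood_not = p_accept_if_hold, p_accept_if_not
    else:
        likelihood_hold, likelihood_not = 1 - p_accept_if_hold, 1 - p_accept_if_not
    numerator = likelihood_hold * belief_hold
    return numerator / (numerator + likelihood_not * (1 - belief_hold))


belief = 0.5                                   # uninformed prior
for accepted in [True, True, False, True]:     # user reactions over episodes
    belief = update_belief(belief, accepted)
    print(f"P(prefers holding) = {belief:.2f}")
```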