Publication:
Reward learning from very few demonstrations

Publication Date

2021

Language

English

Type

Journal Article

Journal Title

IEEE Transactions on Robotics

Abstract

This article introduces a novel skill learning framework that learns rewards from very few demonstrations and uses them in policy search (PS) to improve the skill. The demonstrations are used to learn a parameterized policy to execute the skill and a goal model, in the form of a hidden Markov model (HMM), to monitor executions. The rewards are learned from the HMM structure and its monitoring capability. The HMM is converted to a finite-horizon Markov reward process (MRP), and a Monte Carlo approach is used to calculate its state values. The HMM and the values are then merged into a partially observable MRP to obtain execution returns for improving the policy with PS. In addition to reward learning, a black-box PS method with an adaptive exploration strategy is adopted. The resulting framework is evaluated with five PS approaches and two skills in simulation. The results show that the learned dense rewards lead to better performance than sparse monitoring signals, and that adaptive exploration leads to faster convergence with higher success rates and lower variance. The efficacy of the framework is validated in a real-robot setting by improving three skills from complete failure to complete success using the learned rewards, where sparse rewards failed.
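
Illustration (not part of the published record): the abstract describes converting the learned goal HMM into a finite-horizon Markov reward process (MRP) and estimating its values with a Monte Carlo approach. The Python sketch below illustrates only that value-estimation step on a toy chain; the three-state transition matrix, per-state rewards, horizon, and rollout count are illustrative assumptions, not the paper's learned models.

```python
import numpy as np

# Minimal sketch of Monte Carlo value estimation for a finite-horizon MRP,
# as outlined in the abstract. All quantities below are toy assumptions:
# a real goal HMM would supply the states, transitions, and derived rewards.

rng = np.random.default_rng(0)

P = np.array([[0.8, 0.2, 0.0],   # assumed hidden-state transition matrix
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])  # state 2 plays the role of an absorbing goal
r = np.array([0.0, 0.0, 1.0])    # assumed per-state rewards (the paper derives
                                 # dense rewards from the HMM structure)
H = 20                           # finite horizon
N = 5000                         # Monte Carlo rollouts per start state

def mc_values(P, r, H, N, rng):
    """Estimate V(s) = E[sum of rewards over an H-step trajectory from s]."""
    n = len(r)
    V = np.zeros(n)
    for s0 in range(n):
        total = 0.0
        for _ in range(N):
            s, G = s0, r[s0]
            for _ in range(H - 1):
                s = rng.choice(n, p=P[s])  # sample the next hidden state
                G += r[s]
            total += G
        V[s0] = total / N
    return V

print(mc_values(P, r, H, N, rng))
```

In the paper's framework, these state values are then combined with the HMM's monitoring of executions to score policy-search rollouts; the sketch stops at the value-estimation step.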

Description

Source:

IEEE Transactions on Robotics

Publisher:

IEEE-Inst Electrical Electronics Engineers Inc

Subject

Robotics
