Potential based reward shaping using learning to rank
- Publication Type: Conference Proceeding
- Citation: ACM/IEEE International Conference on Human-Robot Interaction, 2017, pp. 261-262
- Issue Date: 2017-03-06
Filename | Description | Size
---|---|---
Potential Based Reward Shaping Using Learning to Rank.pdf | Published version | 645.57 kB
This item is closed access and not available.
© 2017 Authors. This paper presents a novel method for computing a potential function from human input for potential-based reward shaping. The method defines a ranking over the state space, which is in turn used to define a potential function. Specifically, it elicits multiple, partial-to-full, rankings of a robot's states from a user in an HRI scenario. These rankings are used to train a ranking model with a learning-to-rank algorithm, and the model then produces a complete ranking of the states. From the ranked states, a potential function is computed via a mapping function. As a proof of concept, we compared the method against a baseline reinforcement learner in a simulated domain; the empirical results showed that the proposed method clearly outperformed the benchmark.
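The abstract describes a three-stage pipeline: learn state scores from partial human rankings, derive a complete ranking, and map ranks to a potential used in the standard shaping term F(s, s') = γΦ(s') − Φ(s). The sketch below illustrates that idea under assumptions not specified in the abstract: a tabular state space, a Bradley-Terry / RankNet-style pairwise ranker standing in for the paper's learning-to-rank algorithm, and a simple linear rank-to-potential mapping. The function names (`fit_ranking_scores`, `potential_from_ranking`, `shaped_reward`) are illustrative, not the authors' API.

```python
# Minimal sketch (not the authors' implementation) of potential-based
# reward shaping where the potential is derived from a learned ranking
# of states. The shaping term F(s, s') = gamma * Phi(s') - Phi(s)
# preserves the optimal policy (Ng et al., 1999).

import numpy as np

GAMMA = 0.95  # discount factor, assumed

def fit_ranking_scores(n_states, pairwise_prefs, lr=0.1, epochs=200):
    """Learn one score per state from (better, worse) pairs using a
    Bradley-Terry / RankNet-style pairwise logistic objective."""
    scores = np.zeros(n_states)
    for _ in range(epochs):
        for better, worse in pairwise_prefs:
            # P(better > worse) = sigmoid(scores[better] - scores[worse])
            p = 1.0 / (1.0 + np.exp(scores[worse] - scores[better]))
            grad = 1.0 - p  # gradient of the pair's log-likelihood
            scores[better] += lr * grad
            scores[worse] -= lr * grad
    return scores

def potential_from_ranking(scores):
    """Map the complete ranking induced by the scores to a potential in
    [0, 1]: the top-ranked state receives the highest potential."""
    ranks = np.argsort(np.argsort(-scores))  # rank 0 = best state
    return 1.0 - ranks / max(len(scores) - 1, 1)

def shaped_reward(r, phi, s, s_next):
    """Environment reward plus the potential-based shaping term."""
    return r + GAMMA * phi[s_next] - phi[s]

# Example: 5 states; a user provides partial rankings such as
# "state 4 is better than 2", "2 better than 0", "3 better than 1".
prefs = [(4, 2), (2, 0), (3, 1)]
scores = fit_ranking_scores(5, prefs)
phi = potential_from_ranking(scores)
print(shaped_reward(0.0, phi, s=0, s_next=2))  # positive: moving up the ranking
```

Because the shaping term is a difference of potentials, any rank-to-potential mapping of this form leaves the optimal policy unchanged while steering exploration toward states the user ranked highly.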