Potential based reward shaping using learning to rank

Publication Type:
Conference Proceeding
Citation:
ACM/IEEE International Conference on Human-Robot Interaction, 2017, pp. 261-262
Issue Date:
2017-03-06
Filename: Potential Based Reward Shaping Using Learning to Rank.pdf
Description: Published version
Size: 645.57 kB
Format: Adobe PDF
Abstract:
© 2017 Authors. This paper presents a novel method for computing a potential function from human input for potential-based reward shaping. The method defines a ranking over the state space, which is then used to define a potential function. Specifically, it elicits multiple partial-to-full rankings of a robot's states from a user in a human-robot interaction (HRI) scenario. These rankings are used to train a ranking model with a learning-to-rank algorithm, and the learned model induces a complete ranking of the states. From the ranked states, a potential function is computed using a mapping function. As a proof of concept, the method was compared with a baseline reinforcement learner in a simulated domain. The empirical results showed that the proposed method clearly outperformed the baseline.
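As a rough illustration of the pipeline the abstract describes, the sketch below (not the authors' code; all function names, the linear pairwise ranker, and the rank-to-potential mapping are illustrative assumptions) learns a scoring function from pairwise state preferences, converts the induced complete ranking into potentials, and computes the standard potential-based shaping term F(s, s') = γΦ(s') − Φ(s).

```python
# A minimal sketch, assuming linear state features and a RankNet-style
# pairwise logistic loss; the actual paper's ranking algorithm and
# potential mapping may differ.
import numpy as np

def fit_pairwise_ranker(pairs, features, lr=0.1, epochs=200):
    """Learn a linear scoring function w so preferred states score higher.

    pairs:    list of (i, j) index pairs meaning state i is ranked above j,
              as elicited from a user's partial rankings.
    features: (n_states, d) array of state features.
    """
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        for i, j in pairs:
            diff = features[i] - features[j]
            # Gradient of the pairwise logistic loss on the score difference.
            p = 1.0 / (1.0 + np.exp(-w @ diff))
            w += lr * (1.0 - p) * diff
    return w

def potential_from_ranking(scores, phi_max=1.0):
    """Map the complete ranking induced by the scores to potentials in
    [0, phi_max]: the lowest-ranked state gets 0, the highest gets phi_max."""
    order = np.argsort(scores)              # indices from worst to best
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))   # rank 0 = worst state
    return phi_max * ranks / max(len(scores) - 1, 1)

def shaping_reward(phi, s, s_next, gamma=0.99):
    """Potential-based shaping term F(s, s') = gamma * Phi(s') - Phi(s)."""
    return gamma * phi[s_next] - phi[s]

# Toy usage: 4 states with one-hot features; the user ranks 3 > 2 > 1 > 0.
features = np.eye(4)
pairs = [(3, 2), (2, 1), (1, 0)]
w = fit_pairwise_ranker(pairs, features)
phi = potential_from_ranking(features @ w)
print(phi)                         # monotone potentials following the ranking
print(shaping_reward(phi, 0, 1))   # positive shaping for moving up the ranking
```

Because the shaping term is potential-based, it preserves the optimal policy of the underlying task while biasing exploration toward states the user ranked highly.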