Generalization in Text-based Games via Hierarchical Reinforcement Learning

Publication Type: Conference Proceeding
Citation: Findings of the Association for Computational Linguistics: EMNLP 2021, 2021, pp. 1343-1353
Issue Date: 2021-01-01
Abstract: Deep reinforcement learning provides a promising approach to text-based games for studying natural language communication between humans and artificial agents. However, generalization remains a major challenge, as agents depend critically on the complexity and variety of the training tasks. In this paper, we address this problem by introducing a hierarchical framework built upon a knowledge graph-based RL agent. At the high level, a meta-policy decomposes the whole game into a set of subtasks specified by textual goals and selects one of them based on the knowledge graph (KG). A low-level sub-policy is then executed to perform goal-conditioned reinforcement learning. We carry out experiments on games with various difficulty levels and show that the proposed method enjoys favorable generalizability.
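The abstract describes a two-level decomposition: a meta-policy that picks a textual subgoal using the knowledge graph, and a goal-conditioned sub-policy that acts toward that subgoal. The sketch below is only an illustration of that structure, not the authors' implementation; the word-overlap scoring functions, class names, and toy environment are all hypothetical stand-ins for the learned components in the paper.

```python
import random


class MetaPolicy:
    """High-level policy: decomposes the game into textual subgoals and
    selects one using the current knowledge-graph state.
    (Hypothetical scorer; the paper learns this component.)"""

    def select_goal(self, kg_triples, candidate_goals):
        # Score each textual goal by how many of its words already appear
        # in the knowledge graph -- a stand-in for a learned scorer.
        kg_words = {w for triple in kg_triples for w in triple}

        def score(goal):
            return sum(w in kg_words for w in goal.split())

        return max(candidate_goals, key=score)


class GoalConditionedPolicy:
    """Low-level policy: chooses an admissible action conditioned on the
    selected textual goal. (Hypothetical word-overlap heuristic.)"""

    def select_action(self, observation, goal, admissible_actions):
        goal_words = set(goal.split())
        return max(
            admissible_actions,
            key=lambda a: len(goal_words & set(a.split())),
        )


# Toy interaction step (environment, goals, and actions are illustrative only).
kg = [("kitchen", "has", "fridge"), ("fridge", "contains", "apple")]
goals = ["take apple from fridge", "open door to garden"]
actions = ["open fridge", "take apple", "go west"]

meta, worker = MetaPolicy(), GoalConditionedPolicy()
goal = meta.select_goal(kg, goals)
action = worker.select_action("You are in the kitchen.", goal, actions)
print(goal, "->", action)
```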