Multi-Task Learning for Conversational Question Answering over a Large-Scale Knowledge Base
- Publication Type: Journal Article
- Citation: 2019
- Issue Date: 2019-10-11
Open Access
This item is open access.
We consider the problem of conversational question answering over a
large-scale knowledge base. To handle the huge entity vocabulary of a
large-scale knowledge base, recent neural semantic parsing based approaches
usually decompose the task into several subtasks and then solve them
sequentially, which leads to the following issues: 1) errors in earlier
subtasks are propagated and negatively affect downstream ones; and 2) each
subtask cannot naturally share supervision signals with the others. To tackle
these issues, we propose an innovative multi-task learning framework in which
a pointer-equipped semantic parsing model is designed to resolve coreference
in conversations and naturally empowers joint learning with a novel
type-aware entity detection model. The proposed framework thus enables shared
supervision and alleviates the effect of error propagation. Experiments on a
large-scale conversational question answering dataset containing 1.6M
question-answer pairs over 12.8M entities show that the proposed framework
improves the overall F1 score from 67% to 79% compared with previous
state-of-the-art work.
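The core idea of sharing supervision across subtasks can be sketched as a joint training objective in which the semantic parsing loss and the entity detection loss are combined, so gradients from both subtasks flow into the shared parameters. The function name, signature, and weighting scheme below are illustrative assumptions, not the paper's exact formulation.

```python
def joint_loss(parsing_loss: float, detection_loss: float,
               alpha: float = 0.5) -> float:
    """Hypothetical multi-task objective: a weighted sum of the
    semantic parsing loss and the entity detection loss.

    alpha balances the two subtasks; with alpha = 0.5 the shared
    objective is the average of both losses. In a real system both
    losses would come from a shared encoder, so minimizing this sum
    lets each subtask supervise the other's representations.
    """
    return alpha * parsing_loss + (1.0 - alpha) * detection_loss


# With equal weighting, a parsing loss of 2.0 and a detection loss
# of 4.0 yield a joint loss of 3.0.
loss = joint_loss(2.0, 4.0)  # -> 3.0
```

In contrast, a sequential pipeline would train the entity detection model first and freeze its outputs before training the parser, which is exactly where the error propagation described above arises.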