Towards Robust and Interpretable Logical Reasoning in Machine Reading Comprehension

Publication Type:
Thesis
Issue Date:
2023
Abstract:
Natural Language Processing (NLP) has made significant strides in recent years with large pretrained language models. However, Natural Language Understanding (NLU) demands deeper comprehension and reasoning capabilities than traditional NLP methods can provide. This thesis focuses on four aspects of augmenting machine reading comprehension models: knowledge graph completion, procedural text understanding, temporal order extraction, and autodebiasing. Together, these components build, step by step, towards robust and interpretable logical reasoning in machine reading comprehension.