Improving Adversarial Text Generation with n-Gram Matching

Publisher:
https://aclanthology.org/volumes/2021.paclic-1/
Publication Type:
Conference Proceeding
Citation:
Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation, PACLIC 2021, 2021, pp. 647-655
Issue Date:
2021-01-01
File: Piccardi-2019-ImprovingAdversarialTextGenerationwithn-GramMatching.pdf (Accepted version, 489.84 kB, Adobe PDF)
In the past few years, generative adversarial networks (GANs) have become increasingly important in natural language generation. However, their performance still leaves a significant margin for improvement. For this reason, in this paper we propose a new adversarial training method that tackles some of the limitations of GAN training in unconditioned generation tasks. In addition to the commonly used reward signal from the discriminator, our approach leverages another reward signal based on the occurrence of n-gram matches between the generated sentences and the training corpus. Thanks to the inherent correlation of this reward signal with commonly used evaluation metrics such as BLEU, our approach implicitly bridges the gap between the objectives used during training and inference. To circumvent the non-differentiability issues associated with a discrete objective, our approach leverages the reinforcement learning policy gradient theorem. Our experimental results show that the model trained with mixed rewards from both n-gram matching and the discriminator outperforms other GAN-based models in terms of BLEU score and quality-diversity trade-off at parity of computational budget.
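A minimal sketch of the mixed-reward idea described in the abstract: an n-gram matching reward (a precision-like signal correlated with BLEU) combined with a discriminator score into a single scalar reward that a REINFORCE-style policy gradient update could use. All function names, the mixing weight `lam`, and the toy data are illustrative assumptions, not taken from the paper.

```python
from typing import List, Set, Tuple

def extract_ngrams(tokens: List[str], n: int) -> Set[Tuple[str, ...]]:
    """Return the set of n-grams occurring in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_match_reward(generated: List[str],
                       corpus_ngrams: Set[Tuple[str, ...]],
                       n: int = 2) -> float:
    """Fraction of the generated sentence's n-grams that appear in the
    training corpus: a precision-like signal, akin to one BLEU component."""
    grams = extract_ngrams(generated, n)
    if not grams:
        return 0.0
    return sum(g in corpus_ngrams for g in grams) / len(grams)

def mixed_reward(generated: List[str],
                 corpus_ngrams: Set[Tuple[str, ...]],
                 disc_score: float,
                 lam: float = 0.5,
                 n: int = 2) -> float:
    """Convex combination of the n-gram matching reward and the
    discriminator's score for the generated sentence."""
    return lam * ngram_match_reward(generated, corpus_ngrams, n) \
        + (1.0 - lam) * disc_score

# Toy training corpus: collect its bigrams once, up front.
corpus = [["the", "cat", "sat", "on", "the", "mat"]]
corpus_bigrams: Set[Tuple[str, ...]] = set()
for sent in corpus:
    corpus_bigrams |= extract_ngrams(sent, 2)

# Two of the three bigrams in this generated sentence match the corpus,
# so the n-gram reward is 2/3; mixed with a discriminator score of 0.8.
gen = ["the", "cat", "sat", "down"]
r = mixed_reward(gen, corpus_bigrams, disc_score=0.8, lam=0.5)
```

In a REINFORCE update, this scalar `r` would weight the negative log-probability of the sampled sentence, sidestepping the non-differentiability of the discrete n-gram objective.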