Gloss-Free End-to-End Sign Language Translation

Publication Type:
Conference Proceeding
Citation:
Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2023, 1, pp. 12904-12916
Issue Date:
2023-01-01
Filename:
2305.12876.pdf
Description:
Accepted version
Format:
Adobe PDF
Size:
3.92 MB
Abstract:
In this paper, we tackle the problem of sign language translation (SLT) without gloss annotations. Although intermediate representations such as glosses have proven effective, gloss annotations are hard to acquire, especially in large quantities. This limits the domain coverage of translation datasets and thus hampers real-world applications. To mitigate this problem, we design the Gloss-Free End-to-end sign language translation framework (GloFE). Our method improves SLT performance in the gloss-free setting by exploiting the shared underlying semantics of signs and the corresponding spoken translation. Common concepts are extracted from the text and used as a weak form of intermediate representation. The global embeddings of these concepts serve as queries for cross-attention to locate the corresponding information within the learned visual features. In a contrastive manner, we encourage the similarity of query results across samples containing such concepts and decrease it for those that do not. We obtained state-of-the-art results on large-scale datasets, including OpenASL and How2Sign.
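
As a rough illustrative sketch (not the authors' released implementation), the PyTorch snippet below shows the general pattern the abstract describes: learnable concept embeddings act as cross-attention queries over visual features, and an InfoNCE-style term contrasts the query results of samples that share a concept against those that do not. The class and function names, embedding sizes, and the exact loss form are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptQueryCrossAttention(nn.Module):
    # Hypothetical module: concept embeddings query learned visual features
    # via cross-attention, as sketched in the abstract.
    def __init__(self, num_concepts=1000, dim=512, num_heads=8):
        super().__init__()
        self.concept_emb = nn.Embedding(num_concepts, dim)   # one vector per extracted concept
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, visual_feats, concept_ids):
        # visual_feats: (B, T, dim) frame-level features from the sign video encoder
        # concept_ids:  (B, K) indices of concepts extracted from the spoken translation
        queries = self.concept_emb(concept_ids)               # (B, K, dim)
        attended, _ = self.cross_attn(queries, visual_feats, visual_feats)
        return attended                                       # (B, K, dim) per-concept query results

def concept_contrastive_loss(anchor, positive, negatives, temperature=0.07):
    # InfoNCE-style loss (assumed form): pull together the query results of two
    # samples sharing a concept, push apart those of samples lacking it.
    # anchor, positive: (dim,); negatives: (N, dim)
    anchor, positive = F.normalize(anchor, dim=-1), F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * positive).sum(-1, keepdim=True)       # (1,)
    neg_sim = negatives @ anchor                               # (N,)
    logits = torch.cat([pos_sim, neg_sim]) / temperature
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

# Toy usage: two clips of 64 frames, four extracted concepts each.
model = ConceptQueryCrossAttention()
vis = torch.randn(2, 64, 512)
concepts = torch.randint(0, 1000, (2, 4))
out = model(vis, concepts)                                     # (2, 4, 512)
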