Publication:
Cross-lingual visual pre-training for multimodal machine translation

Co-Authors

Çağlayan, O.
Kuyu, M.
Amaç, M. S.
Madhyastha, P.
Erdem, E.
Specia, L.

Publication Date

2021

Language

English

Embargo Status

NO

Abstract

Pre-trained language models have been shown to substantially improve performance on many natural language tasks. Although the early focus of such models was monolingual pre-training, recent advances have resulted in cross-lingual and visual pre-training methods. In this paper, we combine these two approaches to learn visually grounded cross-lingual representations. Specifically, we extend translation language modelling (Lample and Conneau, 2019) with masked region classification and perform pre-training on three-way parallel vision & language corpora. We show that, when fine-tuned for multimodal machine translation, these models obtain state-of-the-art performance. We also provide qualitative insights into the usefulness of the learned grounded representations.
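
The objective described in the abstract combines two masked-prediction losses over a shared encoder: translation language modelling (TLM) masks tokens in a concatenated source-target sentence pair, while masked region classification asks the model to recover the object labels of masked image regions. The PyTorch sketch below illustrates one plausible shape of this joint objective; the single-stream encoder, module names, dimensions, and masking/label conventions are illustrative assumptions, not the authors' released implementation (positional, language, and segment embeddings are omitted for brevity).

```python
import torch
import torch.nn as nn

class VisualTLMSketch(nn.Module):
    """Hypothetical single-stream encoder trained with TLM + masked region classification."""

    def __init__(self, vocab_size=30000, num_obj_classes=1600,
                 d_model=512, n_heads=8, n_layers=6, region_feat_dim=2048):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        # Project pre-extracted object-detector region features into the model space.
        self.region_proj = nn.Linear(region_feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.mlm_head = nn.Linear(d_model, vocab_size)       # recover masked tokens
        self.mrc_head = nn.Linear(d_model, num_obj_classes)  # classify masked regions
        self.ce = nn.CrossEntropyLoss(ignore_index=-100)     # -100 marks unmasked positions

    def forward(self, token_ids, region_feats, mlm_labels, mrc_labels):
        # token_ids:    (B, T) concatenated source+target tokens, some replaced by [MASK]
        # region_feats: (B, R, region_feat_dim), with masked regions zeroed out (assumption)
        # mlm_labels:   (B, T) gold token ids at masked positions, -100 elsewhere
        # mrc_labels:   (B, R) detector object classes at masked regions, -100 elsewhere
        h = self.encoder(torch.cat([self.tok_emb(token_ids),
                                    self.region_proj(region_feats)], dim=1))
        T = token_ids.size(1)
        tlm_loss = self.ce(self.mlm_head(h[:, :T]).transpose(1, 2), mlm_labels)
        mrc_loss = self.ce(self.mrc_head(h[:, T:]).transpose(1, 2), mrc_labels)
        return tlm_loss + mrc_loss  # joint objective over a three-way parallel batch
```

In this sketch, unmasked positions carry the label -100 so the cross-entropy loss ignores them, mirroring the common BERT-style masking convention; the two losses are simply summed, which is one reasonable choice rather than the paper's confirmed weighting.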

Publisher

Association for Computational Linguistics (ACL)

Subject

Visual languages

Source

EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference

DOI

10.18653/v1/2021.eacl-main.112
