Publication:
Cross-lingual visual pre-training for multimodal machine translation


Co-Authors

Caglayan, Ozan
Kuyu, Menekse
Amac, Mustafa Sercan
Madhyastha, Pranava
Erdem, Aykut
Specia, Lucia

Language

English

Abstract

Pre-trained language models have been shown to substantially improve performance on many natural language tasks. Although the early focus of such models was single-language pre-training, recent advances have resulted in cross-lingual and visual pre-training methods. In this paper, we combine these two approaches to learn visually-grounded cross-lingual representations. Specifically, we extend translation language modelling (Lample and Conneau, 2019) with masked region classification and perform pre-training with three-way parallel vision & language corpora. We show that these models obtain state-of-the-art performance when fine-tuned for multimodal machine translation. We also provide qualitative insights into the usefulness of the learned grounded representations.
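The two masking objectives named in the abstract can be illustrated with a minimal sketch: translation language modelling (TLM) masks tokens in a concatenated source-target sentence pair, while masked region classification (MRC) zeroes out visual region features and asks the model to predict their object class. All function names, the `MASK_ID` value, and the `-100` ignore-label convention below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

MASK_ID = 0  # hypothetical [MASK] token id


def mask_for_tlm(src_ids, tgt_ids, mask_prob=0.15, rng=None):
    """TLM masking sketch: concatenate source and target token ids and
    mask tokens in both, so the model can attend to the other language
    (and, in the multimodal setting, the image) to recover them."""
    if rng is None:
        rng = np.random.default_rng(0)
    tokens = np.concatenate([src_ids, tgt_ids])
    mask = rng.random(tokens.shape[0]) < mask_prob
    labels = np.where(mask, tokens, -100)      # -100 = ignored by the loss
    inputs = np.where(mask, MASK_ID, tokens)   # masked positions -> [MASK]
    return inputs, labels


def mask_regions(region_feats, region_classes, mask_prob=0.15, rng=None):
    """MRC masking sketch: zero out some detected-region feature vectors
    and keep their object-class ids as prediction targets."""
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(region_feats.shape[0]) < mask_prob
    feats = region_feats.copy()
    feats[mask] = 0.0                           # blank out masked regions
    labels = np.where(mask, region_classes, -100)
    return feats, labels
```

During pre-training, a cross-entropy loss over the masked token positions and a classification loss over the masked regions would be summed; the `-100` labels mark positions excluded from both losses.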

Source:

16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)

Publisher:

Association for Computational Linguistics (ACL)

Subject

Computer Science, Artificial Intelligence; Computer Science, Linguistics
