Research Data:
Cross-lingual Visual Pre-training for Multimodal Machine Translation

Authors

Ozan Caglayan
Menekse Kuyu
Mustafa Sercan Amac
Pranava Madhyastha
Erkut Erdem
Aykut Erdem
Lucia Specia

Abstract

Supplementary materials for the paper "Cross-lingual Visual Pre-training for Multimodal Machine Translation", accepted at the EACL 2021 conference. Further instructions on how to use these resources are available at https://github.com/ImperialNLP/VTLM

The supplements comprise:

- A tarball containing a custom train/valid/test split of the Conceptual Captions (CC) dataset. The included TSV files have an additional column containing automatic German translations of the original English captions. We only provide samples for which we could download the images and extract meaningful features, which amounts to ~3M out of the ~3.3M original CC samples.
- A tarball of the exact object detector checkpoint used for feature extraction.
- A tarball with pre-extracted Multi30k dataset features.
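As a minimal sketch of how the TSV splits might be consumed: the files are described only as TSVs with an extra column of German translations, so the exact column layout below (English caption second-to-last, German caption last) is an assumption to be checked against the real files, and `read_cc_tsv` is a hypothetical helper name.

```python
import csv


def read_cc_tsv(path):
    """Yield (english_caption, german_caption) pairs from one TSV split.

    Assumed layout: the last two tab-separated columns hold the original
    English caption and its automatic German translation. Adjust the
    indices after inspecting the actual files from the tarball.
    """
    with open(path, encoding="utf-8", newline="") as f:
        reader = csv.reader(f, delimiter="\t")
        for row in reader:
            en, de = row[-2], row[-1]  # hypothetical column positions
            yield en, de
```

Iterating lazily keeps memory flat even at the ~3M-sample scale of the train split.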

Publisher

Zenodo

Subject

multimodal machine translation, image captioning, machine translation

Rights

OPEN
