Publication:
Learning to follow verbal instructions with visual grounding

Alternative Title

Sözel komutların takibinin görsel temelli öğrenilmesi

Abstract

We present a visually grounded deep learning model for a virtual robot that can follow navigational instructions. The model processes raw visual input together with natural language instructions, with the aim of learning to follow novel instructions from instruction-perception examples. It is trained on data collected in a synthetic environment, and its architecture also allows it to operate on real visual data. We show that our results are on par with previously proposed methods.
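The general pattern the abstract describes, encoding a raw observation and an instruction separately and fusing them to pick a navigation action, can be sketched as follows. This is a minimal illustrative stand-in, not the authors' actual model: the linear/mean-pooling encoders, the dimensions, and all parameter names are hypothetical placeholders for the CNN and text encoders a real system would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(image, W):
    # Flatten raw pixels and project to a feature vector
    # (a stand-in for a convolutional visual encoder).
    return np.tanh(image.reshape(-1) @ W)

def encode_instruction(token_ids, embeddings):
    # Mean-pool word embeddings
    # (a stand-in for a recurrent instruction encoder).
    return embeddings[token_ids].mean(axis=0)

def choose_action(image, token_ids, params):
    v = encode_image(image, params["W_img"])
    t = encode_instruction(token_ids, params["emb"])
    fused = np.concatenate([v, t])    # joint visual-linguistic representation
    logits = fused @ params["W_act"]  # score each navigation action
    return int(np.argmax(logits))

# Hypothetical setup: an 8x8 grayscale observation, a 10-word
# vocabulary, and 4 discrete navigation actions.
params = {
    "W_img": rng.standard_normal((64, 16)),
    "emb":   rng.standard_normal((10, 16)),
    "W_act": rng.standard_normal((32, 4)),
}
obs = rng.standard_normal((8, 8))
instruction = [3, 1, 7]  # token ids for a short command, e.g. "go left door"
action = choose_action(obs, instruction, params)
print(action)  # an action index in {0, 1, 2, 3}
```

In a trained system the parameters would be fit on instruction-perception examples rather than sampled randomly; the sketch only shows how the two modalities meet in a single fused representation before action selection.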

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Subject

Civil engineering, Electrical electronics engineering, Telecommunication

Source

27th Signal Processing and Communications Applications Conference, SIU 2019

DOI

10.1109/SIU.2019.8806335
