Publication:
Learning to follow verbal instructions with visual grounding

Publication Date

2019

Language

Turkish

Type

Conference proceeding

Abstract

We present a visually grounded deep learning model for a virtual robot that can follow navigational instructions. Our model processes raw visual input together with natural-language text instructions. The aim is to develop a model that learns to follow novel instructions from instruction-perception examples. The proposed model is trained on data collected in a synthetic environment, and its architecture also allows it to work with real visual data. We show that our results are on par with previously proposed methods.
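The abstract gives no architectural details, so the following is a purely illustrative sketch of the general idea it describes: fusing an encoding of the visual observation with an encoding of the text instruction to score navigation actions. The vocabulary, dimensions, pooling scheme, and all function names are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only (NOT the paper's architecture): fuse a toy
# visual encoding with a toy instruction encoding and score actions.
import random

random.seed(0)

VOCAB = {"go": 0, "to": 1, "the": 2, "red": 3, "door": 4}  # assumed toy vocabulary
ACTIONS = ["forward", "left", "right", "stop"]
EMBED_DIM = 8

# Toy "learned" parameters: word embeddings and per-action weight rows.
word_emb = [[random.uniform(-1, 1) for _ in range(EMBED_DIM)] for _ in VOCAB]
action_w = [[random.uniform(-1, 1) for _ in range(2 * EMBED_DIM)] for _ in ACTIONS]

def encode_instruction(tokens):
    """Bag-of-words average of word embeddings (stand-in for a text encoder)."""
    vecs = [word_emb[VOCAB[t]] for t in tokens if t in VOCAB]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def encode_image(pixels):
    """Mean-pool a flat pixel list into EMBED_DIM buckets (stand-in for a CNN)."""
    chunk = max(1, len(pixels) // EMBED_DIM)
    feats = [sum(pixels[i:i + chunk]) / chunk for i in range(0, len(pixels), chunk)]
    return (feats + [0.0] * EMBED_DIM)[:EMBED_DIM]

def choose_action(pixels, instruction):
    """Concatenate the two encodings and pick the highest-scoring action."""
    fused = encode_image(pixels) + encode_instruction(instruction.split())
    scores = [sum(w * x for w, x in zip(row, fused)) for row in action_w]
    return ACTIONS[scores.index(max(scores))]

print(choose_action([0.1] * 64, "go to the red door"))
```

In a real system the hand-rolled encoders above would be replaced by learned networks (e.g. a convolutional encoder for the image and a recurrent encoder for the instruction), trained end to end on instruction-perception examples as the abstract describes.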

Description

Source:

27th Signal Processing and Communications Applications Conference, SIU 2019

Publisher:

Institute of Electrical and Electronics Engineers (IEEE)

Subject

Civil engineering, Electrical electronics engineering, Telecommunication
