Learning to follow verbal instructions with visual grounding
Alternative Title
Sözel komutların takibinin görsel temelli öğrenilmesi (Turkish: visually grounded learning of verbal instruction following)
Abstract
We present a visually grounded deep learning model for a virtual robot that can follow navigational instructions. Our model processes raw visual input together with natural-language text instructions. The aim is to develop a model that learns to follow novel instructions from instruction-perception examples. The proposed model is trained on data collected in a synthetic environment, and its architecture also allows it to work with real visual data. We show that our results are on par with previously proposed methods.
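The abstract does not specify the architecture, but the core idea of a visually grounded instruction follower can be illustrated with a toy sketch: encode the image and the instruction separately, fuse the two encodings, and score a discrete action set. Everything below (names, dimensions, the bag-of-words text encoder, the random weights) is an illustrative assumption, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
IMG_DIM, TXT_DIM, HID_DIM = 64, 32, 16
ACTIONS = ["forward", "left", "right", "stop"]

# Randomly initialised weights stand in for trained parameters.
W_img = rng.normal(size=(IMG_DIM, HID_DIM))
W_txt = rng.normal(size=(TXT_DIM, HID_DIM))
W_out = rng.normal(size=(HID_DIM, len(ACTIONS)))

def encode_instruction(tokens):
    """Toy bag-of-words text encoder: hash each token into a fixed-size vector."""
    vec = np.zeros(TXT_DIM)
    for tok in tokens:
        vec[hash(tok) % TXT_DIM] += 1.0
    return vec

def act(image_feat, instruction):
    """Fuse the visual and language encodings, then score the action set."""
    h = np.tanh(image_feat @ W_img + encode_instruction(instruction.split()) @ W_txt)
    return ACTIONS[int(np.argmax(h @ W_out))]

image_feat = rng.normal(size=IMG_DIM)  # stands in for CNN features of raw pixels
print(act(image_feat, "go to the red door"))
```

In a real system the image features would come from a convolutional encoder over raw pixels and the instruction from a learned sequence encoder, with all weights trained end-to-end on instruction-perception pairs; the fusion-then-score structure is the part this sketch is meant to convey.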
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Subject
Civil engineering, Electrical and electronics engineering, Telecommunication
Source
27th Signal Processing and Communications Applications Conference, SIU 2019
DOI
10.1109/SIU.2019.8806335