Publication:
Visually grounded language learning for robot navigation

Publication Date

2019

Language

English

Type

Conference proceeding

Abstract

We present an end-to-end deep learning model for robot navigation from raw visual pixel input and natural language text instructions. The proposed model is an LSTM-based sequence-to-sequence neural network architecture with attention, trained on instruction-perception data samples collected in a synthetic environment. We conduct experiments on the SAIL dataset, which we reconstruct in 3D so as to generate the 2D images associated with the data. Our experiments show that the performance of our model is on a par with the state of the art, despite the fact that it learns navigational language with end-to-end training from raw visual data.
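
To make the architecture family named in the abstract concrete, below is a minimal sketch, not the authors' code, of an LSTM sequence-to-sequence navigator with attention over the instruction, written in PyTorch. All module names, layer sizes, the small CNN used for the raw-pixel input, and the dot-product attention scheme are illustrative assumptions; the paper's metadata here does not specify these details.

    # Minimal sketch (illustrative, not the published model): an encoder LSTM
    # reads the instruction tokens; at each navigation step a decoder LSTM cell
    # consumes a CNN feature of the current raw-pixel observation together with
    # an attention-weighted summary of the instruction, and emits action logits.
    import torch
    import torch.nn as nn

    class Seq2SeqNavigator(nn.Module):
        def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, num_actions=4):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Encoder LSTM over the natural-language instruction.
            self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            # Tiny CNN turning a raw RGB observation into a feature vector
            # (an assumed stand-in for whatever visual encoder the paper uses).
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, hidden_dim),
            )
            # Decoder step consumes [visual feature; attended instruction context].
            self.decoder = nn.LSTMCell(hidden_dim * 2, hidden_dim)
            self.action_head = nn.Linear(hidden_dim, num_actions)

        def forward(self, instruction, observations):
            # instruction: (B, T_words) token ids
            # observations: (B, T_steps, 3, H, W) raw pixels, one frame per step
            enc_out, (h, c) = self.encoder(self.embed(instruction))
            h, c = h.squeeze(0), c.squeeze(0)
            logits = []
            for t in range(observations.size(1)):
                vis = self.cnn(observations[:, t])                        # (B, H)
                # Dot-product attention of the decoder state over encoder outputs.
                scores = torch.bmm(enc_out, h.unsqueeze(2)).squeeze(2)    # (B, T_words)
                attn = torch.softmax(scores, dim=1).unsqueeze(1)          # (B, 1, T_words)
                context = torch.bmm(attn, enc_out).squeeze(1)             # (B, H)
                h, c = self.decoder(torch.cat([vis, context], dim=1), (h, c))
                logits.append(self.action_head(h))
            return torch.stack(logits, dim=1)  # (B, T_steps, num_actions)

A forward pass over a batch of instruction token ids of shape (B, T_words) and observation frames of shape (B, T_steps, 3, H, W) yields per-step action logits, which could be trained end to end with a cross-entropy loss against demonstrated actions, matching the instruction-perception training setup the abstract describes.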

Source:

MULEA '19: 1st International Workshop on Multimodal Understanding and Learning for Embodied Applications

Publisher:

Association for Computing Machinery (ACM)

Subject

Computer engineering
