Publication:
Learning to follow verbal instructions with visual grounding

dc.contributor.department: Department of Electrical and Electronics Engineering
dc.contributor.department: N/A
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Ünal, Emre
dc.contributor.kuauthor: Can, Ozan Arkan
dc.contributor.kuauthor: Yemez, Yücel
dc.contributor.kuprofile: Other
dc.contributor.kuprofile: PhD Student
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Electrical and Electronics Engineering
dc.contributor.other: Department of Computer Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: N/A
dc.contributor.yokid: N/A
dc.contributor.yokid: 107907
dc.date.accessioned: 2024-11-09T23:59:20Z
dc.date.issued: 2019
dc.description.abstract: We present a visually grounded deep learning model for a virtual robot that can follow navigational instructions. The model processes raw visual input together with natural-language text instructions, with the aim of learning to follow novel instructions from instruction-perception examples. The proposed model is trained on data collected in a synthetic environment, and its architecture also allows it to work with real visual data. We show that our results are on par with previously proposed methods.
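The abstract gives no implementation details, so the sketch below is only a rough illustration of the general idea it describes: encoding raw visual input and a natural-language instruction separately, then fusing the two to predict navigation actions. All module choices, layer sizes, and the discrete action set are assumptions for illustration, not the authors' architecture.

import torch
import torch.nn as nn

class InstructionFollower(nn.Module):
    """Toy visually grounded instruction-following policy (illustrative only)."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_actions=4):
        super().__init__()
        # Visual encoder: a small CNN over raw RGB observations (assumed 3-channel input).
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Language encoder: token embedding plus a GRU over the instruction.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Fusion and policy head over a small discrete action set (assumed).
        self.policy = nn.Sequential(
            nn.Linear(32 + hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, image, instruction_tokens):
        v = self.visual(image)                           # (batch, 32)
        _, h = self.gru(self.embed(instruction_tokens))  # h: (1, batch, hidden_dim)
        fused = torch.cat([v, h[-1]], dim=1)             # ground language in vision
        return self.policy(fused)                        # action logits

# Dummy forward pass: a batch of 2 images and 12-token instructions.
model = InstructionFollower(vocab_size=1000)
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 4])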
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.identifier.doi: 10.1109/SIU.2019.8806335
dc.identifier.isbn: 978-1-7281-1904-5
dc.identifier.link: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85071986881&doi=10.1109%2fSIU.2019.8806335&partnerID=40&md5=581b5b694b4b4cfcc30d3fb34e5067fa
dc.identifier.scopus: 2-s2.0-85071986881
dc.identifier.uri: http://dx.doi.org/10.1109/SIU.2019.8806335
dc.identifier.uri: https://hdl.handle.net/20.500.14288/15624
dc.identifier.wos: 518994300061
dc.keywords: Autonomous agents
dc.keywords: Computer vision
dc.keywords: Natural language processing
dc.keywords: Navigational instruction following
dc.language: Turkish
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.source: 27th Signal Processing and Communications Applications Conference, SIU 2019
dc.subject: Civil engineering
dc.subject: Electrical electronics engineering
dc.subject: Telecommunication
dc.title: Learning to follow verbal instructions with visual grounding
dc.title.alternative: Sözel komutların takibinin görsel temelli öğrenilmesi
dc.type: Conference proceeding
dspace.entity.type: Publication
local.contributor.authorid: N/A
local.contributor.authorid: 0000-0001-9690-0027
local.contributor.authorid: 0000-0002-7515-3138
local.contributor.kuauthor: Ünal, Emre
local.contributor.kuauthor: Can, Ozan Arkan
local.contributor.kuauthor: Yemez, Yücel
relation.isOrgUnitOfPublication: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 21598063-a7c5-420d-91ba-0cc9b2db0ea0