Publication: Learning to follow verbal instructions with visual grounding
dc.contributor.department | Department of Electrical and Electronics Engineering | |
dc.contributor.department | N/A | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.kuauthor | Ünal, Emre | |
dc.contributor.kuauthor | Can, Ozan Arkan | |
dc.contributor.kuauthor | Yemez, Yücel | |
dc.contributor.kuprofile | Other | |
dc.contributor.kuprofile | PhD Student | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.other | Department of Electrical and Electronics Engineering | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | 107907 | |
dc.date.accessioned | 2024-11-09T23:59:20Z | |
dc.date.issued | 2019 | |
dc.description.abstract | We present a visually grounded deep learning model for a virtual robot that follows navigational instructions. The model processes raw visual input together with natural-language instructions. The aim is to learn to follow novel instructions from instruction-perception examples. The proposed model is trained on data collected in a synthetic environment, and its architecture also allows it to operate on real visual data. We show that our results are on par with previously proposed methods. | |
dc.description.indexedby | WoS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | YES | |
dc.description.publisherscope | International | |
dc.identifier.doi | 10.1109/SIU.2019.8806335 | |
dc.identifier.isbn | 978-1-7281-1904-5 | |
dc.identifier.link | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85071986881&doi=10.1109%2fSIU.2019.8806335&partnerID=40&md5=581b5b694b4b4cfcc30d3fb34e5067fa | |
dc.identifier.scopus | 2-s2.0-85071986881 | |
dc.identifier.uri | http://dx.doi.org/10.1109/SIU.2019.8806335 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/15624 | |
dc.identifier.wos | 518994300061 | |
dc.keywords | Autonomous agents | |
dc.keywords | Computer vision | |
dc.keywords | Natural language processing | |
dc.keywords | Navigational instruction following | |
dc.language | Turkish | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | |
dc.source | 27th Signal Processing and Communications Applications Conference, SIU 2019 | |
dc.subject | Civil engineering | |
dc.subject | Electrical electronics engineering | |
dc.subject | Telecommunication | |
dc.title | Learning to follow verbal instructions with visual grounding | |
dc.title.alternative | Sözel komutların takibinin görsel temelli öğrenilmesi | |
dc.type | Conference proceeding | |
dspace.entity.type | Publication | |
local.contributor.authorid | N/A | |
local.contributor.authorid | 0000-0001-9690-0027 | |
local.contributor.authorid | 0000-0002-7515-3138 | |
local.contributor.kuauthor | Ünal, Emre | |
local.contributor.kuauthor | Can, Ozan Arkan | |
local.contributor.kuauthor | Yemez, Yücel | |
relation.isOrgUnitOfPublication | 21598063-a7c5-420d-91ba-0cc9b2db0ea0 | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 21598063-a7c5-420d-91ba-0cc9b2db0ea0 |