Publication:
RGB-D object recognition using deep convolutional neural networks

dc.contributor.department: N/A
dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Zia, Saman
dc.contributor.kuauthor: Yüksel, Buket
dc.contributor.kuauthor: Yüret, Deniz
dc.contributor.kuauthor: Yemez, Yücel
dc.contributor.kuprofile: Master Student
dc.contributor.kuprofile: Teaching Faculty
dc.contributor.kuprofile: Faculty Member
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Computer Engineering
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: N/A
dc.contributor.yokid: 326941
dc.contributor.yokid: 179996
dc.contributor.yokid: 107907
dc.date.accessioned: 2024-11-10T00:08:20Z
dc.date.issued: 2017
dc.description.abstract: We address the problem of object recognition from RGB-D images using deep convolutional neural networks (CNNs). We advocate the use of 3D CNNs to fully exploit the 3D spatial information in depth images, as well as the use of pretrained 2D CNNs to learn features from RGB-D images. No large-scale dataset comparable to those available for RGB data currently exists for depth information; hence, transfer learning from 2D source data is key to training deep 3D CNNs. To this end, we propose a hybrid 2D/3D convolutional neural network that can be initialized with pretrained 2D CNNs and then trained over a relatively small RGB-D dataset. We conduct experiments on the Washington dataset, which comprises RGB-D images of small household objects. Our experiments show that the features learned from this hybrid structure, when fused with the features learned from depth-only and RGB-only architectures, outperform the state of the art on RGB-D category recognition.
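
Note: the hybrid 2D/3D idea summarized in the abstract can be illustrated with the minimal sketch below. It assumes PyTorch and torchvision; the backbone choice, layer sizes, inflation-by-replication initialization, and late-fusion classifier are illustrative placeholders, not the paper's exact architecture. The 51-class output follows the Washington RGB-D category count.

    # Minimal, illustrative sketch (not the authors' implementation):
    # initialize 3D convolution kernels from a pretrained 2D CNN by
    # replicating ("inflating") the 2D filters along the depth axis,
    # then fuse RGB-branch and depth-branch features for classification.
    import torch
    import torch.nn as nn
    import torchvision.models as models


    def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int) -> nn.Conv3d:
        """Build a Conv3d whose kernels are the 2D kernels repeated along depth."""
        conv3d = nn.Conv3d(
            conv2d.in_channels,
            conv2d.out_channels,
            kernel_size=(depth, *conv2d.kernel_size),
            stride=(1, *conv2d.stride),
            padding=(depth // 2, *conv2d.padding),
            bias=conv2d.bias is not None,
        )
        with torch.no_grad():
            # Repeat the 2D weights along the new depth dimension and rescale
            # so the overall response magnitude is preserved.
            w2d = conv2d.weight.data  # (out, in, kH, kW)
            conv3d.weight.copy_(w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
            if conv2d.bias is not None:
                conv3d.bias.copy_(conv2d.bias.data)
        return conv3d


    class HybridRGBDNet(nn.Module):
        """Toy two-branch network: a 2D CNN on RGB images and an inflated 3D
        convolution on a voxelized depth volume; features are concatenated
        and passed to a linear classifier (late fusion)."""

        def __init__(self, num_classes: int = 51, depth_bins: int = 3):
            super().__init__()
            # Use pretrained ImageNet weights in practice; omitted here for brevity.
            backbone = models.resnet18(weights=None)
            # RGB branch: everything up to (and including) global average pooling.
            self.rgb_branch = nn.Sequential(*list(backbone.children())[:-1])
            # Depth branch: first pretrained 2D conv inflated to 3D, then pooling.
            self.depth_conv3d = inflate_conv2d_to_3d(backbone.conv1, depth_bins)
            self.depth_pool = nn.AdaptiveAvgPool3d(1)
            self.classifier = nn.Linear(512 + 64, num_classes)

        def forward(self, rgb: torch.Tensor, depth_volume: torch.Tensor) -> torch.Tensor:
            f_rgb = self.rgb_branch(rgb).flatten(1)  # (B, 512)
            f_depth = self.depth_pool(torch.relu(self.depth_conv3d(depth_volume))).flatten(1)  # (B, 64)
            return self.classifier(torch.cat([f_rgb, f_depth], dim=1))


    if __name__ == "__main__":
        net = HybridRGBDNet()
        rgb = torch.randn(2, 3, 224, 224)      # RGB images
        vol = torch.randn(2, 3, 8, 224, 224)   # voxelized depth (3 channels, 8 depth bins)
        print(net(rgb, vol).shape)              # torch.Size([2, 51])
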
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.openaccess: NO
dc.description.sponsorship: This work was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) Grants 114E628 and 215E201.
dc.identifier.doi: 10.1109/ICCVW.2017.109
dc.identifier.isbn: 978-1-5386-1034-3
dc.identifier.issn: 2473-9936
dc.identifier.scopus: 2-s2.0-85046298204
dc.identifier.uri: http://dx.doi.org/10.1109/ICCVW.2017.109
dc.identifier.uri: https://hdl.handle.net/20.500.14288/16938
dc.identifier.wos: 425239600101
dc.keywords: N/A
dc.language: English
dc.publisher: IEEE
dc.source: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW 2017)
dc.subject: Computer Science
dc.subject: Artificial intelligence
dc.subject: Electrical electronics engineering
dc.title: RGB-D object recognition using deep convolutional neural networks
dc.type: Conference proceeding
dspace.entity.type: Publication
local.contributor.authorid: N/A
local.contributor.authorid: N/A
local.contributor.authorid: 0000-0002-7039-0046
local.contributor.authorid: 0000-0002-7515-3138
local.contributor.kuauthor: Zia, Saman
local.contributor.kuauthor: Yüksel, Buket
local.contributor.kuauthor: Yüret, Deniz
local.contributor.kuauthor: Yemez, Yücel
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
