Publication:
Modulating bottom-up and top-down visual processing via language-conditional filters

dc.contributor.coauthor: Erdem, Erkut
dc.contributor.department: N/A
dc.contributor.department: N/A
dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Kesen, İlker
dc.contributor.kuauthor: Can, Ozan Arkan
dc.contributor.kuauthor: Erdem, Aykut
dc.contributor.kuauthor: Yüret, Deniz
dc.contributor.kuprofile: PhD Student
dc.contributor.kuprofile: PhD Student
dc.contributor.kuprofile: Faculty Member
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Computer Engineering
dc.contributor.researchcenter: Koç Üniversitesi İş Bankası Yapay Zeka Uygulama ve Araştırma Merkezi (KUIS AI) / Koç University İş Bank Artificial Intelligence Center (KUIS AI)
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: N/A
dc.contributor.yokid: N/A
dc.contributor.yokid: 20331
dc.contributor.yokid: 179996
dc.date.accessioned: 2024-11-09T23:28:32Z
dc.date.issued: 2022
dc.description.abstract: How to best integrate linguistic and perceptual processing in multi-modal tasks that involve language and vision is an important open problem. In this work, we argue that the common practice of using language in a top-down manner, to direct visual attention over high-level visual features, may not be optimal. We hypothesize that the use of language to also condition the bottom-up processing from pixels to high-level features can provide benefits to the overall performance. To support our claim, we propose a U-Net-based model and perform experiments on two language-vision dense-prediction tasks: referring expression segmentation and language-guided image colorization. We compare results where either one or both of the top-down and bottom-up visual branches are conditioned on language. Our experiments reveal that using language to control the filters for bottom-up visual processing in addition to top-down attention leads to better results on both tasks and achieves competitive performance. Our linguistic analysis suggests that bottom-up conditioning improves segmentation of objects especially when input text refers to low-level visual concepts. Code is available at https://github.com/ilkerkesen/bvpr.
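To illustrate the "language-conditional filters" idea named in the abstract, the sketch below shows one plausible realization in PyTorch: a sentence embedding generates per-sample convolution kernels that are applied to bottom-up visual features. This is a minimal sketch under assumed shapes and names (LangConditionedConv, text_dim, the filter generator), not the authors' released implementation; see the linked GitHub repository for that.

    # Minimal sketch: dynamic convolution filters generated from a language
    # embedding (an assumed setup, not the paper's actual code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LangConditionedConv(nn.Module):
        """Generates conv filters from a sentence embedding and applies them
        to visual features, i.e. language-conditioned bottom-up processing."""
        def __init__(self, in_ch, out_ch, text_dim, k=3):
            super().__init__()
            self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
            # Hypothetical filter generator: maps the sentence embedding to a
            # full set of convolution weights.
            self.gen = nn.Linear(text_dim, out_ch * in_ch * k * k)

        def forward(self, feats, text_emb):
            # feats: (B, in_ch, H, W); text_emb: (B, text_dim)
            B = feats.size(0)
            w = self.gen(text_emb).view(B * self.out_ch, self.in_ch, self.k, self.k)
            # Grouped conv applies each sample's generated filters to its own
            # feature map in a single batched call.
            feats = feats.view(1, B * self.in_ch, *feats.shape[2:])
            out = F.conv2d(feats, w, padding=self.k // 2, groups=B)
            return out.view(B, self.out_ch, *out.shape[2:])

    # Usage: condition one encoder stage of a U-Net on a sentence embedding.
    conv = LangConditionedConv(in_ch=64, out_ch=64, text_dim=256)
    feats = torch.randn(2, 64, 32, 32)      # bottom-up visual features
    text = torch.randn(2, 256)              # pooled language embedding
    out = conv(feats, text)                 # (2, 64, 32, 32)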
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.openaccess: YES
dc.description.sponsorship: Turkish Academy of Sciences. This work was supported in part by an AI Fellowship to I. Kesen provided by the KUIS AI Center, the GEBIP 2018 Award of the Turkish Academy of Sciences to E. Erdem, and the BAGEP 2021 Award of the Science Academy to A. Erdem.
dc.identifier.doi: 10.1109/CVPRW56347.2022.00507
dc.identifier.isbn: 978-1-6654-8739-9
dc.identifier.scopus: 2-s2.0-85137780572
dc.identifier.uri: http://dx.doi.org/10.1109/CVPRW56347.2022.00507
dc.identifier.uri: https://hdl.handle.net/20.500.14288/11902
dc.identifier.wos: 861612704072
dc.keywords: Words
dc.keywords: Vision, Attention
dc.keywords: Object
dc.language: English
dc.publisher: IEEE
dc.source: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022)
dc.subject: Computer science
dc.subject: Artificial intelligence
dc.title: Modulating bottom-up and top-down visual processing via language-conditional filters
dc.type: Conference proceeding
dspace.entity.type: Publication
local.contributor.authorid: N/A
local.contributor.authorid: 0000-0001-9690-0027
local.contributor.authorid: 0000-0002-6280-8422
local.contributor.authorid: 0000-0002-7039-0046
local.contributor.kuauthor: Kesen, İlker
local.contributor.kuauthor: Can, Ozan Arkan
local.contributor.kuauthor: Erdem, Aykut
local.contributor.kuauthor: Yüret, Deniz
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
