Publication:
ViLMA: a zero-shot benchmark for linguistic and temporal grounding in video-language models

dc.contributor.coauthorPedrotti, Andrea
dc.contributor.coauthorDogan, Mustafa
dc.contributor.coauthorCafagna, Michele
dc.contributor.coauthorParcalabescu, Letitia
dc.contributor.coauthorCalixto, Iacer
dc.contributor.coauthorFrank, Anette
dc.contributor.coauthorGatt, Albert
dc.contributor.departmentDepartment of Electrical and Electronics Engineering
dc.contributor.departmentGraduate School of Sciences and Engineering
dc.contributor.departmentKUIS AI (Koç University & İş Bank Artificial Intelligence Center)
dc.contributor.kuauthorErdem, Aykut
dc.contributor.kuauthorKesen, İlker
dc.contributor.kuauthorAçıkgöz, Emre Can
dc.contributor.kuauthorErdem, Erkut
dc.contributor.schoolcollegeinstituteCollege of Engineering
dc.contributor.schoolcollegeinstituteGRADUATE SCHOOL OF SCIENCES AND ENGINEERING
dc.contributor.schoolcollegeinstituteResearch Center
dc.date.accessioned2024-12-29T09:41:22Z
dc.date.issued2024
dc.description.abstractWith the ever-increasing popularity of pretrained Video-Language Models (VidLMs), there is a pressing need to develop robust evaluation methodologies that delve deeper into their visio-linguistic capabilities. To address this challenge, we present ViLMA (Video Language Model Assessment), a task-agnostic benchmark that places the assessment of fine-grained capabilities of these models on a firm footing. Task-based evaluations, while valuable, fail to capture the complexities and specific temporal aspects of moving images that VidLMs need to process. Through carefully curated counterfactuals, ViLMA offers a controlled evaluation suite that sheds light on the true potential of these models, as well as their performance gaps compared to human-level understanding. ViLMA also includes proficiency tests, which assess basic capabilities deemed essential to solving the main counterfactual tests. We show that current VidLMs' grounding abilities are no better than those of vision-language models that use static images. This is especially striking once performance on the proficiency tests is factored in. Our benchmark serves as a catalyst for future research on VidLMs, helping to highlight areas that still need to be explored.
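The combined scoring the abstract alludes to (a model is credited only when it passes both the proficiency test and the main counterfactual test) can be sketched in a few lines. The Python below is a minimal illustration under stated assumptions, not the official ViLMA code: the Example fields and the score(video, caption) callable, a stand-in for any VidLM that rates how plausible a caption is for a video, are hypothetical.

    # Minimal sketch of ViLMA-style combined scoring (illustrative only).
    # Assumes a hypothetical score(video, caption) -> float from some VidLM
    # wrapper; higher means the model finds the caption more plausible.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Example:
        video: str          # path or id of the video clip
        caption: str        # correct description
        foil: str           # counterfactual description
        prof_caption: str   # proficiency-test caption
        prof_foil: str      # proficiency-test foil

    def passes(score: Callable[[str, str], float], video: str,
               caption: str, foil: str) -> bool:
        # A test is passed when the true caption outscores its counterfactual.
        return score(video, caption) > score(video, foil)

    def combined_accuracy(examples: List[Example],
                          score: Callable[[str, str], float]) -> float:
        # An example counts only if the model passes BOTH the proficiency
        # test and the main counterfactual test.
        hits = sum(
            passes(score, ex.video, ex.prof_caption, ex.prof_foil)
            and passes(score, ex.video, ex.caption, ex.foil)
            for ex in examples
        )
        return hits / len(examples) if examples else 0.0

Gating the main counterfactual test on the proficiency test in this way reflects the abstract's observation that results are "especially striking once performance on the proficiency tests is factored in".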
dc.description.indexedbyScopus
dc.description.publisherscopeInternational
dc.description.sponsoredbyTubitakEuEU
dc.description.sponsorshipIC has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 838188. AG and MC are supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860621 to the NL4XAI (Natural Language for Explainable AI) project. This publication is based upon work from the COST Action Multi3Generation CA18231, supported by COST (European Cooperation in Science and Technology). It was supported in part by AI Fellowships to IK and EA provided by the KUIS AI Center. AP was supported by the European Commission (Grant 951911) under the H2020 Programme ICT-48-2020, and by the FAIR project, funded by the Italian Ministry of University and Research under the NextGenerationEU program.
dc.identifier.quartileN/A
dc.identifier.scopus2-s2.0-85198747579
dc.identifier.urihttps://hdl.handle.net/20.500.14288/23603
dc.keywordsLinguistic grounding
dc.keywordsBenchmark evaluation
dc.keywordsZero-shot learning
dc.keywordsComputational linguistics
dc.language.isoeng
dc.publisherInternational Conference on Learning Representations, ICLR
dc.relation.ispartof12th International Conference on Learning Representations, ICLR 2024
dc.subjectElectrical and electronics engineering
dc.titleViLMA: a zero-shot benchmark for linguistic and temporal grounding in video-language models
dc.typeConference Proceeding
dspace.entity.typePublication
local.contributor.kuauthorKesen, İlker
local.contributor.kuauthorErdem, Aykut
local.contributor.kuauthorErdem, Erkut
local.contributor.kuauthorAçıkgöz, Emre Can
local.publication.orgunit1GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
local.publication.orgunit1College of Engineering
local.publication.orgunit1Research Center
local.publication.orgunit2Department of Electrical and Electronics Engineering
local.publication.orgunit2KUIS AI (Koç University & İş Bank Artificial Intelligence Center)
local.publication.orgunit2Graduate School of Sciences and Engineering
relation.isOrgUnitOfPublication21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication3fc31c89-e803-4eb1-af6b-6258bc42c3d8
relation.isOrgUnitOfPublication77d67233-829b-4c3a-a28f-bd97ab5c12c7
relation.isOrgUnitOfPublication.latestForDiscovery21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isParentOrgUnitOfPublication8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication434c9663-2b11-4e66-9399-c863e2ebae43
relation.isParentOrgUnitOfPublicationd437580f-9309-4ecb-864a-4af58309d287
relation.isParentOrgUnitOfPublication.latestForDiscovery8e756b23-2d4a-4ce8-b1b3-62c794a8c164

Files

Original bundle

Name: IR04813.pdf
Size: 25.21 MB
Format: Adobe Portable Document Format