Publication: EVREAL: towards a comprehensive benchmark and analysis suite for event-based video reconstruction
dc.contributor.coauthor | Ercan, Burak | |
dc.contributor.coauthor | Eker, Onur | |
dc.contributor.department | KUIS AI (Koç University & İş Bank Artificial Intelligence Center) | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.kuauthor | Erdem, Aykut | |
dc.contributor.kuauthor | Erdem, Erkut | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | Research Center | |
dc.date.accessioned | 2025-01-19T10:30:39Z | |
dc.date.issued | 2023 | |
dc.description.abstract | Event cameras are a new type of vision sensor that incorporates asynchronous and independent pixels, offering advantages over traditional frame-based cameras such as high dynamic range and minimal motion blur. However, their output is not easily understandable by humans, making the reconstruction of intensity images from event streams a fundamental task in event-based vision. While recent deep learning-based methods have shown promise in video reconstruction from events, this problem is not completely solved yet. To facilitate comparison between different approaches, standardized evaluation protocols and diverse test datasets are essential. This paper proposes a unified evaluation methodology and introduces an open-source framework called EVREAL to comprehensively benchmark and analyze various event-based video reconstruction methods from the literature. Using EVREAL, we give a detailed analysis of the state-of-the-art methods for event-based video reconstruction, and provide valuable insights into the performance of these methods under varying settings, challenging scenarios, and downstream tasks. | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | All Open Access; Green Open Access | |
dc.description.publisherscope | International | |
dc.description.sponsoredbyTubitakEu | N/A | |
dc.description.sponsorship | This work was supported in part by KUIS AI Research Award, TUBITAK-1001 Program Award No. 121E454, and BAGEP 2021 Award of the Science Academy to A. Erdem. | |
dc.identifier.doi | 10.1109/CVPRW59228.2023.00410 | |
dc.identifier.isbn | 979-835030249-3 | |
dc.identifier.issn | 2160-7508 | |
dc.identifier.quartile | N/A | |
dc.identifier.scopus | 2-s2.0-85170820439 | |
dc.identifier.uri | https://doi.org/10.1109/CVPRW59228.2023.00410 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/26074 | |
dc.keywords | Computer vision | |
dc.keywords | Deep learning | |
dc.keywords | Image reconstruction | |
dc.language.iso | eng | |
dc.publisher | IEEE Computer Society | |
dc.relation.grantno | KUIS; TUBITAK-1001, (121E454); Bilim Akademisi | |
dc.relation.ispartof | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops | |
dc.subject | Engineering | |
dc.title | EVREAL: towards a comprehensive benchmark and analysis suite for event-based video reconstruction | |
dc.type | Conference Proceeding | |
dspace.entity.type | Publication | |
local.contributor.kuauthor | Erdem, Aykut | |
local.contributor.kuauthor | Erdem, Erkut | |
local.publication.orgunit1 | College of Engineering | |
local.publication.orgunit1 | Research Center | |
local.publication.orgunit2 | Department of Computer Engineering | |
local.publication.orgunit2 | KUIS AI (Koç University & İş Bank Artificial Intelligence Center) | |
relation.isOrgUnitOfPublication | 77d67233-829b-4c3a-a28f-bd97ab5c12c7 | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 77d67233-829b-4c3a-a28f-bd97ab5c12c7 | |
relation.isParentOrgUnitOfPublication | 8e756b23-2d4a-4ce8-b1b3-62c794a8c164 | |
relation.isParentOrgUnitOfPublication | d437580f-9309-4ecb-864a-4af58309d287 | |
relation.isParentOrgUnitOfPublication.latestForDiscovery | 8e756b23-2d4a-4ce8-b1b3-62c794a8c164 |