Publication:
HyperE2VID: improving event-based video reconstruction via hypernetworks

dc.contributor.coauthor: Ercan, Burak
dc.contributor.coauthor: Eker, Onur
dc.contributor.coauthor: Sağlam, Canberk
dc.contributor.coauthor: Erdem, Erkut
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Erdem, Aykut
dc.contributor.other: Department of Computer Engineering
dc.contributor.researchcenter: Koç Üniversitesi İş Bankası Yapay Zeka Uygulama ve Araştırma Merkezi (KUIS AI) / Koç University İş Bank Artificial Intelligence Center (KUIS AI)
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.date.accessioned: 2024-12-29T09:37:53Z
dc.date.issued: 2024
dc.description.abstract: Event-based cameras are becoming increasingly popular for their ability to capture high-speed motion with low latency and high dynamic range. However, generating videos from events remains challenging due to the highly sparse and varying nature of event data. To address this, we propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach uses hypernetworks to generate per-pixel adaptive filters guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We also employ a curriculum learning strategy to train the network more robustly. Our comprehensive experimental evaluations across various benchmark datasets reveal that HyperE2VID not only surpasses current state-of-the-art methods in reconstruction quality but also achieves this with fewer parameters, lower computational cost, and faster inference. (An illustrative sketch of this design follows the record below.)
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.indexedby: PubMed
dc.description.openaccess: Green Submitted
dc.description.publisherscope: International
dc.description.sponsors: No Statement Available
dc.description.volume: 33
dc.identifier.doi: 10.1109/TIP.2024.3372460
dc.identifier.eissn: 1941-0042
dc.identifier.issn: 1057-7149
dc.identifier.quartile: Q1
dc.identifier.scopus: 2-s2.0-85187329240
dc.identifier.uri: https://doi.org/10.1109/TIP.2024.3372460
dc.identifier.uri: https://hdl.handle.net/20.500.14288/22491
dc.identifier.wos: 001184885100007
dc.keywords: Event-based vision
dc.keywords: Video reconstruction
dc.keywords: Dynamic neural networks
dc.keywords: Hypernetworks
dc.keywords: Dynamic convolutions
dc.language: en
dc.publisher: IEEE-Inst Electrical Electronics Engineers Inc
dc.relation.grantno: Koc University Is Bank AI Center (KUIS AI) Research Award
dc.source: IEEE Transactions on Image Processing
dc.subject: Computer science
dc.subject: Artificial intelligence
dc.subject: Electrical engineering
dc.subject: Electronic engineering
dc.title: HyperE2VID: improving event-based video reconstruction via hypernetworks
dc.type: Journal article
dspace.entity.type: Publication
local.contributor.kuauthor: Erdem, Aykut
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
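
Illustrative architecture sketch. The abstract describes a hypernetwork that generates per-pixel adaptive filters, guided by a context fusion module combining event voxel grids with the previously reconstructed intensity image. The minimal PyTorch sketch below shows one common way such per-pixel dynamic filtering can be realized; it is not the published HyperE2VID implementation. The class names (ContextFusion, HyperFilterHead), channel sizes, 3x3 kernel, and softmax-normalized filter weights are all illustrative assumptions; see the DOI above for the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextFusion(nn.Module):
    # Fuses the event voxel grid with the previously reconstructed
    # intensity frame into a shared context tensor. Channel sizes are
    # illustrative assumptions, not the paper's configuration.
    def __init__(self, event_bins=5, ctx_channels=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(event_bins + 1, ctx_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, voxel_grid, prev_frame):
        # voxel_grid: (B, event_bins, H, W); prev_frame: (B, 1, H, W)
        return self.fuse(torch.cat([voxel_grid, prev_frame], dim=1))

class HyperFilterHead(nn.Module):
    # Hypernetwork head: predicts a k x k filter at every pixel from
    # the context and applies it to a feature map via unfold, i.e. a
    # spatially-varying (dynamic) convolution.
    def __init__(self, ctx_channels=32, k=3):
        super().__init__()
        self.k = k
        self.predict = nn.Conv2d(ctx_channels, k * k, 1)

    def forward(self, features, context):
        b, c, h, w = features.shape
        # Per-pixel filter weights, normalized so each filter sums to 1.
        filters = torch.softmax(self.predict(context), dim=1)       # (B, k*k, H, W)
        patches = F.unfold(features, self.k, padding=self.k // 2)   # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        filters = filters.view(b, 1, self.k * self.k, h * w)
        out = (patches * filters).sum(dim=2)                        # (B, C, H*W)
        return out.view(b, c, h, w)

# Usage: a 5-bin voxel grid and an all-zeros first "previous frame".
voxel = torch.randn(1, 5, 64, 64)
prev = torch.zeros(1, 1, 64, 64)
ctx = ContextFusion()(voxel, prev)
feats = torch.randn(1, 16, 64, 64)
out = HyperFilterHead()(feats, ctx)   # -> (1, 16, 64, 64)

Predicting normalized weights per pixel and applying them with F.unfold is a standard way to implement spatially-varying convolution without custom kernels; on the first frame, where no reconstruction exists yet, the previous frame can simply be an all-zeros image, as in the usage lines above.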
