Publication: HyperE2VID: improving event-based video reconstruction via hypernetworks
dc.contributor.coauthor | Ercan, Burak | |
dc.contributor.coauthor | Eker, Onur | |
dc.contributor.coauthor | Sağlam, Canberk | |
dc.contributor.coauthor | Erdem, Erkut | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.kuauthor | Erdem, Aykut | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.researchcenter | Koç Üniversitesi İş Bankası Yapay Zeka Uygulama ve Araştırma Merkezi (KUIS AI) / Koç University İş Bank Artificial Intelligence Center (KUIS AI) | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.date.accessioned | 2024-12-29T09:37:53Z | |
dc.date.issued | 2024 | |
dc.description.abstract | Event-based cameras are becoming increasingly popular for their ability to capture high-speed motion with low latency and high dynamic range. However, generating videos from events remains challenging due to the highly sparse and varying nature of event data. To address this, we propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach uses hypernetworks to generate per-pixel adaptive filters guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We also employ a curriculum learning strategy to train the network more robustly. Comprehensive experimental evaluations across various benchmark datasets reveal that HyperE2VID not only surpasses current state-of-the-art methods in reconstruction quality but also achieves this with fewer parameters, lower computational cost, and faster inference times. | |
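The mechanism named in the abstract (a hypernetwork that predicts per-pixel adaptive filters from a fused event/previous-frame context) can be illustrated with a minimal sketch. This is not the authors' released HyperE2VID implementation: the module names, channel sizes, kernel size, depthwise filtering choice, and the naive additive context fusion below are all assumptions made for illustration only.

```python
# Illustrative sketch (not the authors' code): a hypernetwork predicts k*k
# depthwise filter taps for every pixel from a "context" tensor, and those
# per-pixel filters are applied to the feature map via dynamic convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerPixelDynamicConv(nn.Module):
    """Applies a k x k depthwise filter predicted separately for every pixel."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.k = kernel_size
        # Hypernetwork: maps the context to k*k filter taps per channel, per pixel.
        self.hypernet = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * kernel_size * kernel_size, 1),
        )

    def forward(self, features: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        b, c, h, w = features.shape
        k = self.k
        # Predict one k*k kernel per channel and spatial location, normalized per pixel.
        filters = self.hypernet(context).view(b, c, k * k, h, w)
        filters = torch.softmax(filters, dim=2)
        # Unfold local k x k neighborhoods of the feature map.
        patches = F.unfold(features, k, padding=k // 2).view(b, c, k * k, h, w)
        # Weighted sum of each neighborhood with its per-pixel kernel.
        return (filters * patches).sum(dim=2)


if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)          # features from the event branch
    prev_frame_feats = torch.randn(1, 32, 64, 64)  # features from the previous reconstruction
    context = feats + prev_frame_feats           # placeholder for the context fusion module
    out = PerPixelDynamicConv(32)(feats, context)
    print(out.shape)  # torch.Size([1, 32, 64, 64])
```

In this sketch the filters vary per pixel but the hypernetwork weights are shared, which is what makes the convolution "dynamic": the same module adapts its effective filtering to the local event density encoded in the context.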
dc.description.indexedby | WoS | |
dc.description.indexedby | Scopus | |
dc.description.indexedby | PubMed | |
dc.description.openaccess | Green Submitted | |
dc.description.publisherscope | International | |
dc.description.sponsors | No Statement Available | |
dc.description.volume | 33 | |
dc.identifier.doi | 10.1109/TIP.2024.3372460 | |
dc.identifier.eissn | 1941-0042 | |
dc.identifier.issn | 1057-7149 | |
dc.identifier.quartile | Q1 | |
dc.identifier.scopus | 2-s2.0-85187329240 | |
dc.identifier.uri | https://doi.org/10.1109/TIP.2024.3372460 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/22491 | |
dc.identifier.wos | 1184885100007 | |
dc.keywords | Event-based vision | |
dc.keywords | Video reconstruction | |
dc.keywords | Dynamic neural networks | |
dc.keywords | Hypernetworks | |
dc.keywords | Dynamic convolutions | |
dc.language | en | |
dc.publisher | IEEE-Inst Electrical Electronics Engineers Inc | |
dc.relation.grantno | Koç University İş Bank AI Center (KUIS AI) Research Award | |
dc.source | IEEE Transactions on Image Processing | |
dc.subject | Computer science | |
dc.subject | Artificial intelligence | |
dc.subject | Electrical engineering | |
dc.subject | Electronic engineering | |
dc.title | HyperE2VID: improving event-based video reconstruction via hypernetworks | |
dc.type | Journal article | |
dspace.entity.type | Publication | |
local.contributor.kuauthor | Erdem, Aykut | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae |