Multi-field de-interlacing using deformable convolution residual blocks and self-attention
dc.contributor.authorid | 0000-0003-1465-8121 | |
dc.contributor.authorid | 0000-0001-6840-5766 | |
dc.contributor.department | Department of Electrical and Electronics Engineering | |
dc.contributor.department | N/A | |
dc.contributor.kuauthor | Tekalp, Ahmet Murat | |
dc.contributor.kuauthor | Ji, Ronglei | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.kuprofile | PhD Student | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.yokid | 26207 | |
dc.contributor.yokid | N/A | |
dc.date.accessioned | 2025-01-19T10:31:51Z | |
dc.date.issued | 2022 | |
dc.description.abstract | Although deep learning has made a significant impact on image/video restoration and super-resolution, learned deinterlacing has so far received less attention in academia and industry. This is despite the fact that deinterlacing is well-suited for supervised learning from synthetic data, since the degradation model is known and fixed. In this paper, we propose a novel multi-field full frame-rate deinterlacing network, which adapts state-of-the-art super-resolution approaches to the deinterlacing task. Our model aligns features from adjacent fields to a reference field (to be deinterlaced) using both deformable convolution residual blocks and self-attention. Our extensive experimental results demonstrate that the proposed method provides state-of-the-art deinterlacing results in terms of both numerical and perceptual performance. At the time of writing, our model ranks first in the Full FrameRate LeaderBoard at https://videoprocessing.ai/benchmarks/deinterlacer.html | |
dc.description.indexedby | WoS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | Green Submitted | |
dc.description.publisherscope | International | |
dc.description.sponsors | This work was supported in part by TUBITAK 2247-A Award No. 120C156 and KUIS AI Center funded by Turkish Is Bank. A.M. Tekalp also acknowledges support from Turkish Academy of Sciences (TUBA). Ronglei Ji would like to acknowledge a Fung Scholarship. | |
dc.identifier.doi | 10.1109/ICIP46576.2022.9897353 | |
dc.identifier.isbn | 978-1-6654-9620-9 | |
dc.identifier.issn | 1522-4880 | |
dc.identifier.quartile | N/A | |
dc.identifier.scopus | 2-s2.0-85146709598 | |
dc.identifier.uri | https://doi.org/10.1109/ICIP46576.2022.9897353 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/26303 | |
dc.identifier.wos | 1058109501002 | |
dc.keywords | Deep learning | |
dc.keywords | Deinterlacing | |
dc.keywords | Deformable convolution | |
dc.keywords | Feature alignment | |
dc.keywords | Self attention | |
dc.language | en | |
dc.publisher | IEEE | |
dc.relation.grantno | TUBITAK [120C156]; Turkish Is Bank; Turkish Academy of Sciences (TUBA); Fung Scholarship | |
dc.source | 2022 IEEE International Conference on Image Processing, ICIP | |
dc.subject | Computer science | |
dc.subject | Electrical and electronics engineering | |
dc.title | Multi-field de-interlacing using deformable convolution residual blocks and self-attention | |
dc.type | Conference proceeding |