Multi-field de-interlacing using deformable convolution residual blocks and self-attention

dc.contributor.authorid0000-0003-1465-8121
dc.contributor.authorid0000-0001-6840-5766
dc.contributor.departmentDepartment of Electrical and Electronics Engineering
dc.contributor.departmentN/A
dc.contributor.kuauthorTekalp, Ahmet Murat
dc.contributor.kuauthorJi, Ronglei
dc.contributor.kuprofileFaculty Member
dc.contributor.kuprofilePhD Student
dc.contributor.schoolcollegeinstituteCollege of Engineering
dc.contributor.schoolcollegeinstituteGraduate School of Sciences and Engineering
dc.contributor.yokid26207
dc.contributor.yokidN/A
dc.date.accessioned2025-01-19T10:31:51Z
dc.date.issued2022
dc.description.abstractAlthough deep learning has made a significant impact on image/video restoration and super-resolution, learned deinterlacing has so far received less attention in academia and industry. This is despite the fact that deinterlacing is well-suited for supervised learning from synthetic data, since the degradation model is known and fixed. In this paper, we propose a novel multi-field full frame-rate deinterlacing network, which adapts state-of-the-art super-resolution approaches to the deinterlacing task. Our model aligns features from adjacent fields to a reference field (to be deinterlaced) using both deformable convolution residual blocks and self-attention. Our extensive experimental results demonstrate that the proposed method provides state-of-the-art deinterlacing results in terms of both numerical and perceptual performance. At the time of writing, our model ranks first in the Full FrameRate LeaderBoard at https://videoprocessing.ai/benchmarks/deinterlacer.html
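The alignment step described in the abstract can be illustrated with a minimal sketch. The block below is not the authors' code: it assumes PyTorch with torchvision's DeformConv2d, and the module name (DeformAlignResBlock), channel count, and single-block structure are illustrative only. It shows the general idea of predicting per-tap sampling offsets from an (adjacent, reference) feature pair and warping the adjacent-field features with a deformable convolution inside a residual block.

```python
# Minimal sketch (assumption, not the authors' implementation):
# align adjacent-field features to a reference field using a
# deformable-convolution residual block.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformAlignResBlock(nn.Module):
    """Predict sampling offsets from the (adjacent, reference) feature pair,
    warp the adjacent-field features with a deformable convolution, and
    add a residual connection."""

    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # 2 offset values (x, y) per kernel tap, predicted from both fields.
        self.offset_conv = nn.Conv2d(2 * channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size, padding=pad)
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, adj_feat: torch.Tensor, ref_feat: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(torch.cat([adj_feat, ref_feat], dim=1))
        aligned = self.act(self.deform_conv(adj_feat, offsets))
        return adj_feat + self.fuse(aligned)  # residual connection


# Toy usage with hypothetical feature-map sizes.
if __name__ == "__main__":
    block = DeformAlignResBlock(channels=64)
    adj = torch.randn(1, 64, 144, 256)  # features of an adjacent field
    ref = torch.randn(1, 64, 144, 256)  # features of the reference field
    print(block(adj, ref).shape)        # torch.Size([1, 64, 144, 256])
```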
dc.description.indexedbyWoS
dc.description.indexedbyScopus
dc.description.openaccessGreen Submitted
dc.description.publisherscopeInternational
dc.description.sponsorsThis work was supported in part by TUBITAK 2247-A Award No. 120C156 and in part by the KUIS AI Center funded by Turkish Is Bank. A.M. Tekalp also acknowledges support from the Turkish Academy of Sciences (TUBA). Ronglei Ji would like to acknowledge a Fung Scholarship.
dc.identifier.doi10.1109/ICIP46576.2022.9897353
dc.identifier.isbn978-1-6654-9620-9
dc.identifier.issn1522-4880
dc.identifier.quartileN/A
dc.identifier.scopus2-s2.0-85146709598
dc.identifier.urihttps://doi.org/10.1109/ICIP46576.2022.9897353
dc.identifier.urihttps://hdl.handle.net/20.500.14288/26303
dc.identifier.wos1058109501002
dc.keywordsDeep learning
dc.keywordsDeinterlacing
dc.keywordsDeformable convolution
dc.keywordsFeature alignment
dc.keywordsSelf-attention
dc.languageen
dc.publisherIEEE
dc.relation.grantnoTUBITAK [120C156]; Turkish Is Bank; Turkish Academy of Sciences (TUBA); Fung Scholarship
dc.source2022 IEEE International Conference on Image Processing, ICIP
dc.subjectComputer science
dc.subjectElectrical and electronics engineering
dc.titleMulti-field de-interlacing using deformable convolution residual blocks and self-attention
dc.typeConference proceeding