Publication:
Multi-field de-interlacing using deformable convolution residual blocks and self-attention

dc.contributor.department: Department of Electrical and Electronics Engineering
dc.contributor.department: N/A
dc.contributor.department: Department of Electrical and Electronics Engineering
dc.contributor.kuauthor: Tekalp, Ahmet Murat
dc.contributor.kuauthor: Ji, Ronglei
dc.contributor.kuprofile: Faculty Member
dc.contributor.kuprofile: PhD Student
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.yokid: 26207
dc.contributor.yokid: N/A
dc.date.accessioned: 2024-11-09T23:43:09Z
dc.date.issued: 2022
dc.description.abstract: Although deep learning has made a significant impact on image/video restoration and super-resolution, learned deinterlacing has so far received less attention in academia and industry. This is despite the fact that deinterlacing is well-suited for supervised learning from synthetic data, since the degradation model is known and fixed. In this paper, we propose a novel multi-field full frame-rate deinterlacing network, which adapts state-of-the-art super-resolution approaches to the deinterlacing task. Our model aligns features from adjacent fields to a reference field (to be deinterlaced) using both deformable convolution residual blocks and self-attention. Our extensive experimental results demonstrate that the proposed method provides state-of-the-art deinterlacing results in terms of both numerical and perceptual performance. At the time of writing, our model ranks first in the Full FrameRate LeaderBoard at https://videoprocessing.ai/benchmarks/deinterlacer.html.
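The abstract notes that the interlacing degradation model is "known and fixed": each interlaced frame weaves the even rows of one field with the odd rows of the next, so training pairs can be synthesized from any progressive video. A minimal sketch of that degradation plus a classical line-average baseline is shown below; the function names and list-of-lists frame representation are illustrative assumptions, not the paper's actual network or data pipeline.

```python
def interlace(frame_a, frame_b):
    """Weave an interlaced frame: even rows come from frame_a (top field),
    odd rows from frame_b (bottom field). Frames are 2D lists of pixels."""
    return [frame_a[r] if r % 2 == 0 else frame_b[r]
            for r in range(len(frame_a))]

def line_average_deinterlace(field_rows, parity, height):
    """Simple intra-field baseline: keep the known rows of one field and
    fill the missing rows by averaging vertical neighbours (edge rows are
    replicated).  A learned deinterlacer replaces exactly this fill step,
    here additionally exploiting adjacent fields via feature alignment."""
    known = {r: field_rows[r // 2] for r in range(parity, height, 2)}
    out = []
    for r in range(height):
        if r in known:
            out.append(known[r])
        else:
            above = known.get(r - 1, known.get(r + 1))
            below = known.get(r + 1, known.get(r - 1))
            out.append([(a + b) / 2 for a, b in zip(above, below)])
    return out
```

Because `interlace` is deterministic, any progressive clip yields (degraded, ground-truth) pairs for supervised training, which is the property the abstract highlights.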
dc.description.indexedby: Scopus
dc.description.indexedby: WoS
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.description.sponsorship: This work was supported in part by TUBITAK 2247-A Award No. 120C156 and KUIS AI Center funded by Turkish Is Bank. A.M. Tekalp also acknowledges support from Turkish Academy of Sciences (TUBA). Ronglei Ji would like to acknowledge a Fung Scholarship.
dc.identifier.doi: 10.1109/ICIP46576.2022.9897353
dc.identifier.isbn: 978-1-6654-9620-9
dc.identifier.issn: 1522-4880
dc.identifier.link: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85146709598&doi=10.1109%2fICIP46576.2022.9897353&partnerID=40&md5=a3c9baa408b4da76094e86a51dd53692
dc.identifier.scopus: 2-s2.0-85146709598
dc.identifier.uri: http://dx.doi.org/10.1109/ICIP46576.2022.9897353
dc.identifier.uri: https://hdl.handle.net/20.500.14288/13448
dc.identifier.wos: 1058109501002
dc.keywords: Deep learning
dc.keywords: Deformable convolution
dc.keywords: Deinterlacing
dc.keywords: Feature alignment
dc.keywords: Self attention
dc.keywords: Computer vision
dc.keywords: Numerical methods
dc.keywords: Optical resolving power
dc.keywords: De-interlacing
dc.keywords: Frame-rate
dc.keywords: Multi-field
dc.keywords: State of the art
dc.keywords: Superresolution
dc.keywords: Video restoration
dc.keywords: Convolution
dc.language: English
dc.publisher: The Institute of Electrical and Electronics Engineers Signal Processing Society
dc.source: Proceedings - International Conference on Image Processing, ICIP
dc.subject: Computer Science
dc.subject: Artificial intelligence
dc.subject: Electrical electronics engineering
dc.title: Multi-field de-interlacing using deformable convolution residual blocks and self-attention
dc.type: Conference proceeding
dspace.entity.type: Publication
local.contributor.authorid: 0000-0003-1465-8121
local.contributor.authorid: 0000-0001-6840-5766
local.contributor.kuauthor: Tekalp, Ahmet Murat
local.contributor.kuauthor: Ji, Ronglei
relation.isOrgUnitOfPublication: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication.latestForDiscovery: 21598063-a7c5-420d-91ba-0cc9b2db0ea0

Files

Original bundle

Name: IR05109.pdf
Size: 385.64 KB
Format: Adobe Portable Document Format