Publication:
MMSR: Multiple-model learned image super-resolution benefiting from class-specific image priors

dc.contributor.departmentDepartment of Electrical and Electronics Engineering
dc.contributor.departmentDepartment of Electrical and Electronics Engineering
dc.contributor.kuauthorDoğan, Zafer
dc.contributor.kuauthorTekalp, Ahmet Murat
dc.contributor.kuauthorKorkmaz, Cansu
dc.contributor.kuprofileFaculty Member
dc.contributor.kuprofileFaculty Member
dc.contributor.kuprofilePhD Student
dc.contributor.otherDepartment of Electrical and Electronics Engineering
dc.contributor.schoolcollegeinstituteCollege of Engineering
dc.contributor.schoolcollegeinstituteCollege of Engineering
dc.contributor.schoolcollegeinstituteGraduate School of Sciences and Engineering
dc.contributor.yokid280658
dc.contributor.yokid26207
dc.contributor.yokidN/A
dc.date.accessioned2024-11-09T22:51:43Z
dc.date.issued2022
dc.description.abstractAssuming a known degradation model, the performance of a learned image super-resolution (SR) model depends on how well the variety of image characteristics within the training set matches those in the test set. As a result, the performance of an SR model varies noticeably from image to image over a test set depending on whether characteristics of specific images are similar to those in the training set or not. Hence, in general, a single SR model cannot generalize well enough for all types of image content. In this work, we show that training multiple SR models for different classes of images (e.g., for text, texture, etc.) to exploit class-specific image priors and employing a post-processing network that learns how to best fuse the outputs produced by these multiple SR models surpasses the performance of state-of-the-art generic SR models. Experimental results clearly demonstrate that the proposed multiple-model SR (MMSR) approach significantly outperforms a single pre-trained state-of-the-art SR model both quantitatively and visually. It even exceeds the performance of the best single class-specific SR model trained on similar text or texture images. © 2022 IEEE.
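Below is a minimal PyTorch sketch of the fusion idea summarized in the abstract: several class-specific SR models process the same low-resolution input, and a post-processing network learns to fuse their outputs. The module names, the number of class-specific models, the layer configurations, and the fusion architecture are illustrative assumptions for exposition only, not the networks used in the paper.

# Minimal sketch of the MMSR idea from the abstract (assumptions: module
# names, number of classes, and fusion architecture are illustrative only).
import torch
import torch.nn as nn

class SimpleSRNet(nn.Module):
    """Stand-in for one class-specific SR model (e.g., trained on text or texture)."""
    def __init__(self, scale: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # upsample by the scale factor
        )

    def forward(self, lr):
        return self.body(lr)

class FusionNet(nn.Module):
    """Post-processing network that learns to fuse the K candidate SR outputs."""
    def __init__(self, num_models: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * num_models, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, candidates):
        # candidates: list of K tensors, each of shape (N, 3, H, W)
        return self.fuse(torch.cat(candidates, dim=1))

class MMSR(nn.Module):
    """Runs every class-specific model on the input and fuses their outputs."""
    def __init__(self, sr_models):
        super().__init__()
        self.sr_models = nn.ModuleList(sr_models)
        self.fusion = FusionNet(len(sr_models))

    def forward(self, lr):
        candidates = [m(lr) for m in self.sr_models]
        return self.fusion(candidates)

if __name__ == "__main__":
    # e.g., models specialized for text, texture, and generic content
    model = MMSR([SimpleSRNet(), SimpleSRNet(), SimpleSRNet()])
    lr = torch.randn(1, 3, 32, 32)   # low-resolution input patch
    sr = model(lr)                   # fused 4x super-resolved output
    print(sr.shape)                  # torch.Size([1, 3, 128, 128])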
dc.description.indexedbyScopus
dc.description.indexedbyWoS
dc.description.openaccessYES
dc.description.publisherscopeInternational
dc.description.sponsorshipThis work was supported in part by TUBITAK 2247-A Award No. 120C156 and a grant from Turkish Is Bank to KUIS AILab. AMT also acknowledges support from the Turkish Academy of Sciences (TUBA).
dc.identifier.doi10.1109/ICIP46576.2022.9897278
dc.identifier.isbn978-1-6654-9620-9
dc.identifier.issn1522-4880
dc.identifier.linkhttps://www.scopus.com/inward/record.uri?eid=2-s2.0-85146684704&doi=10.1109%2fICIP46576.2022.9897278&partnerID=40&md5=8ca4fa12b17812a8835865ca6d634adc
dc.identifier.scopus2-s2.0-85146684704
dc.identifier.urihttp://dx.doi.org/10.1109/ICIP46576.2022.9897278
dc.identifier.urihttps://hdl.handle.net/20.500.14288/6884
dc.identifier.wos1058109502181
dc.keywordsClass-specific image prior
dc.keywordsImage super-resolution
dc.keywordsMultiple learned models
dc.keywordsZero-shot learning
dc.keywordsComputer vision
dc.keywordsImage texture
dc.keywordsOptical resolving power
dc.keywordsTextures
dc.keywordsImage priors
dc.keywordsImage super resolutions
dc.keywordsMultiple learned model
dc.keywordsMultiple-modeling
dc.keywordsPerformance
dc.keywordsState of the art
dc.keywordsSuper-resolution models
dc.keywordsTest sets
dc.keywordsTraining sets
dc.keywordsZero-shot learning
dc.languageEnglish
dc.publisherThe Institute of Electrical and Electronics Engineers Signal Processing Society
dc.sourceProceedings - International Conference on Image Processing, ICIP
dc.subjectConvolutional neural network
dc.subjectHallucinations
dc.subjectSparse representation
dc.titleMMSR: Multiple-model learned image super-resolution benefiting from class-specific image priors
dc.typeConference proceeding
dspace.entity.typePublication
local.contributor.authorid0000-0002-5078-4590
local.contributor.authorid0000-0003-1465-8121
local.contributor.authoridN/A
local.contributor.kuauthorDoğan, Zafer
local.contributor.kuauthorTekalp, Ahmet Murat
local.contributor.kuauthorKorkmaz, Cansu
relation.isOrgUnitOfPublication21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication.latestForDiscovery21598063-a7c5-420d-91ba-0cc9b2db0ea0
