Publication:
Perception-distortion trade-off in the SR space spanned by flow models

Co-Authors

Erdem, Erkut

Publication Date

2022

Language

English

Abstract

Flow-based generative super-resolution (SR) models learn to produce a diverse set of feasible SR solutions, called the SR space. The diversity of SR solutions increases with the temperature (τ) of the latent variables, which introduces random variations of texture among the sampled solutions, leading to visual artifacts and low fidelity. In this paper, we present a simple but effective image ensembling/fusion approach to obtain a single SR image that eliminates random artifacts and improves fidelity without significantly compromising perceptual quality. We achieve this by leveraging the diverse set of feasible, photo-realistic solutions in the SR space spanned by flow models. We propose different image ensembling and fusion strategies that offer multiple paths for moving sampled solutions in the SR space toward more desirable destinations in the perception-distortion plane, in a controllable manner that depends on the fidelity vs. perceptual quality requirements of the task at hand. Experimental results demonstrate that our image ensembling/fusion strategy achieves a more favorable perception-distortion trade-off than the sample SR images produced by flow models and adversarially trained models, in terms of both quantitative metrics and visual quality. © 2022 IEEE.
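
As a rough illustration of the ensembling idea described in the abstract (not the authors' exact method), the Python sketch below samples several SR solutions from a flow-based SR model at a fixed temperature τ and fuses them by pixel-wise averaging. The sr_model.sample interface is a hypothetical stand-in for an SRFlow-style sampler; any such model could be wrapped to match it.

    import numpy as np

    def fuse_sr_samples(sr_model, lr_image, num_samples=8, tau=0.8):
        """Sample multiple feasible SR solutions from a flow-based SR model
        at temperature tau and fuse them into a single SR image.

        sr_model.sample(lr_image, temperature) is a hypothetical interface
        assumed to return one HxWxC super-resolved image as a float array
        in [0, 1].
        """
        # Draw diverse SR samples; higher tau -> more texture variation.
        samples = np.stack(
            [sr_model.sample(lr_image, temperature=tau) for _ in range(num_samples)],
            axis=0,
        )

        # Pixel-wise mean: random, sample-specific texture variations cancel
        # out, which lowers distortion (e.g., raises PSNR) at some cost to
        # perceptual sharpness -- one point on the perception-distortion plane.
        fused = samples.mean(axis=0)
        return np.clip(fused, 0.0, 1.0)

Varying num_samples and τ, or fusing a weighted subset of samples instead of taking the plain mean, moves the fused result to different points in the perception-distortion plane, which is the controllability the abstract refers to.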

Source:

Proceedings - International Conference on Image Processing, ICIP

Publisher:

The Institute of Electrical and Electronics Engineers Signal Processing Society

Keywords:

Convolutional neural network, Hallucinations, Sparse representation
