Perception-distortion trade-off in the SR space spanned by flow models
Publication Date
2022
Institution Author
Tekalp, Ahmet Murat
Korkmaz, Cansu
Doğan, Zafer
Erdem, Aykut
Co-Authors
Erdem, Erkut
Publisher
IEEE
Type
Conference proceeding
Abstract
Flow-based generative super-resolution (SR) models learn to produce a diverse set of feasible SR solutions, called the SR space. The diversity of SR solutions increases with the temperature (t) of the latent variables, which introduces random texture variations among sample solutions, resulting in visual artifacts and low fidelity. In this paper, we present a simple but effective image ensembling/fusion approach to obtain a single SR image that eliminates random artifacts and improves fidelity without significantly compromising perceptual quality. We achieve this by exploiting the diverse set of feasible photorealistic solutions in the SR space spanned by flow models. We propose different image ensembling and fusion strategies, which offer multiple paths to move sample solutions in the SR space toward more desirable destinations in the perception-distortion plane in a controllable manner, depending on the fidelity vs. perceptual-quality requirements of the task at hand. Experimental results demonstrate that our image ensembling/fusion strategy achieves a more favorable perception-distortion trade-off than individual sample SR images produced by flow models and adversarially trained models, in terms of both quantitative metrics and visual quality.
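The abstract describes fusing several feasible SR samples drawn from a flow model into a single image. As an illustrative sketch only (not the paper's specific ensembling/fusion strategies), the snippet below shows the simplest such fusion, a pixel-wise weighted average of samples; the `sample_sr` helper, its parameters, and the sample count are hypothetical placeholders for a flow-based SR sampler.

```python
import numpy as np

def fuse_sr_samples(samples: list[np.ndarray], weights=None) -> np.ndarray:
    """Pixel-wise weighted average of SR samples from a flow model.

    `samples` is a list of HxWxC images (feasible SR solutions sampled at
    some temperature t). Averaging suppresses the random texture variations
    that differ between samples while keeping content they agree on, which
    tends to improve fidelity (lower distortion) at some perceptual cost.
    """
    stack = np.stack(samples, axis=0).astype(np.float64)   # (N, H, W, C)
    if weights is None:
        weights = np.full(len(samples), 1.0 / len(samples))
    weights = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1, 1)
    fused = (weights * stack).sum(axis=0)                   # weighted mean
    return np.clip(fused, 0.0, 255.0)

# Hypothetical usage: `sample_sr` stands in for a flow-based SR sampler
# (e.g., decoding latents drawn at temperature t); it is not part of any
# specific library API.
# samples = [sample_sr(lr_image, temperature=0.8) for _ in range(8)]
# fused = fuse_sr_samples(samples)
```

Unequal weights give one controllable knob for trading fidelity against perceptual quality: weighting a single perceptually strong sample more heavily stays closer to that sample, while uniform weights push the result toward the lower-distortion mean.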
Subject
Electrical and electronics engineering, Computer engineering