Publication:
Video frame prediction via deep learning


School / College / Institute

Organizational Unit

Program

KU Authors

Co-Authors

Publication Date

Language

Embargo Status

Journal Title

Journal ISSN

Volume Title

Alternative Title

Abstract

This paper provides new results over our previous work presented at ICIP 2019 on the performance of learned frame prediction architectures and associated training methods. More specifically, we show that using an end-to-end residual connection in the fully convolutional neural network (FCNN) provides improved performance. In order to provide comparative results, we trained a residual FCNN, a convolutional RNN (CRNN), and a convolutional long short-term memory (CLSTM) network for next-frame prediction using the mean square loss. We performed both stateless and stateful training for the recurrent networks. Experimental results show that the residual FCNN architecture performs the best in terms of peak signal-to-noise ratio (PSNR) at the expense of higher training and test (inference) computational complexity, while the CRNN can be stably and efficiently trained using the stateful truncated backpropagation through time procedure and requires an order of magnitude less inference runtime to achieve acceptable performance in near real time.
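The end-to-end residual connection mentioned in the abstract can be illustrated with a minimal sketch; this is not the authors' implementation, and the layer widths, depth, grayscale input, and 4-frame input window below are illustrative assumptions. The idea shown is that the network predicts a correction that is added to the most recent input frame rather than synthesizing the next frame from scratch, and the whole model is trained with the mean square loss.

    # Minimal sketch (assumed PyTorch setup, not the authors' code) of a fully
    # convolutional next-frame predictor with an end-to-end residual connection.
    import torch
    import torch.nn as nn

    class ResidualFCNN(nn.Module):
        def __init__(self, in_frames=4, hidden=64, layers=6):
            super().__init__()
            blocks = [nn.Conv2d(in_frames, hidden, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(layers - 2):
                blocks += [nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True)]
            blocks += [nn.Conv2d(hidden, 1, 3, padding=1)]
            self.body = nn.Sequential(*blocks)

        def forward(self, frames):  # frames: (B, in_frames, H, W), grayscale
            # End-to-end residual connection: add the predicted change to the
            # most recent input frame.
            return frames[:, -1:] + self.body(frames)

    model = ResidualFCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    frames = torch.rand(2, 4, 64, 64)   # dummy clip: 4 past frames
    target = torch.rand(2, 1, 64, 64)   # ground-truth next frame
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(frames), target)
    loss.backward()
    opt.step()

For the recurrent models, stateful truncated backpropagation through time would correspond to carrying the hidden state across consecutive training chunks of a sequence while detaching it from the computation graph between parameter updates, whereas stateless training resets the hidden state at every chunk.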

Source

Publisher

IEEE

Subject

Civil engineering, Electrical and electronics engineering, Telecommunication

Citation

Has Part

Source

2020 28th Signal Processing and Communications Applications Conference (SIU)

Book Series Title

Edition

DOI


Link

Rights

Copyrights Note

Endorsement

Review

Supplemented By

Referenced By
