Publication:
Video frame prediction via deep learning

dc.contributor.departmentN/A
dc.contributor.departmentDepartment of Electrical and Electronics Engineering
dc.contributor.kuauthorYılmaz, Mustafa Akın
dc.contributor.kuauthorTekalp, Ahmet Murat
dc.contributor.kuprofilePhD Student
dc.contributor.kuprofileFaculty Member
dc.contributor.otherDepartment of Electrical and Electronics Engineering
dc.contributor.schoolcollegeinstituteGraduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstituteCollege of Engineering
dc.contributor.yokidN/A
dc.contributor.yokid26207
dc.date.accessioned2024-11-09T23:13:11Z
dc.date.issued2020
dc.description.abstractThis paper provides new results over our previous work presented in ICIP 2019 on the performance of learned frame prediction architectures and associated training methods. More specifically, we show that using an end-to-end residual connection in the fully convolutional neural network (FCNN) provides improved performance. In order to provide comparative results, we trained a residual FCNN, a convolutional RNN (CRNN), and a convolutional long short-term memory (CLSTM) network for next frame prediction using the mean square loss. We performed both stateless and stateful training for recurrent networks. Experimental results show that the residual FCNN architecture performs the best in terms of peak signal to noise ratio (PSNR) at the expense of higher training and test (inference) computational complexity. The CRNN can be stably and efficiently trained using the stateful truncated backpropagation through time procedure, and requires an order of magnitude less inference runtime to achieve an acceptable performance in near real-time.
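The end-to-end residual connection described in the abstract can be illustrated with a minimal sketch: instead of regressing the next frame directly, the network predicts the residual between the next frame and the last observed frame, so the prediction is the last frame plus the network output. The NumPy code below is a hypothetical illustration (the `conv2d` stand-in replaces the actual FCNN body; kernel sizes and frame shapes are assumptions, not the paper's configuration).

```python
# Sketch (NumPy, hypothetical sizes) of an end-to-end residual connection
# for learned frame prediction: x_hat[t+1] = x[t] + f(x[t]),
# where f is a stand-in for the FCNN residual branch.
import numpy as np

def conv2d(x, k):
    """'Same' 2-D cross-correlation with zero padding (toy FCNN body)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def predict_next_frame(last_frame, kernel):
    residual = conv2d(last_frame, kernel)  # network predicts the residual
    return last_frame + residual           # end-to-end residual connection

rng = np.random.default_rng(0)
frame = rng.random((8, 8))

# With an all-zero residual branch, the prediction degenerates to frame
# repetition -- the identity shortcut the residual connection provides,
# which makes the learning target the (typically small) frame difference.
pred = predict_next_frame(frame, np.zeros((3, 3)))
assert np.allclose(pred, frame)

# Mean square loss between the prediction and the true next frame,
# matching the training objective named in the abstract.
next_frame = rng.random((8, 8))
mse = np.mean((pred - next_frame) ** 2)
```

The design intuition is that consecutive video frames are highly correlated, so learning the residual is an easier regression target than learning the full frame from scratch.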
dc.description.indexedbyWoS
dc.description.openaccessNO
dc.description.publisherscopeInternational
dc.description.sponsoredbyTubitakEuTÜBİTAK
dc.description.sponsorshipTÜBİTAK project [217E033]
dc.description.sponsorshipTurkish Academy of Sciences (TÜBA) This work was supported by TÜBİTAK project 217E033. A. Murat Tekalp also acknowledges support from the Turkish Academy of Sciences (TÜBA).
dc.identifier.doiN/A
dc.identifier.isbn978-1-7281-7206-4
dc.identifier.issn2165-0608
dc.identifier.quartileN/A
dc.identifier.urihttps://hdl.handle.net/20.500.14288/9946
dc.identifier.wos653136100021
dc.keywordsFrame prediction
dc.keywordsDeep learning
dc.keywordsRecurrent network architectures
dc.keywordsStateful training
dc.keywordsConvolutional network architectures
dc.languageTurkish
dc.publisherIEEE
dc.source2020 28th Signal Processing and Communications Applications Conference (Siu)
dc.subjectCivil engineering
dc.subjectElectrical electronics engineering
dc.subjectTelecommunication
dc.titleVideo frame prediction via deep learning
dc.typeConference proceeding
dspace.entity.typePublication
local.contributor.authorid0000-0002-0795-8970
local.contributor.authorid0000-0003-1465-8121
local.contributor.kuauthorYılmaz, Mustafa Akın
local.contributor.kuauthorTekalp, Ahmet Murat
relation.isOrgUnitOfPublication21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication.latestForDiscovery21598063-a7c5-420d-91ba-0cc9b2db0ea0

Files