Publication:
Leveraging frequency-based salient spatial sound localization to improve 360° video saliency prediction

Co-Authors

Çökelek, Mert
İmamoğlu, Nevrez
Özçınar, Çağrı

Publication Date

2021

Language

English

Type

Conference proceeding

Abstract

Virtual and augmented reality (VR/AR) systems have gained dramatically in popularity, with application areas such as gaming, social media, and communication. It is therefore crucial to know how to efficiently process, store, and deliver 360° videos to end users. Toward this aim, researchers have been developing deep neural network models for 360° multimedia processing and computer vision. In this line of work, an important research direction is to build models that can learn and predict observers' attention on 360° videos, computationally producing so-called saliency maps. Although a few saliency models have been proposed for this purpose, they generally consider only visual cues in video frames, neglecting audio cues from sound sources. In this study, an unsupervised frequency-based saliency model is presented for predicting the strength and location of saliency in spatial audio. The predicted salient audio cues are then used as an audio bias on the video saliency predictions of state-of-the-art models. Our experiments yield promising results and show that integrating the proposed spatial audio bias into existing video saliency models consistently improves their performance.
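The abstract does not detail the audio localization or the fusion step. Purely as an illustration of the general idea, and not the authors' implementation, the NumPy sketch below estimates a direction-of-arrival energy map from first-order ambisonic (B-format) audio via the STFT and the acoustic intensity vector, then blends it into a visual saliency map as an additive bias. The function names, the grid resolution, and the blending weight alpha are hypothetical choices.

import numpy as np


def ambisonic_energy_map(w, x, y, z, n_fft=1024, hop=512, height=32, width=64):
    # Short-time Fourier transform of one channel (Hann window, real FFT).
    def stft(sig):
        frames = np.lib.stride_tricks.sliding_window_view(sig, n_fft)[::hop]
        return np.fft.rfft(frames * np.hanning(n_fft), axis=-1)

    W, X, Y, Z = (stft(c) for c in (w, x, y, z))

    # Pseudo acoustic-intensity components per time-frequency bin; their
    # direction approximates the direction of arrival of the dominant source.
    ix = np.real(np.conj(W) * X)
    iy = np.real(np.conj(W) * Y)
    iz = np.real(np.conj(W) * Z)
    energy = np.abs(W) ** 2

    azimuth = np.arctan2(iy, ix)                           # range [-pi, pi]
    norm = np.sqrt(ix ** 2 + iy ** 2 + iz ** 2) + 1e-12
    elevation = np.arcsin(np.clip(iz / norm, -1.0, 1.0))   # [-pi/2, pi/2]

    # Accumulate spectral energy on an equirectangular grid.
    col = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    bias = np.zeros((height, width))
    np.add.at(bias, (row.ravel(), col.ravel()), energy.ravel())
    return bias / (bias.max() + 1e-12)


def apply_audio_bias(visual_saliency, audio_bias, alpha=0.3):
    # Convex blend of a visual saliency map with the audio energy map;
    # both maps are assumed to share the same equirectangular resolution.
    blended = (1.0 - alpha) * visual_saliency + alpha * audio_bias
    return blended / (blended.max() + 1e-12)

The convex blend keeps the visual prediction dominant while boosting regions aligned with strong sound sources; the paper's actual frequency-based localization and fusion scheme may differ from this sketch.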

Source:

Proceedings of MVA 2021 - 17th International Conference on Machine Vision Applications

Publisher:

Institute of Electrical and Electronics Engineers (IEEE)

Keywords:

Computer science, Engineering
