Publication:
Use of Affective Visual Information for Summarization of Human-Centric Videos

dc.contributor.coauthorKöprü, Berkay
dc.contributor.departmentDepartment of Computer Engineering
dc.contributor.kuauthorErzin, Engin
dc.contributor.kuprofileFaculty Member
dc.contributor.otherDepartment of Computer Engineering
dc.contributor.schoolcollegeinstituteCollege of Engineering
dc.contributor.yokid34503
dc.date.accessioned2024-11-09T23:25:01Z
dc.date.issued2022
dc.description.abstractThe increasing volume of user-generated human-centric video content and its applications, such as video retrieval and browsing, require compact representations addressed by the video summarization literature. Current supervised studies formulate video summarization as a sequence-to-sequence learning problem, and the existing solutions often neglect the surge of the human-centric view, which inherently contains affective content. In this study, we investigate the affective-information enriched supervised video summarization task for human-centric videos. First, we train a visual input-driven state-of-the-art continuous emotion recognition model (CER-NET) on the RECOLA dataset to estimate activation and valence attributes. Then, we integrate the estimated emotional attributes and their high-level embeddings from the CER-NET with the visual information to define the proposed affective video summarization (AVSUM) architectures. In addition, we investigate the use of attention to improve the AVSUM architectures and propose two new architectures based on temporal attention (TA-AVSUM) and spatial attention (SA-AVSUM). We conduct video summarization experiments on the TvSum and COGNIMUSE datasets. The proposed temporal attention-based TA-AVSUM architecture attains competitive video summarization performances with strong improvements for the human-centric videos compared to the state-of-the-art in terms of F-score, self-defined face recall, and rank correlation metrics.
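The abstract's pipeline (per-frame visual features, estimated activation/valence attributes, fusion, then temporal attention to score and select frames) can be illustrated with a minimal sketch. The shapes, the concatenation-based fusion, and the softmax attention below are illustrative assumptions, not the paper's actual AVSUM or TA-AVSUM model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 8, 16                      # frames, visual feature dimension (assumed)

visual = rng.normal(size=(T, D))  # per-frame visual features (stand-in)
affect = rng.normal(size=(T, 2))  # per-frame (activation, valence) estimates

# Fuse affective attributes with visual features by concatenation.
fused = np.concatenate([visual, affect], axis=1)   # shape (T, D + 2)

# Toy temporal attention: a learned query vector would replace this random one.
query = rng.normal(size=(D + 2,))
logits = fused @ query
weights = np.exp(logits - logits.max())
weights /= weights.sum()          # softmax over time

# Summary: keep the k frames with the highest attention weights, in time order.
k = 3
summary_idx = np.sort(np.argsort(weights)[-k:])
```

The selected `summary_idx` frames form the summary; in a real model the query and fusion would be trained end-to-end against reference summaries.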
dc.description.indexedbyScopus
dc.description.indexedbyWoS
dc.description.openaccessYES
dc.description.publisherscopeInternational
dc.identifier.doi10.1109/TAFFC.2022.3222882
dc.identifier.issn1949-3045
dc.identifier.linkhttps://www.scopus.com/inward/record.uri?eid=2-s2.0-85142815204&doi=10.1109%2fTAFFC.2022.3222882&partnerID=40&md5=689b8b9552de8c666e1c75fe46476e79
dc.identifier.scopus2-s2.0-85142815204
dc.identifier.urihttps://dx.doi.org/10.1109/TAFFC.2022.3222882
dc.identifier.urihttps://hdl.handle.net/20.500.14288/11299
dc.identifier.wos1124163900041
dc.keywordsAffective computing
dc.keywordsContinuous emotion recognition
dc.keywordsNeural networks
dc.keywordsVideo summarization
dc.languageEnglish
dc.sourceIEEE Transactions on Affective Computing
dc.subjectHuman-computer interaction
dc.subjectUser interfaces (Computer systems)
dc.subjectArtificial intelligence
dc.subjectComputer networks
dc.subjectVideo recording
dc.subjectDigital video
dc.titleUse of Affective Visual Information for Summarization of Human-Centric Videos
dc.typeJournal Article
dspace.entity.typePublication
local.contributor.authorid0000-0002-2715-2368
local.contributor.kuauthorErzin, Engin
relation.isOrgUnitOfPublication89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery89352e43-bf09-4ef4-82f6-6f9d0174ebae

Files