Publication:
ARTMV: A Cross-Modal Art Music Video Dataset for Proprioceptive Valence Perception

Departments

School / College / Institute

Program

KU Authors

Co-Authors

Arslantürk, Sitare (60121960200)
Erzin, Engin (6603621358)

Publication Date

Language

Embargo Status

No

Journal Title

Journal ISSN

Volume Title

Alternative Title

Abstract

We present a novel approach to affective multimedia content analysis that studies how human keypoints contribute to the perceived emotion of art music. Traditional music information retrieval methodologies have extensively exploited the cross-modal bias between audio and visual modalities to assess affective states. For art music videos, however, the visual modality is limited to orchestra footage or static images and lacks the dynamic visual elements commonly found in videos of other music genres. In this paper, we introduce ARTMV, an art music video dataset consisting of music tracks, related dance videos, and perceived static categorical valence labels. To overcome the restrictive visual content, our proposed network competitively replaces the visual modality of the videos with proprioceptive cues of the performers, derived from dance performances of the corresponding art music.
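
Below is a minimal, hypothetical PyTorch sketch of the kind of audio-plus-keypoint fusion described in the abstract. It is not the architecture from the paper: the keypoint layout (17 joints), feature dimensions, GRU encoders, and late-fusion head are illustrative assumptions only.

# Illustrative sketch (not the paper's model): a late-fusion valence classifier
# that pairs an audio feature sequence with a 2D pose-keypoint sequence in place
# of raw video frames. Joint count, feature sizes, and encoders are assumptions.
import torch
import torch.nn as nn


class AudioPoseValenceNet(nn.Module):
    def __init__(self, n_audio_feats=64, n_joints=17, hidden=128, n_classes=2):
        super().__init__()
        # Encode the audio feature sequence (e.g., log-mel frames).
        self.audio_enc = nn.GRU(n_audio_feats, hidden, batch_first=True)
        # Encode the keypoint sequence: (x, y) per joint, flattened per frame.
        self.pose_enc = nn.GRU(n_joints * 2, hidden, batch_first=True)
        # Late fusion of the two modality embeddings, then a valence head.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )

    def forward(self, audio, pose):
        # audio: (batch, T_a, n_audio_feats); pose: (batch, T_p, n_joints * 2)
        _, h_audio = self.audio_enc(audio)   # h_audio: (1, batch, hidden)
        _, h_pose = self.pose_enc(pose)      # h_pose:  (1, batch, hidden)
        fused = torch.cat([h_audio[-1], h_pose[-1]], dim=-1)
        return self.head(fused)              # logits over valence classes


if __name__ == "__main__":
    model = AudioPoseValenceNet()
    audio = torch.randn(4, 300, 64)          # 4 clips, 300 audio frames each
    pose = torch.randn(4, 150, 17 * 2)       # 4 clips, 150 pose frames each
    print(model(audio, pose).shape)          # torch.Size([4, 2])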

Publisher

Institute of Electrical and Electronics Engineers Inc.

Subject

Citation

Has Part

Source

2025 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2025

Book Series Title

Edition

DOI

10.1109/ICMEW68306.2025.11152129

Link

Rights

CC BY-NC-ND (Attribution-NonCommercial-NoDerivs)

Copyrights Note

Creative Commons license

Except where otherwise noted, this item's license is described as CC BY-NC-ND (Attribution-NonCommercial-NoDerivs)
