Publication:
ARTMV: a cross-modal art music video dataset for proprioceptive valence perception

dc.conference.date: JUN 30 - JUL 04, 2025
dc.conference.location: Nantes
dc.conference.organizer: Institute of Electrical and Electronics Engineers Inc.
dc.contributor.department: KUIS AI (Koç University & İş Bank Artificial Intelligence Center)
dc.contributor.kuauthor: Erzin, Engin
dc.contributor.kuauthor: Arslantürk, Sitare
dc.contributor.schoolcollegeinstitute: Research Center
dc.date.accessioned: 2025-12-31T08:19:02Z
dc.date.available: 2025-12-31
dc.date.issued: 2025
dc.description.abstract: We present a novel approach for affective multimedia content analysis that studies how human keypoints contribute to the perceived emotion of art music. Traditional music information retrieval methodologies have extensively exploited the cross-modal bias between audio and visual modalities to assess affective states. In the case of art music videos, however, the visual modality is limited to orchestra footage or static images, lacking the dynamic visual elements commonly found in videos of other music genres. In this paper, we introduce ARTMV, an art music video dataset consisting of perceived static categorical valence labels, music tracks, and related dance videos. To overcome this restrictive visual content, our proposed network competitively replaces the visual modality of the videos with the proprioception of the performers, derived from dance performances of the corresponding art music.
dc.description.fulltext: Yes
dc.description.harvestedfrom: Manual
dc.description.indexedby: Scopus
dc.description.indexedby: WOS
dc.description.publisherscope: International
dc.description.readpublish: N/A
dc.description.sponsoredbyTubitakEu: N/A
dc.identifier.doi: 10.1109/ICMEW68306.2025.11152129
dc.identifier.embargo: No
dc.identifier.isbn: 9798331587437
dc.identifier.issn: 2330-7927
dc.identifier.quartile: N/A
dc.identifier.scopus: 2-s2.0-105017573267
dc.identifier.uri: https://doi.org/10.1109/ICMEW68306.2025.11152129
dc.identifier.uri: https://hdl.handle.net/20.500.14288/31428
dc.identifier.wos: 001588623300037
dc.keywords: Affective computing
dc.keywords: Dataset
dc.keywords: Embodied music cognition
dc.keywords: Valence estimation
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.affiliation: Koç University
dc.relation.collection: Koç University Institutional Repository
dc.relation.ispartof: 2025 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2025
dc.relation.openaccess: Yes
dc.rights: CC BY-NC-ND (Attribution-NonCommercial-NoDerivs)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Proprioceptive
dc.subject: Multimedia
dc.subject: Visual modality
dc.title: ARTMV: a cross-modal art music video dataset for proprioceptive valence perception
dc.type: Conference Proceeding
dspace.entity.type: Publication
person.familyName: Erzin
person.familyName: Arslantürk
person.givenName: Engin
person.givenName: Sitare
relation.isOrgUnitOfPublication: 77d67233-829b-4c3a-a28f-bd97ab5c12c7
relation.isOrgUnitOfPublication.latestForDiscovery: 77d67233-829b-4c3a-a28f-bd97ab5c12c7
relation.isParentOrgUnitOfPublication: d437580f-9309-4ecb-864a-4af58309d287
relation.isParentOrgUnitOfPublication.latestForDiscovery: d437580f-9309-4ecb-864a-4af58309d287