Publication: Self-supervised object-centric learning for videos
KU Authors
Co-Authors
Xie, Weidi
Publication Date
2023
Language
en
Type
Conference proceeding
Abstract
Unsupervised multi-object segmentation has shown impressive results on images by utilizing powerful semantics learned from self-supervised pretraining. An additional modality such as depth or motion is often used to facilitate the segmentation in video sequences. However, the performance improvements observed in synthetic sequences, which rely on the robustness of an additional cue, do not translate to more challenging real-world scenarios. In this paper, we propose the first fully unsupervised method for segmenting multiple objects in real-world sequences. Our object-centric learning framework spatially binds objects to slots on each frame and then relates these slots across frames. From these temporally-aware slots, the training objective is to reconstruct the middle frame in a high-level semantic feature space. We propose a masking strategy by dropping a significant portion of tokens in the feature space for efficiency and regularization. Additionally, we address over-clustering by merging slots based on similarity. Our method can successfully segment multiple instances of complex and high-variety classes in YouTube videos.
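To make the slot-merging step described in the abstract more concrete, below is a minimal Python/NumPy sketch that greedily groups slot vectors whose pairwise cosine similarity exceeds a threshold and averages each group. The function name, the threshold value, and the greedy averaging rule are illustrative assumptions for this sketch, not the paper's exact procedure.

```python
import numpy as np

def merge_similar_slots(slots, threshold=0.9):
    # slots: (num_slots, dim) array of slot vectors for one frame.
    # Greedily group slots whose cosine similarity exceeds `threshold`
    # and replace each group by its mean. The threshold and the greedy
    # rule are illustrative assumptions, not the paper's exact method.
    normed = slots / np.linalg.norm(slots, axis=1, keepdims=True)
    sim = normed @ normed.T                      # pairwise cosine similarity
    used = np.zeros(len(slots), dtype=bool)
    merged = []
    for i in range(len(slots)):
        if used[i]:
            continue
        group = np.where((sim[i] >= threshold) & ~used)[0]
        used[group] = True
        merged.append(slots[group].mean(axis=0)) # average the grouped slots
    return np.stack(merged)

# Example: 6 slots of dimension 64, two of which are near-duplicates,
# so the merged output typically has one fewer slot.
slots = np.random.randn(6, 64)
slots[1] = slots[0] + 0.01 * np.random.randn(64)
print(merge_similar_slots(slots).shape)          # e.g. (5, 64)
```

Reducing redundant slots this way addresses the over-clustering issue mentioned in the abstract, where several slots would otherwise bind to parts of the same object.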
Source:
Advances in Neural Information Processing Systems
Publisher:
Neural Information Processing Systems Foundation
Subject
Computer science, artificial intelligence; Computer science, information systems