Publication: Self-supervised monocular scene decomposition and depth estimation
Publication Date
2021
Language
English
Type
Conference proceeding
Abstract
Self-supervised monocular depth estimation approaches either ignore independently moving objects in the scene or need a separate segmentation step to identify them. We propose MonoDepthSeg to jointly estimate depth and segment moving objects from monocular video without using any ground-truth labels. We decompose the scene into a fixed number of components, where each component corresponds to a region of the image with its own transformation matrix representing its motion. We estimate both the mask and the motion of each component efficiently with a shared encoder. We evaluate our method on three driving datasets and show that our model clearly improves depth estimation while decomposing the scene into separately moving components.
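The decomposition described above can be sketched numerically: each of a fixed number of components gets a soft per-pixel mask and a rigid transformation, and the per-pixel motion is the mask-weighted blend of the component transforms applied to back-projected 3D points. The component count, shapes, and transforms below are illustrative assumptions, not the paper's actual architecture or values.

```python
import numpy as np

K = 3       # hypothetical number of scene components
H, W = 4, 4  # toy image resolution

rng = np.random.default_rng(0)

# Per-pixel soft assignment masks: softmax over components, so each
# pixel's weights sum to 1 across the K components.
logits = rng.normal(size=(K, H, W))
masks = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

# One rigid transform (4x4 homogeneous matrix) per component.
# Here, simple translations stand in for learned SE(3) motions.
transforms = np.stack([np.eye(4) for _ in range(K)])
transforms[0, 0, 3] = 1.0   # component 0 translates along +x
transforms[1, 1, 3] = -0.5  # component 1 translates along -y

# Back-projected 3D scene points in homogeneous coordinates, one per pixel.
points = np.ones((H, W, 4))
points[..., :3] = rng.normal(size=(H, W, 3))

# Apply every component's transform to every point: (K, H, W, 4).
per_component = np.einsum('kij,hwj->khwi', transforms, points)

# Blend the K motion hypotheses per pixel using the soft masks: (H, W, 4).
warped = np.einsum('khw,khwi->hwi', masks, per_component)
```

Because the masks sum to one at every pixel and each transform is affine, the blended points remain valid homogeneous coordinates; regions assigned to different components move according to different rigid motions, which is the core idea of the scene decomposition.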
Source
2021 International Conference on 3D Vision (3DV)
Publisher
IEEE Computer Society
Subject
Computer science, Engineering, Imaging science, Photographic technology