Publication:
Self-supervised monocular scene decomposition and depth estimation


Embargo Status

NO


Abstract

Self-supervised monocular depth estimation approaches either ignore independently moving objects in the scene or require a separate segmentation step to identify them. We propose MonoDepthSeg to jointly estimate depth and segment moving objects from monocular video without using any ground-truth labels. We decompose the scene into a fixed number of components, where each component corresponds to a region of the image with its own transformation matrix representing its motion. We estimate both the mask and the motion of each component efficiently with a shared encoder. We evaluate our method on three driving datasets and show that our model clearly improves depth estimation while decomposing the scene into separately moving components.
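The decomposition described in the abstract can be pictured as blending K per-component rigid motions into a single per-pixel motion: each 3D point (back-projected from the estimated depth) is moved by every component's transformation matrix, and the component masks weight the results. The sketch below is an illustrative numpy rendering of that idea under assumed shapes (soft masks summing to 1 over components, SE(3) matrices per component); the function name and interfaces are hypothetical, not taken from the paper's implementation.

```python
import numpy as np

def compose_scene_motion(points, masks, transforms):
    """Blend K per-component rigid motions into one motion per point.

    points:     (N, 3) 3D points back-projected from estimated depth
    masks:      (K, N) soft assignment of each point to a component
                (each column sums to 1, e.g. a softmax over K mask maps)
    transforms: (K, 4, 4) one rigid-motion (SE(3)) matrix per component

    Returns the (N, 3) points after applying the mask-weighted motions.
    """
    # Homogeneous coordinates: (N, 4)
    pts_h = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
    # Apply each component's transform to every point: (K, N, 4)
    moved = np.einsum('kij,nj->kni', transforms, pts_h)
    # Soft-blend the K candidate motions with the masks: (N, 4)
    blended = np.einsum('kn,kni->ni', masks, moved)
    # Drop the homogeneous coordinate
    return blended[:, :3]
```

With hard (one-hot) masks this reduces to moving each region rigidly by its own matrix; soft masks let gradients flow to both the mask and motion estimates, which is what allows the shared encoder to learn both jointly.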

Publisher

IEEE Computer Society

Subject

Computer science, Engineering, Imaging science, Photographic technology

Source

2021 International Conference on 3D Vision (3DV)

DOI

10.1109/3DV53792.2021.00072
