Secrets of Event-Based Optical Flow, Depth and Ego-Motion Estimation by Contrast Maximization

Shintaro Shiba, Yannick Klose, Yoshimitsu Aoki, Guillermo Gallego

Research output: Article › peer-review

31 citations (Scopus)

Abstract

Event cameras respond to scene dynamics and provide signals naturally suited to motion estimation, with advantages such as high dynamic range. The emerging field of event-based vision motivates a revisit of fundamental computer vision tasks related to motion, such as optical flow and depth estimation. However, state-of-the-art event-based optical flow methods tend to originate in frame-based deep-learning methods, which require several adaptations (data conversion, loss function, etc.) because event data have very different properties. We develop a principled method to extend the Contrast Maximization framework to estimate dense optical flow, depth, and ego-motion from events alone. The proposed method sensibly models the space-time properties of event data and tackles the event alignment problem. It designs the objective function to prevent overfitting, deals better with occlusions, and improves convergence using a multi-scale approach. With these key elements, our method ranks first among unsupervised methods on the MVSEC benchmark and is competitive on the DSEC benchmark. Moreover, it allows us to simultaneously estimate dense depth and ego-motion, exposes the limitations of current flow benchmarks, and produces remarkable results when transferred to unsupervised learning settings. Along with the various downstream applications shown, we hope the proposed method becomes a cornerstone of event-based motion-related tasks.
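The core idea behind Contrast Maximization is that, under the correct motion hypothesis, events warped along that motion align into sharp edges, so the variance (contrast) of the resulting image of warped events (IWE) is maximized. The sketch below illustrates this principle for a single constant-flow hypothesis; it is a simplified toy version, not the paper's dense, multi-scale method, and the array layout and helper name are assumptions for illustration.

```python
import numpy as np

def iwe_variance(events, flow, img_shape):
    """Contrast objective: variance of the image of warped events (IWE).

    events: (N, 4) array of (x, y, t, polarity) -- assumed layout.
    flow:   (2,) candidate constant optical flow (vx, vy) in px/s,
            a toy single-motion model rather than a dense flow field.
    """
    x, y, t, _ = events.T
    t_ref = t.min()
    # Warp each event back to the reference time along the candidate flow.
    xw = x - flow[0] * (t - t_ref)
    yw = y - flow[1] * (t - t_ref)
    # Accumulate warped events into an image (nearest-pixel vote).
    H, W = img_shape
    xi = np.clip(np.round(xw).astype(int), 0, W - 1)
    yi = np.clip(np.round(yw).astype(int), 0, H - 1)
    iwe = np.zeros((H, W))
    np.add.at(iwe, (yi, xi), 1.0)
    # Higher variance = sharper IWE = better-aligned events.
    return iwe.var()
```

Evaluating this objective over candidate flows and picking the maximizer recovers the motion that best aligns the events; the paper's contribution lies in how the objective, occlusion handling, and multi-scale optimization are designed for the dense case.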

Original language: English
Pages (from-to): 7742-7759
Number of pages: 18
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 46
Issue number: 12
DOI
Publication status: Published - 2024

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics
