Multiple-view video coding using depth map in projective space

Nina Yorozu, Yuko Uematsu, Hideo Saito

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution

2 Citations (Scopus)


In this paper, a new video coding method using multiple uncalibrated cameras is proposed. We exploit the redundancy between the cameras' viewpoints and compress efficiently based on a depth map. Since our target videos are taken with uncalibrated cameras, the depth map is computed not in the real world but in projective space, a virtual space defined by projective reconstruction of two still images. A position in this space corresponds to a depth value, so full calibration of the cameras is not required. Generating the depth map requires finding correspondences between the cameras, for which we use a "plane sweep" algorithm. Our method needs only a depth map in addition to the original base image and the camera parameters, which contributes to the effectiveness of the compression.
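The plane-sweep step mentioned in the abstract can be illustrated with a minimal sketch: for each hypothesized plane, the second view is warped into the reference view by that plane's homography, and each pixel is assigned the plane with the best photo-consistency. This is a generic illustration, not the authors' implementation; it assumes grayscale images and a precomputed homography per plane (in the paper's setting these would come from the projective reconstruction), and all function names are illustrative.

```python
import numpy as np

def warp_homography(img, H):
    """Warp img into the reference frame via 3x3 homography H
    (nearest-neighbor sampling; pixels mapping outside img stay 0)."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    mapped = H @ pts
    mx = np.round(mapped[0] / mapped[2]).astype(int)
    my = np.round(mapped[1] / mapped[2]).astype(int)
    valid = (mx >= 0) & (mx < w) & (my >= 0) & (my < h)
    out[ys.ravel()[valid], xs.ravel()[valid]] = img[my[valid], mx[valid]]
    return out

def plane_sweep(ref, other, homographies):
    """For each hypothesized plane (one homography per plane), warp `other`
    and score squared-difference photo-consistency against `ref`; return the
    per-pixel index of the best-matching plane (a coarse depth-label map)."""
    costs = np.stack([(ref - warp_homography(other, H)) ** 2
                      for H in homographies])
    return np.argmin(costs, axis=0)
```

For example, if the second view is the reference shifted by two pixels, sweeping over pure-translation homographies recovers that shift as the winning plane index at every pixel whose warped sample stays inside the image.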

Original language: English
Title of host publication: Advances in Visual Computing - 5th International Symposium, ISVC 2009, Proceedings
Number of pages: 11
Edition: PART 2
Publication status: Published - 2009
Event: 5th International Symposium on Advances in Visual Computing, ISVC 2009 - Las Vegas, NV, United States
Duration: 2009 Nov 30 - 2009 Dec 2

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 2
Volume: 5876 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Other: 5th International Symposium on Advances in Visual Computing, ISVC 2009
Country/Territory: United States
City: Las Vegas, NV

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science


