
A modified non-local mean inpainting technique for occlusion filling in depth-image-based rendering

Battisti F.
2011

Abstract

'View plus depth' is an attractive compact representation format for 3D video compression and transmission. It combines 2D video with a depth map sequence, aligned in a per-pixel manner, to represent the moving 3D scene of interest. Any different-perspective view can be synthesized from this representation through Depth-Image Based Rendering (DIBR). However, such rendering is prone to disocclusion errors: regions originally covered by foreground objects become visible in the synthesized view and have to be filled with perceptually meaningful data. In this work, a technique for reducing the perceived artifacts by inpainting the disoccluded areas is proposed. Based on Criminisi's exemplar-based inpainting algorithm, the developed technique recovers the disoccluded areas by using pixels of similar blocks surrounding them. In the original work, a moving window is centered on the boundary between known and unknown parts (the 'target window'). The known pixels are used to select the windows that are most similar to the target one. When this process is completed, the unknown region of the target patch is filled with a weighted combination of pixels from the selected windows. In the proposed scheme, the priority map, which defines the order in which pixels are filled, has been modified to meet the requirements of disocclusion hole filling, and a better non-local mean estimate has been suggested accordingly. Furthermore, the search for similar patches has been extended to previous and following frames of the video under processing, improving both computational efficiency and resulting quality. The effectiveness of the proposed method is demonstrated by objective and subjective tests. © 2011 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE).
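The weighted-combination fill the abstract describes is essentially a non-local-mean estimate: each unknown pixel is reconstructed as a similarity-weighted average of the centers of fully known windows whose content matches the known pixels around the hole. The sketch below is purely illustrative and is not the authors' implementation — the function name, the parameters `patch` and `h`, and the omission of the paper's priority ordering and temporal (multi-frame) search are all simplifying assumptions.

```python
import numpy as np

def nlm_fill(image, mask, patch=3, h=10.0):
    """Illustrative non-local-mean hole filling (not the paper's method).

    image : 2D float array (a single synthesized view).
    mask  : boolean array, True where the pixel is unknown (disoccluded).
    Assumes hole pixels lie at least patch//2 pixels from the border.
    """
    img = image.astype(float).copy()
    r = patch // 2
    H, W = img.shape
    for y, x in np.argwhere(mask):
        tgt = img[y - r:y + r + 1, x - r:x + r + 1]
        known = ~mask[y - r:y + r + 1, x - r:x + r + 1]
        num = den = 0.0
        # Compare against every fully known candidate window in the image;
        # the paper also searches neighboring frames, omitted here.
        for cy in range(r, H - r):
            for cx in range(r, W - r):
                if mask[cy - r:cy + r + 1, cx - r:cx + r + 1].any():
                    continue  # candidate must contain no unknown pixels
                cand = img[cy - r:cy + r + 1, cx - r:cx + r + 1]
                # Patch distance is computed only on the target's known pixels
                d2 = np.mean((tgt[known] - cand[known]) ** 2)
                w = np.exp(-d2 / (h * h))
                num += w * cand[r, r]
                den += w
        if den > 0:
            img[y, x] = num / den
    return img
```

A real exemplar-based scheme would additionally fill pixels in priority order (boundary pixels with strong structure first, as in Criminisi's algorithm) rather than independently as above.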
Proceedings of SPIE - The International Society for Optical Engineering, 2011

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3363383
Citations
  • Scopus: 14
  • Web of Science (ISI): 6