
Multi-view depth video fast coding method

A fast coding technology for depth video, applied in television, electrical components, stereoscopic systems, and the like, which can address problems such as limited texture detail.

Active Publication Date: 2013-10-02
NINGBO UNIV

AI Technical Summary

Problems solved by technology

[0004] To address the high complexity of multi-view video coding, extensive research has been carried out at home and abroad on fast coding methods for multi-view color video, but all of these methods are designed for multi-view color video. Depth video has characteristics different from those of color video, and it is used not for final display but to assist virtual viewpoint rendering, so existing fast coding methods for multi-view color video cannot be applied directly to multi-view depth video coding.


Examples


Detailed Description of the Embodiments

[0040] The present invention is described in further detail below with reference to the accompanying drawings and embodiments.

[0041] The present invention proposes a multi-view depth video fast coding method. Starting from the spatial content correlation and temporal correlation of depth video and from the correlation between the coding modes of adjacent macroblocks, it defines a coding mode complexity factor for each macroblock. According to this coding mode complexity factor, the depth video is divided into a simple mode region and a complex mode region, and different fast coding mode selection methods are used for the two regions: in the simple mode region only a small set of simple coding modes is searched, while in the complex mode region a relatively fine and complex search process is performed.
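This excerpt does not give the exact formula for the coding mode complexity factor or the threshold that separates the two regions. Purely as an illustration, a minimal Python sketch, assuming the factor is a weighted fraction of "complex" coding modes among the spatially and temporally adjacent macroblocks and assuming a hypothetical threshold, could look like this:

    # Illustrative sketch only: the neighbour weights, the "simple" mode set
    # and the threshold below are assumptions, not the patent's definitions.

    SIMPLE_MODES = {"SKIP", "Direct", "Inter16x16"}  # assumed simple modes

    def mode_cost(mode):
        # 0 for an assumed simple coding mode, 1 for a complex one
        return 0 if mode in SIMPLE_MODES else 1

    def complexity_factor(spatial_modes, temporal_modes, w_s=0.6, w_t=0.4):
        # Weighted fraction of complex modes among the spatially adjacent
        # macroblocks (same frame) and the temporally adjacent macroblocks
        # (co-located in neighbouring frames).
        s = sum(map(mode_cost, spatial_modes)) / max(len(spatial_modes), 1)
        t = sum(map(mode_cost, temporal_modes)) / max(len(temporal_modes), 1)
        return w_s * s + w_t * t

    def classify_macroblock(spatial_modes, temporal_modes, threshold=0.5):
        # Assign the current macroblock to the simple or complex mode region.
        f = complexity_factor(spatial_modes, temporal_modes)
        return "simple" if f <= threshold else "complex"

    # Example: left/top/top-right neighbours coded as SKIP and the co-located
    # macroblock of the previous frame coded as Inter16x16 -> "simple".
    print(classify_macroblock(["SKIP", "SKIP", "SKIP"], ["Inter16x16"]))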

[0042] The overall implementation block diagram of the multi-view depth video fast encoding method of the present invention is shown in Figure 3. The method specifically includes the following steps:

[0043] ① Define the current viewpoint to be encoded in the multi-view depth ...



Abstract

The invention discloses a multi-view depth video fast coding method. A coding mode complexity factor is defined for each macroblock; according to this factor each macroblock is assigned to a simple mode zone or a complex mode zone, i.e., the depth video is divided into a simple mode zone and a complex mode zone, and different fast coding mode selection strategies are adopted for the two zones. Only simple coding modes are searched for a macroblock in the simple mode zone, and complex coding modes are searched for a macroblock in the complex mode zone. Time-consuming searches of coding modes that contribute little to the coding of the current frame are thereby avoided. As a result, on the premise that virtual viewpoint rendering quality is ensured and the bit rate of depth video coding is not affected, the computational complexity of multi-view depth video coding is effectively reduced and coding time is saved.
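The abstract does not enumerate which coding modes belong to the simple and complex search sets. As a non-authoritative sketch, assuming an H.264/MVC-style mode list, hypothetical candidate sets, and a caller-supplied rate-distortion cost function, the region-dependent mode search could be written as follows:

    # Illustrative sketch: the candidate mode sets and the cost function are
    # assumptions, not the sets defined by the patent.

    ALL_MODES = ["SKIP", "Direct", "Inter16x16", "Inter16x8", "Inter8x16",
                 "Inter8x8", "Intra16x16", "Intra4x4"]
    SIMPLE_MODES = ["SKIP", "Direct", "Inter16x16"]  # assumed simple subset

    def select_mode(macroblock, region, rd_cost):
        # In the simple mode zone only the small candidate set is searched,
        # so evaluation of the remaining, time-consuming modes is skipped;
        # in the complex mode zone the full mode set is searched.
        candidates = SIMPLE_MODES if region == "simple" else ALL_MODES
        return min(candidates, key=lambda mode: rd_cost(mode, macroblock))

    # Toy usage with a dummy cost function (illustration only).
    dummy_cost = lambda mode, mb: len(mode)
    print(select_mode(None, "simple", dummy_cost))  # -> SKIP

Restricting the search in the simple mode zone is where the claimed savings in computational complexity and coding time come from, while the finer search in the complex mode zone helps keep virtual viewpoint rendering quality and bit rate unaffected.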

Description

Technical Field

[0001] The invention relates to video signal encoding technology, and in particular to a multi-view depth video fast encoding method.

Background Technique

[0002] With the continuous development of 3D display and related technologies, multi-view video systems such as 3D TV and free-viewpoint TV have attracted increasing attention from scholars and industry at home and abroad. The 3D scene representation based on multi-view color video plus depth video (Multiview Video plus Depth, MVD) can drive multi-view autostereoscopic displays and conveys video information accurately, especially for scenes with a wide viewing-angle range and rich depth levels, and it has become the mainstream data format for multi-view video systems. In an MVD-based multi-view video system, the depth information effectively represents the geometric information of the 3D scene and reflects the relative distance from the captured scene to the camera. It is ...


Application Information

Patent Type & Authority: Application (China)
IPC (8): H04N7/26, H04N7/32, H04N13/00
Inventor: 彭宗举, 王叶群, 蒋刚毅, 郁梅, 陈芬
Owner: NINGBO UNIV