
A Region-Based 3D Video Mapping Method

A region-based mapping method, applied in the field of 3D video synthesis, which addresses the large data volume and heavy computational load of the multi-view method

Active Publication Date: 2016-08-24
山西班姆德机械设备有限公司

AI Technical Summary

Problems solved by technology

The large data volume of the multi-view method poses a significant challenge to the real-time performance of DIBR technology.
At the same time, 3D mapping is one of the most computationally intensive steps in DIBR, and repeating the 3D mapping for multiple original viewpoints places a heavy burden on DIBR.



Examples


Embodiment Construction

[0088] Preliminary experiments have been carried out on the region-based 3D video mapping scheme proposed by the present invention. The standard Philips Mobile test sequence is used as input; the first 100 frames are taken for testing, and the sequence resolution is 720×540. Three precision modes are used: whole pixel, half pixel and quarter pixel. A Dell workstation is used for the simulation, with the following parameters: Intel(R) Xeon(R) quad-core CPU, 2.8 GHz, 4.00 GB DDR3 memory. The software platform is Visual Studio 2008, and the program is implemented in C++.

[0089] This example is implemented as follows; its process includes the following steps:

[0090] Step 1: Ingest the texture maps and the corresponding depth maps of two adjacent viewpoints of the 3D video, then calculate the threshold ΔZmax = 20.

[0091] Step 2: Select viewpoint V1 or viewpoint V3 as the main viewpoint, and detect the boundaries in the dep...
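The boundary-detection details of Step 2 are truncated in the excerpt above, so the following is only a minimal sketch of one plausible reading: scan the main-viewpoint depth map and flag pixels whose depth differs from a horizontal neighbour by more than the threshold ΔZmax computed in Step 1. The function name, the 8-bit depth representation, and the horizontal-only scan are illustrative assumptions, not the patent's actual procedure.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Flag pixels of the main-viewpoint depth map whose depth jumps by more than
// deltaZMax with respect to the next pixel in the row. Such jumps are treated
// as region boundaries in this sketch (assumption: 8-bit depth, row-major layout).
std::vector<uint8_t> detectDepthBoundaries(const std::vector<uint8_t>& depth,
                                           int width, int height,
                                           int deltaZMax /* e.g. 20 */)
{
    std::vector<uint8_t> boundary(depth.size(), 0);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x + 1 < width; ++x) {
            const int idx = y * width + x;
            if (std::abs(static_cast<int>(depth[idx]) -
                         static_cast<int>(depth[idx + 1])) > deltaZMax) {
                boundary[idx] = 255;   // mark a depth discontinuity
            }
        }
    }
    return boundary;
}
```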


Abstract

The invention provides a region-partitioning 3D video mapping method, and belongs to the technical field of 3D video synthesis. Pixels of different regions are distinguished and processed differently, so that mapping and repeated mapping of invalid information are avoided. A viewpoint is arbitrarily selected as the main viewpoint, and regions are partitioned by analyzing region features according to the depth map of the main viewpoint; the partitioned regions include the BNER, the BSER, the DER, the FSER and the FNER. For the main viewpoint, all regions except the no-effect regions (the BNER and the FNER) are mapped to the virtual viewpoint; for a non-main viewpoint, only the single-effect regions (the BSER and the FSER) are mapped to the virtual viewpoint. Gaps generated in the 3D mapping process are filled by interpolation from surrounding pixels.
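To make the region rule in the abstract concrete, the sketch below encodes it as a small decision function. The enum values mirror the region labels named in the abstract (BNER, BSER, DER, FSER, FNER); the function itself is an illustrative assumption, not the patented implementation.

```cpp
// Region labels as named in the abstract: Background/Foreground No-Effect Regions,
// Background/Foreground Single-Effect Regions, and the Double-Effect Region.
enum class Region { BNER, BSER, DER, FSER, FNER };

// Decide whether a pixel of the given region is 3D-mapped to the virtual
// viewpoint, following the rule stated in the abstract:
//  - main viewpoint: map everything except the no-effect regions (BNER, FNER);
//  - non-main viewpoint: map only the single-effect regions (BSER, FSER).
bool shouldMap(Region r, bool isMainViewpoint)
{
    if (isMainViewpoint) {
        return r != Region::BNER && r != Region::FNER;
    }
    return r == Region::BSER || r == Region::FSER;
}
```

Under this rule the double-effect region (DER) is mapped only once, from the main viewpoint, which is how the repeated mapping targeted by the invention is avoided.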

Description

Technical field
[0001] The invention belongs to the technical field of 3D video synthesis, and in particular relates to a region-based 3D video mapping method.
Background technique
[0002] In 3D video, viewpoint synthesis is a very important technology. At present, DIBR (Depth-Image-Based Rendering) is mostly used to synthesize virtual viewpoints. The core of DIBR is to use depth information and camera parameters to map pixels from known original viewpoints to an unknown synthetic viewpoint. There are two main DIBR methods: the single-view method and the multi-view method. Single-view DIBR generally includes two steps: (1) 3D mapping; (2) hole filling. The multi-view DIBR method includes an additional step, viewpoint fusion, which fuses the virtual viewpoints synthesized from the individual original viewpoints into a final viewpoint. Generally speaking, there are two fusion methods: (1) use a linear weight function to...
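As background for the 3D mapping step that DIBR relies on, the sketch below shows the textbook back-project/re-project form of the warp, assuming a per-pixel depth value and known camera intrinsics and extrinsics. The small matrix/vector helpers and all parameter names are assumptions made for illustration; they are not taken from the patent.

```cpp
#include <array>

// Minimal 3x3 matrix / 3-vector helpers used only for this illustration.
using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

Vec3 mul(const Mat3& M, const Vec3& v)
{
    Vec3 r{0.0, 0.0, 0.0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i] += M[i][j] * v[j];
    return r;
}

// Classic DIBR warp: back-project pixel (u, v) of the reference view with its
// depth z into 3D space, then re-project it into the virtual camera.
//   Kref_inv : inverse intrinsic matrix of the reference camera
//   R, t     : rotation and translation from reference to virtual camera
//   Kvirt    : intrinsic matrix of the virtual camera
Vec3 warpPixel(double u, double v, double z,
               const Mat3& Kref_inv, const Mat3& R, const Vec3& t,
               const Mat3& Kvirt)
{
    // 3D point in the reference camera frame.
    Vec3 ray = mul(Kref_inv, Vec3{u, v, 1.0});
    Vec3 X{ray[0] * z, ray[1] * z, ray[2] * z};

    // Transform into the virtual camera frame and project.
    Vec3 Xv = mul(R, X);
    Xv = {Xv[0] + t[0], Xv[1] + t[1], Xv[2] + t[2]};
    Vec3 p = mul(Kvirt, Xv);

    // Homogeneous coordinates: divide by the third component to get (u', v').
    return Vec3{p[0] / p[2], p[1] / p[2], p[2]};
}
```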

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): H04N13/00
Inventor: 王安红, 邢志伟, 金鉴, 武迎春, 李东红
Owner: 山西班姆德机械设备有限公司