
Generation method of three-dimensional image

A three-dimensional image generation and processing technology, applicable to image data processing, animation production, instruments, etc. It addresses problems such as low audience immersion, a complicated shooting and production process, and the inability of conventional two-dimensional images to convey a spatial stereoscopic effect.

Status: Inactive · Publication Date: 2013-09-18
Applicant: 北京青青树动漫科技有限公司
Cites: 4 · Cited by: 16

AI Technical Summary

Problems solved by technology

However, a conventional two-dimensional image presents exactly the same perspective to both eyes, so the visual system and the brain cannot recover the true sense of space and the three-dimensional relationships of the objects on the screen; the audience's sense of immersion is therefore much lower than with 3D images. The most critical issue in 3D technology is how to obtain images from different perspectives simultaneously.
[0003] There are three main ways to produce three-dimensional images. The first is to shoot from multiple perspectives with a stereo camera or with multiple cameras; the shooting and production process is very complicated, which constrains the popularization of 3D technology. The second is to use 3D animation techniques to build a 3D model of the image and construct the depth of field of each part of the picture, thereby obtaining the 3D effect; modeling every object in the image requires a large amount of computation and a long production cycle. The third is to use computer software and image-processing techniques to convert existing two-dimensional animations or other images into three-dimensional animations or images; because this approach works directly on the original two-dimensional footage, it is the cheapest and most convenient to implement and has therefore received the greatest attention at home and abroad. However, images with complex scenes still cannot achieve a good immersive effect with this approach.



Examples


Embodiment 1

[0037] A three-dimensional animation (image) processing method, comprising the following steps:

[0038] Step 1: Obtain an original two-dimensional image as a first view, extract multiple target areas from the original two-dimensional image, and divide the multiple target areas into target areas of interest and target areas of non-interest.
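
As a purely illustrative sketch of Step 1, the fragment below assumes each extracted target area is represented by a boolean pixel mask together with a depth-of-field value and an interest flag; these data structures are assumptions for this example, not the patent's implementation.

```python
# Hypothetical data structures for Step 1 (an assumption for illustration, not the
# patent's implementation): each extracted target area is a boolean pixel mask plus
# a depth-of-field value and a flag marking it as "of interest" or not.
from dataclasses import dataclass
import numpy as np

@dataclass
class TargetRegion:
    mask: np.ndarray    # H x W boolean mask selecting the region's pixels
    depth: float        # assumed depth of field (larger = farther from the camera)
    of_interest: bool   # True for a target area of interest

def split_regions(regions):
    """Divide extracted target areas into interest / non-interest groups."""
    interest = [r for r in regions if r.of_interest]
    non_interest = [r for r in regions if not r.of_interest]
    return interest, non_interest
```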

[0039] As shown in Figure 1, the original two-dimensional image P above is used as the left-eye image. When producing a three-dimensional image, a three-dimensional conversion is performed on the basis of the left-eye image to obtain a grayscale-shifted right-eye image P'. When viewing, the audience's left eye captures the left-eye image and the right eye captures the synchronized right-eye image, and the parallax between the two views produces the three-dimensional effect.
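
A minimal sketch of such a left-to-right conversion is given below; it assumes the disparity for a region is already known in pixels, and it simply copies the masked pixels sideways without the hole filling and occlusion handling a real converter would need.

```python
# Simplified illustration of deriving a right-eye view P' from the left-eye view P:
# pixels belonging to one region are copied with a horizontal disparity. Hole filling
# and occlusion handling are deliberately omitted in this sketch.
import numpy as np

def shift_region(left_view, mask, disparity_px):
    """Return a copy of left_view with the masked pixels shifted by disparity_px columns."""
    right_view = left_view.copy()
    ys, xs = np.nonzero(mask)
    new_xs = np.clip(xs + disparity_px, 0, left_view.shape[1] - 1)
    right_view[ys, new_xs] = left_view[ys, xs]
    return right_view
```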

[0040] As shown in Figure 1 above, the original two-dimensional image includes at least three target areas, from near to far: the character M1, the float...

Embodiment 2

[0057] A three-dimensional animation (image) processing method, comprising the following steps:

[0058] Step 1: Obtain the original two-dimensional image P and the embedded image Q, extract one or more target areas from the original two-dimensional image P as non-interest target areas, extract one or more target areas from the embedded image Q as target areas of interest, and embed the extracted region of interest from image Q into the original two-dimensional image P to form the first view.
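
A rough sketch of this compositing step follows; the array shapes and the mask representation are assumptions made for the example, not taken from the patent.

```python
# Sketch of Embodiment 2, Step 1: the region of interest is cut out of the embedded
# image Q via its mask and pasted onto the original two-dimensional image P; the
# composite serves as the first view. P and Q are assumed to share the same H x W size.
import numpy as np

def embed_region(P, Q, q_mask):
    """Paste Q's masked pixels onto P to form the first view."""
    first_view = P.copy()
    first_view[q_mask] = Q[q_mask]
    return first_view
```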

[0059] As shown in Figure 4, the embedded image Q includes a target region M1, which is taken as the target region of interest. As shown in Figure 5, the original two-dimensional image P includes at least three target areas, from near to far: the soldier M2, the guardrail M3, and the baluster fence M4; M2, M3, and M4 are treated as non-interest target areas.

[0060] Step 2: Apply an overall offset to each non-interest target area according to the depth of field of the non-interest targ...
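
The fragment below sketches one way such an overall offset could look; the far-to-near painting order and the inverse-depth mapping are illustrative assumptions, not the calculation claimed by the patent.

```python
# Illustrative sketch of Step 2: each non-interest target area receives one whole-region
# horizontal offset derived from its depth of field. The inverse-depth mapping and the
# constants below are assumptions made only for this example.
import numpy as np

def offset_non_interest(first_view, regions):
    """regions: list of (mask, depth) pairs; farther areas get smaller offsets."""
    second_view = first_view.copy()
    for mask, depth in sorted(regions, key=lambda r: -r[1]):  # paint far regions first
        offset = int(round(min(12, 100.0 / max(depth, 1e-6))))
        ys, xs = np.nonzero(mask)
        new_xs = np.clip(xs + offset, 0, first_view.shape[1] - 1)
        second_view[ys, new_xs] = first_view[ys, xs]
    return second_view
```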

Embodiment 3

[0071] Film and television animation usually contains consecutive frames of the same scene group. For example, in Embodiment 2, when the embedded target of interest moves from near to far, the left-eye and right-eye views of the other, non-interest targets can be kept unchanged in order to reduce the amount of computation, and only the target region of interest is processed. Since the distance of the target of interest, that is, its depth of field, changes, the size and the offset of the area it occupies change accordingly; therefore, scaling of the area and scaling of the offset can be performed for the target area of interest alone. As shown in Figures 7 and 8, Figure 7 is the previous frame before the depth of field of the target area of interest changes, and Figure 8 is the current frame. Sampling point 1 is located in a non-interest target area, while sampling points 2 and 3 are located in the interest targe...
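
A minimal sketch of this frame-to-frame shortcut is given below, under assumed data layouts: the previous right-eye view is reused for the non-interest areas, and only the crop containing the target of interest is rescaled and re-placed, with both its size and its offset scaled by the ratio of the old depth to the new depth. All parameters and the nearest-neighbour resampling are assumptions for illustration.

```python
# Sketch of the Embodiment 3 idea (illustrative assumptions only): reuse the previous
# right-eye view and update only the region of interest, scaling its size and its
# offset with the depth change.
import numpy as np

def update_interest_only(prev_right_view, interest_crop, prev_depth, new_depth,
                         prev_offset, anchor_yx):
    """interest_crop: (h, w, 3) pixels of the target of interest in the current frame."""
    ratio = prev_depth / max(new_depth, 1e-6)            # >1 when the target moves nearer
    new_h = max(1, int(interest_crop.shape[0] * ratio))
    new_w = max(1, int(interest_crop.shape[1] * ratio))
    ys = np.linspace(0, interest_crop.shape[0] - 1, new_h).astype(int)
    xs = np.linspace(0, interest_crop.shape[1] - 1, new_w).astype(int)
    scaled = interest_crop[ys][:, xs]                    # nearest-neighbour rescale
    new_offset = int(round(prev_offset * ratio))         # offset scales with the depth change
    right_view = prev_right_view.copy()
    y0, x0 = anchor_yx
    x0 += new_offset
    h = min(new_h, right_view.shape[0] - y0)
    w = min(new_w, right_view.shape[1] - x0)
    if h > 0 and w > 0:                                  # clip at the frame border
        right_view[y0:y0 + h, x0:x0 + w] = scaled[:h, :w]
    return right_view
```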



Abstract

The invention provides a generation method of a three-dimensional image. The method comprises: firstly, obtaining an original two-dimensional image to serve as a first view, extracting a plurality of target regions from the original two-dimensional image, and dividing the target regions into regions of interest and regions of non-interest; secondly, performing an overall offset on every region of non-interest according to the depth of field of that region; thirdly, performing a fine offset on the regions of interest; and fourthly, generating a second view from the offset regions of interest and regions of non-interest. The method provides a three-dimensional offset calculation scheme aimed at different presentation occasions and achieves a good three-dimensional visual immersion effect by combining regional offset with fine offset while reducing the amount of calculation; meanwhile, a target of interest can be inserted and moved freely and flexibly according to the user's requirements, providing a more convenient and rapid conversion approach for three-dimensional production of films, television, and animation.
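
Read as a pipeline, the abstract suggests a composition of the steps sketched below; the depth-to-offset mapping and the treatment of the fine offset as a small extra shift are simplifying assumptions, not the patent's actual calculation.

```python
# Compact end-to-end sketch of the described pipeline (illustrative assumptions only):
# the first view plus per-region offsets yield the second view of the stereo pair.
import numpy as np

def generate_second_view(first_view, non_interest, interest, fine_offset=1):
    """non_interest / interest: lists of (mask, depth) pairs defined over first_view."""
    second_view = first_view.copy()

    def shift(mask, dx):
        ys, xs = np.nonzero(mask)
        second_view[ys, np.clip(xs + dx, 0, second_view.shape[1] - 1)] = first_view[ys, xs]

    # overall (whole-region) offset for non-interest areas, painted far to near
    for mask, depth in sorted(non_interest, key=lambda r: -r[1]):
        shift(mask, int(round(min(12, 100.0 / max(depth, 1e-6)))))
    # regions of interest additionally receive a small "fine" adjustment
    for mask, depth in interest:
        shift(mask, int(round(min(12, 100.0 / max(depth, 1e-6)))) + fine_offset)
    return second_view
```

With the first view and the generated second view presented to the left and right eyes respectively, the parallax between them produces the stereoscopic effect described above.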

Description

Technical Field
[0001] The invention relates to the field of three-dimensional image generation, and in particular to a method for generating three-dimensional animation based on multiple target areas.
Background Technique
[0002] At present, 3D technology attracts increasing attention. The principle of 3D imaging is based on the difference in viewing angle between the left and right eyes: two images with parallax are obtained for the left and right eyes, and when the brain combines these two images, the differences in object distance are recovered, forming stereoscopic vision. However, a conventional two-dimensional image presents exactly the same perspective to both eyes, so the visual system and the brain cannot recover the true sense of space and the three-dimensional relationships of the objects on the screen, and the audience's sense of immersion is therefore much lower than with 3D images. The ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T13/20
Inventor: 匡宇奇
Owner: 北京青青树动漫科技有限公司