New viewpoint image synthesis method based on depth-aided full resolution network

A full-resolution image synthesis technology, applied in image enhancement, image analysis, image data processing, etc.

Inactive Publication Date: 2018-08-17
SHENZHEN WEITESHI TECH

AI Technical Summary

Problems solved by technology

[0004] Aiming at the problem that global feature prediction cannot modify local details, the purpose of the present invention is to provide a new viewpoint image synthesis method based on a depth-assisted full-resolution network. The encoder part of the full-resolution network first extracts important local features from the input image; then a depth predictor, pre-trained on a large image dataset, detects global image information to estimate the depth map of the input image; next, the local features and the depth are fed to the decoder together with a two-channel map indicating the target viewpoint position; finally, flow-based warping is performed, in which the decoder converts the combined features into a warp field to synthesize the final target image.
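
As a concrete illustration of how these four stages could be wired together, the following is a minimal PyTorch sketch; the module names, channel widths, kernel sizes and the normalised-coordinate warping convention are assumptions made for the example, not the patented implementation.

    # Illustrative end-to-end sketch of the described pipeline (module names,
    # channel widths and the warping convention are assumptions).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_relu(cin, cout, k):
        return nn.Sequential(nn.Conv2d(cin, cout, k, padding=k // 2), nn.ReLU(inplace=True))

    class DepthAidedFullResNet(nn.Module):
        def __init__(self, feat=32):
            super().__init__()
            # encoder keeps the input resolution (no downsampling)
            self.encoder = nn.Sequential(conv_relu(3, feat, 7), conv_relu(feat, feat, 3))
            # stand-in for a depth predictor pre-trained on a large image dataset
            self.depth_predictor = nn.Sequential(conv_relu(3, feat, 3), nn.Conv2d(feat, 1, 3, padding=1))
            # decoder input: local features + depth map + 2-channel target-viewpoint map
            self.decoder = nn.Sequential(conv_relu(feat + 1 + 2, feat, 3), nn.Conv2d(feat, 2, 3, padding=1))

        def forward(self, src, viewpoint_map):
            feats = self.encoder(src)                       # full-resolution local features
            depth = self.depth_predictor(src)               # estimated depth of the input view
            flow = self.decoder(torch.cat([feats, depth, viewpoint_map], 1))  # 2-channel warp field
            # flow-based warping: resample the source at flow-displaced positions
            n, _, h, w = src.shape
            ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=src.device),
                                    torch.linspace(-1, 1, w, device=src.device), indexing="ij")
            grid = torch.stack([xs, ys], -1).unsqueeze(0).expand(n, h, w, 2) + flow.permute(0, 2, 3, 1)
            return F.grid_sample(src, grid, align_corners=True)

    # out = DepthAidedFullResNet()(torch.randn(1, 3, 64, 64), torch.zeros(1, 2, 64, 64))  # (1, 3, 64, 64)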




Embodiment Construction

[0033] It should be noted that, provided there is no conflict, the embodiments of the present application and the features in those embodiments may be combined with each other. The present invention is further described in detail below in conjunction with the drawings and specific embodiments.

[0034] Figure 1 is a system framework diagram of the new viewpoint image synthesis method based on a depth-assisted full-resolution network of the present invention. The method mainly includes the depth-assisted full-resolution network, the loss function, and training.
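
The loss function and training procedure are named here but not detailed in the visible text; purely for illustration, the sketch below assumes an L1 reconstruction loss between the synthesized view and the ground-truth target view, optimised with Adam, on top of a model such as the DepthAidedFullResNet sketch given earlier.

    # Hypothetical training step: the L1 reconstruction loss and Adam optimiser
    # are assumptions, not details taken from the patent text.
    import torch
    import torch.nn.functional as F

    def train_step(model, optimizer, src, viewpoint_map, target):
        """One optimisation step: synthesize the target view, regress it to ground truth."""
        optimizer.zero_grad()
        pred = model(src, viewpoint_map)      # synthesized target-view image
        loss = F.l1_loss(pred, target)        # assumed reconstruction loss
        loss.backward()
        optimizer.step()
        return loss.item()

    # Usage (with the DepthAidedFullResNet sketch above):
    # model = DepthAidedFullResNet()
    # optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # loss = train_step(model, optimizer, src, viewpoint_map, target)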

[0035] The depth-assisted full-resolution network consists of an encoder, a depth predictor that estimates the depth map of the input image, a decoder, and flow-based warping.
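
The flow-based warping stage can be pictured as bilinear resampling of the source image at flow-displaced positions; the sketch below uses torch.nn.functional.grid_sample, and the convention that the 2-channel flow is expressed in normalised [-1, 1] coordinates is an assumption.

    # Sketch of flow-based warping with a bilinear sampler; the normalised
    # coordinate convention for the flow is an assumption.
    import torch
    import torch.nn.functional as F

    def warp_by_flow(src, flow):
        """Warp src (N, 3, H, W) by a 2-channel flow field (N, 2, H, W)."""
        n, _, h, w = src.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=src.device),
                                torch.linspace(-1, 1, w, device=src.device), indexing="ij")
        base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, h, w, 2)
        sample_grid = base_grid + flow.permute(0, 2, 3, 1)   # displace each pixel by the flow
        return F.grid_sample(src, sample_grid, align_corners=True)

    # Example: an all-zero flow returns (approximately) the source image unchanged.
    # out = warp_by_flow(torch.randn(2, 3, 64, 64), torch.zeros(2, 2, 64, 64))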

[0036] The encoder is used to extract local features of the input image. The encoder network is a series of convolutions with kernels of different sizes, generating features at the same resolution as the input image; a rectified linear unit (ReLU) layer is added after each convolutional layer ...
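
A minimal sketch of such a full-resolution encoder is given below; the particular kernel sizes and channel counts are illustrative assumptions, while the resolution-preserving convolutions and the ReLU after every convolutional layer follow the description above.

    # Full-resolution encoder sketch: convolutions with different kernel sizes,
    # 'same' padding so feature maps keep the input resolution, and a ReLU after
    # every convolutional layer. Kernel sizes and channel counts are assumptions.
    import torch
    import torch.nn as nn

    class FullResolutionEncoder(nn.Module):
        def __init__(self, in_ch=3, channels=(32, 64, 64), kernel_sizes=(7, 5, 3)):
            super().__init__()
            layers, prev = [], in_ch
            for ch, k in zip(channels, kernel_sizes):
                layers += [nn.Conv2d(prev, ch, kernel_size=k, stride=1, padding=k // 2),
                           nn.ReLU(inplace=True)]   # ReLU after each convolutional layer
                prev = ch
            self.body = nn.Sequential(*layers)

        def forward(self, x):
            return self.body(x)   # same spatial size as the input (no downsampling)

    # feats = FullResolutionEncoder()(torch.randn(1, 3, 128, 128))  # -> (1, 64, 128, 128)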



Abstract

The invention provides a new viewpoint image synthesis method based on a depth-aided full-resolution network. The main content comprises the depth-aided full-resolution network, the loss function, and training. The process comprises the following steps: the encoder part of the full-resolution network first extracts important local features from the input image; a depth predictor, pre-trained on a large image dataset, then detects global image information to estimate the depth map of the input image; the local features and the depth are then fed to the decoder together with a two-channel map indicating the target viewpoint position; finally, in the flow-based warping stage, the decoder converts the combined features into a warp field to synthesize the final target image. Because the full-resolution network is designed to extract local image features at the same resolution as the input, blurring and ghosting in the synthesized image are prevented and a high-resolution, high-quality image is obtained.

Description

Technical field

[0001] The present invention relates to the field of image synthesis, in particular to a new viewpoint image synthesis method based on a depth-assisted full-resolution network.

Background technique

[0002] New viewpoint image synthesis is an interdisciplinary field of computer vision and image processing. It is an important part of virtual reality technology and has a wide range of applications. For example, new viewpoint synthesis based on face images is an important application in face processing, widely used in face recognition, face animation and many other areas: by inputting an existing face image, representing it with samples from the same viewpoint, and combining and synthesizing new viewpoint images, enough images of the face from different angles can be obtained, which helps provide more effective information for the detection of criminal cases. In the research and development of digital TV in the futu...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/00
CPC: G06T5/003; G06T2207/10028; G06T2207/20081; G06T2207/20084
Inventor: 夏春秋
Owner: SHENZHEN WEITESHI TECH