
Video content replacement method and system based on adversarial generative network

A video content and network technology, applied to biological neural network models, neural learning methods, digital video signal modification, and the like; it addresses problems such as the inability to generate facial images the user has never captured, the small number of published patents in this area, and the failure to make full use of key points.

Active Publication Date: 2020-03-06
SHANGHAI JIAO TONG UNIV

AI Technical Summary

Problems solved by technology

[0004] For computer-automated target image replacement, few patents have been published to date. Among them, the Chinese patent with publication number CN201611122803, titled "A Method and Device for Image Replacement", provides one target image replacement solution: after obtaining the first face image in the video, the method obtains the photo to be substituted, recognizes the second face image in that photo, and then replaces the first face image with the second face image to accomplish face target replacement. However, this method merely pastes an existing image onto the target image; it cannot generate a facial image that the user has never captured, and it cannot preserve semantic information of the source target image such as expression and gaze. Moreover, the second face image is an external input rather than coming from the source video, so the applicable scenarios are quite limited.
The Chinese patent with publication number CN201810975216, titled "A Method and Device for Face Image Replacement", provides another target image replacement scheme: after obtaining a set of target face images, the method warps them and feeds them into a neural network for training, yielding a target network that can replace face images in training-scene images with the target face image. However, this method handles the target face images too coarsely and cannot make full use of external information such as key points, semantic segmentation, and important regions to guide image generation; its network design is too simple to be suitable for generating sharp images; and it addresses only the face image replacement problem rather than the generalized target image replacement problem, so its results are not comparable to manually produced video effects.
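The shortcoming identified above, the failure to exploit key points and semantic segmentation as guidance, can be illustrated with a small sketch. The following PyTorch code is not the patent's network; it is a minimal, assumption-laden example of a pix2pix-style generator that concatenates an RGB ROI with a one-hot segmentation map and key-point heat maps before encoding, and that emits both an RGB image and a synthesis mask. All layer sizes, channel counts, and names are illustrative.

```python
"""Illustrative-only sketch of guidance-conditioned generation.

This is NOT the patent's network; seg_classes, kp_channels, and the
layer layout are assumptions made for the example.
"""
import torch
import torch.nn as nn


class GuidedGenerator(nn.Module):
    """Encoder-decoder that consumes the RGB ROI plus guidance channels."""

    def __init__(self, seg_classes=8, kp_channels=5):
        super().__init__()
        # image + one-hot segmentation + key-point heat maps
        in_ch = 3 + seg_classes + kp_channels
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 4, 4, stride=2, padding=1),  # 3 RGB channels + 1 synthesis mask
        )

    def forward(self, roi, seg_map, keypoint_maps):
        # Concatenate the guidance signals with the image along the channel axis.
        x = torch.cat([roi, seg_map, keypoint_maps], dim=1)
        out = self.decoder(self.encoder(x))
        rgb = torch.tanh(out[:, :3])        # generated target image
        mask = torch.sigmoid(out[:, 3:4])   # synthesis mask used later for blending
        return rgb, mask


# Usage: a batch of 256x256 ROIs with matching guidance maps.
gen = GuidedGenerator()
rgb, mask = gen(torch.randn(1, 3, 256, 256),
                torch.randn(1, 8, 256, 256),
                torch.randn(1, 5, 256, 256))
```

In practice such a generator would be trained adversarially against a discriminator that also sees the guidance channels, which is what lets key points and segmentation actually constrain the generated content.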

Method used



Examples


Embodiment Construction

[0281] The present invention will be described in detail below in conjunction with specific embodiments. The following examples will help those skilled in the art to further understand the present invention, but do not limit the present invention in any form. It should be noted that those skilled in the art can make several changes and improvements without departing from the concept of the present invention. These all belong to the protection scope of the present invention.

[0282] In view of how time-consuming and costly target image replacement is in current film and television production, the purpose of the present invention is to propose an effective video target image replacement method. The method not only automatically achieves a video target image replacement effect comparable to previous manual production, but is also easy to operate, low in cost, good in effect, and short in processing time.

[0283] A method for replacing video content based on an adversarial generative network...



Abstract

The invention provides a video content replacement method and system based on an adversarial generative network. The method comprises the following steps: extracting the source target image from a video frame; performing semantic segmentation on the source target image; performing data enhancement on the source target image using image transformation operations; feeding the paired semantic segmentation images of the enhanced data set into the network to train a generative adversarial model; accurately detecting the ROI region of the source target image and cropping and aligning it; taking the ROI image as the generative model's input to obtain a generated target image and a synthesis mask; controlling edge smoothing and deblurring of the generated target image with the Gaussian-blurred mask; adjusting the hue of the generated target image to be consistent with that of the frame's source target image through a histogram matching strategy; applying anti-jitter processing to the generated video frames; and fusing the source image with the generated target image. Compared with the prior art, the method and system have the advantages of ease of operation, low cost, good results, and short processing time.
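The blending, colour-matching, and edge-smoothing steps listed in the abstract can be sketched with standard OpenCV and scikit-image operations. The following is a minimal illustration under assumptions of my own (the function name `fuse_generated_roi`, uint8 BGR frames, a single ROI per frame); it is not the patented implementation, and the GAN and anti-jitter components are omitted.

```python
"""Minimal sketch (not the patented implementation) of the fusion stage:
the generated ROI is colour-matched to the source region by histogram
matching, then blended back into the frame with a Gaussian-blurred mask."""
import cv2
import numpy as np
from skimage.exposure import match_histograms


def fuse_generated_roi(frame, generated_roi, mask, roi_box, blur_ksize=15):
    """Blend a generated ROI into `frame`.

    frame         : HxWx3 uint8 source video frame
    generated_roi : hxwx3 uint8 image produced by the generator
    mask          : hxw synthesis mask (1 or 255 inside the replaced region)
    roi_box       : (x, y, w, h) location of the ROI inside the frame
    """
    x, y, w, h = roi_box
    source_roi = frame[y:y + h, x:x + w]

    # Resize generator output and mask to the ROI size if needed.
    generated_roi = cv2.resize(generated_roi, (w, h))
    mask = cv2.resize(mask.astype(np.float32), (w, h))
    if mask.max() > 1.0:
        mask /= 255.0

    # Hue/colour consistency: match the generated image's histogram to the source ROI.
    generated_roi = match_histograms(generated_roi, source_roi, channel_axis=-1)
    generated_roi = generated_roi.astype(np.float32)

    # Edge smoothing: Gaussian-blur the mask so the transition is gradual.
    soft_mask = cv2.GaussianBlur(mask, (blur_ksize, blur_ksize), 0)[..., None]

    # Alpha-blend and write the result back into a copy of the frame.
    blended = soft_mask * generated_roi + (1.0 - soft_mask) * source_roi.astype(np.float32)
    out = frame.copy()
    out[y:y + h, x:x + w] = np.clip(blended, 0, 255).astype(np.uint8)
    return out
```

A simple anti-jitter pass could then temporally smooth the soft masks or blended ROIs across neighbouring frames, but the text above does not specify the exact strategy, so that step is left out of the sketch.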

Description

Technical field

[0001] The present invention relates to the intersecting fields of video image processing and artificial intelligence, and in particular to a video content replacement method and system based on a generative adversarial network, more specifically to a video replacement method in which the target image is generated by a generative adversarial network.

Background

[0002] With the rapid development of the film and television industry, computer science and its applications have gradually penetrated all aspects of film and television production, which not only accelerates the production process but also allows human creativity to be presented in film and television works as never before. However, because the use of VFX brings enormous post-production manpower and equipment purchase requirements, costs in the film and television industry remain high, and many large productions cost hundreds of millions of dollars. While major film and television companies...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): H04N19/42; G06N3/08; G06N3/04
CPC: H04N19/42; G06N3/08; G06N3/045
Inventor: 孙锬锋, 蒋兴浩, 唐致远, 许可
Owner: SHANGHAI JIAO TONG UNIV