
Three-dimensional signal processing method based on space-time adversarial

A space-time signal processing technology applied in the field of video super-resolution, addressing problems such as degradation of visual quality.

Pending Publication Date: 2021-01-12
苏州天必佑科技有限公司

AI Technical Summary

Problems solved by technology

[0005] The object of the present invention is to provide a three-dimensional signal processing method based on spatio-temporal adversarial learning (a video super-resolution method based on a spatio-temporal adversarial network), so as to solve the problem that visual quality drops significantly under diverse and blurred motion in video super-resolution, while making full use of the temporal information in the video to ensure the spatio-temporal consistency of the super-resolved video.



Examples


Embodiment Construction

[0028] The specific embodiments of the present invention are further described below in conjunction with the drawings and examples. The following examples serve only to illustrate the technical solution of the present invention more clearly, and do not limit its scope of protection.

[0029] As shown in Figures 1 to 3, the specific technical scheme of the present invention is as follows:

[0030] 1. Generator: based on a recurrent convolutional network coupled with an optical flow estimation network F. The generator produces a high-resolution (HR) output g_t from a low-resolution (LR) frame x_t and recursively reuses the previously generated HR output g_{t-1}. The generator learns only the residual information, which is then added to the bicubically interpolated low-resolution input.
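The recurrent residual scheme in [0030] can be sketched as follows. This is a minimal NumPy illustration, not the patented network: `upsample` is a nearest-neighbor stand-in for the bicubic interpolation named in the text, and `residual_net` and `warp` are hypothetical placeholders for the learned convolutional network and the flow-based motion compensation.

```python
import numpy as np

def upsample(frame, scale=4):
    """Stand-in for bicubic interpolation (nearest-neighbor here for brevity)."""
    return np.kron(frame, np.ones((scale, scale)))

def warp(prev_hr, flow):
    """Placeholder for flow-based motion compensation of the previous HR output."""
    return prev_hr  # identity warp; a real system shifts pixels along `flow`

def residual_net(lr_frame, warped_prev_hr, scale=4):
    """Placeholder for the recurrent convolutional network; returns a residual."""
    return np.zeros((lr_frame.shape[0] * scale, lr_frame.shape[1] * scale))

def generate_video(lr_frames, scale=4):
    """g_t = upsample(x_t) + residual(x_t, warp(g_{t-1})), as in [0030]."""
    h, w = lr_frames[0].shape
    g_prev = np.zeros((h * scale, w * scale))  # g_0: no previous output yet
    outputs = []
    for x_t in lr_frames:
        flow = None  # the flow network F would estimate motion here
        res = residual_net(x_t, warp(g_prev, flow), scale)
        g_t = upsample(x_t, scale) + res  # residual added to interpolated input
        outputs.append(g_t)
        g_prev = g_t  # recursive reuse of the previous HR output
    return outputs

lr = [np.random.rand(8, 8) for _ in range(3)]
hr = generate_video(lr)
print(len(hr), hr[0].shape)  # 3 (32, 32)
```

The key design point carried over from the text is that only a residual is learned, so the interpolated input provides a stable low-frequency base and the network adds high-frequency detail on top.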

[0031] 2. Discriminator: receives two sets of inputs, ground-truth frames and generated frames; these two sets of data have the ...
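One plausible way to assemble the two input sets for a spatio-temporal discriminator is to stack consecutive frames along a channel axis so that temporal context is visible to the network. This is an assumption on our part: the patent text is truncated at [0031], so the exact composition of the inputs is not specified here.

```python
import numpy as np

def stack_triplet(frames, t):
    """Stack frames t-1, t, t+1 along a new axis so the discriminator
    sees temporal context, not just a single image."""
    return np.stack([frames[t - 1], frames[t], frames[t + 1]], axis=0)

def discriminator_batches(real_frames, generated_frames):
    """Build paired ground-truth / generated inputs for every valid time step."""
    real, fake = [], []
    for t in range(1, len(real_frames) - 1):
        real.append(stack_triplet(real_frames, t))
        fake.append(stack_triplet(generated_frames, t))
    return np.array(real), np.array(fake)

gt = [np.random.rand(32, 32) for _ in range(5)]
gen = [np.random.rand(32, 32) for _ in range(5)]
r, f = discriminator_batches(gt, gen)
print(r.shape, f.shape)  # (3, 3, 32, 32) (3, 3, 32, 32)
```

Feeding frame triplets rather than single frames is what lets a discriminator penalize temporal flicker as well as spatial artifacts.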



Abstract

The invention discloses a three-dimensional signal processing method based on a space-time adversarial network. The network comprises a recurrent generator, an optical flow estimation network, and a spatio-temporal discriminator. The recurrent generator recursively generates high-resolution video frames from the low-resolution input; the optical flow estimation network learns motion compensation between frames; the spatio-temporal discriminator takes both spatial and temporal aspects into account and penalizes unrealistic temporal discontinuities in the result without over-smoothing the image content. The method solves the problem that visual quality degrades noticeably under diverse and blurred motion in video super-resolution, while making full use of the temporal information in the video and guaranteeing the spatio-temporal consistency of the super-resolved video.
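The three components described in the abstract are typically tied together by a combined training objective. As a hedged illustration only (the patent text shown here gives no loss formulation, so the terms below and the weights `w_adv` and `w_warp` are hypothetical), the generator might minimize a content loss plus an adversarial term and a flow-warping temporal-consistency term:

```python
import numpy as np

def content_loss(hr_pred, hr_true):
    """Pixel-wise reconstruction error between output and ground truth."""
    return float(np.mean((hr_pred - hr_true) ** 2))

def adversarial_loss(d_score_fake):
    """Non-saturating GAN loss: push discriminator scores on fakes toward 1."""
    return float(-np.log(d_score_fake + 1e-8))

def warp_loss(frame_t, warped_prev):
    """Temporal consistency: current frame vs. motion-compensated previous one."""
    return float(np.mean((frame_t - warped_prev) ** 2))

def generator_objective(hr_pred, hr_true, d_score_fake, warped_prev,
                        w_adv=0.01, w_warp=1.0):  # hypothetical weights
    return (content_loss(hr_pred, hr_true)
            + w_adv * adversarial_loss(d_score_fake)
            + w_warp * warp_loss(hr_pred, warped_prev))

x = np.ones((8, 8))
print(generator_objective(x, np.zeros_like(x), 0.5, x))
```

The warp term is what encodes the abstract's claim about spatio-temporal consistency: the current output is compared against the motion-compensated previous output, so flicker between frames is penalized directly.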

Description

Technical Field
[0001] The invention relates to the field of video super-resolution, and in particular to a video super-resolution method based on a spatio-temporal adversarial network.
Background Technique
[0002] The spatial resolution of a video depends on the spatial density of the image sensor, motion, system noise, and similar factors, while its temporal resolution depends on the camera's frame rate and exposure time. When the temporal resolution is low, the video suffers from motion blur and motion aliasing. In recent years, with the application and development of deep learning in computer vision, CNN-based video object detection and action recognition have made remarkable progress. However, most neural networks for object detection and action recognition are trained on high-resolution videos, so directly applying the trained networks to low-resolution videos gives unsatisfactory results and performance drops significantly. In aerial and remote sensing videos, objects are ...

Claims


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08; H04N7/01
CPC: H04N7/0117; G06N3/08; G06V20/42; G06N3/048; G06N3/045; G06F18/253
Inventors: 侯兴松, 李瑞敏
Owner 苏州天必佑科技有限公司