Intra-frame prediction method based on generative adversarial network

An intra-frame prediction technology, applied in the video field, that addresses the inability of existing intra-frame prediction methods to deal effectively with complex textures and curved edges, and achieves a bit-rate saving.

Pending Publication Date: 2021-02-05
SUN YAT SEN UNIV


Problems solved by technology

[0006] In order to overcome the problem that the intra-frame prediction method in the prior art cannot effectively deal with complex textures and curved edges, the present invention provides an intra-frame prediction method based on a generative adversarial network.


Examples


Embodiment 1

[0040] As shown in Figure 1, an intra-frame prediction method based on a generative adversarial network includes the following steps:

[0041] S1: Collect original video frame images and perform resampling processing, then combine the resampled images with the original video frame images to form an image data set;

[0042] In a specific embodiment, a sufficient number of video frame images are first collected. High-resolution video frame images, such as images with a resolution of 1000×1000, are downsampled to 1/2 of the original pixel size using a resampling method based on the area pixel relationship; the resampled images and the original video frame images together form the image data set.
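The area-relation resampling described above can be sketched as simple block averaging, which for integer scale factors matches area interpolation (e.g. OpenCV's INTER_AREA). The function name and the 2×2 averaging below are illustrative, not the patent's exact implementation:

```python
import numpy as np

def downsample_area(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Area-based downsampling: each output pixel is the mean of a
    factor x factor block of input pixels."""
    h, w = img.shape[:2]
    h, w = h - h % factor, w - w % factor   # trim so the size divides evenly
    img = img[:h, :w]
    # Reshape into (h/f, f, w/f, f, ...) blocks and average each block.
    new_shape = (h // factor, factor, w // factor, factor) + img.shape[2:]
    return img.reshape(new_shape).mean(axis=(1, 3)).astype(img.dtype)

# A 1000x1000 frame is scaled to 1/2 size, as in the embodiment.
frame = np.random.randint(0, 256, (1000, 1000, 3), dtype=np.uint8)
small = downsample_area(frame, 2)
print(small.shape)  # (500, 500, 3)
```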

[0043] S2: Convert the images in the image data set to YUV format, and crop the converted images to size to obtain the training data set;

[0044] It should be noted that converting the images in the image data set into the YUV format real...
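A minimal sketch of step S2, assuming a BT.601 full-range RGB-to-YUV conversion and an illustrative 64×64 patch size (the patent text specifies neither the conversion variant nor the crop size):

```python
import numpy as np

# BT.601 full-range RGB -> YUV matrix (an assumed convention).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.169, -0.331,  0.500],
                    [ 0.500, -0.419, -0.081]])

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB image (0..255) to YUV, U/V centered at 128."""
    yuv = rgb.astype(np.float64) @ RGB2YUV.T
    yuv[..., 1:] += 128.0                   # shift chroma to unsigned range
    return np.clip(yuv, 0, 255).astype(np.uint8)

def crop_patches(img: np.ndarray, size: int = 64):
    """Tile the image into non-overlapping size x size training patches."""
    h, w = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

frame = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
patches = crop_patches(rgb_to_yuv(frame), 64)
print(len(patches))  # 16 patches of shape (64, 64, 3)
```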



Abstract

The invention discloses an intra-frame prediction method based on a generative adversarial network. The method comprises the steps of: collecting original video frame images, resampling them, and constructing an image data set; converting the images in the image data set into YUV format and cropping the converted images to size to obtain a training data set; training the generative adversarial network with the images of the training data set as its input; embedding the generator part of the trained generative adversarial network into the HEVC reference software; selecting a prediction reference block, inputting it into the trained network, and outputting a prediction value; obtaining a prediction result with the intra-frame prediction modes of the HEVC coding standard, comparing the rate-distortion costs of the two modes, and selecting the optimal intra-frame prediction mode; and generating a prediction-mode flag bit in the code stream according to the optimal intra-frame prediction mode, then continuing to encode according to the original HEVC standard encoding mode. The method can provide accurate prediction for complex textures and curved edges in an image.
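The mode decision described in the abstract, comparing the rate-distortion costs of the GAN prediction and the best HEVC intra prediction, can be sketched as follows; the lambda value and the bit counts are placeholders for illustration, not values from the patent:

```python
import numpy as np

def rd_cost(block, prediction, rate_bits, lam):
    """Rate-distortion cost J = D + lambda * R, with SSE as distortion D."""
    d = np.sum((block.astype(np.int64) - prediction.astype(np.int64)) ** 2)
    return d + lam * rate_bits

def choose_mode(block, hevc_pred, hevc_bits, gan_pred, gan_bits, lam=10.0):
    """Pick whichever prediction has the lower RD cost; the two-candidate
    interface here is an illustrative simplification."""
    j_hevc = rd_cost(block, hevc_pred, hevc_bits, lam)
    j_gan = rd_cost(block, gan_pred, gan_bits, lam)
    return ('gan', j_gan) if j_gan < j_hevc else ('hevc', j_hevc)

# If the GAN reproduces the block exactly, it wins despite a higher rate.
block = np.full((8, 8), 100, dtype=np.uint8)
hevc_pred = np.full((8, 8), 90, dtype=np.uint8)
mode, cost = choose_mode(block, hevc_pred, 5, block, 6)
print(mode)  # 'gan'
```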

Description

Technical field

[0001] The present invention relates to the field of video technology, and more specifically, to an intra-frame prediction method based on a generative adversarial network.

Background technique

[0002] Video coding technology aims to compress video into a code stream that is easy to transmit, which is especially important for high-definition video transmission under limited network bandwidth. In video coding and decoding, prediction methods such as intra-frame and inter-frame prediction effectively reduce the spatial and temporal redundancy of video. The traditional video intra-frame prediction method rests on the premise that texture in an image is directional, and uses several predefined fixed modes: during prediction, each intra-frame mode is enumerated one by one and the one with the best encoding cost is selected. This intra-frame prediction method can predict texture structures with a single dominant angle effectively. However, based only on the assumption of directional...
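The enumeration of fixed directional modes described in the background can be illustrated with a toy version using only three modes; HEVC itself enumerates 35 intra modes (planar, DC, and 33 angular modes):

```python
import numpy as np

def predict(mode, left, top, n):
    """Three simplified intra modes over an n x n block, predicted from
    the left reference column and the top reference row."""
    if mode == 'dc':                         # average of the reference samples
        return np.full((n, n), (left.mean() + top.mean()) / 2)
    if mode == 'horizontal':                 # copy the left column rightwards
        return np.tile(left.reshape(n, 1), (1, n))
    if mode == 'vertical':                   # copy the top row downwards
        return np.tile(top.reshape(1, n), (n, 1))

def best_mode(block, left, top):
    """Enumerate the modes and keep the one with the lowest SSE cost."""
    costs = {m: np.sum((block - predict(m, left, top, block.shape[0])) ** 2)
             for m in ('dc', 'horizontal', 'vertical')}
    return min(costs, key=costs.get)

# A block of vertical stripes is predicted exactly by the vertical mode.
top = np.array([0., 255., 0., 255.])
left = np.full(4, 128.)
block = np.tile(top, (4, 1))
print(best_mode(block, left, top))  # vertical
```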

Claims


Application Information

IPC(8): H04N19/593; H04N19/159; H04N19/132; H04N19/176; H04N19/70
CPC: H04N19/593; H04N19/159; H04N19/132; H04N19/176; H04N19/70
Inventor: 王军, 钟光宇, 胡纪元
Owner SUN YAT SEN UNIV