
An Intra-frame Coding Optimization Method Based on Deep Learning

An intra-frame coding optimization method applied in the field of video coding. It addresses the problem that coding quality and coding complexity cannot be balanced in existing schemes, with the effects of reducing coding complexity, enabling real-time coding, and lightening the computational burden.

Active Publication Date: 2021-08-27
BEIJING EASY AI TECHNOLOGY CO LTD

AI Technical Summary

Problems solved by technology

[0003] The purpose of the present invention is to provide an intra-frame coding optimization method based on deep learning, in order to solve the problem in the prior art that coding quality and coding complexity cannot be balanced.

Method used




Embodiment

[0038] As shown in Figure 1, the deep-learning-based intra-frame coding optimization method of this embodiment performs texture analysis on the input video data before intra-frame prediction. Video data whose prediction mode can be determined from the texture analysis is directly assigned the corresponding prediction mode; video data whose mode remains uncertain is fed into a neural network, which predicts the corresponding prediction mode. The code corresponding to each mode is then obtained, and finally the intra-frame-predicted data is produced from these mode codes.
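For illustration only, the following Python sketch restates the two-stage decision described in paragraph [0038]: a cheap texture analysis assigns a prediction mode directly when the block texture is unambiguous, and only the remaining blocks are handed to a neural-network classifier. The gradient-based texture metric, the mode labels, and the `mode_classifier` callable are hypothetical stand-ins, since the excerpt does not specify them.

```python
# Minimal sketch of the two-stage intra mode decision described in [0038].
# The texture metric, mode labels and classifier below are hypothetical
# placeholders; the patent excerpt does not specify concrete choices.
import numpy as np

def texture_analysis(pu: np.ndarray):
    """Return a prediction mode if the PU texture is unambiguous, else None."""
    gy, gx = np.gradient(pu.astype(np.float32))   # gradients along rows / columns
    horiz, vert = float(np.abs(gx).mean()), float(np.abs(gy).mean())
    if horiz > 4.0 * vert:
        return "VERTICAL"      # dominant vertical structure -> vertical prediction
    if vert > 4.0 * horiz:
        return "HORIZONTAL"    # dominant horizontal structure -> horizontal prediction
    return None                # texture ambiguous, defer to the network

def choose_intra_mode(pu: np.ndarray, mode_classifier) -> str:
    """Assign a mode from texture analysis when possible, otherwise ask the classifier."""
    mode = texture_analysis(pu)
    if mode is not None:
        return mode            # mode determined directly, no network inference needed
    return mode_classifier(pu) # e.g. a trained CNN returning a mode label
```

In this sketch any callable that maps a PU to a mode label can stand in for the neural network; the point is only that the network is consulted for the ambiguous blocks, which is what reduces the overall complexity.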

[0039] First, texture analysis is performed on the input video data before intra-frame prediction. As shown in Figure 2, this stage includes the following steps:

[0040] S1: Divide the input video data into multiple prediction units (PU, Prediction Unit);

[0041] S2: Normalize the luminance (brightness) component in each prediction unit. The image is normalized mainly to reduce the influence... A sketch of steps S1 and S2 is given below.
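A minimal sketch of steps S1 and S2, under assumptions not stated in the excerpt: the PU size (8×8 here) and the exact normalization (min-max scaling of the luma samples to [0, 1]) are illustrative choices only.

```python
# Sketch of S1/S2: tile the luma plane into prediction units and normalize each.
# The 8x8 PU size and min-max normalization are assumptions for illustration.
import numpy as np

def split_into_pus(luma: np.ndarray, pu_size: int = 8):
    """S1: yield non-overlapping pu_size x pu_size luma blocks."""
    h, w = luma.shape
    for y in range(0, h - pu_size + 1, pu_size):
        for x in range(0, w - pu_size + 1, pu_size):
            yield luma[y:y + pu_size, x:x + pu_size]

def normalize_pu(pu: np.ndarray) -> np.ndarray:
    """S2: min-max normalize one PU's luma samples to [0, 1]."""
    pu = pu.astype(np.float32)
    lo, hi = float(pu.min()), float(pu.max())
    if hi == lo:                       # flat block: avoid division by zero
        return np.zeros_like(pu)
    return (pu - lo) / (hi - lo)

# Example usage on one frame's luma plane (values are arbitrary test data):
frame_luma = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
normalized_blocks = [normalize_pu(b) for b in split_into_pus(frame_luma)]
```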



Abstract

The present invention provides an intra-frame coding optimization method based on deep learning, relating to the technical field of video coding. Before intra-frame prediction, texture analysis is performed on the input video data. Video data whose prediction mode can be determined from the texture analysis is directly assigned the corresponding prediction mode; video data whose mode remains uncertain is fed into a neural network, which predicts the corresponding prediction mode. The code corresponding to each mode is then obtained, and finally the intra-frame-predicted data is produced from these mode codes. The present invention uses a neural network to adaptively select the intra-frame prediction mode. Provided the neural network is sufficiently accurate, the method reduces coding complexity and greatly shortens coding time while maintaining coding performance, enabling real-time coding.

Description

Technical Field

[0001] The present invention relates to the technical field of video coding, and in particular to an intra-frame coding optimization method based on deep learning.

Background

[0002] Video coding technology is devoted to compressing video into a code stream that is convenient to transmit, which is especially important for high-definition video transmission under current network bandwidth. In recent years, with continuous advances in the hardware and technology for shooting video, 2K, 4K and even 8K videos have appeared. To meet the transmission requirements of ultra-high-definition video, the Joint Collaborative Team on Video Coding (JCT-VC) proposed the new-generation video coding standard HEVC (High Efficiency Video Coding) in 2013. Compared with the previous-generation standard AVC, HEVC improves compression performance by about 50%, especially when encoding high-definition video. The technologies improved in HEVC include quadtree-based coding unit dat...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N19/103; H04N19/126; H04N19/186
CPC: H04N19/103; H04N19/126; H04N19/186
Inventors: 徐枫, 陈建武, 肖谋
Owner: BEIJING EASY AI TECHNOLOGY CO LTD