A Feature Preserving Video Coding Method

A video coding and video frame technology, applied in the field of image and video coding, which addresses the problems of inaccurate extraction of feature regions and the easy loss of feature information under compression, and achieves the effects of a smaller key point range, more accurate feature regions, and guaranteed subjective quality.

Active Publication Date: 2019-12-24
SYSU CMU SHUNDE INT JOINT RES INST +1

AI Technical Summary

Problems solved by technology

[0009] The present invention provides a feature-preserving video coding method to address two defects of the prior-art video coding methods described above: feature information in video frames is easily lost under compression, and feature regions are extracted inaccurately.



Examples


Embodiment 1

[0032] As shown in Figures 1 and 2, the method provided by the invention specifically includes the following steps:

[0033] 1. Extract key points

[0034] In the FG-SIFT feature extraction method, the specific process of extracting key points is as follows (a minimal structural sketch is given after this list):

[0035] 1) Detection of scale-space extreme points;

[0036] 2) Accurate positioning of key points;

[0037] 3) Generation of key point descriptors.
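
The sketch below only illustrates how these three stages fit together; the stage functions are hypothetical placeholders with their bodies omitted, not the patent's actual FG-SIFT implementation.

```python
import numpy as np

def detect_scale_space_extrema(image: np.ndarray):
    """Stage 1 (placeholder): find candidate extrema in the DoG scale space."""
    raise NotImplementedError

def localize_keypoints(candidates, image: np.ndarray):
    """Stage 2 (placeholder): refine candidate positions and discard unstable points."""
    raise NotImplementedError

def build_descriptors(keypoints, image: np.ndarray):
    """Stage 3 (placeholder): compute a descriptor for each retained key point."""
    raise NotImplementedError

def extract_keypoints(image: np.ndarray):
    """Overall key point extraction flow implied by steps 1)-3) above."""
    candidates = detect_scale_space_extrema(image)
    keypoints = localize_keypoints(candidates, image)
    descriptors = build_descriptors(keypoints, image)
    return keypoints, descriptors
```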

[0038] The algorithm is first briefly introduced for a single layer (octave), as shown in Figure 3.

[0039] First, the Gaussian difference in the x direction, DoG_X(x, y, kσ), is computed. In equation (1), DoG_X(x, y, kσ) is the difference of G_X at two nearby scales, where G_X(x, y, σ) is the convolution of the input image I(x, y) with a 1-D Gaussian kernel G(x, σ) (a 1×n vector) along the x dimension. From Eq. (1), DoG_X(x, y, kσ) can therefore be generated directly by convolving the input image with the difference of the two Gaussian kernels. It can reduce the ...
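
Equation (1) itself is not reproduced in this excerpt; a plausible reconstruction from the definitions above (treated here as an assumption, not the patent's exact notation) is

\[
G_X(x, y, \sigma) = G(x, \sigma) * I(x, y), \qquad
\mathrm{DoG}_X(x, y, k\sigma) = G_X(x, y, k\sigma) - G_X(x, y, \sigma)
= \bigl(G(x, k\sigma) - G(x, \sigma)\bigr) * I(x, y),
\]

where * denotes 1-D convolution along the x dimension. The last equality is what allows DoG_X to be produced by a single convolution with a precomputed kernel difference instead of two separate Gaussian filterings. A minimal numerical check of this identity (assuming sampled 1-D Gaussian kernels and row-wise convolution; not the patent's implementation):

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1-D Gaussian kernel sampled on integer offsets [-radius, radius]."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    g = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

sigma, k = 1.6, np.sqrt(2.0)
radius = int(3 * k * sigma + 0.5)          # shared support so the kernels can be subtracted
g_small = gaussian_kernel_1d(sigma, radius)
g_large = gaussian_kernel_1d(k * sigma, radius)

image = np.random.rand(64, 64)

# Two convolutions followed by a subtraction ...
dog_x_two_convs = convolve1d(image, g_large, axis=1) - convolve1d(image, g_small, axis=1)
# ... versus a single convolution with the precomputed kernel difference.
dog_x_one_conv = convolve1d(image, g_large - g_small, axis=1)

assert np.allclose(dog_x_two_convs, dog_x_one_conv)
```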

Embodiment 2

[0062] In this embodiment, HEVC standard test video sequences with different resolutions (1080p, WVGA, WQVGA) are used to evaluate the algorithm proposed by the present invention. The tests are implemented on the HEVC reference software HM16.5; in all test sequences the first frame is intra-coded (I-frame) and the remaining frames are inter-coded (P-frames).
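
This coding structure (only the first frame intra, all remaining frames P-frames) corresponds to a low-delay P style configuration in HM. The patent does not give the actual configuration file, so the HM-style keys and values below are illustrative assumptions only:

```
# Illustrative HM-style settings (not the patent's actual test configuration)
InputFile            : BasketballDrive_1920x1080_50.yuv   # example 1080p test sequence
SourceWidth          : 1920
SourceHeight         : 1080
FrameRate            : 50
FramesToBeEncoded    : 100
IntraPeriod          : -1      # only the first frame is intra-coded (I-frame)
GOPSize              : 4       # typical low-delay P GOP size
QP                   : 32      # base quantization parameter
```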

[0063] The method of the present invention is now compared with the native coding mode of HM16.5 in the following two aspects:

[0064] 1. Matching efficiency

[0065] The method of the present invention obtains key points with the feature detection algorithm and then generates regions of interest. Based on these regions, it reasonably adjusts the coding rate of the regions of interest and the regions of non-interest and appropriately changes the QP (Quantization Parameter), so as to retain the feature information in each frame of the original video and meet the high-quality...
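
The rate-allocation idea, lowering QP inside regions of interest built around the detected key points and raising it elsewhere, can be sketched as follows. This is a minimal illustration under assumed names (keypoints_to_roi_mask, the block size, and the QP offsets are all hypothetical), not the patent's actual encoder integration:

```python
import numpy as np

def keypoints_to_roi_mask(keypoints, height, width, radius=32):
    """Hypothetical helper: mark a square window around each (x, y) key point as ROI."""
    mask = np.zeros((height, width), dtype=bool)
    for x, y in keypoints:
        x0, x1 = max(0, int(x) - radius), min(width, int(x) + radius)
        y0, y1 = max(0, int(y) - radius), min(height, int(y) + radius)
        mask[y0:y1, x0:x1] = True
    return mask

def per_block_qp(roi_mask, base_qp=32, block=64, roi_offset=-4, non_roi_offset=2):
    """Assign one QP per coding block: lower QP in ROI blocks, higher elsewhere."""
    h, w = roi_mask.shape
    rows, cols = (h + block - 1) // block, (w + block - 1) // block
    qp_map = np.empty((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            blk = roi_mask[r * block:(r + 1) * block, c * block:(c + 1) * block]
            offset = roi_offset if blk.any() else non_roi_offset
            qp_map[r, c] = int(np.clip(base_qp + offset, 0, 51))  # HEVC QP range 0-51
    return qp_map

# Example: two key points in a 1080p frame
qp_map = per_block_qp(keypoints_to_roi_mask([(300, 200), (900, 600)], 1080, 1920))
print(qp_map.shape)  # (17, 30) blocks of 64x64
```

In an actual HEVC encoder such per-block offsets would be applied at the CU/CTU level through the encoder's adaptive-QP mechanism; the offset values above are placeholders.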



Abstract

The video coding method provided by the invention uses a fast Gaussian algorithm instead of the traditional SIFT method to extract feature points, so that the extracted feature regions are more accurate and the range of key points is smaller. After the key points are extracted, corresponding regions of interest are generated; the coding rate of the regions of interest and the regions of non-interest is reasonably adjusted, and the QP values of the two kinds of regions are properly tuned, so as to retain the feature information in each frame of the original video, meet the high-quality requirements of the regions of interest, and at the same time ensure the subjective quality of the entire video. The video coding method provided by the invention can retain as much feature information as possible at the same bit rate, so that a viewer (or a machine) can recognize a specific target more accurately.

Description

Technical field

[0001] The present invention relates to the field of image and video coding, and more specifically, to a feature-preserving video coding method.

Background technique

[0002] With the in-depth research on machine vision CVS (Computer Vision System) in recent years and the wide application of intelligent video processing, a contradiction has emerged between video coding quality (bit rate) and machine recognition ability: low-quality (low bit rate) video often makes machine vision recognition difficult. Therefore, compression oriented toward machine vision needs to differ from traditional compression oriented toward image quality. In addition, both machine vision and video compression carry a large computational overhead, so video compression methods for machine vision must also pay attention to computational complexity.

[0003] HEVC (High Efficiency Video Coding) is the latest generation of v...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N19/167; H04N19/176; H04N19/154
CPC: H04N19/154; H04N19/167; H04N19/176
Inventor: 王军, 杨青, 沈学林
Owner: SYSU CMU SHUNDE INT JOINT RES INST