Perceptual video coding method based on area just-noticeable distortion

A video coding and region-based technology applied in the field of perceptual video coding. It addresses the high complexity and lack of universality of existing JND models, and achieves low complexity, applicability to extensive and rich image content, and high coding efficiency.

Active Publication Date: 2019-07-26
TONGJI UNIV

AI Technical Summary

Problems solved by technology

However, most current JND models are built for a fixed bit rate and must be recalculated whenever the target quantization parameter is updated; traditional JND models therefore lack universality and have high complexity. In addition, such models describe the JND threshold as a continuous function of the quantization parameter, whereas recent research shows that the human eye perceives distortion in a stepwise manner. Traditional JND models are thus limited both in simulating the perception process of the HVS and in guiding perceptual coding.
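To make the continuous-versus-stepwise distinction concrete, the following small sketch (not from the patent; the functional forms, step size, and constants are assumptions) contrasts a JND threshold modeled as a smooth function of the quantization parameter with a stepwise mapping that only changes when the QP crosses a perceptual step boundary:

```python
import numpy as np

def continuous_jnd(qp: int, scale: float = 0.5) -> float:
    # Hypothetical continuous model: the threshold changes with every QP update,
    # so it must be re-evaluated whenever the target QP changes.
    return scale * qp

def stepwise_jnd(qp: int, step: int = 6, scale: float = 0.5) -> float:
    # Hypothetical stepwise model: the threshold stays flat within a perceptual
    # step and only jumps when QP crosses a step boundary (every 6 QPs here).
    return scale * step * (qp // step)

for qp in (20, 22, 23, 24):
    print(qp, continuous_jnd(qp), stepwise_jnd(qp))
# The continuous threshold moves at every QP; the stepwise one changes only
# when QP crosses the assumed boundary between 23 and 24.
```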


Examples


Detailed Description of the Embodiments

[0043] The present invention is described in detail below with reference to the drawings and a specific embodiment. The embodiment is implemented on the basis of the technical solution of the present invention and provides a detailed implementation and specific operation process, but the protection scope of the present invention is not limited to the following embodiment.

[0044] As shown in Figure 1, this embodiment provides a perceptual video coding method based on area just-noticeable distortion. The method includes: obtaining all image blocks of each frame of the video to be compressed; obtaining a predicted JND threshold for each image block through a trained JND prediction model; performing perceptual redundancy removal based on the target bit rate and the predicted JND threshold to obtain the optimal quantization parameter; and realizing perceptual video coding based on the optimal quantization parameter.
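A minimal sketch of this flow, assuming a block-level JND predictor and a monotone distortion-versus-QP relation; the helper names (predict_jnd, distortion_at_qp, select_qp), the block size, and the QP search rule are illustrative assumptions rather than the patent's concrete implementation:

```python
# Illustrative sketch only; the placeholder models and the QP-selection
# rule are assumptions, not the patent's actual algorithm.
from typing import List
import numpy as np

BLOCK = 64  # assumed block size (e.g. an HEVC CTU)

def split_into_blocks(frame: np.ndarray, size: int = BLOCK) -> List[np.ndarray]:
    """Partition a luma frame into non-overlapping size x size blocks."""
    h, w = frame.shape
    return [frame[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

def predict_jnd(block: np.ndarray) -> float:
    """Stand-in for the trained JND prediction model: the largest distortion
    (here, MSE) the block can absorb without a noticeable quality loss."""
    return 10.0  # assumed constant; the real model predicts this per block

def distortion_at_qp(block: np.ndarray, qp: int) -> float:
    """Stand-in for the distortion the encoder would introduce at this QP;
    a real system would measure it from a trial encode or a rate-distortion model."""
    return 0.05 * qp ** 2  # assumed monotone relation

def select_qp(block: np.ndarray, base_qp: int, max_qp: int = 51) -> int:
    """Remove perceptual redundancy: take the largest QP whose distortion
    stays at or below the predicted JND threshold, never dropping below
    the base QP implied by the target bit rate."""
    jnd = predict_jnd(block)
    qp = base_qp
    while qp < max_qp and distortion_at_qp(block, qp + 1) <= jnd:
        qp += 1
    return qp

def encode_frame(frame: np.ndarray, base_qp: int) -> List[int]:
    """Per-block QPs that would drive the perceptual encoding pass."""
    return [select_qp(b, base_qp) for b in split_into_blocks(frame)]

frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(encode_frame(frame, base_qp=30)[:4])
```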

[0045] The JND prediction model is ...


Abstract

The invention relates to a perceptual video coding method based on area just-noticeable distortion. The method comprises the following steps: obtaining all image blocks of each frame of a video to be compressed; obtaining a predicted JND threshold for the image blocks through a trained JND prediction model; carrying out perceptual redundancy removal based on a target bit rate and the predicted JND threshold to obtain an optimal quantization parameter; and realizing perceptual video coding based on the optimal quantization parameter. Under the constraint that the subjective perceptual quality of the video remains unchanged, and at any target bit rate, the method saves bit rate to the greatest extent; compared with the prior art, it has the advantages of low complexity, high robustness, and high efficiency.

Description

Technical field

[0001] The present invention relates to the field of video coding, and in particular to a perceptual video coding method based on area just-noticeable distortion.

Background technique

[0002] With the growing ability of portable hardware devices to capture rich multimedia, high-definition and 4K ultra-high-definition video have become widespread. To facilitate the storage and transmission of such large-capacity video, video coding performance must be further improved. The High Efficiency Video Coding (HEVC) standard, proposed in 2012, has become the current mainstream advanced coding standard, but it still measures compression quality with traditional objective metrics such as mean square error (MSE) and peak signal-to-noise ratio (PSNR). However, such metrics cannot accurately reflect the subjective perception of the human eye, because the human visual system (HVS) differs in its sensitivity to dis...
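For reference, MSE and PSNR are purely pixel-wise metrics. The short sketch below computes them for 8-bit frames using their standard definitions (it is not code from the patent), which makes plain that they weigh every pixel error equally, regardless of how visible the error is to the HVS:

```python
import numpy as np

def mse(ref: np.ndarray, rec: np.ndarray) -> float:
    """Mean squared error between reference and reconstructed frames."""
    diff = ref.astype(np.float64) - rec.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(ref: np.ndarray, rec: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit content (peak = 255)."""
    m = mse(ref, rec)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```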

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): H04N19/14, H04N19/124, H04N19/176, H04N19/146, H04N19/154
CPC: H04N19/14, H04N19/124, H04N19/176, H04N19/146, H04N19/154
Inventors: 王瀚漓 (Wang Hanli), 张鑫宇 (Zhang Xinyu)
Owner: TONGJI UNIV