
A Visual Perceptual Coding Method Based on Multi-Domain JND Model

A visual perceptual coding method applied in the field of video information processing. It addresses the problems that the basic theory of human visual perception is immature, that some characteristics of the human eye remain poorly explained, and that existing coding standards do not take human-eye characteristics into account.

Active Publication Date: 2020-06-16
XIAMEN UNIV

AI Technical Summary

Problems solved by technology

Research into the human visual system began many years ago, but because it spans disciplines such as physiology and psychology, the corresponding basic theory is not yet mature and some characteristics of the human eye cannot be well explained. In digital signal processing there is still room to improve coding compression ratios, yet to date no coding standard has exploited human-eye characteristics to improve compression efficiency.




Embodiment Construction

[0084] The invention provides a visual perceptual coding algorithm based on a multi-domain JND model, comprising two parts: the multi-domain JND model itself and a coding suppression strategy. The multi-domain JND model covers three domains: spatial, temporal, and frequency. The frequency-domain model depends only on the spatial frequency and viewing angle associated with each coefficient position in the transform block, and is used to compute the basic JND threshold. The spatial-domain model comprises two parts, a luminance masking modulation factor and a contrast masking modulation factor: the luminance masking factor depends on the average luminance and spatial frequency of the transform block and corrects for the human eye's differing sensitivity to distortion at different luminance levels, while the contrast masking factor depends on the average texture intensity and spatial frequency of the transform block and is used to correct the...
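The layered structure described above can be sketched in code. This is a minimal illustration under loose assumptions, not the patent's actual formulas: the base-threshold and masking expressions are simplified stand-ins (in the spirit of classic DCT-domain JND models), and every function name and constant here is hypothetical.

```python
import math
import statistics

def base_threshold(i, j, block_size=8):
    """Frequency-domain base JND threshold for DCT position (i, j).

    Simplified model: the eye's sensitivity falls off with spatial
    frequency, so the threshold grows away from the DC coefficient.
    (Illustrative constants, not the patent's values.)
    """
    freq = math.hypot(i, j) / block_size      # normalized spatial frequency
    return 1.0 + 4.0 * freq ** 2

def luminance_factor(pixels):
    """Luminance masking: raise the threshold in very dark or very
    bright blocks, where the eye is less sensitive to distortion."""
    avg = statistics.mean(pixels)             # average luminance, assumed 0..255
    if avg <= 60:
        return 1.0 + (60 - avg) / 60.0
    if avg >= 170:
        return 1.0 + (avg - 170) / 85.0
    return 1.0

def contrast_factor(pixels):
    """Contrast (texture) masking: textured blocks hide more distortion."""
    texture = statistics.pstdev(pixels)       # crude texture-energy proxy
    return 1.0 + min(texture / 64.0, 1.0)

def jnd_threshold(pixels, i, j, temporal_factor=1.0):
    """Multi-domain JND threshold for one DCT coefficient: the
    frequency-domain base threshold modulated by the spatial-domain
    (luminance, contrast) and temporal masking factors."""
    return (base_threshold(i, j)
            * luminance_factor(pixels)
            * contrast_factor(pixels)
            * temporal_factor)
```

For a flat mid-gray block, both masking factors reduce to 1.0 and the threshold is just the frequency-domain base value; dark, bright, or textured blocks raise it, mirroring the modulation roles described above.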

the structure of the environmentally friendly knitted fabric provided by the present invention; figure 2 Flow chart of the yarn wrapping machine for environmentally friendly knitted fabrics and storage devices; image 3 Is the parameter map of the yarn covering machine
Login to View More


Abstract

The invention discloses a visual perceptual coding method based on a multi-domain JND (Just Noticeable Difference) model, and relates to video information processing. The method comprises the following steps: using a spatio-temporal multi-domain JND model, calculate for each transform coefficient in a DCT (Discrete Cosine Transform) block the basic spatial-domain JND threshold, the luminance masking modulation factor, the contrast masking modulation factor, and the temporal-domain masking modulation factor, thereby obtaining each coefficient's multi-domain JND threshold; introduce a block-level perception-based distortion-probability evaluation criterion into the transform coding process, and search for each coefficient's correction factor relative to its JND threshold with an adaptive search algorithm to obtain a suppression value for the transform coefficient; finally, subtract the most appropriate suppression value, obtained from the corresponding calculation, from the original transform coefficient, and pass the resulting coefficient to the entropy coding stage. With the multi-domain JND model and the suppression strategy guided by block-level perceptual distortion probability, the coding rate can be effectively reduced while maintaining a given level of subjective quality, further improving the compression ratio of current coding standards.
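The suppression step the abstract describes can be sketched as follows: each transform coefficient is shrunk toward zero by at most its scaled JND threshold, and the residual replaces the original coefficient before entropy coding. This is a hedged sketch, not the patent's implementation; the `beta` parameter stands in for the per-coefficient correction factor found by the patent's adaptive search, which is not reproduced here.

```python
def suppress_coefficient(coeff, jnd, beta=1.0):
    """Shrink one transform coefficient toward zero by at most
    beta * jnd (its scaled JND threshold); distortion kept below the
    JND threshold is, by construction, not noticeable to the viewer.
    `beta` is a stand-in for the adaptively searched correction factor."""
    if coeff == 0:
        return 0
    step = min(abs(coeff), beta * jnd)        # never overshoot past zero
    return coeff - step if coeff > 0 else coeff + step

def suppress_block(coeffs, jnd_map, beta=1.0):
    """Apply JND-guided suppression to a whole block of coefficients;
    the result is what would be handed to the entropy coding stage."""
    return [[suppress_coefficient(c, t, beta)
             for c, t in zip(row, trow)]
            for row, trow in zip(coeffs, jnd_map)]

# Coefficients shrink by up to their threshold but never flip sign:
print(suppress_coefficient(10.0, 3.0))   # → 7.0
print(suppress_coefficient(-2.0, 3.0))   # → 0.0 (clamped at zero)
```

Shrinking coefficients toward zero shortens the codewords emitted by entropy coding, which is how the scheme trades perceptually invisible distortion for a lower bitrate.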

Description

Technical field [0001] The invention relates to video information processing, and in particular to a visual perceptual coding method based on a multi-domain JND model. Background technique [0002] With the development of multimedia technology, demands on video resolution keep rising, and 2K, 4K, and even 8K video will become widespread in the near future. Video coding standards came into being to meet the storage and transmission requirements of such huge volumes of video data. The latest video coding technology is based on Shannon's information theory: the optimal coding method is found by searching over many coding modes, a process that requires a large amount of computation to improve accuracy. The resulting gains are flattening out, indicating that coding methods based on this idea have entered a development bottleneck. It is particularly important for the development of future co...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N19/60, H04N19/147, H04N19/61
CPC: H04N19/147, H04N19/60, H04N19/61
Inventors: Guo Jiefeng (郭杰锋), Hu Gong (胡巩), Huang Lianfen (黄联芬)
Owner XIAMEN UNIV