
Image JND Threshold Calculation Method Based on Visual Attention Mechanism in DCT Domain

An image technology based on the visual attention mechanism, applied in the field of image/video coding. It solves the problem that existing JND models do not consider visual attention, achieving the effect of fine and accurate segmentation.

Publication Date: 2016-11-23 (Inactive)
TONGJI UNIV

AI Technical Summary

Problems solved by technology

Since most image/video coding standards are based on the DCT domain (e.g., JPEG, H.261/3/4, MPEG-1/2/4), DCT-domain JND models have attracted the attention of many researchers. For example, Document 2 (Z. Wei and K. N. Ngan, "Spatial just noticeable distortion profile for image in DCT domain," in Proc. IEEE Int. Conf. Multimedia and Expo, pp. 925-928, 2008) combines the luminance adaptation characteristic of the image, the spatial contrast sensitivity effect, and a contrast masking effect based on block classification. However, that model does not consider the influence of the human visual attention mechanism on the JND threshold, so its calculation accuracy needs further improvement.



Examples


Embodiment Construction

[0029] The present invention is further described below with a specific example, in conjunction with the accompanying drawings:

[0030] The example provided by the present invention uses MATLAB 7 as the simulation platform and the 512×512 BMP grayscale image Airplane as the test image. The example is described in detail below, step by step:

[0031] Step (1): select the 512×512 BMP grayscale image as the input test image and apply a block-wise 8×8 DCT to it, transforming it from the spatial domain to the DCT domain, as sketched below;
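A minimal Python sketch of step (1), assuming the standard orthonormal DCT-II applied to 8×8 non-overlapping blocks (the patent's own experiments ran in MATLAB 7; the function and variable names here are illustrative, not from the source):

```python
# Illustrative only: blockwise 8x8 DCT as in step (1). We assume SciPy's
# orthonormal DCT-II matches the block transform intended by the patent.
import numpy as np
from scipy.fftpack import dct

def blockwise_dct(image, block=8):
    """Transform a grayscale image from the spatial domain to the DCT
    domain, one non-overlapping block x block tile at a time."""
    h, w = image.shape
    coeffs = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y+block, x:x+block].astype(np.float64)
            # Separable 2-D DCT: transform rows first, then columns.
            coeffs[y:y+block, x:x+block] = dct(
                dct(tile, axis=0, norm='ortho'), axis=1, norm='ortho')
    return coeffs
```

For the 512×512 Airplane test image this yields a 64×64 grid of 8×8 coefficient blocks.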

[0032] Step (2): in the DCT domain, the just noticeable distortion value T_JND is obtained as the product of the basic spatial contrast sensitivity threshold and the luminance adaptation modulation factor, computed as follows:

[0033] T_JND(n,i,j) = T_Basic(n,i,j) × F_lum(n)   (1)

[0034] Here, T_Basic(n,i,j) represents the spatial contrast sensitivity threshold of DCT coefficient (i,j) in block n, and F_lum(n) represents the luminance adaptation modulation factor of block n.
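A hedged sketch of formula (1); the exact parameterisations of T_Basic and F_lum are not given in this excerpt, so both are taken as inputs:

```python
# Formula (1): T_JND(n,i,j) = T_Basic(n,i,j) * F_lum(n).
# t_basic and f_lum stand in for the patent's models, whose exact
# definitions are not reproduced in this excerpt.
import numpy as np

def jnd_threshold(t_basic, f_lum):
    """t_basic: (num_blocks, 8, 8) spatial contrast sensitivity thresholds,
    one per DCT coefficient; f_lum: (num_blocks,) luminance adaptation
    factor, one per block. Returns T_JND with t_basic's shape."""
    return t_basic * f_lum[:, None, None]  # broadcast F_lum over each block
```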



Abstract

An image JND threshold calculation method in the DCT domain based on the visual attention mechanism. The invention proposes two schemes for combining saliency with block classification: the first combines the visual attention masking factor of each single point with the block-classification masking factor in a point-to-point manner; the second uses the average saliency of each block to represent the saliency of the whole block, and then combines the block-based visual attention masking factor with the block-classification masking factor in a block-to-block manner. The traditional JND threshold is modulated by the value calculated from the integrated contrast masking function, yielding a more accurate JND threshold. Both schemes effectively improve the accuracy of the JND threshold, making it match the human visual system more closely. The model realized by the proposed method can accommodate more noise and, in terms of PSNR, improves by 0.54 dB on average.
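An illustrative sketch of the two combination schemes described above. The saliency-to-masking-factor mapping and the combination by multiplication are assumptions for illustration; the patent's integrated contrast masking function is not reproduced in this excerpt:

```python
# Two ways to fold a saliency map into the contrast masking factor.
# All function names and the product form are illustrative assumptions,
# not the patent's exact formulas.
import numpy as np

def attention_factor(saliency):
    """Hypothetical monotone mapping: more salient regions draw attention,
    so they tolerate less distortion (smaller masking factor)."""
    return 1.0 / (1.0 + saliency)

def combine_pointwise(saliency_map, block_class_factor, block=8):
    """Scheme 1: combine the per-pixel attention masking factor with the
    per-block classification masking factor point to point."""
    per_pixel = np.kron(block_class_factor, np.ones((block, block)))
    return attention_factor(saliency_map) * per_pixel

def combine_blockwise(saliency_map, block_class_factor, block=8):
    """Scheme 2: represent each block by its mean saliency, then combine
    with the classification masking factor block to block."""
    h, w = saliency_map.shape
    mean_sal = saliency_map.reshape(h // block, block,
                                    w // block, block).mean(axis=(1, 3))
    return attention_factor(mean_sal) * block_class_factor
```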

Description

Technical Field

[0001] The invention relates to the technical field of image/video coding.

Technical Background

[0002] Traditional image/video coding technology mainly compresses spatial redundancy, temporal redundancy, and statistical redundancy, but rarely considers the characteristics and psychological effects of the human visual system, so a large amount of visually redundant data is encoded and transmitted. To further improve coding efficiency, researchers have begun studying the removal of visual redundancy. At present, an effective way to represent visual redundancy is the just noticeable distortion model based on psychology and physiology, referred to as the JND model (also called the just detectable distortion model). It describes changes that the human eye cannot perceive: because of the various masking effects of the human visual system, the human eye can only perceive noise above a certain threshold, and that threshold is the just detectable distortion.
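A hedged illustration of the JND idea in [0002]: noise whose amplitude stays at the JND threshold sits exactly at the limit of visibility. Injecting ±T_JND noise and measuring PSNR is a common evaluation protocol for JND models, though this excerpt does not spell out the patent's exact procedure:

```python
# Illustrative JND-guided noise injection; not the patent's exact protocol.
import numpy as np

def inject_jnd_noise(dct_coeffs, t_jnd, seed=0):
    """Perturb each DCT coefficient by its JND threshold with a random
    sign, placing the distortion at the limit of visibility."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=dct_coeffs.shape)
    return dct_coeffs + signs * t_jnd

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two grayscale images."""
    mse = np.mean((np.asarray(original, dtype=np.float64) - distorted) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A JND model that accommodates more noise at the same perceived quality is considered more accurate, which is the figure of merit the abstract reports.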


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N19/61, H04N19/154
Inventors: 张冬冬, 高利晶, 臧笛, 孙杳如
Owner: TONGJI UNIV