
Method for constructing JND model of screen image

A method for constructing a screen image JND model, applied in the field of screen image visual redundancy estimation, which addresses the problem that existing JND models cannot accurately estimate the visual redundancy of screen images.

Active Publication Date: 2019-11-01
HUAQIAO UNIVERSITY


Problems solved by technology

However, most current JND models target only natural images and cannot accurately estimate the visual redundancy of screen images.




Embodiment Construction

[0038] The present invention will be further described below through specific embodiments.

[0039] To accurately estimate the visual redundancy in screen images, the present invention provides a method for constructing a screen image JND model, as shown in Figure 1. The specific implementation steps are as follows:

[0040] 1) Input the screen image.

[0041] 2) Use text segmentation to obtain the text region of the screen image.
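The patent does not name a specific text segmentation technique, so the following is only an illustrative sketch, assuming a block-wise heuristic: screen-content text blocks tend to have high local variance but few distinct gray levels. The function name and both thresholds are hypothetical, not from the patent.

```python
import numpy as np

def text_region_mask(img, block=8, var_thresh=1500.0):
    """Hypothetical text segmentation sketch for screen content.

    A block is marked as text if it has high local variance (sharp
    strokes on a flat background) but few distinct gray levels
    (computer-rendered text, unlike natural texture).
    """
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block].astype(np.float64)
            if patch.var() > var_thresh and len(np.unique(patch)) < 16:
                mask[y:y + block, x:x + block] = True
    return mask
```

A real implementation would typically refine such a coarse mask with morphology or a dedicated text detector; this only illustrates the region-splitting idea.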

[0042] 3) Use a Gabor filter to extract the edges of the text region, dividing the screen image into a text edge region and a non-text edge region.
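Step 3 can be sketched with a small Gabor filter bank. The kernel parameters and the use of the maximum absolute response over four orientations are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    # Real part of a Gabor kernel; parameter values are illustrative.
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return k - k.mean()  # zero DC so flat regions give zero response

def conv2_same(img, k):
    # Naive 'same'-size correlation with edge padding.
    h, w = img.shape
    kh, kw = k.shape
    p = np.pad(img.astype(np.float64), ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
    out = np.zeros((h, w))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def gabor_edge_response(img, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # Maximum absolute response over a small bank of orientations;
    # thresholding this map would split text-edge from non-text-edge pixels.
    return np.max([np.abs(conv2_same(img, gabor_kernel(theta=t))) for t in thetas], axis=0)
```

In practice a library routine (e.g. OpenCV's `getGaborKernel`) would replace the hand-rolled kernel; the loop version is kept here only to stay self-contained.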

[0043] 4) Use the edge width and edge contrast of the text edge pixels to calculate the edge structure distortion sensitivity and the edge contrast masking, respectively, and obtain the visual masking model of the text edge region.

[0044] Specifically, calculate the strong edge contrast T_{c+}(x, y) and the weak edge contrast T_{c-}(x, y) as follows:

[0045]

[0046] ...
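The exact formulas for T_{c+}(x, y) and T_{c-}(x, y) are not reproduced above ([0045] is elided), so the following only sketches the stated idea of step 4 under assumed functional forms: distortion sensitivity falls as edges get wider (sharp text edges tolerate little error), and contrast masking grows with local edge contrast. Every constant and function name here is hypothetical.

```python
import numpy as np

def text_edge_visibility_threshold(edge_width, edge_contrast, a=1.0, b=0.1):
    """Hypothetical stand-in for the patent's text-edge masking model.

    sensitivity in (0, 1]: highest for the sharpest (narrowest) edges.
    masking >= 1: stronger local contrast hides more distortion.
    The returned value is a per-pixel visibility threshold.
    """
    edge_width = np.asarray(edge_width, dtype=float)
    edge_contrast = np.asarray(edge_contrast, dtype=float)
    sensitivity = 1.0 / (1.0 + a * edge_width)
    masking = 1.0 + b * edge_contrast
    return masking / sensitivity
```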



Abstract

The invention relates to a method for constructing a JND model of a screen image. First, the text region of the screen image is obtained by text segmentation. Second, the edge pixels of the text region are extracted, dividing the screen image into a text edge region and a non-text edge region. The edge structure distortion sensitivity and edge contrast masking are then calculated from the edge width and edge contrast to obtain a visual masking model of the text edge region, and the luminance adaptation and contrast masking effects of the non-text edge region are calculated to obtain a visual masking model of the non-text edge region. Finally, the visual masking models of the text edge region and the non-text edge region are combined to obtain the screen image JND model. The method fully considers the characteristics of screen images and the differing visual perception of the human eye across different regions of a screen image, accurately estimates the visual redundancy of the screen image, and can be widely applied in the technical field of screen images.
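The abstract above also describes the non-text-edge model (luminance adaptation and contrast masking) and the final fusion of the two region models. A minimal sketch, assuming a classic Chou-and-Li-style luminance adaptation curve as the non-text-edge stand-in and a simple region-wise fusion; the patent's actual formulas and combination rule may differ.

```python
import numpy as np

def luminance_adaptation(bg):
    # Classic background-luminance visibility threshold (Chou & Li style):
    # high in dark regions, minimal near mid-gray, rising slowly for bright
    # backgrounds. Used here only as an assumed stand-in.
    bg = np.asarray(bg, dtype=float)
    dark = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0
    bright = 3.0 / 128.0 * (bg - 127.0) + 3.0
    return np.where(bg <= 127.0, dark, bright)

def combine_jnd(text_edge_mask, t_text_edge, t_non_text_edge):
    # Region-wise fusion: each pixel takes the threshold of its region.
    return np.where(text_edge_mask, t_text_edge, t_non_text_edge)
```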

Description

Technical Field

[0001] The invention relates to the field of image processing, and in particular to a method for estimating screen image visual redundancy.

Background

[0002] With the rapid development of the mobile Internet and multimedia information technology, applications such as virtual screen sharing, distance education and online games are becoming increasingly popular, producing huge amounts of screen image/video data and posing enormous challenges for image/video coding technology. Considering that the human eye is the ultimate receiver of images/videos, how to use the visual characteristics of the human eye to remove visual redundancy in images/videos and improve perceptual coding efficiency has become a research hotspot in academia and industry. Among these approaches, the Just Noticeable Difference (JND) model can quantitatively estimate the visual redundancy in images/videos. However, most of the current J...

Claims


Application Information

IPC(8): G06K9/46; G06T7/11; G06T7/13
CPC: G06T7/11; G06T7/13; G06V10/449
Inventors: 曾焕强, 曾志鹏, 陈婧, 张云, 朱建清, 张联昌
Owner HUAQIAO UNIVERSITY