
Video coding method based on JND model

A video coding and modeling technology, applied in the field of video coding, addressing problems such as the waste of encoding and transmitting information that the human eye cannot perceive

Active Publication Date: 2019-08-16
JINAN UNIVERSITY

Problems solved by technology

It is wasteful for a device terminal to encode and transmit information that cannot be perceived by the human eye




Detailed Description of the Embodiments

[0071] The present invention will be further described in detail below in conjunction with the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.

[0072] A video coding method based on the JND model is a perceptual video coding method based on joint estimation of the pixel domain and the DCT domain, including:

[0073] S1. Establish a pixel-domain JND model. The pixel-domain JND model uses a nonlinear additivity masking model, combining background luminance adaptation and texture masking effects to obtain the pixel-domain JND threshold;

[0074] The pixel-domain JND model adopts the nonlinear additivity model for masking (NAMM), which effectively combines background luminance adaptation and texture masking, is applicable to color images and video, and is a relatively mature JND model. The spatial-domain JND of each pixel can be obtained as an approximation of the nonlinear model, and the pixel-domain JND thresh...
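The nonlinear-additivity combination of the two masking effects can be sketched as follows. This is a minimal NumPy sketch using a common formulation from the JND literature; the overlap constant `c_lt` and the piecewise luminance-adaptation curve are illustrative assumptions, not the patent's exact parameters:

```python
import numpy as np

def luminance_adaptation(bg):
    """Luminance-adaptation visibility threshold T_l for a background
    luminance bg in [0, 255]. Piecewise form commonly used in the JND
    literature (constants here are illustrative)."""
    bg = np.asarray(bg, dtype=np.float64)
    low = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0   # dark backgrounds
    high = (3.0 / 128.0) * (bg - 127.0) + 3.0        # bright backgrounds
    return np.where(bg <= 127, low, high)

def namm_jnd(t_l, t_t, c_lt=0.3):
    """NAMM combination: JND = T_l + T_t - c_lt * min(T_l, T_t).
    The overlap term keeps the two masking effects from being
    double-counted where both are strong."""
    t_l = np.asarray(t_l, dtype=np.float64)
    t_t = np.asarray(t_t, dtype=np.float64)
    return t_l + t_t - c_lt * np.minimum(t_l, t_t)
```

Given a luminance-adaptation threshold and a texture-masking threshold per pixel, `namm_jnd` yields the per-pixel spatial JND threshold used for preprocessing.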


Abstract

The invention belongs to the technical field of video coding, and relates to a video coding method based on a JND model. The method comprises the following steps: establishing a pixel-domain JND model; establishing an improved DCT-domain JND model, introducing a spatio-temporal CSF function that better conforms to the characteristics of the human eye; preprocessing the original video with the pixel-domain JND model to remove visual redundancy in the video; using the improved DCT-domain JND model to process the transform non-skip mode, removing distortion that cannot be perceived by human eyes; and, for the transform skip mode with very small prediction residuals, using a computationally simple luminance-masking model to reduce computational complexity. By adopting the improved DCT-domain JND model, the processing result better matches human visual perception; by using different models for different modes, perceptual redundancy during video coding can be further removed, and video coding efficiency is greatly improved.
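The per-mode processing summarized above can be sketched as follows. Function names and the scalar thresholds are hypothetical illustrations; a real encoder would compute a per-coefficient DCT-domain JND threshold rather than a single scalar:

```python
import numpy as np

def jnd_suppress(values, jnd):
    """Zero out components whose magnitude falls below the JND
    threshold: by construction, the removed distortion is
    imperceptible to the human eye."""
    values = np.asarray(values, dtype=np.float64)
    return np.where(np.abs(values) <= jnd, 0.0, values)

def process_block(coeffs_or_residual, is_transform_skip, dct_jnd, lum_jnd):
    """Per-mode model selection as described in the abstract:
    DCT-domain JND thresholds for transform non-skip blocks, and a
    cheap luminance-masking threshold applied directly to the small
    prediction residual for transform-skip blocks."""
    threshold = lum_jnd if is_transform_skip else dct_jnd
    return jnd_suppress(coeffs_or_residual, threshold)
```

The design point is that transform-skip blocks carry tiny residuals, so a full DCT-domain threshold computation would cost more than it saves; a simple pixel-domain luminance mask suffices there.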

Description

Technical field

[0001] The invention belongs to the technical field of video coding, and relates to a video coding method based on a JND model.

Background technique

[0002] In recent years, with the rapid development of Internet technology and smart devices, multimedia video and images have become widely used, affecting and changing many aspects of human life. Early computer and communication systems focused primarily on processing and transmitting text or voice. With the popularity of video applications, however, the limits of computer processing power have become apparent. Because the amount of video information is huge, unprocessed video has very low application value for either storage or transmission. For example, to transmit a standard-definition color video with a frame rate of 25 frames per second and a resolution of 720*576 over the network in real time, a bandwidth of 720*576*24*25 = 248832000 bits per second is require...
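The bandwidth figure in [0002] can be checked directly (24 bits per pixel corresponds to 8 bits each for the three color channels):

```python
# Uncompressed bit rate for the example in paragraph [0002]:
# 720x576 pixels, 24 bits per pixel, 25 frames per second.
width, height, bits_per_pixel, fps = 720, 576, 24, 25
bits_per_second = width * height * bits_per_pixel * fps
print(bits_per_second)  # 248832000, i.e. about 248.8 Mbit/s
```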


Application Information

IPC(8): H04N19/85; H04N19/154; H04N19/625; H04N19/80
CPC: H04N19/85; H04N19/154; H04N19/625; H04N19/80
Inventors: 易清明, 范文卉, 石敏
Owner JINAN UNIVERSITY