
Perceptual video compression method based on JND and AR model

A video compression method combining an auto-regressive (AR) model with perceptual coding, classified under digital video signal modification, television, and electrical components. It addresses the insufficient robustness and effectiveness of prior texture-segmentation approaches, and achieves a reduced bit rate, improved compression efficiency, and improved real-time performance.

Inactive Publication Date: 2010-09-22
SOUTHEAST UNIV

AI Technical Summary

Problems solved by technology

These works consider only static colour or edge information when segmenting images, ignoring the human visual system (HVS), so their robustness and effectiveness across different texture regions are limited.


Embodiment Construction

[0043] The technical scheme of the invention is described in detail below with reference to the accompanying drawings:

[0044] For stereoscopic video, which carries a huge amount of information, removing perceptual redundancy has a particularly pronounced effect on coding efficiency. Research on the HVS began in psychophysiology and was later applied widely to vision-related fields. In stereoscopic video processing, the elimination of perceptual redundancy cannot be ignored alongside temporal and spatial redundancy. The invention proposes a perceptual video compression method based on JND and AR models, comprising a texture-region segmentation algorithm and a synthesis algorithm based on an autoregressive model: the JND-based segmentation algorithm first segments the texture regions in the video, and the autoregressive model then synthesizes those regions, as shown in Figure 1.
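The excerpt does not reproduce the patent's exact spatio-temporal JND formulation, so the segmentation step can only be illustrated. Below is a minimal sketch, assuming a common spatial JND of the Chou/Li style (luminance adaptation plus texture masking) and an assumed block-mean threshold to mark texture regions; the function names and the threshold value are hypothetical.

```python
import numpy as np

def spatial_jnd(y):
    """Simplified spatial JND map: luminance adaptation + texture masking.

    This is a generic Chou/Li-style model, NOT the patent's exact formula.
    """
    y = y.astype(np.float64)
    # Local background luminance via a 5x5 box filter.
    pad = np.pad(y, 2, mode='edge')
    bg = np.zeros_like(y)
    for dy in range(5):
        for dx in range(5):
            bg += pad[dy:dy + y.shape[0], dx:dx + y.shape[1]]
    bg /= 25.0
    # Luminance-adaptation threshold (vision is less sensitive in dark areas).
    lum = np.where(bg <= 127,
                   17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                   3.0 / 128.0 * (bg - 127.0) + 3.0)
    # Texture masking: distortion hides better where the gradient is large.
    gy, gx = np.gradient(y)
    tex = 0.12 * np.sqrt(gx ** 2 + gy ** 2)
    # Combine the two effects, subtracting part of their overlap.
    return lum + tex - 0.3 * np.minimum(lum, tex)

def texture_mask(y, thresh=6.0):
    """Mark 8x8 blocks whose mean JND exceeds a threshold as texture regions."""
    jnd = spatial_jnd(y)
    h, w = (s // 8 * 8 for s in y.shape)
    blocks = jnd[:h, :w].reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))
    return blocks > thresh

frame = (np.random.rand(64, 64) * 255).astype(np.uint8)
mask = texture_mask(frame)
print(mask.shape)  # (8, 8)
```

Blocks flagged by `texture_mask` would then be skipped by the conventional encoder and reconstructed by the AR synthesis step instead, which is where the bit-rate saving comes from.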

[0045] The present invention's perceptual video compression method based on JND a...



Abstract

The invention discloses a perceptual video compression method based on a just-noticeable-distortion (JND) and auto-regressive (AR) model, comprising a texture-region segmentation algorithm and an AR-model-based synthesis algorithm. The texture regions in a video are segmented using the JND-based algorithm and then synthesized using the AR model. The invention provides a spatio-temporal JND model that segments texture regions accurately and effectively, and designs an AR model that preserves video quality while greatly improving real-time performance through simple computation. The resulting video compression technique, which incorporates human visual characteristics, improves compression efficiency and reduces the bit rate.
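The abstract names the AR model but this excerpt does not give its neighbourhood shape or fitting procedure. The sketch below assumes a causal four-neighbour 2-D AR model fitted by least squares, which is a standard way such texture synthesizers are built; the helper names `fit_ar` and `synthesize` and the noise term are illustrative assumptions.

```python
import numpy as np

def fit_ar(patch):
    """Fit a causal 4-neighbour AR model by least squares on a training patch:
    x[i,j] ~ a*x[i,j-1] + b*x[i-1,j-1] + c*x[i-1,j] + d*x[i-1,j+1].

    The patent's actual AR order/neighbourhood is not given in this excerpt.
    """
    p = patch.astype(np.float64)
    h, w = p.shape
    A, b = [], []
    for i in range(1, h):
        for j in range(1, w - 1):
            A.append([p[i, j-1], p[i-1, j-1], p[i-1, j], p[i-1, j+1]])
            b.append(p[i, j])
    coef, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return coef

def synthesize(seed, coef, h, w, noise_std=2.0, rng=None):
    """Grow an h x w texture row by row from seed rows using the AR model."""
    rng = np.random.default_rng(0) if rng is None else rng
    out = np.zeros((h, w))
    out[:seed.shape[0], :] = seed
    for i in range(seed.shape[0], h):
        for j in range(w):
            # Replicate the nearest available neighbour at the borders.
            left = out[i, j-1] if j > 0 else out[i-1, j]
            ul = out[i-1, j-1] if j > 0 else out[i-1, j]
            ur = out[i-1, j+1] if j < w - 1 else out[i-1, j]
            pred = (coef[0]*left + coef[1]*ul
                    + coef[2]*out[i-1, j] + coef[3]*ur)
            out[i, j] = pred + rng.normal(0.0, noise_std)
    return np.clip(out, 0, 255)

rng = np.random.default_rng(1)
patch = rng.integers(80, 170, size=(16, 16)).astype(np.float64)
coef = fit_ar(patch)
tex = synthesize(patch[:2], coef, 32, 16)
print(tex.shape)  # (32, 16)
```

Because fitting and prediction are just small linear operations, a model of this kind is cheap enough for the real-time performance the abstract claims.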

Description

Technical field

[0001] The invention relates to the field of multimedia signal processing, and in particular to video compression coding.

Background

[0002] Over the past few decades, image and video compression coding has developed greatly. JPEG2000 and MPEG-4 AVC/H.264, which represent the current state of the art, are both highly efficient encoders, and stereoscopic video coding and compression techniques based on H.264 have also appeared in the literature. All of these techniques compress by removing temporal and spatial redundancy. A common shortcoming, however, is that they focus on statistical redundancy and completely ignore perceptual redundancy. In other words, most previous compression criteria were based on rate-distortion performance which, although widely adopted, does not reflect the characteristics of human vision. Therefore, we want to study how to combine...


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N7/50, H04N7/26, H04N19/29
Inventors: 邹采荣, 王翀, 赵力, 王开, 戴红霞, 包永强, 余华
Owner: SOUTHEAST UNIV