
Three-dimensional mixed minimum perceivable distortion model based on depth image rendering

A depth-image and distortion-model technology, applied in image communication, stereo systems, electrical components, etc. It addresses the problems that existing models cannot accurately reflect the human eye's perception of stereoscopic video distortion and leave visual redundancy unremoved, achieving the effect of saving bit rate.

Inactive Publication Date: 2014-07-30
WUHAN UNIV
Cites: 2 · Cited by: 4
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0006] Existing just noticeable distortion (JND) models for stereoscopic video do not consider the influence of binocular visual characteristics such as rivalry or fusion on the masking effect, are effective only for a single viewpoint of the stereoscopic video, and consider only a single factor affecting subjective quality perception. They therefore cannot accurately reflect how the human eye perceives distortion in stereoscopic video, so a large amount of visual redundancy remains unremoved when they are applied to coding.

Method used


Image

Figures 1–3: Three-dimensional mixed minimum perceivable distortion model based on depth image rendering

Examples

Experimental program
Comparison scheme
Effect test

Embodiment Construction

[0033] To help those of ordinary skill in the art understand and implement the present invention, the invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the embodiments described here are intended only to illustrate and explain the present invention, not to limit it.

[0034] Please refer to Fig. 1, Fig. 2 and Fig. 3. The technical solution adopted by the present invention is a stereoscopic hybrid minimum perceivable distortion model based on depth image rendering, comprising the following steps:

[0035] Step 1: Calculate the 2D minimum perceivable distortion value of the input image. This is implemented by nonlinearly adding the luminance adaptation factor and the texture masking factor of the input image and subtracting the overlapping effect between the luminance adaptation factor and the texture masking factor ...
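The sketch below illustrates one common way such a 2D minimum perceivable distortion (JND) map can be computed with a nonlinear additivity rule: the two masking factors are added and their overlap is subtracted. The exact factor formulas, filter sizes, and the overlap coefficient c_overlap are illustrative assumptions, not values taken from the patent text.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def jnd_2d(luma, c_overlap=0.3):
    """Sketch of a 2D JND map from luminance adaptation and texture masking,
    combined by nonlinear additivity (the overlap term is subtracted).
    All constants here are illustrative assumptions."""
    luma = luma.astype(np.float64)

    # Luminance adaptation: higher thresholds in very dark and very bright regions.
    bg = uniform_filter(luma, size=5)                # local background luminance
    t_lum = np.where(bg <= 127,
                     17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                     3.0 / 128.0 * (bg - 127.0) + 3.0)

    # Texture masking: proportional to the local gradient magnitude.
    grad = np.hypot(sobel(luma, axis=0), sobel(luma, axis=1))
    t_tex = 0.117 * grad

    # Nonlinear additivity: add both factors, subtract their overlap.
    return t_lum + t_tex - c_overlap * np.minimum(t_lum, t_tex)
```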



Abstract

The invention discloses a three-dimensional mixed minimum perceivable distortion model based on depth image rendering. In this model, the minimum perceivable distortion value is modulated by a depth masking effect and a geometric distortion effect. The model comprises the steps of calculating a 2D minimum perceivable distortion value, calculating the depth masking degree and the geometric distortion of an image, modeling the exponential relationship between the depth masking degree and geometric distortion and the mixed minimum perceivable distortion, and multiplying the 2D minimum perceivable distortion value by this exponential factor to obtain the three-dimensional mixed minimum perceivable distortion value. The new model overcomes the shortcoming that three-dimensional perception effects such as the depth masking effect and the geometric distortion effect are ignored by existing three-dimensional minimum perceivable distortion models. In addition, the new model is applied to three-dimensional video coding to perform residual filtering, thereby increasing three-dimensional video coding efficiency.
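As a minimal sketch of how the modulation described in the abstract could be wired together: the 2D minimum perceivable distortion map is multiplied by an exponential factor built from the depth masking degree and the geometric distortion, and the result is used to filter coding residuals. The function names, the exact exponential form, and the weights alpha and beta are assumptions for illustration only.

```python
import numpy as np

def hjnd(jnd_2d, depth_masking, geometric_distortion, alpha=0.5, beta=0.5):
    """Hybrid 3D minimum perceivable distortion: the 2D JND map is scaled
    by an exponential modulation of depth masking and geometric distortion.
    alpha, beta and the exponential form are illustrative assumptions."""
    modulation = np.exp(alpha * depth_masking + beta * geometric_distortion)
    return jnd_2d * modulation

def filter_residual(residual, hjnd_map):
    """Residual filtering for coding: residuals whose magnitude falls below
    the perceivable threshold are zeroed, saving bits with no visible loss."""
    return np.where(np.abs(residual) <= hjnd_map, 0.0, residual)
```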

Description

Technical Field

[0001] The invention belongs to the technical field of stereoscopic image processing and relates to depth-image-based rendering (DIBR) video coding technology, in particular to a DIBR-based hybrid just noticeable distortion (HJND) model.

Background Technique

[0002] In stereoscopic TV and free-viewpoint video, stereoscopic perception gives people a more immersive and comfortable visual experience, but at the same time introduces a large amount of video data, which poses great challenges to current stereoscopic video coding technology. To address this coding-efficiency challenge, a depth-map-based virtual view synthesis technology has been proposed: the decoded color video and the corresponding depth video are used as input, and the color video of a virtual viewpoint is synthesized by 3D warping, enabling free-viewpoint stereoscopic video display. [0003]...
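The background paragraph above mentions synthesizing a virtual view by 3D warping from a color video plus its depth map. Below is a minimal sketch of per-pixel warping for a horizontally rectified camera pair, where each pixel is shifted by a disparity derived from its 8-bit depth value. The camera parameters (focal length, baseline, near/far planes) and the lack of hole filling are illustrative assumptions, not details from the patent.

```python
import numpy as np

def synthesize_virtual_view(color, depth, f=1000.0, baseline=0.05,
                            z_near=0.3, z_far=10.0):
    """Sketch of DIBR 3D warping for rectified cameras: each reference pixel
    is shifted horizontally by a disparity computed from its depth value.
    Disoccluded pixels are left as holes (marked in hole_mask)."""
    h, w = depth.shape
    # 8-bit depth value -> metric depth Z (common inverse-depth convention).
    z = 1.0 / ((depth / 255.0) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.rint(f * baseline / z).astype(np.int64)

    virtual = np.zeros_like(color)
    hole_mask = np.ones((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        x_new = xs + disparity[y]                 # horizontal shift only
        valid = (x_new >= 0) & (x_new < w)
        virtual[y, x_new[valid]] = color[y, xs[valid]]
        hole_mask[y, x_new[valid]] = False
    return virtual, hole_mask
```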

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N13/00
Inventors: 胡瑞敏, 李志, 向瑞, 胡文怡, 钟睿, 蔡旭芬
Owner: WUHAN UNIV