Self-adaptive space-time filter for human vision system model

A human visual system (HVS) spatiotemporal filtering technique, applied in the field of video processing, addresses shortcomings of prior models such as gross errors in predicted perceived contrast, failure to account for temporal factors, and failure to account for nonlinearities.

Inactive Publication Date: 2007-08-29
TEKTRONIX INC

AI Technical Summary

Problems solved by technology

However, as shown by M. Cannon ("Perceived Contrast in the Fovea and Periphery", Journal of the Optical Society of America A, Vol. 2, No. 10, pp. 1760-1768, 1985), Foley's model, while useful in predicting perceived contrast near threshold, predicts perceived contrast at mid-contrast levels with gross error.
Additionally, the model proposed by Lubin does not take into account nonlinearities such as those responsible for the spatial frequency doubling and phantom pulse optical illusions.
Many other models based on human vision, such as the model proposed in S. Daly, "The Visible Differences Predictor: An Algorithm for the Assessment of Image Fidelity", Digital Images and Human Vision, ed. Andrew B. Watson, MIT Press, Cambridge, MA, 1993, pp. 162-206, do not take the time factor into account at all.




Embodiment Construction

[0013] Referring now to the adaptive three-dimensional (3-D) filter 10 shown in FIG. 1, the filter combines the accuracy of the human visual system (HVS) model with the efficiency of an equivalent non-HVS model, such as the weighted-SNR algorithm described by T. Hamada et al. ("Picture Quality Assessment System by Three-Layered Bottom-Up Noise Weighting Considering Human Visual Perception", SMPTE Journal, January 1999, pp. 20-26), together with improvements to existing filters for the HVS model. The filter 10 has a pair of adaptive 3-D filters 12, 14, a "center" filter and a "surround" filter, which receive coefficients from a filter adaptation controller 16 acting as a coefficient generator. The input video signal is applied to the filters 12, 14 and to the filter adaptation controller 16. The outputs of the filters 12, 14 are input to a differencing circuit 18. The output of either the center filter 12 or the surround filter 14 is also input to the filter adaptation controller 16. The filter adaptation controller 16 g...
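The center/surround topology described above can be sketched in one dimension as follows. This is a minimal illustration, not the patent's implementation: the first-order, unity-DC-gain building block matches the form given in the abstract, but the coefficient values (`k_center`, `k_surround`) and the per-sample loop are illustrative assumptions.

```python
import numpy as np

def lowpass_block(x, k):
    """First-order, unity-DC-gain, tunable lowpass building block:
    y[n] = y[n-1] + k * (x[n] - y[n-1]), with 0 < k <= 1 setting the bandwidth."""
    y = np.empty(len(x))
    acc = x[0]  # initialize the state to the first sample to avoid a start-up transient
    for n, v in enumerate(x):
        acc += k * (v - acc)
        y[n] = acc
    return y

def adaptive_bandpass(x, k_center=0.5, k_surround=0.1):
    """Difference of two parallel lowpass paths yields an overall bandpass
    response, as in the center/surround topology of filter 10."""
    center = lowpass_block(x, k_center)      # wider-band "center" path (filter 12)
    surround = lowpass_block(x, k_surround)  # narrower-band "surround" path (filter 14)
    return center - surround                 # differencing circuit (block 18)
```

Because both paths have unity DC gain, a constant input cancels exactly in the difference, while a transient (such as a luminance step) produces a bandpass-shaped response.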



Abstract

An adaptive spatio-temporal filter for use in video quality of service instruments based on human vision system models has a pair of parallel, lowpass, spatio-temporal filters receiving a common video input signal. The outputs from the pair of lowpass spatio-temporal filters are differenced to produce the output of the adaptive spatio-temporal filter, with the bandwidths of the pair being such as to produce an overall bandpass response. A filter adaptation controller generates adaptive filter coefficients for each pixel processed based on a perceptual parameter, such as the local average luminance, contrast, etc., of either the input video signal or the output of one of the pair of lowpass spatio-temporal filters. Each of the pair of lowpass spatio-temporal filters has a temporal IIR filter in cascade with a 2-D spatial IIR filter, and each individual filter is composed of a common building block, i.e., a first order, unity DC gain, tunable lowpass filter having a topology suitable for IC implementation. At least two of the building blocks make up each filter with the overall adaptive spatio-temporal filter response having a linear portion and a non-linear portion, the linear portion being dominant at low luminance levels and the non-linear portion being consistent with enhanced perceived brightness as the luminance level increases.
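The abstract's cascade of a temporal IIR filter with a 2-D spatial IIR filter, each assembled from the same first-order building block, might be sketched as below. The separable row-then-column scan order, the causal (non-symmetric) passes, and the coefficient values are simplifying assumptions for illustration, not the patent's actual filter design.

```python
import numpy as np

def spatial_lowpass_2d(frame, k):
    """2-D spatial IIR lowpass built from the first-order building block,
    applied causally along rows and then columns (a simplifying assumption)."""
    out = frame.astype(float).copy()
    for y in range(out.shape[0]):            # horizontal pass
        acc = out[y, 0]
        for x in range(out.shape[1]):
            acc += k * (out[y, x] - acc)
            out[y, x] = acc
    for x in range(out.shape[1]):            # vertical pass
        acc = out[0, x]
        for y in range(out.shape[0]):
            acc += k * (out[y, x] - acc)
            out[y, x] = acc
    return out

def temporal_then_spatial(frames, k_t, k_s):
    """Temporal first-order IIR (per pixel) cascaded with the 2-D spatial IIR;
    each lowpass path of the adaptive filter has this overall structure."""
    state = frames[0].astype(float)          # initialize temporal state to frame 0
    out = []
    for f in frames:
        state = state + k_t * (f - state)    # temporal building block, per pixel
        out.append(spatial_lowpass_2d(state, k_s))
    return out
```

Since every stage has unity DC gain, a flat (constant-luminance) sequence passes through the cascade unchanged; in the full adaptive filter, the controller would additionally retune `k_t` and `k_s` per pixel from a perceptual parameter such as local average luminance.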

Description

Technical Field

[0001] The present invention relates to video processing techniques and, more specifically, to an adaptive spatio-temporal filter for a human visual system model used in determining video quality of service.

Background

[0002] Video is recorded and transmitted by error-prone methods, such as lossy compression systems. This creates a need for objective measures that predict human perception of the resulting errors. The perceptibility of these errors at and above threshold is a function of many factors or parameters of the error and of the video context in which it occurs. Perceptual sensitivity to video errors as a function of spatial frequency varies with local average luminance, local stimulus duration (temporal extent), associated temporal frequency content, area (spatial/angular extent), and the local contrast of the original, error-free or reference, video. Likewise, perceptual sensitivity to video errors as a function of temporal frequency varies ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N17/00, H04N5/205, G06K9/40, G06T5/00, G06T5/50, G06T7/00, H04N5/14
CPC: G06T2207/20182, G06T2207/10016, G06T5/20, G06T5/50, G06T5/001, G06T7/0004, G06T5/002
Inventor: K. M. Ferguson
Owner: TEKTRONIX INC