
Time domain prediction-based saliency extraction method

A video-analysis technology for extracting attention (saliency) values. It addresses the problems that existing attention extraction methods are computationally complex and cannot be applied to real-time video coding, thereby reducing computational complexity and balancing accuracy against real-time performance.

Inactive Publication Date: 2010-04-21
WUHAN UNIV

Problems solved by technology

[0006] In order to solve the problem that existing attention extraction methods have high computational complexity and cannot meet the real-time requirements of video coding, the present invention provides an attention (saliency) extraction method based on temporal prediction.



Embodiment Construction

[0016] The invention discloses an attention (saliency) extraction method based on temporal prediction. Its basic principle is to exploit the temporal correlation of the texture visual features and the motion parameter features of the attention region: the attention maps of the current frame and of at least one adjacent previous frame are used to compute a predicted attention map for the next frame, thereby reducing computational complexity.
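As an illustration only (not the patent's actual algorithm), the block-wise temporal prediction idea of paragraph [0016] can be sketched as follows: each block of the previous frame's attention map is shifted along its motion vector to form the predicted attention map for the next frame. The function name, block size, and motion-vector format here are all assumptions.

```python
import numpy as np

def predict_saliency(prev_saliency, motion_vectors, block=16):
    """Predict the next frame's saliency map by shifting each block of the
    previous saliency map along its per-block motion vector.

    prev_saliency  : 2-D float array, saliency map of the previous frame
    motion_vectors : (H/block, W/block, 2) int array of (dy, dx) per block
    """
    h, w = prev_saliency.shape
    pred = np.zeros_like(prev_saliency)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = motion_vectors[by // block, bx // block]
            # Clamp the displaced block so it stays inside the frame.
            ty = min(max(by + dy, 0), h - block)
            tx = min(max(bx + dx, 0), w - block)
            # Where shifted blocks overlap, keep the larger saliency value.
            pred[ty:ty + block, tx:tx + block] = np.maximum(
                pred[ty:ty + block, tx:tx + block],
                prev_saliency[by:by + block, bx:bx + block])
    return pred
```

Taking the per-pixel maximum where shifted blocks overlap is one simple design choice; the patent text does not specify how overlaps are resolved.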

[0017] Embodiments of the invention are described below with reference to the drawings. Through this detailed description, the advantages and features of the present invention, as well as its implementation, will become clear to those skilled in the art. The scope of the present invention, however, is not limited to the embodiments disclosed in the specification; the invention can also be implemented in other forms. Texture visual features may include various features, among whi...


Abstract

The invention discloses a time domain prediction-based saliency extraction method, which predicts the next frame's saliency map from the saliency map of the current frame or of at least one adjacent previous frame by exploiting the temporal correlation of the saliency map. The method comprises the following steps: first, extract features and saliency to obtain saliency submaps; second, perform temporal prediction on the saliency submaps; finally, combine the predicted saliency submaps to obtain the saliency prediction map of the next frame. Through region-based prediction of saliency, the method greatly reduces the computational complexity of the saliency model and solves the problem that the computational complexity of prior saliency extraction methods is too high for real-time video coding applications.
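The three-step pipeline named in the abstract (saliency submap extraction, temporal prediction of each submap, combination) can be sketched as below. The feature choices, the first-order linear predictor, the combination weights, and all function names are illustrative assumptions, not the patented method.

```python
import numpy as np

def saliency_submaps(frame):
    """Toy feature extraction: intensity contrast and a horizontal gradient
    stand in for the texture/motion features named in the abstract."""
    intensity = np.abs(frame - frame.mean())
    gradient = np.abs(np.diff(frame, axis=1, prepend=frame[:, :1]))
    return [intensity, gradient]

def temporal_predict(prev_submap, curr_submap, alpha=0.5):
    """First-order temporal prediction: extrapolate each submap linearly
    from the last two frames (an assumed, simple predictor)."""
    return np.clip(curr_submap + alpha * (curr_submap - prev_submap), 0, None)

def predict_next_saliency(prev_frame, curr_frame, weights=(0.5, 0.5)):
    """Step 1: extract submaps; step 2: predict each submap for the next
    frame; step 3: combine the predictions into one saliency map."""
    prev_maps = saliency_submaps(prev_frame)
    curr_maps = saliency_submaps(curr_frame)
    preds = [temporal_predict(p, c) for p, c in zip(prev_maps, curr_maps)]
    return sum(w * m for w, m in zip(weights, preds))
```

Because prediction replaces a full per-frame saliency computation, only the light-weight extrapolation step runs on most frames, which is the source of the complexity reduction claimed in the abstract.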

Description

Technical field

[0001] The invention belongs to the field of video analysis, and in particular relates to an attention degree extraction method using image features.

Background technique

[0002] The human visual system must both process a large amount of input information and respond in real time. Visual psychology studies have shown that, when analyzing complex input scenes, the human visual system adopts a serial computing strategy: a selective attention mechanism selects a specific area of the scene according to the local characteristics of the image, and rapid eye-movement scanning brings that area onto the high-resolution central fovea, focusing attention on it so that it can be observed and analyzed in more detail. The selective attention mechanism is a key technology by which human beings select specific information from the large amount of information input from the outside world. If this attention mecha...

Claims


Application Information

IPC(8): H04N7/36; H04N19/136; H04N19/149; H04N19/196; H04N19/31; H04N19/503
Inventors: 胡瑞敏, 夏洋, 张岿, 王中元, 王啟军, 陈皓, 毛丹, 钟睿, 汪欢
Owner WUHAN UNIV