Feature-based dynamic visual attention region extraction method

A visual attention and region-extraction technology in the field of image processing, addressing the problems that the distribution of attention is discontinuous in time, that multiple frames cannot be considered jointly, and that certain attentional behaviors cannot be well realized.

Inactive Publication Date: 2009-07-29
SHANGHAI JIAO TONG UNIV
View PDF · Cites: 0 · Cited by: 82
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

Although this method and the subsequent space-based analysis techniques perform well in many scenes, they almost inevitably face the following problems: 1) they attend to only part of the visual cues; 2) the distribution of attention is not continuous in time.
For example, when observing a continuous image sequence, the system cannot take multiple frames into account, so the saliency map must be re-analyzed separately at every moment, which greatly reduces the continuity and reliability of the system.
Moreover, when the viewing angle or the position of an object changes, the lack of a feature-tracking mechanism means the prediction of the new saliency map is likely to drift from the previous frame.
Furthermore, a range of visual attentional behaviors, such as inhibition of return and viewpoint shift, are not well represented by spatially based analysis techniques.

Method used



Examples



[0054] Example 1: Saliency Maps for Still Images

[0055] 8×8 RGB patches are used to train the basis functions (A, W); the feature dimensionality is therefore 8×8×3 = 192.

[0056] For an input picture of size 800×640, the image is divided into 8000 non-overlapping 8×8 RGB color blocks, that is, n = 8000, forming a sampling matrix X = [x_1, x_2, ..., x_8000]. The corresponding coefficients of the basis functions, that is, the features of X, are computed by the formula S = WX.
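The sampling and feature-extraction step above can be sketched as follows. The image and the filter basis W here are random stand-ins (the real W comes from the ICA training described in the abstract); only the shapes follow the text:

```python
import numpy as np

# Stand-ins following the text: 8x8 RGB patches -> 192-dim vectors,
# and an 800x640 image yields 100 * 80 = 8000 non-overlapping patches.
rng = np.random.default_rng(0)
image = rng.random((640, 800, 3))       # placeholder for the input picture
W = rng.standard_normal((192, 192))     # placeholder for the learned filter basis

# Slice the image into 8x8 RGB blocks and flatten each to a 192-vector.
patches = []
for y in range(0, 640, 8):
    for x in range(0, 800, 8):
        patches.append(image[y:y+8, x:x+8, :].reshape(192))
X = np.stack(patches, axis=1)           # sampling matrix X, shape (192, 8000)

S = W @ X                               # feature responses S = W X
print(S.shape)                          # (192, 8000)
```

Each column of S holds the 192 feature coefficients of one image patch, which is what the subsequent activation-rate and coding-length steps operate on.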

[0057] The activation rate p of each feature is obtained by formula (2.1), and the incremental coding length index of each feature is measured from p according to formula (2.3).

[0058] Partition the salient feature set SF according to the incremental coding length index of each feature and formula (3.1), and use formula (3.2) to redistribute the energy of each feature in the salient feature set. Then, for image patch x_k, compute its saliency m_k according to formula (3.3), and finally use the formula (3....
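This excerpt does not reproduce formulas (2.1)–(3.4), so the sketch below fills them in with plausible forms, all of which are assumptions rather than the patent's exact definitions: the activation rate as each feature's share of total absolute response energy, the incremental coding length as the entropy gain ICL(p_i) = −H(p) − log p_i obtained by perturbing feature i of a normalized distribution, the salient feature set as the features with positive ICL, and patch saliency as the ICL-weighted sum of responses:

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.abs(rng.standard_normal((192, 8000)))  # stand-in for |S| = |W X|

# Activation rate (assumed form of (2.1)): each feature's share of the
# total absolute response energy, so that p sums to 1.
p = S.sum(axis=1)
p /= p.sum()

# Incremental coding length (assumed form of (2.3)): the entropy gain from
# slightly boosting feature i, which works out to ICL(p_i) = -H(p) - log p_i.
H = -(p * np.log(p)).sum()
icl = -H - np.log(p)

# Salient feature set (assumed form of (3.1)): features with positive ICL,
# i.e. rarely activated features; redistribute their energy as in (3.2).
F = icl > 0
d = np.where(F, icl, 0.0)
d /= d.sum()

# Patch saliency (assumed form of (3.3)): energy-weighted sum of responses,
# then fold the 8000 patch values back into the 100x80 patch grid.
m = d @ S
saliency_map = m.reshape(80, 100)
print(saliency_map.shape)                     # (80, 100)
```

Note that under this assumed ICL, features with a below-average activation rate receive positive weight, so rarely firing features dominate the saliency map, which matches the sparse-coding motivation of the method.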


[0060] Example 2: Salient map in video

[0061] Compared with previous similar methods, a great advantage of the method of the present invention is that it is continuous: the incremental coding length is continuously updated. The change in the distribution of feature activation rates can be modeled in the space domain or the time domain. If the time-domain variation is assumed to follow a Laplace distribution, and p_t denotes the activation rate at the t-th frame, then p_t can be taken as a weighted cumulative sum of the previous feature responses:

[0062] p_t = (1/Z) · Σ_{τ=0}^{t−1} exp((τ − t)/λ) · p_τ



Abstract

The invention relates to a feature-based dynamic visual attention region extraction method in the technical field of machine vision. The method comprises the following steps: first, independent component analysis is applied to a large set of natural images for sparse decomposition, yielding a group of filtering basis functions and a group of corresponding reconstruction basis functions; the input images are divided into small m×m RGB blocks and projected onto this basis to obtain the image features. Second, the efficient-coding principle is used to measure an incremental coding length index for each feature. Third, according to the incremental coding length index, the saliency of each small block is computed through the energy reallocation of each feature, and finally a saliency map is obtained. The method eliminates the "time slice" and realizes continuous sampling, so that data from different frames jointly guide the saliency computation; this solves the problem that the saliency of different frames must be processed independently, thereby achieving dynamic performance.

Description

technical field

[0001] The present invention relates to a method in the technical field of image processing, in particular to a feature-based dynamic visual attention region extraction method.

background technique

[0002] With the continuous development of artificial intelligence technology, machine vision is used more and more in real life. It mainly uses computers to simulate human visual functions; it is not merely a simple extension of the human eye but, more importantly, reproduces part of the function of the human brain: extracting information from images of objective things, processing and understanding it, and finally using it for actual detection, measurement and control. Because machine vision is fast, carries a large amount of information, and serves multiple purposes, it is widely used in quality inspection, identity authentication, object detection and recognition, robotics, and autonomous driving.

[0003] At present, engineering has been able to produce se...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/46; G06T7/20
Inventor: 侯小笛, 祁航, 张丽清, 祝文骏
Owner SHANGHAI JIAO TONG UNIV