
Video saliency detection method based on Bayesian fusion

A video saliency detection method in the field of image processing. It addresses problems of existing methods such as weak inter-frame relationships, an overly simple temporal-domain structure, and low accuracy and efficiency, and achieves enhanced feature representation ability, reduced influence of complex backgrounds, and good experimental results.

Active Publication Date: 2018-01-09
XIDIAN UNIV
Cites: 9 · Cited by: 16

AI Technical Summary

Problems solved by technology

[0009] The above-mentioned video salient region extraction methods all add motion features to a saliency model built in the image domain; their temporal-domain structure is too simple and the relationship between frames of the sequence is weak. Although they can extract video salient regions, their accuracy and efficiency are relatively low.



Examples


Embodiment Construction

[0032] Embodiments and effects of the present invention will be further described in detail below in conjunction with the accompanying drawings.

[0033] Referring to Figure 1, the implementation steps of the present invention are as follows:

[0034] Step 1: Obtain the static saliency map of the video sequence.

[0035] 1.1) Obtain static boundary probability saliency map

[0036] For each frame image F_k, k = 1, 2, …, l, of the original video sequence F = {F_1, F_2, …, F_k, …, F_l}, where k is the index of the frame within the video sequence, a static boundary map is obtained by multi-scale morphological estimation, yielding the boundaries of the target in the image;
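The exact multi-scale morphological estimator is not spelled out in this excerpt, so the sketch below is only a minimal, assumed realization: it averages morphological gradients computed with structuring elements of several sizes and normalises the result into a boundary probability map (Python with OpenCV; the function name and the scale set are illustrative choices).

```python
# Hypothetical sketch of a multi-scale morphological boundary estimate;
# the scales and the exact estimator are assumptions, not the patent's.
import cv2
import numpy as np

def multiscale_morph_boundary(frame_gray, scales=(3, 5, 7)):
    """Average morphological gradients over several structuring-element
    sizes and normalise the result to a [0, 1] boundary probability map."""
    img = frame_gray.astype(np.float32) / 255.0
    acc = np.zeros_like(img)
    for k in scales:
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
        acc += cv2.morphologyEx(img, cv2.MORPH_GRADIENT, se)
    acc /= len(scales)
    return (acc - acc.min()) / (acc.max() - acc.min() + 1e-8)
```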

[0037] Denote the boundary probability value corresponding to pixel i in F_k, where i indexes the pixels of image F_k. The SLIC superpixel segmentation algorithm is then applied to F_k to obtain its corresponding superpixel block set of N_k blocks, whose j-th ...
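As an illustration of this superpixel step (the precise formulation is truncated above), the following sketch segments a frame with SLIC from scikit-image and assigns each superpixel the mean of the per-pixel boundary probabilities it contains; the function name and the choice of 300 superpixels are assumptions.

```python
# Illustrative superpixel aggregation: SLIC segmentation followed by
# averaging the boundary probabilities inside each superpixel block.
import numpy as np
from skimage.segmentation import slic

def superpixel_boundary_scores(frame_rgb, boundary_prob, n_segments=300):
    """Return the SLIC label map and a map in which every pixel carries the
    mean boundary probability of the superpixel block it belongs to."""
    labels = slic(frame_rgb, n_segments=n_segments, compactness=10, start_label=0)
    scores = np.zeros_like(boundary_prob, dtype=np.float32)
    for lab in np.unique(labels):
        mask = labels == lab
        scores[mask] = boundary_prob[mask].mean()
    return labels, scores
```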



Abstract

The invention discloses a video saliency detection method based on Bayesian fusion, mainly solving the problem that existing video saliency detection methods cannot detect small targets. The implementation scheme is: 1) extract a static boundary probability saliency map, a color mean saliency map and a color contrast saliency map from the video sequence and fuse them with weights to generate a static saliency map; 2) extract a dynamic boundary probability saliency map, a PCA prior saliency map and a background prior saliency map from the video sequence and fuse them with weights to generate a dynamic saliency map; 3) fuse the static saliency map and the dynamic saliency map through a Bayesian model to obtain the video sequence saliency map. Compared with conventional video saliency algorithms, the method enhances the spatial and temporal representation ability of the features, reduces the influence of complex backgrounds on the detection result, can effectively detect small targets in a video, and can be used for early-stage preprocessing in video target tracking and video segmentation.
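The abstract only states that the static and dynamic maps are fused through a Bayesian model; the sketch below shows one common symmetric Bayesian fusion of two saliency maps, given purely as an assumed illustration rather than the patent's exact formulation. Both input maps are assumed to be normalised to [0, 1].

```python
# Illustrative symmetric Bayesian fusion of a static and a dynamic saliency
# map; the concrete likelihood model used in the patent may differ.
import numpy as np

def _bayes_posterior(prior, observation, n_bins=10, eps=1e-8):
    """Use `prior` as P(salient) and build foreground/background likelihoods
    for `observation` from histograms over a rough foreground split."""
    fg = prior >= prior.mean()                      # coarse foreground guess
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(observation, edges) - 1, 0, n_bins - 1)
    p_fg, _ = np.histogram(observation[fg], bins=edges, density=True)
    p_bg, _ = np.histogram(observation[~fg], bins=edges, density=True)
    like_fg, like_bg = p_fg[idx], p_bg[idx]
    return prior * like_fg / (prior * like_fg + (1.0 - prior) * like_bg + eps)

def bayesian_fusion(static_map, dynamic_map):
    """Each map serves once as the prior and once as the observation;
    the two posteriors are averaged to give the final saliency map."""
    return 0.5 * (_bayes_posterior(static_map, dynamic_map)
                  + _bayes_posterior(dynamic_map, static_map))
```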

Description

Technical Field

[0001] The invention belongs to the technical field of image processing and further relates to a video saliency detection method that can be used for target tracking, object recognition and video segmentation.

Background Technique

[0002] When a computer deals with problems involving complex scenes, the complexity of the background prevents some existing methods from handling the scene well. Studies have found that the human visual system can easily understand various complex scenes, so its working principle can serve as a reference when dealing with problems related to complex scenes. Scholars have conducted in-depth research on the mechanism of human visual attention selection and obtained the theory of visual saliency. This theory holds that the human visual system processes only some parts of an image in detail while remaining almost blind to the rest. Based on this theory, relevant scholars i...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/12, G06T7/11, G06T7/194, G06K9/00, G06K9/32
Inventor: 韩冰, 张景滔, 韩怡园, 仇文亮, 魏国威
Owner: XIDIAN UNIV