Motion-feature-fused spatio-temporal saliency detection method

A spatio-temporal saliency detection technology applied in image data processing, instruments, and computation. It addresses problems such as the difficulty of obtaining accurate salient regions, the neglect of dynamic and spatial features, and the highlighting of motion features alone.

Inactive Publication Date: 2016-04-13
JIANGNAN UNIV

Problems solved by technology

Prior methods either linearly fuse static and dynamic saliency or highlight only motion features, ignoring the dynamic and spatial characteristics of the scene, which makes it difficult to obtain accurate salient regions.


Embodiment Construction

[0029] Figure 1 is the implementation flowchart of the invention; the concrete steps are as follows:

[0030] (1) Image preprocessing: each frame of the input video is divided into a series of uniform, compact superpixels by the SLIC superpixel segmentation algorithm; these superpixels serve as the basic processing units for saliency detection.
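The preprocessing in step (1) can be sketched as a simplified, grid-seeded SLIC clustering over (L, a, b, x, y) features. This is an illustrative NumPy stand-in, not the patent's exact implementation; the function name, segment count, and compactness value are assumptions:

```python
import numpy as np

def simple_slic(image, num_segments=16, compactness=10.0, iters=5):
    """Simplified SLIC sketch: k-means over (L, a, b, x, y) with a
    spatially weighted distance, seeded on a regular grid.  The full
    SLIC algorithm additionally perturbs seeds to low-gradient pixels
    and restricts the search to a 2S x 2S window."""
    h, w, _ = image.shape
    grid = int(np.sqrt(num_segments))
    S = int(np.sqrt(h * w / num_segments))   # expected superpixel spacing
    ys = np.linspace(S // 2, h - S // 2, grid).astype(int)
    xs = np.linspace(S // 2, w - S // 2, grid).astype(int)
    centers = np.array([[image[y, x, 0], image[y, x, 1], image[y, x, 2],
                         x, y] for y in ys for x in xs], dtype=float)

    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.concatenate([image.reshape(-1, 3).astype(float),
                            xx.reshape(-1, 1), yy.reshape(-1, 1)], axis=1)

    for _ in range(iters):
        # distance = color term + compactness-weighted spatial term
        dc = np.linalg.norm(feats[:, None, :3] - centers[None, :, :3], axis=2)
        ds = np.linalg.norm(feats[:, None, 3:] - centers[None, :, 3:], axis=2)
        labels = np.argmin(dc + (compactness / S) * ds, axis=1)
        for k in range(len(centers)):        # recompute cluster means
            members = feats[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return labels.reshape(h, w)
```

In practice an off-the-shelf implementation such as `skimage.segmentation.slic` would be used instead of this sketch.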

[0031] (2) Extraction of color features: for each frame, taking the superpixel as the unit, compute the mean value in Lab space of all pixels within the superpixel, quantize it to obtain the color histogram CH_t, and normalize the histogram.
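The color-feature extraction of step (2) might look like the following sketch. The 8-bins-per-channel quantization, the assumed [0, 255] Lab value range, and the interpretation of CH_t as a frame-level histogram over superpixel mean colors are all assumptions, since the patent page does not state them:

```python
import numpy as np

def superpixel_color_histogram(lab_image, labels, bins_per_channel=8):
    """For each superpixel, average the Lab values of its pixels,
    quantize each channel into `bins_per_channel` levels, and build a
    normalized color histogram (a stand-in for CH_t) over the
    quantized color indices."""
    flat = lab_image.reshape(-1, 3).astype(float)
    flat_labels = labels.reshape(-1)
    n_sp = flat_labels.max() + 1

    # mean Lab value per superpixel
    means = np.zeros((n_sp, 3))
    for k in range(n_sp):
        means[k] = flat[flat_labels == k].mean(axis=0)

    # quantize each channel (assuming values scaled to [0, 255])
    q = np.clip((means / 256.0 * bins_per_channel).astype(int),
                0, bins_per_channel - 1)
    codes = (q[:, 0] * bins_per_channel**2
             + q[:, 1] * bins_per_channel + q[:, 2])

    hist = np.bincount(codes, minlength=bins_per_channel**3).astype(float)
    hist /= hist.sum()   # normalize so the bins sum to 1
    return means, hist
```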

[0032] (3) Calculation of temporal saliency: compute, by an optical-flow motion estimation method, the motion vector field (u(x,y), v(x,y)) of the current frame F_t relative to the previous frame F_{t-1}, and calculate the mean motion-vector magnitude within each superpixel. Then, by block matching, find the superpixel in the previous frame that best matches each superpixel of the current frame, together with its related superpixel set, and use formula (1)...
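A minimal NumPy sketch of the two ingredients of step (3): per-superpixel mean motion magnitude, and SAD-based block matching between consecutive frames. The block size, search radius, and SAD criterion are illustrative assumptions (the patent's formula (1) and its superpixel-level matching details are not reproduced on this page):

```python
import numpy as np

def mean_motion_magnitude(u, v, labels):
    """Average optical-flow magnitude inside each superpixel: the
    per-superpixel motion strength used for temporal saliency."""
    mag = np.sqrt(u**2 + v**2).reshape(-1)
    lab = labels.reshape(-1)
    n_sp = lab.max() + 1
    return np.array([mag[lab == k].mean() for k in range(n_sp)])

def block_match(prev, cur, y, x, block=8, search=4):
    """Find, by sum-of-absolute-differences (SAD) block matching, the
    position in `prev` that best matches the block at (y, x) in `cur`
    (an illustrative stand-in for matching superpixels between
    F_{t-1} and F_t)."""
    h, w = cur.shape
    patch = cur[y:y + block, x:x + block]
    best, best_pos = np.inf, (y, x)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if 0 <= py and py + block <= h and 0 <= px and px + block <= w:
                sad = np.abs(prev[py:py + block, px:px + block] - patch).sum()
                if sad < best:
                    best, best_pos = sad, (py, px)
    return best_pos
```

The dense flow field (u, v) itself would typically come from a standard estimator such as OpenCV's `calcOpticalFlowFarneback`.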


Abstract

The invention belongs to the field of image and video processing, and in particular provides a spatio-temporal saliency detection method that fuses spatial saliency with motion features. The method comprises the following steps: first, each frame of the image is represented as a series of superpixels by a superpixel segmentation algorithm, and a superpixel-level color histogram is extracted as the feature; then, a spatial saliency map is computed from the global contrast and spatial distribution of colors; next, a temporal saliency map is obtained through optical flow estimation and block matching; finally, a dynamic fusion strategy combines the spatial and temporal saliency maps into the final spatio-temporal saliency map. Because the method fuses spatial saliency with motion features, the algorithm applies to saliency detection in both dynamic and static scenes.
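The final fusion step could be sketched as follows. The weighting rule here is a hypothetical illustration of a "dynamic" fusion (weighting the temporal map more when the scene contains significant motion, falling back to the spatial map in near-static scenes); the patent's actual fusion formula is not reproduced on this page:

```python
import numpy as np

def dynamic_fusion(spatial_sal, temporal_sal, eps=1e-8):
    """Hypothetical dynamic fusion sketch: the weight alpha given to
    the temporal saliency map grows with the overall motion strength,
    so static scenes are dominated by the spatial map."""
    motion_strength = temporal_sal.mean()
    alpha = motion_strength / (motion_strength + spatial_sal.mean() + eps)
    fused = alpha * temporal_sal + (1.0 - alpha) * spatial_sal
    rng = fused.max() - fused.min()
    # rescale to [0, 1] for display/thresholding
    return (fused - fused.min()) / rng if rng > eps else fused
```

For a fully static scene (temporal map all zeros), alpha is zero and the result reduces to the normalized spatial map, matching the claim that the method also works on static scenes.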

Description

1. Technical field

[0001] The invention belongs to the field of image and video processing, and in particular relates to a spatio-temporal saliency detection method integrating motion features. The invention is based on a region-based saliency detection model. First, each frame of the image is represented as a series of superpixels by a superpixel segmentation algorithm, and the superpixel-level color histogram is extracted as the feature; then, the motion saliency map is obtained through optical flow estimation and block matching, and the spatial saliency map is obtained from the global contrast and spatial distribution of colors; finally, a dynamic fusion strategy fuses the motion saliency map and the spatial saliency map into the final spatio-temporal saliency map. The method combines motion features for saliency detection and can be applied to both static and dynamic scenes.

2. Background technology

[0002] Saliency detection refer...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/20
Inventor: 于凤芹
Owner: JIANGNAN UNIV