Space-time significance region detection-based human body behavior analysis method

A behavior analysis and region detection technology, applied in the field of video analysis, which addresses the problems of inaccurate representation of salient motion regions, reduced accuracy of human behavior recognition, and missing associated semantic analysis.

Active Publication Date: 2018-01-09
桂林安维科技有限公司


Problems solved by technology

The advantage of static-image target detection is that it can exploit the saliency of spatial structure to extract various targets. Its disadvantage is the lack of time-series analysis, which leads to inaccurate representation of the salient moving regions of interest and a lack of semantic analysis associated with specific targets (such as human bodies or objects), which in turn affects the accuracy of human behavior recognition.



Embodiment Construction

[0037] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in combination with specific examples and with reference to the accompanying drawings.

[0038] A human behavior analysis method based on spatiotemporal salient region detection, as shown in Figure 1, specifically includes the following steps:

[0039] Step 1: Use the data set as the training set to train the Faster R-CNN model based on the convolutional neural network.

[0040] Step 1.1: Prepare the data set PASCAL VOC 2007 as the training set;

[0041] Step 1.2: Resize the training set images obtained in Step 1.1 into grid image blocks of M×N pixels and feed them into the first 5 layers of the ZF network (short for Zeiler & Fergus Net) for feature extraction, outputting 256 feature maps of size M/16 × N/16;
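As a minimal sketch (not the patented implementation), the size arithmetic in Step 1.2 follows from the fact that the first 5 convolutional layers of the ZF network have an overall stride of 16: an M×N input yields a 256-channel feature map of roughly (M/16)×(N/16). The function name and the example input size below are illustrative assumptions.

```python
# Illustrative sketch of the Step 1.2 size arithmetic: the ZF conv1-conv5
# stack has an overall stride of 16, so spatial dimensions shrink by 16x
# while the channel depth at conv5 is 256.

def zf_conv5_output_shape(m: int, n: int, stride: int = 16, channels: int = 256):
    """Approximate (channels, height, width) of the conv5 feature map
    for an M x N input image, using integer division for the downsampling."""
    return channels, m // stride, n // stride

# Example: a 600 x 1000 input (a rescale commonly used with Faster R-CNN)
print(zf_conv5_output_shape(600, 1000))  # -> (256, 37, 62)
```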

[0042] Step 1.3: Convolve the feature maps with a 3×3 convolution kernel to obtain a 256-dim...
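The 3×3 convolution in Step 1.3 can be sketched as follows, assuming the sliding-window behavior of the Faster R-CNN region proposal network: every spatial position of the C-channel feature map is mapped to a single fixed-length feature vector (256-dim in the patent). All names and sizes below are illustrative, not taken from the patent text.

```python
import numpy as np

# Hypothetical sketch of Step 1.3: slide a 3x3 window (zero padding 1,
# stride 1) over a (C, H, W) feature map so that each position (i, j)
# yields one D-dimensional feature vector, as the RPN does.

def rpn_3x3_features(fmap, weights):
    """fmap: (C, H, W) feature map; weights: (D, C, 3, 3) kernel.
    Returns a (D, H, W) array; out[:, i, j] is the D-dim vector at (i, j)."""
    c, h, w = fmap.shape
    padded = np.pad(fmap, ((0, 0), (1, 1), (1, 1)))  # zero-pad spatial dims
    out = np.zeros((weights.shape[0], h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[:, i:i + 3, j:j + 3]                  # (C, 3, 3)
            out[:, i, j] = np.tensordot(weights, patch, axes=3)  # D-dim
    return out

# Tiny demo: an 8-channel map mapped to 16-dim vectors at each position
fmap = np.random.rand(8, 4, 5)
kernel = np.random.rand(16, 8, 3, 3)
print(rpn_3x3_features(fmap, kernel).shape)  # -> (16, 4, 5)
```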



Abstract

The invention discloses a space-time salient region detection-based human body behavior analysis method. According to the invention, a data set is used to train a Faster R-CNN model. A multi-channel video is input, and each single-channel video is segmented into video image frames. The segmented frames undergo target detection with the Faster R-CNN model; the detection results are analyzed and the target candidate boxes are recalculated. Box matching is performed on the single-channel video to construct a motion vector field, from which the motion vectors of the regions of interest are computed. A foreground salient motion region is then selected based on a probability calculated with a Gaussian mixture model. A space-time salient region is synthesized from the target candidate box and the salient motion region. The target space-time salient region undergoes feature sampling and feature preprocessing, followed by encoding and pooling, and finally human body behavior analysis and recognition; the recognition result is written into the space-time salient region box. According to the invention, the categories of human body behavior activities in a video can be reasonably analyzed.
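The box-matching step in the abstract can be illustrated with a minimal sketch: match each target candidate box in frame t to the nearest box in frame t+1 by centroid distance, and record the centroid displacement as a motion vector; the vectors over all boxes form a coarse motion vector field. The greedy nearest-centroid matching below is an assumption for illustration, not the patent's algorithm.

```python
# Illustrative sketch of constructing a motion vector field by matching
# candidate boxes between consecutive frames. Boxes are (x1, y1, x2, y2).

def centroid(box):
    """Return the centre point of a box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def motion_vectors(boxes_t, boxes_t1):
    """Greedy nearest-centroid matching: for each box in frame t, find the
    closest box in frame t+1 and record the centroid displacement."""
    vectors = []
    for b in boxes_t:
        cx, cy = centroid(b)
        nx, ny = min((centroid(b2) for b2 in boxes_t1),
                     key=lambda c: (c[0] - cx) ** 2 + (c[1] - cy) ** 2)
        vectors.append((nx - cx, ny - cy))
    return vectors

# Example: one box shifts 10 px to the right between frames
print(motion_vectors([(0, 0, 10, 10)], [(10, 0, 20, 10)]))  # -> [(10.0, 0.0)]
```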

Description

Technical field

[0001] The invention relates to the technical field of video analysis, and in particular to a human behavior analysis method based on spatiotemporal salient region detection.

Background technique

[0002] Video-based human behavior analysis belongs to the field of video analysis. Because human behavior varies with posture deformation, viewpoint change, timing, and scale, and video images are easily affected by camera shake and illumination changes, video-based human behavior analysis is a difficult problem to solve.

[0003] Methods of human behavior analysis can be roughly divided into three categories: methods based on human body model tracking, methods based on optical flow histograms, and methods based on local spatiotemporal features. Methods based on human body model tracking require the extraction of accurate human body templates, and their robustness is relatively poor. Optical flow histogram-based methods...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/32; G06K9/62; G06N3/08; G06T7/254
Inventor: 滕盛弟, 徐增敏, 蒙儒省, 丁勇, 赵汝文, 李春海
Owner 桂林安维科技有限公司