Video motion recognition method based on fusion of sorting pooling and spatial features

A video recognition technology for action recognition based on basic visual features; it addresses the loss of distinctive spatial and temporal change characteristics and achieves improved recognition accuracy through a highly descriptive feature representation.

Active Publication Date: 2018-08-17
NANJING UNIV OF SCI & TECH
Cites: 7 | Cited by: 6


Problems solved by technology

In traditional video action recognition methods, the three-dimensional space-time domain of the video is usually treated as a whole in order to capture the video's dynamic change characteristics. This approach is one-sided and loses a large number of change characteristics unique to the two-dimensional image space domain or the one-dimensional time series domain.




Detailed Description of the Embodiments

[0016] With reference to Figure 2, a video action recognition method based on sorting pooling fused with spatial features comprises the following steps:

[0017] Step 1, use a video local feature descriptor algorithm to extract a basic visual feature vector set for each video;
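This excerpt does not name a specific local feature descriptor. As a minimal sketch of Step 1, the toy descriptor below computes one gradient-magnitude histogram per image patch per frame; the function name, patch size, and histogram binning are illustrative assumptions, not the patent's method.

```python
import numpy as np

def extract_local_features(video, patch=16, stride=16):
    """Toy stand-in for the patent's (unnamed) local feature descriptor:
    one 8-bin gradient-magnitude histogram per spatial patch per frame.
    Returns (frame_index, x_center, y_center, feature_vector) tuples."""
    feats = []
    for t, frame in enumerate(video):              # video: (T, H, W) grayscale
        gy, gx = np.gradient(frame.astype(float))  # per-pixel image gradients
        mag = np.hypot(gx, gy)                     # gradient magnitude
        H, W = frame.shape
        for y in range(0, H - patch + 1, stride):
            for x in range(0, W - patch + 1, stride):
                hist, _ = np.histogram(mag[y:y + patch, x:x + patch], bins=8)
                feats.append((t, x + patch // 2, y + patch // 2,
                              hist / (hist.sum() + 1e-8)))
    return feats
```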

[0018] Step 2, perform multi-scale segmentation of the two-dimensional space of each frame of each video to construct a two-dimensional spatial pyramid model;
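A two-dimensional spatial pyramid partitions each frame at several grid resolutions, and each grid cell is one "subspace". The sketch below maps an (x, y) location to its cell index at every pyramid level; the grid sizes (1, 2, 4) are an assumed choice, since the excerpt does not fix the scales.

```python
def pyramid_cells(x, y, W, H, levels=(1, 2, 4)):
    """Index of the pyramid cell containing point (x, y) at every level of a
    two-dimensional spatial pyramid; `levels` are grid sizes per side, so the
    assumed (1, 2, 4) pyramid has 1 + 4 + 16 = 21 subspaces in total."""
    cells, offset = [], 0
    for g in levels:
        cx = min(int(x * g / W), g - 1)   # column of the cell at this scale
        cy = min(int(y * g / H), g - 1)   # row of the cell at this scale
        cells.append(offset + cy * g + cx)
        offset += g * g
    return cells
```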

[0019] Step 3, arrange the basic video feature vectors within each subspace of the pyramid model according to the temporal order of the frame sequence;
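Step 3 amounts to bucketing every feature vector into the pyramid cells that contain it, keeping each bucket sorted by frame index so that every subspace holds a temporally ordered sequence. A sketch, reusing pyramid_cells from the previous block:

```python
from collections import defaultdict

def group_by_subspace(feats, W, H, levels=(1, 2, 4)):
    """Bucket feature vectors into pyramid subspaces (cells), keeping each
    bucket ordered by frame index, i.e. by the time order of the frames."""
    buckets = defaultdict(list)
    for t, x, y, v in sorted(feats, key=lambda f: f[0]):  # sort by frame index
        for c in pyramid_cells(x, y, W, H, levels):       # every enclosing cell
            buckets[c].append((t, v))
    return buckets
```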

[0020] Step 4, perform a smoothing operation independently on the ordered basic feature vector sequence in each subspace;
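The excerpt does not specify the smoothing operator. The rank pooling literature commonly uses the time-varying mean, m_t = (1/t) Σ_{i≤t} v_i, so the sketch below assumes that choice:

```python
import numpy as np

def smooth_sequence(seq):
    """Time-varying-mean smoothing of an ordered sequence of shape (T, D):
    m_t = (1 / t) * sum_{i <= t} v_i  -- an assumed choice of smoother."""
    seq = np.asarray(seq, dtype=float)
    return np.cumsum(seq, axis=0) / np.arange(1, len(seq) + 1)[:, None]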

[0021] Step 5, apply the sorting pooling algorithm separately to the smoothed ordered feature vector sequence in each subspace, and learn the model parameters belonging to that subspace;
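Sorting pooling (rank pooling in the literature) learns, per subspace, a parameter vector w whose score w·m_t increases with time, so w summarizes the subspace's temporal evolution. The exact learner is not given in this excerpt; the sketch below uses ridge regression of the frame index onto the smoothed features, a common least-squares surrogate for the rank-SVM objective:

```python
import numpy as np

def rank_pool(smoothed, lam=1.0):
    """Sorting/rank pooling sketch: learn w so that the score w . m_t grows
    with t, via ridge regression of the frame index onto the smoothed
    features (a least-squares surrogate for the rank-SVM objective)."""
    X = np.asarray(smoothed)                    # (T, D) smoothed sequence
    t = np.arange(1, len(X) + 1, dtype=float)   # regression target: time order
    A = X.T @ X + lam * np.eye(X.shape[1])      # regularized normal equations
    return np.linalg.solve(A, X.T @ t)          # w: the subspace's parameters
```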

[0022] Step 6, concatenate the model parameters obtained from all subspaces of the pyramid model into a single feature vector, which serves as the final video feature vector; a classifier then classifies this vector to identify the action type of the video.
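Step 6 ties the previous sketches together by concatenating the per-subspace parameter vectors into one video descriptor. The zero-padding of empty or too-short subspaces below is an assumption made to keep the dimensionality fixed across videos:

```python
import numpy as np

def video_feature(buckets, dim=8, n_cells=21):
    """Concatenate per-subspace sorting-pooling parameters into the final
    video feature vector; empty or too-short subspaces contribute zero
    blocks so every video maps to a vector of length n_cells * dim."""
    parts = []
    for c in range(n_cells):
        seq = [v for _, v in buckets.get(c, [])]
        parts.append(rank_pool(smooth_sequence(seq))
                     if len(seq) >= 2 else np.zeros(dim))
    return np.concatenate(parts)
```

Any linear classifier, for example scikit-learn's LinearSVC, can then be trained on these concatenated vectors to predict the action label.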



Abstract

The invention provides a video motion recognition method based on the fusion of sorting pooling and spatial features. The method comprises: extracting a basic visual feature vector set for each video on the basis of a video local feature descriptor algorithm; performing multi-scale segmentation of the two-dimensional space of each frame of each video to construct a two-dimensional spatial pyramid model; arranging the basic video feature vectors within each subspace of the pyramid model according to the temporal order of the frame sequence; performing a smoothing operation independently on the ordered basic feature vector sequence in each subspace; processing the smoothed ordered feature vector sequence in each subspace with the sorting pooling algorithm and learning the model parameters belonging to that subspace; concatenating the model parameters obtained from all subspaces of the pyramid model into a feature vector that serves as the final video feature vector; and classifying the video feature vector with a classifier to identify the motion type of the video.

Description

Technical Field

[0001] The invention relates to video recognition technology, in particular to a video action recognition method based on sorting pooling and the fusion of spatial features.

Background Technique

[0002] Today's video action recognition technology is widely used in multimedia content analysis, human-computer interaction, intelligent real-time monitoring, and other fields. The technology is realized by extracting video features to generate feature vectors and classifying those feature vectors with a classifier. In traditional video action recognition methods, the three-dimensional space-time domain of the video is usually treated as a whole in order to capture the video's dynamic change characteristics. This approach is one-sided and loses a large number of change characteristics unique to the two-dimensional image space domain or the one-dimensional time series domain. Therefore, video action recognition technology needs to...


Application Information

IPC(8): G06T7/215, G06T7/246, G06T7/269
CPC: G06T7/215, G06T7/251, G06T7/269, G06T2207/20081, G06T2207/10016, G06T2207/20016
Inventor: 项欣光, 赵恒颖
Owner: NANJING UNIV OF SCI & TECH