Video behavior identification method based on space-time adversarial generative network

A recognition method and network technology, applied in the field of computer vision and pattern recognition

Active Publication Date: 2019-10-29
HUAQIAO UNIVERSITY


Problems solved by technology

[0003] With the development of deep learning methods and the substantial improvement of computing power, deep learning...




Embodiment Construction

[0017] The present invention will be further described below through specific embodiments.

[0018] To address the problems that most prior-art behavior recognition methods still require labeled data sets and that existing databases are limited in scale, the present invention provides a video behavior recognition method based on a spatio-temporal generative adversarial network. As shown in Figure 1, the inventive method comprises a feature extraction process and a recognition process; the specific steps are as follows:

[0019] Feature extraction process:

[0020] 1) Extract key frames and optical-flow maps from the video sequence. The key frames are used as the input of the spatial generative adversarial network, and the optical-flow maps are used as the input of the temporal generative adversarial network.
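The patent does not specify how the optical-flow maps are computed. As an illustrative sketch only (all names here are hypothetical, and the patent may use a different flow algorithm), a per-point flow vector between two frames can be estimated with the classical Lucas–Kanade least-squares method:

```python
import numpy as np

def lucas_kanade_flow(f1, f2, cy, cx, win=9):
    """Estimate the optical-flow vector (u, v) at pixel (cy, cx).

    Solves the brightness-constancy equation Ix*u + Iy*v + It = 0
    by least squares over a (win x win) window (classical Lucas-Kanade).
    """
    Iy, Ix = np.gradient(f1.astype(float))     # spatial gradients
    It = f2.astype(float) - f1.astype(float)   # temporal difference
    h = win // 2
    sl = (slice(cy - h, cy + h + 1), slice(cx - h, cx + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: a smooth pattern shifted one pixel to the right
yy, xx = np.mgrid[0:64, 0:64]
f1 = np.sin(0.2 * xx) + np.cos(0.2 * yy)
f2 = np.roll(f1, 1, axis=1)    # motion of +1 pixel in x
u, v = lucas_kanade_flow(f1, f2, 32, 32)
```

On this synthetic pair the estimate recovers a horizontal motion close to one pixel; a practical pipeline would compute a dense flow map (e.g. Farnebäck's method) rather than a single point.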

[0021] Specifically, the present invention extracts the key frames of the video sequence through an inter-frame difference method. The inter-frame differen...
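The paragraph above is truncated, but the general inter-frame difference idea can be sketched as follows. This is a minimal illustration, not the patent's exact algorithm; the thresholding rule and function names are assumptions:

```python
import numpy as np

def select_keyframes(frames, k=1.0):
    """Pick key frames by an inter-frame difference criterion.

    A frame is kept as a key frame when its mean absolute difference
    from the previous frame exceeds mean(diffs) + k * std(diffs).
    The adaptive threshold is an assumption for illustration.
    """
    frames = [f.astype(float) for f in frames]
    diffs = np.array([np.abs(b - a).mean() for a, b in zip(frames, frames[1:])])
    thresh = diffs.mean() + k * diffs.std()
    return [i + 1 for i, d in enumerate(diffs) if d > thresh]

# A static scene that changes abruptly at frame 5
frames = [np.zeros((8, 8))] * 5 + [np.full((8, 8), 255.0)] * 5
keys = select_keyframes(frames)  # -> [5]
```

Only the frame at the abrupt scene change passes the threshold, which matches the intent of inter-frame differencing: redundant near-duplicate frames are discarded and content changes are kept.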



Abstract

The invention relates to a video behavior identification method based on a space-time adversarial generative network. The method comprises the following steps: extracting spatial features of an input video containing human behaviors with a spatial-domain generative adversarial network; extracting temporal features of the input video with a temporal-domain generative adversarial network; splicing the features extracted by the spatial and temporal adversarial generative networks to obtain fused spatio-temporal features; and classifying the fused feature vectors with a support vector machine (SVM) to identify the video behaviors. Based on the space-time generative adversarial network, the learning characteristics of the network, the characteristics of the video, and the characteristics of human actions are fully considered. The main spatio-temporal feature information contained in the video is extracted and fused in effective combination with human behavior characteristics, and spatio-temporal features with stronger representation capability are obtained from the complementarity of the spatial and temporal information, so that accurate behavior recognition is performed on the input video.
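The fusion-and-classification step in the abstract can be sketched as follows. The example (hypothetical names throughout) concatenates a spatial and a temporal feature vector and trains a minimal linear SVM by hinge-loss subgradient descent; the patent does not specify the SVM solver, so the training loop here is a dependency-free stand-in, not the patented implementation:

```python
import numpy as np

def fuse(spatial_feat, temporal_feat):
    """Splice spatial and temporal features into one spatio-temporal vector."""
    return np.concatenate([spatial_feat, temporal_feat])

def train_linear_svm(X, y, lr=0.01, C=1.0, epochs=200):
    """Minimal linear SVM trained by hinge-loss subgradient descent.

    Labels y must be in {-1, +1}.  A production system would use a
    mature solver (e.g. an SMO- or liblinear-based SVM).
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1  # margin violations contribute hinge gradient
        w -= lr * (w - C * (y[viol, None] * X[viol]).sum(axis=0))
        b -= lr * (-C * y[viol].sum())
    return w, b

# Hypothetical 2-D spatial and 2-D temporal features for two behavior classes
rng = np.random.default_rng(0)
sp = np.vstack([rng.normal(+1.0, 0.3, (40, 2)), rng.normal(-1.0, 0.3, (40, 2))])
tm = np.vstack([rng.normal(+1.0, 0.3, (40, 2)), rng.normal(-1.0, 0.3, (40, 2))])
X = np.array([fuse(s, t) for s, t in zip(sp, tm)])  # fused 4-D features
y = np.array([1] * 40 + [-1] * 40)

w, b = train_linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
```

The key point mirrored from the abstract is the concatenation: the classifier sees a single fused spatio-temporal vector, so complementary spatial and temporal evidence both contribute to the decision boundary.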

Description

technical field

[0001] The invention relates to the fields of computer vision and pattern recognition, and in particular to a video behavior recognition method based on a spatio-temporal generative adversarial network.

Background technique

[0002] In recent years, with the explosive growth of image and video data in real life, it has become almost impossible to rely entirely on manual processing of massive visual information. Computer vision, which relies on computers to simulate human vision to complete tasks such as target tracking, target detection, and behavior recognition, has therefore become a research hotspot in academia. Among these tasks, video behavior recognition has great application demand in human-computer interaction, intelligent video surveillance, video retrieval, and other intelligent-security and smart-life scenarios. However, owing to practical problems such as occlusion, viewpoint change, and scene complexity, it remains challenging to analyze the beh...


Application Information

IPC(8): G06K9/00; G06K9/62
CPC: G06V20/46; G06V20/41; G06F18/2411; G06F18/241
Inventors: 曾焕强 (Zeng Huanqiang), 林溦 (Lin Wei), 曹九稳 (Cao Jiuwen), 朱建清 (Zhu Jianqing), 陈婧 (Chen Jing), 张联昌 (Zhang Lianchang)
Owner HUAQIAO UNIVERSITY