Method and system for recognizing behaviors in videos on basis of visual-semantic features

A recognition method using semantic-feature technology, applied in the field of computer vision. It addresses the problems of low behavior-recognition accuracy, inability to extract long-term behavior features, and high computational complexity in existing methods, achieving efficient behavior recognition with improved accuracy and low computational complexity.

Inactive Publication Date: 2018-10-12
CHANGSHA UNIVERSITY
Cites: 1 · Cited by: 39

AI Technical Summary

Problems solved by technology

[0004] In view of the above defects and improvement needs of the prior art, the present invention provides a method and system for behavior recognition in videos based on visual-semantic features, the purpose of which is to solve the technical problems that existing video-oriented behavior recognition methods have high computational complexity, low recognition accuracy, and cannot extract long-term behavior features.




Detailed Description of the Embodiments

[0035] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments of the present invention described below can be combined with each other as long as they do not conflict.

[0036] The present invention proposes a long-short-term spatio-temporal visual model (Long-Short-Term Spatio-Temporal Visual Model with Human-Object Visual Relationship), which first uses a three-dimensional convolutional neural network to extract short-term spatio-temporal visual features, avoiding the high computational complexity caused by the use of optical flow or dense trajectory methods...
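The short-term feature extraction step above can be illustrated with a toy 3D convolution. This is a hand-rolled NumPy sketch, not the patent's actual network; the clip size, kernel size, and single-channel/single-filter setup are assumptions made only to show how a 3D filter spans a few consecutive frames:

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive single-filter 3D convolution (valid padding) over a
    single-channel video clip of shape (T, H, W). One such filter
    responds to spatio-temporal patterns spanning a few frames, which
    is the sense in which a 3D CNN yields *short-term* features."""
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(T - t + 1):
        for j in range(H - h + 1):
            for k in range(W - w + 1):
                out[i, j, k] = np.sum(clip[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# Hypothetical 16-frame 32x32 grayscale clip and a 3x3x3 kernel.
clip = np.random.rand(16, 32, 32)
kernel = np.random.rand(3, 3, 3)
features = conv3d_valid(clip, kernel)
print(features.shape)  # (14, 30, 30)
```

In practice such filters are learned end to end and stacked into many layers; the explicit loop here only makes the temporal dimension of the receptive field visible.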



Abstract

The invention discloses a method for recognizing behaviors in videos on the basis of visual-semantic features. The method comprises the following steps: first, extracting short-term spatio-temporal visual features with a three-dimensional convolutional neural network, thereby avoiding the high computational complexity of optical-flow or dense-trajectory methods; next, extracting the semantic and spatial position information of persons and objects with a convolutional-neural-network-based object detector, constructing person-object spatial position features, and fusing them with the spatio-temporal visual features, so that the extra semantic information improves recognition accuracy for interaction behaviors in a video; and finally, on the basis of the extracted general-purpose short-term spatio-temporal visual features, extracting task-specific long-term behavior features through a recurrent neural network to further improve behavior-recognition accuracy. The method solves the technical problems that existing video-oriented behavior recognition methods have high computational complexity, low behavior-recognition accuracy, and cannot extract long-term behavior features spanning the whole time dimension of a video.
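The person-object spatial-position step in the abstract can be sketched as follows. The patent only states that detector positions are turned into spatial features and fused with visual features; the exact feature layout (normalized center offsets plus log size ratios) and fusion by concatenation are assumptions for illustration:

```python
import math

def center_wh(box):
    """Convert a detector box (x1, y1, x2, y2) to center and size."""
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0, x2 - x1, y2 - y1

def person_object_spatial_feature(person_box, object_box):
    """One plausible person-object spatial position feature: the object
    center's offset from the person center in person-size units, plus
    log size ratios (assumed encoding, not fixed by the patent text)."""
    pcx, pcy, pw, ph = center_wh(person_box)
    ocx, ocy, ow, oh = center_wh(object_box)
    return [
        (ocx - pcx) / pw,   # horizontal offset
        (ocy - pcy) / ph,   # vertical offset
        math.log(ow / pw),  # relative width
        math.log(oh / ph),  # relative height
    ]

def fuse(visual_feature, spatial_feature):
    """Fusion by concatenation (an assumption; the abstract does not
    fix the fusion operator)."""
    return list(visual_feature) + list(spatial_feature)

# Hypothetical detections: a person and a cup they are holding.
person = (100.0, 50.0, 200.0, 350.0)
cup = (190.0, 150.0, 230.0, 190.0)
fused = fuse([0.12, 0.95], person_object_spatial_feature(person, cup))
```

A person-relative, size-normalized encoding keeps the feature invariant to where the pair appears in the frame, which is the property such spatial features are usually chosen for.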

Description

Technical Field

[0001] The present invention belongs to the technical field of computer vision, and more specifically relates to a method and system for behavior recognition in videos based on visual-semantic features.

Background Technique

[0002] Action recognition for video data has become a popular research area in computer vision. At present, there are three main approaches to behavior recognition in videos: the optical flow method, the recurrent neural network method, and the three-dimensional convolutional neural network method.

[0003] The optical flow method achieves relatively high recognition accuracy, but its high computational complexity prevents real-time computation. The input data of the recurrent neural network mainly comes in two forms: one uses the features of single frames extracted by a convolutional neural network, which lack temporal correlation information, resulting in a...
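The recurrent-network route discussed in the background can be sketched minimally: an RNN consumes a sequence of per-clip feature vectors, and its final hidden state serves as a long-term feature for the whole video. This is a generic vanilla-RNN illustration, not the patent's model; the cell type and all dimensions are assumptions:

```python
import numpy as np

def rnn_long_term_feature(clip_features, Wx, Wh, b):
    """Minimal vanilla-RNN pass. `clip_features` is a sequence of
    short-term feature vectors (one per clip); the final hidden state
    summarizes the whole sequence as a long-term behavior feature."""
    h = np.zeros(Wh.shape[0])
    for x in clip_features:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

# Hypothetical sizes: 5 clips, 4-dim clip features, 3-dim hidden state.
rng = np.random.default_rng(0)
seq = rng.normal(size=(5, 4))
h = rnn_long_term_feature(seq, rng.normal(size=(3, 4)),
                          rng.normal(size=(3, 3)), np.zeros(3))
```

Feeding clip-level 3D-CNN features rather than raw single-frame features is precisely what gives the recurrence temporal correlation information to work with, which is the gap the background paragraph identifies.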

Claims


Application Information

IPC (8): G06K9/00; G06K9/62; G06N3/04
CPC: G06V20/46; G06V20/41; G06N3/045; G06F18/2415; G06F18/253
Inventors: 李方敏, 尤天宇, 刘新华, 旷海兰, 张韬, 栾悉道, 阳超
Owner: CHANGSHA UNIVERSITY