Deep learning network-based view-independent behavior recognition method

A deep learning network and recognition method technology, applied in the field of view-independent behavior recognition, which addresses the problems of insufficient effectiveness and poor algorithm robustness, and achieves comprehensive features, complete behavior description, and good robustness.

Active Publication Date: 2017-06-30
Qingdao Shengruida Technology Co., Ltd. (青岛圣瑞达科技有限公司)

AI Technical Summary

Problems solved by technology

[0006] Behavior analysis methods based on spatio-temporal feature points and skeleton information have achieved remarkable results in the traditional single-view or single-person setting, but they are not well suited to areas with heavy pedestrian traffic, such as streets, airports, and stations…




Detailed Description of the Embodiments

[0044] The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.

[0045] As shown in Figure 1 and Figure 2, the deep learning network-based view-independent behavior recognition method of the present invention includes a training process, in which a training sample set is used to obtain a classifier, and a recognition process, in which the classifier is used to recognize test samples.

[0046] The training process, shown in Figure 1, includes the following steps:

[0047] S1) Input the video frame im...
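Although the step listing is truncated here, the two-phase structure described in [0045] (a training pass that produces a classifier from a labelled sample set, then a recognition pass that applies it to test samples) can be sketched generically. The feature extractor and nearest-centroid classifier below are illustrative placeholders, not the patent's actual deep network or classifier:

```python
import numpy as np

def extract_features(video):
    """Placeholder for the patent's deep-learning feature extraction:
    here, just the mean and variance of pixel intensities over the clip."""
    return np.array([video.mean(), video.var()])

def train(samples, labels):
    """Training phase: build a classifier (here, per-class feature
    centroids) from a labelled training-sample set."""
    feats = np.array([extract_features(v) for v in samples])
    classes = sorted(set(labels))
    return {c: feats[[l == c for l in labels]].mean(axis=0) for c in classes}

def recognize(classifier, video):
    """Recognition phase: assign the test sample to the class whose
    centroid is nearest in feature space."""
    f = extract_features(video)
    return min(classifier, key=lambda c: np.linalg.norm(f - classifier[c]))
```

Any real implementation would replace `extract_features` with the deep network of step S1 onward; the train/recognize split itself is what the patent's Figure 1 and Figure 2 describe.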



Abstract

The invention proposes a deep learning network-based view-independent behavior recognition method comprising the following steps: a video frame image captured at a given viewing angle is input; low-level features are extracted and processed via a deep learning model; the extracted low-level features are modeled, and a cube model is built in chronological order; the cube models from all viewing angles are converted into a view-independent cylindrical feature-space mapping; and the cylindrical feature-space mapping is input into a classifier for training, yielding a view-independent classifier for video behavior. According to the behavior recognition method of this technical solution, a deep learning network is used to analyze human behavior from multiple viewing angles, which improves the robustness of the classification model; the method is particularly suited to training and learning on big data, where its advantages can be brought into full play.
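The key idea in the abstract — converting per-view feature cubes into a cylindrical mapping that discards the viewing angle — can be illustrated with a minimal numpy sketch. This is an assumption about one plausible reading of "cylindrical feature-space mapping" (binning each frame in polar coordinates and marginalizing over the azimuth), not the patent's actual construction:

```python
import numpy as np

def cube_to_cylinder(cube, n_r=8, n_theta=12):
    """Map a (T, H, W) feature cube to a (T, n_r) descriptor by binning
    each frame's values in polar (r, theta) coordinates about the frame
    center and then summing over theta. Discarding the azimuthal axis
    makes the descriptor invariant to in-plane rotation, i.e. to the
    camera's angle around the subject's vertical axis."""
    T, H, W = cube.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    ys, xs = np.mgrid[0:H, 0:W]
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx)          # range [-pi, pi]
    r_bin = np.minimum((r / (r.max() + 1e-9) * n_r).astype(int), n_r - 1)
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    out = np.zeros((T, n_r, n_theta))
    for t in range(T):
        # Unbuffered accumulation: pixels sharing a (r, theta) bin add up.
        np.add.at(out[t], (r_bin, t_bin), cube[t])
    return out.sum(axis=2)                        # marginalize over theta
```

Because the radius of each pixel is unchanged by an in-plane rotation of a square frame, rotating the input leaves the descriptor unchanged, which is the view-independence property the abstract claims for the cylindrical mapping.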

Description

Technical field [0001] The present invention belongs to the field of computer vision technology, and particularly relates to a method for recognizing view-independent behaviors based on a deep learning network. Background technique [0002] With the rapid development of information technology and the emergence of concepts such as VR, AR, and artificial intelligence, computer vision has entered its best period of development. Video behavior analysis, as one of the most important topics in computer vision, has likewise received increasing attention from scholars at home and abroad. Video behavior analysis plays a large role in a range of fields such as video surveillance, human-computer interaction, medical care, and video retrieval; for example, in the popular driverless-car projects, video behavior analysis is very challenging. Due to the complexity and diversity of human actions, coupled with the influence of factors such as human self-occlusion, multi-scale, and ...

Claims


Application Information

IPC(8): G06K9/62, G06N3/04
CPC: G06N3/045, G06F18/24, G06F18/214
Inventor: 王传旭, 胡国锋, 刘继超, 杨建滨, 孙海峰, 崔雪红, 李辉, 刘云
Owner: Qingdao Shengruida Technology Co., Ltd. (青岛圣瑞达科技有限公司)