
A Video-Based Pedestrian and Crowd Behavior Recognition Method

A behavior recognition method in the field of deep learning that addresses the problems of limited accuracy and low robustness in existing approaches, with the effect of improving recognition performance.

Active Publication Date: 2021-05-14
CHINA JILIANG UNIV

AI Technical Summary

Problems solved by technology

Interference factors such as illumination changes, camera motion, and target occlusion in real scenes make this type of behavior recognition method less robust and less accurate.



Examples


Embodiment Construction

[0027] The present invention is further described below in conjunction with the accompanying drawings.

[0028] As shown in Figure 1, the video-based pedestrian and crowd behavior recognition method of the present invention comprises the following steps:

[0029] 1. A video of about 150 frames is processed by a human body pose estimation algorithm to obtain an overall skeleton sequence of shape 150 (frames) × 18 (key points) × 3 (skeleton coordinate dimensions). At the same time, the 18 human key points are grouped by body part into head, arms, torso, and feet, yielding four limb-part skeleton sequences, each of shape 150 (frames) × (number of key points in the part) × 3 (skeleton coordinate dimensions).
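The grouping step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the patent does not specify which key-point indices belong to each body part, so the `PART_GROUPS` mapping below (OpenPose-style 18-keypoint indexing) is an assumption.

```python
import numpy as np

# Hypothetical keypoint-to-part assignment (OpenPose-style 18-point layout is
# an assumption; the patent only names the four parts, not the indices).
PART_GROUPS = {
    "head":  [0, 14, 15, 16, 17],   # nose, eyes, ears
    "arms":  [2, 3, 4, 5, 6, 7],    # shoulders, elbows, wrists
    "torso": [1, 8, 11],            # neck, left/right hip
    "feet":  [9, 10, 12, 13],       # knees, ankles
}

def split_skeleton_by_part(skeleton):
    """skeleton: (frames, 18, 3) array -> dict of limb-part sequences,
    each of shape (frames, keypoints_in_part, 3)."""
    assert skeleton.ndim == 3 and skeleton.shape[1] == 18
    return {part: skeleton[:, idx, :] for part, idx in PART_GROUPS.items()}

# Example: a 150-frame skeleton sequence with 3 values per joint.
seq = np.zeros((150, 18, 3))
parts = split_skeleton_by_part(seq)
print({k: v.shape for k, v in parts.items()})
```

Each part sequence keeps the full temporal axis, so the per-part networks described later see the same 150-frame window as the whole-body network.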

[0030] 2. The types of single-person whole-body behavior include falls, squats, jumps, etc., and group behaviors involve ...



Abstract

The invention discloses a video-based pedestrian and crowd behavior recognition method. The overall framework comprises a single-person limb-part network, a single-person whole-body and limb joint network, and a multi-person network. The framework jointly learns joint co-occurrence and temporal evolution in an end-to-end manner; by exploiting the global aggregation ability of CNNs, a CNN model can simply and effectively learn joint co-occurrence features from skeleton sequence information. In this method, the point-level features of each joint are first learned independently; the features of each joint are then treated as channels of convolutional layers to learn hierarchical co-occurrence features. Most importantly, in the designed single-pedestrian behavior recognition joint network, multi-part limb network features are fused into single-pedestrian motion features to strengthen the recognition of individual pedestrian behavior. In addition, in the designed crowd interaction behavior recognition network, single-person behavior features are used to strengthen group behavior features. Group behavior involves the actions of multiple people, such as hugging and shaking hands.
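The core tensor manipulation in the abstract — learning point-level features per joint, then treating joints as channels so convolutions aggregate across all joints — can be sketched with NumPy. This is an illustrative sketch only: the random linear map stands in for the network's learned per-joint convolutions (an assumption), and the real model would use trained CNN layers rather than a transpose alone.

```python
import numpy as np

def point_level_stage(skel, out_dim=8, seed=0):
    """Learn (here: simulate) per-joint point-level features independently.

    skel: (frames, joints, coords) -> (frames, joints, out_dim).
    A fixed random linear map + ReLU stands in for the learned per-joint
    convolutions of the actual network (an assumption for illustration).
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((skel.shape[-1], out_dim))
    return np.maximum(skel @ w, 0.0)

def joints_as_channels(feat):
    """Move the joint axis to the channel position, (frames, joints, dim)
    -> (joints, frames, dim), so that later convolutions over the last two
    axes mix information from ALL joints: hierarchical co-occurrence."""
    return np.transpose(feat, (1, 0, 2))

# 150 frames, 18 joints, 3 coordinate values per joint.
seq = np.random.default_rng(1).standard_normal((150, 18, 3))
feat = point_level_stage(seq)        # per-joint features: (150, 18, 8)
coocc = joints_as_channels(feat)     # joints become channels: (18, 150, 8)
print(feat.shape, coocc.shape)
```

The design point is that before the transpose, convolutions can only see one joint at a time; after it, each convolutional filter spans every joint simultaneously, which is what lets co-occurrence patterns (e.g. hand and foot moving together) be learned globally.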

Description

Technical Field

[0001] The invention belongs to the field of deep learning, in which deep neural networks extract features from images; it relates to technologies such as neural networks and pattern recognition, and particularly to a training and testing method for pedestrian and crowd behavior recognition models based on human skeleton information.

Background Technique

[0002] The analysis of human actions, such as action recognition and detection, is one of the fundamental and challenging tasks in computer vision. In behavior recognition with the human body as the main research object, most motion recognition methods rely on target segmentation; but, limited by factors such as the number of human bodies in the image and the size of the target, the results are often unsatisfactory, which degrades the subsequent recognition work. Therefore, many research works omit the moving-target detection step and directly extract behavioral features from t...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V40/23, G06V40/103, G06V20/53, G06F18/2413, G06F18/241
Inventors: 章东平, 郑寅, 束元
Owner CHINA JILIANG UNIV