
Video-based pedestrian and crowd behavior identification method

A behavior recognition technology applied in the field of deep learning, which can solve problems such as low robustness and limited accuracy

Active Publication Date: 2019-11-19
CHINA JILIANG UNIV
9 Cites · 14 Cited by

AI Technical Summary

Problems solved by technology

Interference factors such as illumination changes, camera motion, and target occlusion in real scenes make this type of behavior recognition method less robust and less accurate.



Examples


Embodiment Construction

[0027] The present invention will be further described below in conjunction with the accompanying drawings.

[0028] As shown in Figure 1, the video-based pedestrian and crowd behavior recognition method of the present invention comprises the following steps:

[0029] 1. A video of about 150 frames is processed by a human body pose estimation algorithm to obtain an overall skeleton sequence of shape 150 (number of frames) × 18 (number of key points) × 3 (dimension of the skeleton coordinates). At the same time, the 18 key points of the human body are grouped by body part into head, arms, torso, and feet, giving four limb-part skeleton sequences of shape 150 (number of frames) × number of key points in the part × 3 (dimension of the skeleton coordinates).
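A minimal sketch of this grouping step is shown below, assuming an OpenPose-style 18-key-point layout; the patent text does not specify which key-point indices belong to which body part, so the index groups here are illustrative only.

```python
import numpy as np

# Overall skeleton sequence from pose estimation: 150 frames x 18 key points x 3 coords.
skeleton = np.random.randn(150, 18, 3)  # stand-in for the pose-estimation output

# Illustrative body-part grouping (assumed OpenPose/COCO-style key-point indices;
# the exact assignment is not given in the text).
PART_INDICES = {
    "head":  [0, 1, 14, 15, 16, 17],   # nose, neck, eyes, ears
    "arms":  [2, 3, 4, 5, 6, 7],       # shoulders, elbows, wrists
    "torso": [1, 2, 5, 8, 11],         # neck, shoulders, hips
    "feet":  [8, 9, 10, 11, 12, 13],   # hips, knees, ankles
}

# Four limb-part skeleton sequences of shape 150 x (key points in part) x 3.
part_sequences = {name: skeleton[:, idx, :] for name, idx in PART_INDICES.items()}

for name, seq in part_sequences.items():
    print(name, seq.shape)  # e.g. head (150, 6, 3)
```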

[0030] 2. The types of single-person whole-body behaviors are divided into single-person falls, squats, jumps, etc., and group behaviors involve inte...
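As a rough illustration of how these categories might be organized for the two recognition heads, the label maps below list only the behaviors named here and in the abstract; the full class list is not visible in the truncated paragraph.

```python
# Illustrative label spaces; only behaviors explicitly mentioned in the text
# (falls, squats, jumps; embracing, handshaking) are listed.
SINGLE_PERSON_CLASSES = ["fall", "squat", "jump"]   # single-person whole-body behaviors
GROUP_CLASSES = ["embrace", "handshake"]            # multi-person interaction behaviors

SINGLE_LABEL = {name: i for i, name in enumerate(SINGLE_PERSON_CLASSES)}
GROUP_LABEL = {name: i for i, name in enumerate(GROUP_CLASSES)}
```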



Abstract

The invention discloses a video-based pedestrian and crowd behavior identification method. The overall framework comprises a single-person limb-part network, a single-person whole-body and limb joint network, and a multi-person network. The framework aims to jointly learn joint co-occurrence and temporal evolution in an end-to-end manner, and the joint co-occurrence features of the skeleton sequence information can be learned simply and effectively with a CNN model by exploiting the global aggregation capability of the CNN. In the method, point-level features of each joint are learned independently, and the features of each joint are then treated as channels of a convolutional layer to learn hierarchical co-occurrence features. Most importantly, in the designed single-pedestrian behavior recognition joint network structure, multi-part limb network features are fused into the single-pedestrian motion features to enhance the behavior recognition of a single pedestrian. In addition, in the designed crowd interaction behavior recognition network, the features of single-person behaviors are used to enhance the features of group behaviors; the group behaviors involve multi-person activities such as embracing and handshaking.
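The sketch below illustrates the co-occurrence idea described in the abstract: the coordinates of each joint are first convolved independently (point-level features), and the joint dimension is then moved into the channel axis so that a convolution aggregates information across all joints. The use of PyTorch, the layer sizes, and the kernel shapes are assumptions for illustration, not the patent's exact architecture.

```python
import torch
import torch.nn as nn

class CoOccurrenceBlock(nn.Module):
    """Sketch of the joint co-occurrence idea: point-level features are learned
    independently per joint, then the joint dimension is moved into the channel
    axis so a convolution can aggregate across all joints (sizes are illustrative)."""

    def __init__(self, num_joints=18, coord_dim=3, point_channels=32, cooc_channels=64):
        super().__init__()
        # Point-level stage: 1x1 convolution mixes only the (x, y, z) coordinates
        # of each joint, so every joint is processed independently.
        self.point_conv = nn.Sequential(
            nn.Conv2d(coord_dim, point_channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # Co-occurrence stage: after transposing, joints become channels, so this
        # convolution aggregates features across all joints (global aggregation).
        self.cooc_conv = nn.Sequential(
            nn.Conv2d(num_joints, cooc_channels, kernel_size=(3, 1), padding=(1, 0)),
            nn.ReLU(inplace=True),
        )

    def forward(self, skeleton):
        # skeleton: (batch, frames, joints, coords), e.g. (N, 150, 18, 3)
        x = skeleton.permute(0, 3, 1, 2)      # -> (N, coords, frames, joints)
        x = self.point_conv(x)                # per-joint point-level features
        x = x.permute(0, 3, 2, 1)             # -> (N, joints, frames, point_channels)
        x = self.cooc_conv(x)                 # joints treated as channels
        return x                              # (N, cooc_channels, frames, point_channels)


if __name__ == "__main__":
    dummy = torch.randn(2, 150, 18, 3)        # two skeleton sequences
    print(CoOccurrenceBlock()(dummy).shape)   # torch.Size([2, 64, 150, 32])
```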

Description

Technical Field

[0001] The invention belongs to the field of deep learning in which features are extracted by deep neural networks, relates to technologies such as neural networks and pattern recognition, and particularly relates to a training and testing method for pedestrian and crowd behavior recognition models based on human skeleton information.

Background Technique

[0002] The analysis of human actions, such as action recognition and detection, is one of the fundamental and challenging tasks in computer vision. In behavior recognition technology that takes the human body as the main research object, most motion recognition methods rely on target segmentation; limited by factors such as the number of human bodies in the image and the size of the targets, the segmentation results are often not ideal, which in turn hampers the subsequent recognition work. Therefore, many research works omit the moving-target detection process and directly extract behavioral features from t...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06K9/62
CPC: G06V40/23; G06V40/103; G06V20/53; G06F18/2413; G06F18/241
Inventors: 章东平, 郑寅, 束元
Owner: CHINA JILIANG UNIV