
Deep learning-based quick dynamic human body action extraction and identification method

A human-motion and deep-learning technology, applied in the field of motion recognition, that solves the problem of excessive hardware occupation and achieves the effect of reducing hardware requirements.

Inactive Publication Date: 2016-11-09
HARBIN MAX TELEGENT SCI & TECH DEV CO LTD

AI Technical Summary

Problems solved by technology

[0004] In order to quickly acquire and identify human body behaviors and to overcome the shortcomings of the above-mentioned prior art, the present invention provides a recognition method based on computer deep learning that can extract and identify human actions in real time, dynamically, quickly, and at large scale. It can be applied to dangerous-action alarm systems for enterprises, schools, and government agencies, and to film and television production systems, while reducing hardware requirements. Traditional action-recognition methods place very strict demands on hardware; when recognizing tens of thousands of people, an ordinary computer cannot meet the computational requirements at all. By relying on single- and double-precision floating-point operations, this method effectively solves the problem of excessive hardware occupation: in a random screening test of 100,000 people, individuals were identified in a very short time, achieving real-time recognition.



Examples


Embodiment 1

[0016] A fast dynamic human action extraction and recognition method based on deep learning. First, the overall information of the human target (size, color, edge, contour, shape, and depth) is described, providing useful clues for action recognition, and effective motion features are extracted from the video sequence. In the long-range case, the target's motion trajectory is used for trajectory analysis; in the close-range case, the information extracted from the image sequence is used to model the target's limbs and torso in 2D or 3D.
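For the long-range branch, a minimal sketch of trajectory analysis, assuming per-frame bounding boxes of the detected target are already available upstream (all names here are illustrative, not from the patent):

```python
import numpy as np

def centroid_trajectory(bboxes):
    """Reduce per-frame bounding boxes (x, y, w, h) of one target to a
    trajectory of centre points, suitable for long-range trajectory analysis."""
    return np.array([(x + w / 2.0, y + h / 2.0) for x, y, w, h in bboxes])

def trajectory_displacements(traj):
    """Frame-to-frame displacement vectors along the trajectory; their
    direction and magnitude are the raw material for motion analysis."""
    return np.diff(traj, axis=0)
```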

Embodiment 2

[0018] According to the deep learning-based fast dynamic human body action extraction and recognition method described in Embodiment 1, the moving object is first determined to be a human by searching for features such as body size, color, edge, contour, and shape, and the human body image is then cropped out by screening. Identifiable, trackable marker points are set at the main joint positions (or at additional positions) on the human body, and the motion of that same body is captured by the camera. According to the spatial geometric parameters, combined with digital models of human motion, the position of each marker point at each moment can be calculated; the combined positions of the marker points constitute the overall pose of the human body, and continuous position recognition is performed to identify the body's movements.
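As an illustration of this step, here is a minimal sketch of assembling per-frame poses from tracked marker points; the joint list and function names are hypothetical, since the patent only says "main joint positions" and does not specify an implementation:

```python
import numpy as np

# Hypothetical joint marker layout; the patent does not enumerate the joints.
JOINTS = ["head", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow",
          "l_hip", "r_hip", "l_knee", "r_knee"]

def assemble_pose(marker_positions):
    """Stack per-joint (x, y) marker positions into one pose array.

    marker_positions: dict mapping joint name -> (x, y) pixel position,
    as produced by whatever marker tracker is used upstream.
    """
    return np.array([marker_positions[j] for j in JOINTS], dtype=float)

def pose_sequence(frames_markers):
    """Turn a list of per-frame marker dicts into a (T, J, 2) sequence of
    poses; continuous recognition then operates on this sequence."""
    return np.stack([assemble_pose(m) for m in frames_markers])
```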

Embodiment 3

[0020] According to the deep learning-based fast dynamic human body action extraction and recognition method described in Embodiment 1 or 2, the images collected by the video camera are processed in real time. The people in the image are first detected and distinguished, and the region where each pedestrian is located is framed. Each frame of that region is then compared with its previous frame and its next frame to calculate the movement of pixels across the three frames. By computing the optical flow of the pixel movement, the displacement vector (Fx, Fy) of each pixel is obtained; the vector is then decomposed and, after filtering with a Gaussian filter, the feature representation of the pedestrian action of interest is obtained.
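A minimal sketch of this step in Python with OpenCV. The patent names optical flow but not a specific algorithm or decomposition, so Farneback dense flow and half-wave rectification stand in for the unspecified details:

```python
import cv2
import numpy as np

def flow_channels(prev_gray, curr_gray, sigma=3.0):
    """Dense optical flow from prev to curr, decomposed into four
    non-negative channels and smoothed with a Gaussian filter."""
    # Per-pixel displacement field (Fx, Fy); Farneback is one common choice.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    fx, fy = flow[..., 0], flow[..., 1]
    # One plausible decomposition: half-wave rectification into Fx+, Fx-, Fy+, Fy-.
    channels = [np.maximum(fx, 0.0), np.maximum(-fx, 0.0),
                np.maximum(fy, 0.0), np.maximum(-fy, 0.0)]
    # Gaussian filtering suppresses flow noise, giving the action feature maps.
    return [cv2.GaussianBlur(c, (0, 0), sigma) for c in channels]

def three_frame_features(prev_gray, curr_gray, next_gray):
    """Features from comparing the current frame with both neighbours,
    mirroring the three-frame comparison described above."""
    return flow_channels(prev_gray, curr_gray) + flow_channels(curr_gray, next_gray)
```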

[0021] For the motion feature image, see attached figure 2.

[0022] Feature calculation formula:

[0023] (the formula is given only as an image in the source and is not reproduced)
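Since the formula itself is not recoverable, the following LaTeX sketch shows one plausible reading consistent with the surrounding text (decomposition of the displacement vector followed by Gaussian filtering); this is an assumption, not the patent's confirmed formula:

```latex
% Assumed reconstruction: half-wave rectification of the flow components
% (F_x, F_y), followed by convolution with a Gaussian kernel G_sigma.
\[
\begin{aligned}
F_x^{+} &= \max(F_x, 0), & F_x^{-} &= \max(-F_x, 0),\\
F_y^{+} &= \max(F_y, 0), & F_y^{-} &= \max(-F_y, 0),\\
\hat F_c &= G_{\sigma} * F_c, & c &\in \{x^{+},\, x^{-},\, y^{+},\, y^{-}\}.
\end{aligned}
\]
```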



Abstract

The invention relates to a deep learning-based quick dynamic human body action extraction and identification method. Existing human body identification technology and applications are deficient in several respects: the human skeleton is a complicated structure, and different people's action habits correspond to different action modes, which makes human body identification generally difficult. The method comprises the following steps: first describing the overall information of the size, color, edge, contour, shape, and depth of a human body target; providing useful clues for action identification; extracting effective motion features from a video sequence; performing trajectory analysis using the motion trajectory of the target under the long-distance condition; and performing two-dimensional or three-dimensional modeling of the four limbs and trunk of the target using information extracted from an image sequence under the short-distance condition. The method is used for extracting and identifying quick dynamic human body actions based on deep learning.

Description

Technical field:

[0001] The invention relates to the field of action recognition, in particular to a method for extracting and identifying fast dynamic human actions based on deep learning.

Background technique:

[0002] At present, existing human body recognition technology and applications have the following deficiencies. First, the human skeleton itself is a complex structure; different people have different action habits and move in different ways, which makes universal human body identification difficult. Secondly, the position of the recognized target is limited and constrained: the target must adjust its position so that its front faces the camera, and the side may not be recognized. Furthermore, regarding response speed and efficiency, for continuous human actions there is redundancy in the data between consecutive frames, which not only occupies a large amount of storage space but also increases the amount of computation.

[0003] Action...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/46, G06T7/20
CPC: G06T2207/10016, G06T2207/10024, G06T2207/20084, G06T2207/30196, G06V40/20, G06V40/10, G06V10/443, G06V10/56
Inventor: 姚一鸣
Owner: HARBIN MAX TELEGENT SCI & TECH DEV CO LTD