A human posture recognition method based on deep learning

A deep learning-based recognition method, applied in the field of human action and posture recognition, that addresses the problems that RGB images contain too much information for effective extraction of action and posture features and that occlusion degrades recognition accuracy; the method reduces network complexity and the number of network parameters, and achieves high accuracy.

Pending Publication Date: 2018-12-25
TIANJIN UNIV OF SCI & TECH
View PDF · Cites: 4 · Cited by: 19

AI Technical Summary

Problems solved by technology

However, both methods have shortcomings. RGB images contain too much redundant information, which hinders the extraction of action and posture features. In depth images, the limbs easily occlude one another, which degrades recognition accuracy.

Method used



Examples


Embodiment Construction

[0025] The present invention is described below in conjunction with the accompanying drawings.

[0026] As shown in the flowchart of Figure 1, the deep learning-based human action and posture recognition method mainly includes the following steps.

[0027] Step 1: Install the Kinect for Windows SDK on the PC, and mount the Kinect V2.0 depth sensor horizontally at a certain height above the ground. The effective acquisition area of the Kinect V2.0 is a trapezoidal region within the 70° horizontal field of view in front of the sensor, at a distance of 0.5 m to 4.5 m from it. Confirm on the PC display that the Kinect can capture most of the human target.
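The effective acquisition area described in Step 1 can be sketched as a simple geometric check. This is an illustrative helper, not part of the patent or the Kinect SDK: the function name and the coordinate convention (x as lateral offset, z as forward distance from the sensor, both in metres, measured in the top-down plane) are assumptions for the sketch.

```python
import math

def in_effective_area(x, z, half_fov_deg=35.0, near=0.5, far=4.5):
    """Return True if a point lies inside the trapezoidal effective
    acquisition area of the Kinect V2.0 as described in Step 1:
    within the 70-degree horizontal field of view (35 degrees to
    either side) and between 0.5 m and 4.5 m from the sensor.

    x: lateral offset in metres (positive to one side)
    z: forward distance from the sensor in metres
    """
    if not (near <= z <= far):
        return False
    # Angle off the sensor's forward axis, in degrees.
    angle = abs(math.degrees(math.atan2(x, z)))
    return angle <= half_fov_deg
```

For example, a subject standing 2 m directly in front of the sensor is inside the area, while one standing 5 m away or far off to the side is not.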

[0028] Step 2: Each subject enters the field of view of the Kinect V2.0 and performs the actions and postures to be recognized. The PC collects 8-12 samples of each action of each subject. The subjects should cover as wide a range of body shapes as possible. In this invention, 15 subjects were collected, with a height range o...
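The collection protocol in Step 2 fixes the per-action sample count. As a quick sanity check on dataset size (purely illustrative bookkeeping, using only the figures stated above):

```python
# Step 2 figures: 15 subjects, each action captured 8-12 times per subject.
subjects = 15
min_reps, max_reps = 8, 12

# Bounds on the number of samples collected per action class.
samples_per_action = (subjects * min_reps, subjects * max_reps)  # (120, 180)
```

So each action class contributes between 120 and 180 samples to the combined training and test sets.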



Abstract

The invention discloses a human posture recognition method based on deep learning, which mainly addresses the large computational cost and low accuracy of current posture recognition techniques. First, the Kinect V2.0 depth sensor collects the motion and posture characteristics of several human subjects, and the RGB data and skeleton data of the human motion postures are stored. Skeleton images obtained from the skeleton data after image preprocessing serve as the training set and test set. The training set is fed into a convolutional neural network (CNN) designed specifically for human posture recognition; after training and tuning the network structure and parameters, the CNN yields the classification result. The motion and posture characteristics of different subjects are then input into the classification network as the test set, and the action with the highest probability is taken as the recognition result. By using a convolutional neural network, the invention improves recognition accuracy, reduces recognition time, has a low operating cost, and is simple and convenient to apply in settings such as smart homes, security monitoring, and motion analysis.
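The abstract's final step, taking "the action with the highest probability" as the recognition result, can be sketched as a softmax over the network's output scores followed by an argmax. This is a minimal illustration of that decision rule only, not the patent's network; the function name and the example action labels are assumptions.

```python
import math

def most_probable_action(logits, actions):
    """Given the classification network's raw output scores (one per
    action class) and the corresponding action labels, apply softmax
    and return the most probable action with its probability."""
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return actions[best], probs[best]
```

For example, with scores [2.0, 0.5, 0.1] over hypothetical labels ["wave", "sit", "stand"], the recognition result is "wave".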

Description

Technical field

[0001] The present invention relates to the field of deep learning and biometric recognition, and in particular to a method for recognizing human actions and postures based on deep learning.

Background technique

[0002] With the development of society and the advancement of science and technology, the state and society have in recent years devoted more attention and investment to the field of artificial intelligence. As an important part of that field, computer vision has likewise attracted broad attention. People hope that computers can have a pair of eyes like humans to understand real scenes, so as to help humans complete a series of tasks and provide a better human-computer interaction experience.

[0003] The most popular direction in computer vision is biometric recognition, which identifies people through video or images, including face recognition, fingerprint recognition, pal...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62; G06N3/04
CPC: G06V40/20; G06N3/048; G06N3/045; G06F18/214
Inventor: 林丽媛, 刘冠军, 周卫斌, 尹宏轶, 陈静瑜, 周圆, 刘建虎, 申川
Owner: TIANJIN UNIV OF SCI & TECH