
Real-time human body action analysis method based on vision

A human-body-movement analysis technology in the field of computer vision, achieving strong adaptability, improved performance, and highly accurate results

Active Publication Date: 2019-11-26
ZHENGZHOU UNIV

AI Technical Summary

Problems solved by technology

[0006] In view of the above situation, and in order to overcome the defects of the prior art, the object of the present invention is to provide a vision-based real-time human motion analysis method with an ingenious, user-friendly design that solves the problems of real-time motion analysis and movement quality assessment.



Examples


Embodiment 1

[0030] Embodiment 1: a vision-based real-time human motion analysis method comprising the following steps:

[0031] Step 1: Obtain a large amount of video stream data of human body movement through mobile phones, and record and save the basic information of the target subject, including name, gender, age, and the name of the action;

[0032] Step 2: Preprocess the video data, and estimate the pose of the human body in each frame of the video to obtain the key point coordinates. The steps are as follows:

[0033] Step 2-1: Convert the video data taken by different mobile phones to a unified scale;

[0034] Step 2-2: Use the Open-pose method, through transfer learning, to obtain for each frame of the video the coordinate positions of 18 key points of the human body: the nose, neck, right shoulder, right elbow, right wrist, right hand face, left shoulder, left elbow, left wrist, left hand face, right hip, right knee, right ankle, right sole, left hip, left knee, left ankle and left sole...
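A minimal Python sketch of these two sub-steps, assuming OpenCV for video handling. The target size and the estimate_keypoints function (a stand-in for the transfer-learned Open-pose model) are hypothetical, since the excerpt does not specify them:

import cv2  # OpenCV for video reading and resizing

TARGET_SIZE = (368, 368)  # assumed unified scale; the patent does not state one

def estimate_keypoints(frame):
    # Hypothetical placeholder for the transfer-learned Open-pose model;
    # it should return a list of 18 (x, y) coordinate pairs for one frame.
    raise NotImplementedError("plug in an Open-pose inference backend here")

def extract_keypoint_sequence(video_path):
    # Step 2-1 and Step 2-2: unify the scale, then collect per-frame key points.
    capture = cv2.VideoCapture(video_path)
    sequence = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frame = cv2.resize(frame, TARGET_SIZE)  # unify scale across phone models
        sequence.append(estimate_keypoints(frame))
    capture.release()
    return sequence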

Embodiment 2

[0043] Embodiment 2: on the basis of Embodiment 1, when executing step 2, the Labelme image annotation tool is used to obtain the position information of four additional coordinates: the left sole, the right sole, the left hand face and the right hand face. These four key points are added to the original key points, and Open-pose is used, through transfer learning, to obtain the required coordinate positions of all 18 key points. When executing step 3, the coordinates of the left hip C11 = (cx11, cy11), the right hip C15 = (cx15, cy15) and the neck C2 = (cx2, cy2) are obtained, and the coordinate origin is defined as the center of gravity C0 = (cx0, cy0) of the three points C2, C11 and C15, where cx0 = (cx2 + cx11 + cx15)/3 and cy0 = (cy2 + cy11 + cy15)/3. All coordinate points are then updated with reference to the origin, and the Cartesian coordinates are converted to polar coordinates pci = (ρi, θi), where i ranges from 1 to 18, ρi = √((cxi − cx0)² + (cyi − cy0)²) and θi = atan2(cyi − cy0, cxi − cx0), with ρ > 0 and −π < θ ≤ π.
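A minimal Python sketch of this normalization as reconstructed above; the 0-based list indices for C2, C11 and C15 are an assumption about how the 18 key points are stored:

import math

NECK, LEFT_HIP, RIGHT_HIP = 1, 10, 14  # C2, C11, C15 under 0-based indexing (assumed)

def to_polar(keypoints):
    # keypoints: list of 18 (x, y) pairs -> list of 18 (rho, theta) pairs.
    cx0 = (keypoints[NECK][0] + keypoints[LEFT_HIP][0] + keypoints[RIGHT_HIP][0]) / 3.0
    cy0 = (keypoints[NECK][1] + keypoints[LEFT_HIP][1] + keypoints[RIGHT_HIP][1]) / 3.0
    polar = []
    for x, y in keypoints:
        dx, dy = x - cx0, y - cy0
        rho = math.hypot(dx, dy)    # rho >= 0
        theta = math.atan2(dy, dx)  # in (-pi, pi]
        polar.append((rho, theta))
    return polar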

Embodiment 3

[0044] Embodiment 3: on the basis of Embodiment 1, when step 7 is executed, the GRU model is used. The GRU is a variant of the LSTM (long short-term memory) network that merges the forget gate and the input gate into a single update gate. BiGRU is a bidirectional GRU consisting of two GRUs stacked on top of each other, whose output is determined by the states of both: one recurrent network computes the hidden vectors from front to back, the other computes them from back to front, and the final output combines the two.
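For reference, the standard GRU/BiGRU formulation behind this description (a textbook reconstruction; the patent gives the idea only in prose, so the exact parameterization is an assumption):

\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1}) && \text{update gate, merging the forget and input gates}\\
r_t &= \sigma(W_r x_t + U_r h_{t-1}) && \text{reset gate}\\
\tilde{h}_t &= \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1})\big)\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t\\
h_t^{\mathrm{Bi}} &= [\overrightarrow{h}_t ;\, \overleftarrow{h}_t] && \text{forward and backward states concatenated}
\end{aligned}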

[0045] When constructing the network model, in order to keep the distribution of the input data of each layer in the network relatively stable and to accelerate the learning speed of the model, a batch normalization layer (BatchNormalization) is added before the BiGRU layer;

[0046] In order to achieve multi-label classification, the activation function of the last layer is set to the sigmoid activation function, and the loss function is binary_crossentropy.
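A minimal Keras sketch consistent with paragraphs [0045] and [0046]. The sequence length, feature count, GRU width and optimizer are assumptions; the BatchNormalization layer before the BiGRU, the sigmoid output and the binary_crossentropy loss follow the text, and the six outputs match the six checks listed in the abstract:

import tensorflow as tf

SEQ_LEN, FEATURES, NUM_LABELS = 64, 36, 6  # assumed: 18 key points x (rho, theta)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, FEATURES)),
    tf.keras.layers.BatchNormalization(),                     # stabilize layer inputs
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),   # BiGRU over the sequence
    tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid"),  # multi-label output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])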



Abstract

The invention discloses a vision-based real-time human body action analysis method. The method comprises: transplanting the pre-trained model to a mobile phone terminal; capturing a front-view single-shake double-jump rope-skipping video through the mobile phone camera; inputting the video into the pre-trained model to analyze, in real time during rope skipping, whether the body is kept upright, whether the left upper arm is held close to the body, whether the right upper arm is held close to the body, whether the wrists swing the rope, whether the two feet stay together, and whether the left and right arms are kept horizontal; and embedding the analysis results into the video and storing the video locally. This solves the problems of real-time action analysis and action quality evaluation during exercise training and provides a reference basis for action analysis during exercise.
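A hedged OpenCV sketch of the embed-and-store step at the end of the abstract; the analyze callback, which should return a mapping from label name to pass/fail for one frame, is a hypothetical placeholder for the pre-trained model's output:

import cv2

def annotate_video(src_path, dst_path, analyze):
    # Draw each frame's analysis results onto the frame and save the video locally.
    capture = cv2.VideoCapture(src_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0
    size = (int(capture.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        for row, (label, passed) in enumerate(analyze(frame).items()):
            color = (0, 255, 0) if passed else (0, 0, 255)  # green pass, red fail
            cv2.putText(frame, f"{label}: {'OK' if passed else 'NO'}",
                        (10, 30 + 25 * row), cv2.FONT_HERSHEY_SIMPLEX, 0.7, color, 2)
        writer.write(frame)
    capture.release()
    writer.release()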

Description

Technical field

[0001] The invention relates to the technical field of computer vision, and in particular to a vision-based real-time human motion analysis method.

Background technique

[0002] In recent years, with the development and application of computer technology and artificial intelligence, vision-based human motion analysis technology has risen rapidly and received extensive attention. At present, vision-based human motion analysis is still a very challenging topic in computer vision, involving multiple disciplines such as image processing, pattern recognition and artificial intelligence, and it has wide application prospects.

[0003] A core of motion analysis is human body pose estimation; its accuracy and speed directly affect the results of the subsequent stages of a motion analysis system. At present, there are mainly two types of human body pose estimation: top-down and bottom-up. The top-down method first detects each human body and then estimates the pose within each detected region...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06N3/08, G16H20/30
CPC: G06N3/08, G16H20/30, G06V40/20
Inventors: 赵红领, 崔莉亚, 李润知, 刘浩东
Owner: ZHENGZHOU UNIV