
A vision-based real-time human motion analysis method

A computer-vision technology for human movement analysis, characterized by high accuracy, strong adaptability, and improved performance

Active Publication Date: 2022-03-11
ZHENGZHOU UNIV

AI Technical Summary

Problems solved by technology

[0006] In view of the above situation, and in order to overcome the defects of the prior art, the object of the present invention is to provide a vision-based real-time human motion analysis method. The method features an ingenious, user-oriented design and solves the problems of real-time motion analysis and movement quality assessment.



Examples


Embodiment 1

[0030] Embodiment 1: a vision-based real-time human motion analysis method, comprising the following steps:

[0031] Step 1: Acquire a large volume of human-movement video stream data with mobile phones, and record and save the basic information of each target subject, including name, gender, age, and the name of the action;

[0032] Step 2: Preprocess the video data and estimate the human pose in each frame of the video to obtain the joint-point coordinates, as follows:

[0033] Step 2-1: convert the video data captured by different mobile phones to a unified scale;
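A minimal sketch of this scale-unification step, here applied to (x, y) keypoint coordinates; the target resolution (368×656 below) is an assumption, since the patent does not specify one:

```python
import numpy as np

# Assumed target resolution; the patent does not fix a particular value.
TARGET_SIZE = (368, 656)

def normalize_scale(points, src_size, dst_size=TARGET_SIZE):
    """Rescale (x, y) coordinates from the source frame size to a
    unified scale so clips shot on different phones are comparable."""
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    return np.asarray(points, dtype=float) * np.array([sx, sy])
```

The same scale factors could equally be applied by resizing the frames themselves before pose estimation; rescaling the coordinates is just the lighter-weight variant.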

[0034] Step 2-2: Use the OpenPose method, via transfer learning, to obtain the coordinate positions of 18 joint points of the human body in each frame of the video: the nose, neck, right shoulder, right elbow, right wrist, right hand face, left shoulder, left elbow, left wrist, left hand face, right hip, right knee, right sole, right ankle, left hip, left knee, left ankle, and left sole...

Embodiment 2

[0043] Embodiment 2: on the basis of Embodiment 1, when step 2 is performed, the position information of the four points left sole, right sole, left hand face, and right hand face is obtained with the Labelme image annotation tool, and OpenPose supplies the remaining coordinates, so that all 18 required joint-point positions are available. When step 3 is executed, the coordinates of the left hip C11 = (cx11, cy11), right hip C15 = (cx15, cy15), and neck C2 = (cx2, cy2) are obtained, and the coordinate origin is defined as the center of gravity C0 = (cx0, cy0) of the three points C2, C11, and C15, where cx0 = (cx2 + cx11 + cx15) / 3 and cy0 = (cy2 + cy11 + cy15) / 3. All coordinate points are then updated relative to this origin, and the Cartesian coordinates are converted to polar coordinates pci = (ρi, θi), where i ranges from 1 to 18, ρi > 0, and -π < θi ≤ π.
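The coordinate transformation of this embodiment can be sketched as follows. Joint indices here are 0-based (the patent numbers joints from 1), and the default index arguments are assumptions for illustration:

```python
import math

def to_polar(points, i_neck=1, i_lhip=10, i_rhip=14):
    """Convert (x, y) joint coordinates to polar coordinates about the
    centroid of the neck and the two hips, as in Embodiment 2."""
    cx0 = (points[i_neck][0] + points[i_lhip][0] + points[i_rhip][0]) / 3
    cy0 = (points[i_neck][1] + points[i_lhip][1] + points[i_rhip][1]) / 3
    polar = []
    for x, y in points:
        dx, dy = x - cx0, y - cy0
        rho = math.hypot(dx, dy)      # radial distance, rho >= 0
        theta = math.atan2(dy, dx)    # angle in (-pi, pi]
        polar.append((rho, theta))
    return polar
```

Centering on the neck-hip centroid makes the representation invariant to where the person stands in the frame, which is presumably why the origin is redefined before classification.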

Embodiment 3

[0044] Embodiment 3: on the basis of Embodiment 1, when step 7 is executed, the GRU model is used. The GRU is a variant of the LSTM (long short-term memory) network that merges the forget gate and the input gate into a single update gate. A BiGRU is a bidirectional GRU consisting of two stacked GRUs, whose output is determined by the states of both: one recurrent network computes hidden vectors from front to back, the other computes hidden vectors from back to front, and the two are combined to form the final output.
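The single-update-gate mechanism described above can be illustrated with one GRU time step in NumPy. This is purely illustrative (not the patent's trained model); a BiGRU would run this recurrence once forward and once backward over the sequence and combine the two hidden states:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, W, U, b):
    """One GRU time step. W, U, b each hold three blocks, for the update
    gate z, reset gate r, and candidate state. The single update gate z
    blends the old and candidate states, replacing the LSTM's separate
    forget and input gates."""
    (Wz, Wr, Wh), (Uz, Ur, Uh), (bz, br, bh) = W, U, b
    z = sigmoid(x @ Wz + h_prev @ Uz + bz)              # update gate
    r = sigmoid(x @ Wr + h_prev @ Ur + br)              # reset gate
    h_cand = np.tanh(x @ Wh + (r * h_prev) @ Uh + bh)   # candidate state
    return (1.0 - z) * h_prev + z * h_cand              # single-gate blend
```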

[0045] When constructing the network model, a batch normalization (Batch Normalization) layer is added before the BiGRU layer, in order to keep the input distribution of each layer in the network relatively stable and to accelerate model training;

[0046] In order to achieve multi-label classification, the activation function of the last layer is set to the sigmoid function, and the loss function is binary cross-entropy...
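In this multi-label setup each judgement (body upright, arms tucked, and so on) is an independent binary output. A NumPy sketch of the sigmoid activation and the binary cross-entropy loss:

```python
import numpy as np

def sigmoid(z):
    """Last-layer activation: one independent probability per label."""
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy averaged over all labels; y_pred is
    clipped away from 0 and 1 for numerical stability."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```

Unlike softmax cross-entropy, this formulation lets several labels be true at once, which matches the per-criterion yes/no judgements the abstract describes.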



Abstract

The invention discloses a vision-based real-time human motion analysis method. A pre-trained model is deployed to a mobile phone; the phone's camera captures video of the front of the human body, and the video is fed into the pre-trained model for real-time analysis of, for example, whether the body remains upright during rope skipping, whether the left arm is held close to the body, whether the right arm is held close to the body, whether the wrist drives the rope, whether the feet stay together, and whether the left and right arms are kept level. The analysis results are embedded into the video, which is saved locally. The method solves the problems of real-time motion analysis and movement quality evaluation during sports training and provides a reference for motion analysis in sports.

Description

technical field

[0001] The invention relates to the technical field of computer vision, and in particular to a vision-based real-time human motion analysis method.

Background technique

[0002] In recent years, with the development and application of computer technology and artificial intelligence, vision-based human motion analysis has risen rapidly and received extensive attention. It remains a very challenging topic in computer vision, involving multiple disciplines such as image processing, pattern recognition, and artificial intelligence, and it has wide application prospects.

[0003] A core task of motion analysis is human pose estimation; its accuracy and speed directly affect the results of the downstream stages of a motion analysis system. Current human pose estimation methods fall mainly into two types: top-down and bottom-up. The top-down method first performs human body detection...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V40/20; G06V10/82; G06N3/08; G16H20/30
CPC: G06N3/08; G16H20/30; G06V40/20
Inventors: 赵红领, 崔莉亚, 李润知, 刘浩东
Owner: ZHENGZHOU UNIV