Human body posture estimation method based on motion feature constraints

A technology relating to motion features and human body posture, applied in biometric recognition, computing, computer components, etc. It addresses problems such as low accuracy, camera shake, and motion blur, and achieves improved accuracy, better pose estimation, and enhanced reasoning ability.

Pending Publication Date: 2021-02-09
ZHEJIANG GONGSHANG UNIVERSITY
Cites: 0 · Cited by: 8
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

The bottom-up approach focuses on human body parts: it first extracts the locations of the key parts of all people in the image and the affinities between those parts, then groups and clusters the keypoints according to the topology of human poses, and finally obtains pose estimates for all people. Its running time is unaffected by the number of people, so it runs faster, but its accuracy is lower.
[0004] Most current human pose estimation methods target still images. However, video is the main carrier in real application scenarios. Existing methods decompose a video into single-frame images and then perform human pose estimation frame by frame. This approach ignores the differences between single video frames and static pictures (for example, motion blur and lens shake often occur in a single frame) and fails to fully exploit the rich correlation information between video frames, so the accuracy of human pose estimation in video remains unsatisfactory.
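The inter-frame correlation that frame-by-frame methods ignore can be illustrated with a minimal sketch: averaging each keypoint's coordinates over a small temporal window pulls a blurred or shaken frame toward its neighbours. This is only an illustration of the idea, not the patent's method (which uses learned motion features); the function name and window radius are my own.

```python
import numpy as np

def smooth_trajectory(keypoints, radius=1):
    """Average each keypoint coordinate over a small temporal window.

    keypoints: (T, K, 2) array of per-frame keypoint coordinates.
    A blurred or occluded frame is pulled toward its neighbours,
    the simplest use of the inter-frame correlation that purely
    frame-by-frame methods discard.
    """
    T = keypoints.shape[0]
    out = np.empty_like(keypoints, dtype=float)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        out[t] = keypoints[lo:hi].mean(axis=0)
    return out

# Toy trajectory: 3 frames, 1 keypoint; frame 1 is an outlier (e.g. motion blur).
traj = np.array([[[0.0, 0.0]], [[10.0, 10.0]], [[2.0, 2.0]]])
print(smooth_trajectory(traj)[1])  # [[4. 4.]]
```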



Embodiment Construction

[0042] In order to describe the present invention more specifically, the technical solutions of the present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0043] (1) Use a video dataset with multi-person pose annotations, and build human spatiotemporal windows over the video.
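A human spatiotemporal window, as described in step (1), can be sketched as cropping the same person's bounding box from T consecutive frames into one tensor. The function name, output size, and nearest-neighbour resize are my own assumptions for a dependency-free illustration; the patent does not specify these details.

```python
import numpy as np

def build_spatiotemporal_window(frames, boxes, out_size=(64, 64)):
    """Crop one person's bounding box from T consecutive frames.

    frames: list of T HxWx3 uint8 arrays (the video frames).
    boxes:  list of T (x0, y0, x1, y1) integer boxes for the same person.
    Returns an array of shape (T, out_h, out_w, 3).
    Nearest-neighbour resize keeps the sketch dependency-free.
    """
    out_h, out_w = out_size
    window = np.empty((len(frames), out_h, out_w, 3), dtype=np.uint8)
    for t, (frame, (x0, y0, x1, y1)) in enumerate(zip(frames, boxes)):
        crop = frame[y0:y1, x0:x1]
        ys = np.arange(out_h) * crop.shape[0] // out_h
        xs = np.arange(out_w) * crop.shape[1] // out_w
        window[t] = crop[ys][:, xs]
    return window

# Toy usage: 3 frames of a 128x128 video, a slowly moving 40x40 person box.
frames = [np.zeros((128, 128, 3), dtype=np.uint8) for _ in range(3)]
boxes = [(10 + t, 20, 50 + t, 60) for t in range(3)]
win = build_spatiotemporal_window(frames, boxes)
print(win.shape)  # (3, 64, 64, 3)
```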

[0044] This embodiment selects PoseTrack as the dataset, a large-scale video dataset for multi-person pose estimation and multi-person pose tracking. It contains more than 1,356 video sequences and more than 276K human pose annotations. The human body keypoints and their index numbers in this dataset are shown in Figure 1; they comprise 15 keypoints: right ankle, right knee, right hip, left hip, left knee, left ankle, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, thoracic spine, head, and nose.
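The 15 keypoints listed above can be held in a simple index map. Note this ordering merely follows the order in which the text lists the parts; the actual numbering in the patent's Figure 1 is not reproduced here.

```python
# Hypothetical index map for the 15 PoseTrack-style keypoints listed in [0044].
# The ordering follows the text, not necessarily the patent's Figure 1.
KEYPOINTS = [
    "right_ankle", "right_knee", "right_hip",
    "left_hip", "left_knee", "left_ankle",
    "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist",
    "thoracic_spine", "head", "nose",
]
KEYPOINT_INDEX = {name: i for i, name in enumerate(KEYPOINTS)}

print(len(KEYPOINTS))           # 15
print(KEYPOINT_INDEX["nose"])   # 14
```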

[0045] The present invention bel...



Abstract

The invention relates to a human body posture estimation method based on motion feature constraints. The method divides a video into a number of human spatiotemporal windows, extracts image features of each frame under a window, extracts the window's human motion features from the image features it contains, and then constrains single-frame human pose estimation with those motion features. The invention adopts a fully convolutional neural network architecture and uses dynamic convolution to adaptively adjust the single-frame pose estimation according to the contextual information contained in the video, so that problems common in human pose estimation, such as motion blur and limb occlusion, can be better handled, pose estimation on video is improved, and human pose estimation accuracy in video scenes is increased.
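The core idea of constraining single-frame estimation via dynamic convolution can be sketched as follows: a kernel is generated from the window's motion feature and applied to a per-frame keypoint heatmap. This is a minimal NumPy stand-in, not the patent's network; the linear projection, 3x3 kernel size, and normalization are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamic_refine(heatmap, motion_feat, W):
    """Refine a single-frame keypoint heatmap with a kernel generated
    from the window's motion feature (a toy stand-in for dynamic convolution).

    heatmap:     (H, W) per-frame keypoint heatmap.
    motion_feat: (D,) motion feature extracted from the spatiotemporal window.
    W:           (9, D) learned projection producing a 3x3 dynamic kernel.
    """
    kernel = (W @ motion_feat).reshape(3, 3)
    kernel = kernel / (np.abs(kernel).sum() + 1e-8)  # keep responses bounded
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1)                      # zero-pad for same-size output
    out = np.zeros_like(heatmap)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

heatmap = rng.random((16, 16))      # toy single-frame heatmap
motion_feat = rng.random(8)         # toy motion feature of the window
W = rng.random((9, 8))              # toy learned projection
refined = dynamic_refine(heatmap, motion_feat, W)
print(refined.shape)  # (16, 16)
```

The point of the design is that the kernel weights depend on the input (the motion feature) rather than being fixed after training, which is what lets the same network adapt its single-frame estimate to the motion context of each window.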

Description

technical field

[0001] The invention belongs to the field of human body pose estimation, and in particular relates to a human body pose estimation method based on motion feature constraints.

Background technique

[0002] Human pose estimation is an active research field in computer vision, with important application value in security monitoring, autonomous driving, human-computer interaction, video understanding, and other areas. The goal of human pose estimation is to estimate the pose of the human body in a picture or video frame by locating the key parts of the human body and then connecting these keypoints to predict the human pose.

[0003] Current approaches to human pose estimation generally fall into two categories: top-down and bottom-up. The top-down approach starts from the global image: it first uses object detection to locate the position of each person in the picture or video frame, and then performs single-person pose ...
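The data flow of the two pipeline families described in [0003] can be sketched with placeholder functions; the detector, estimator, and grouping functions here are hypothetical stubs illustrating structure only.

```python
def top_down(image, detect_people, estimate_single_pose):
    """Top-down: detect each person first, then run single-person
    pose estimation once per detected box."""
    return [estimate_single_pose(image, box) for box in detect_people(image)]

def bottom_up(image, detect_keypoints, group_by_skeleton):
    """Bottom-up: find all keypoints and part affinities in one pass,
    then group them into individual people by skeleton topology."""
    keypoints, affinities = detect_keypoints(image)
    return group_by_skeleton(keypoints, affinities)

# Toy stubs just to show the top-down data flow.
people = top_down("img",
                  lambda im: ["box1", "box2"],
                  lambda im, b: f"pose({b})")
print(people)  # ['pose(box1)', 'pose(box2)']
```

This also shows why top-down cost grows with the number of people (one estimator call per box) while bottom-up cost does not, matching the trade-off described in the summary above.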


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/04
CPC: G06V40/20, G06V40/10, G06N3/045, G06F18/214
Inventors: 陈豪明, 杨柏林, 刘振广, 王津航, 田端正, 封润洋, 王勋
Owner: ZHEJIANG GONGSHANG UNIVERSITY