Method for positioning three-dimensional human body joints in monocular color videos

A technology relating to human body joint localization in color video, applied to neural learning methods, instruments, biological neural network models, etc., to achieve the effect of improved joint-positioning accuracy.

Active Publication Date: 2017-11-24
SUN YAT SEN UNIV
Cites: 8 · Cited by: 51
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0007] Most of the existing 3D pose recognition methods rely on artificially designed prior conditions and human joint structural constraints.

Method used



Examples

Experimental program
Comparison scheme
Effect test

Embodiment Construction

[0057] The technical solutions of the present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0058] As shown in Figure 1, the present invention provides a method for positioning three-dimensional human body joint points in a monocular color video, which mainly includes the following steps:

[0059] S1. Construct a configurable depth model and introduce temporal (time-sequence) information into the depth model;

[0060] S2. Collect training samples, and use the training samples to learn the parameters of the depth model;

[0061] S3. Initialize the depth model with the parameters learned in S2, convert the monocular color video data in which the three-dimensional human joint points are to be positioned into a picture stream (that is, a sequence of consecutive two-dimensional image frames), and input it into the depth model for analysis; for each frame of two-dimensional image, output the three-dimensional coordinates of the human body joint points of the person ...
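The patent text does not include source code, so the following is only a minimal PyTorch sketch of the kind of configurable depth model outlined in steps S1-S3: a per-frame convolutional encoder for spatial features, an LSTM that introduces temporal information, and a head that directly regresses per-frame 3D joint coordinates. The class names (FrameEncoder, JointRegressor), layer sizes, and joint count are illustrative assumptions, not the patented architecture.

```python
# Minimal sketch, not the patented implementation: a generic CNN + LSTM
# regressor in the spirit of steps S1-S3. All names and sizes are assumptions.
import torch
import torch.nn as nn

NUM_JOINTS = 17  # assumed joint count; the patent does not fix this number


class FrameEncoder(nn.Module):
    """Per-frame CNN that extracts spatial features from one RGB image."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, x):                # x: (B, 3, H, W)
        h = self.conv(x).flatten(1)      # (B, 128)
        return self.fc(h)                # (B, feat_dim)


class JointRegressor(nn.Module):
    """CNN features per frame -> LSTM over time -> 3D joints per frame."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.encoder = FrameEncoder(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_JOINTS * 3)

    def forward(self, frames):           # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.lstm(feats)        # temporal (time-sequence) information
        joints = self.head(seq)          # (B, T, NUM_JOINTS * 3)
        return joints.view(b, t, NUM_JOINTS, 3)
```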



Abstract

The invention provides a method for positioning three-dimensional human body joints in monocular color videos. The method comprises the following steps: S1, constructing a configurable depth model and introducing time-sequence information into the depth model; S2, collecting training samples and learning the parameters of the depth model from the training samples; and S3, initializing the depth model with the parameters learned in S2, converting the monocular color video data that requires three-dimensional human body joint positioning into a stream of consecutive two-dimensional image frames, inputting the frames into the depth model for analysis, and, for each two-dimensional frame, outputting the three-dimensional human body joint coordinates of the figures in the image. The method uses deep learning to construct a deep convolutional neural network that automatically learns effective spatio-temporal features from a large number of training samples, without depending on artificially designed prior conditions or human body joint structure constraints; the human body joint positions are then regressed directly from the learned features.
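As above, the following is only an illustrative sketch and not the patented training procedure: it shows how such a model's parameters could be learned by direct L2 regression of joint coordinates (step S2) and how a monocular video converted to a frame stream could be analyzed frame by frame (step S3). JointRegressor refers to the illustrative class sketched earlier; train_clips is an assumed iterable of (frames, joints_3d) tensor pairs.

```python
# Illustrative sketch, not the patented procedure: direct coordinate
# regression with an L2 loss, then per-frame inference on a frame stream.
import torch
import torch.nn as nn


def train(model, train_clips, epochs=10, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for frames, joints_3d in train_clips:  # (B,T,3,H,W), (B,T,J,3)
            pred = model(frames)
            loss = loss_fn(pred, joints_3d)    # direct joint-coordinate regression
            opt.zero_grad()
            loss.backward()                    # backpropagation through CNN + LSTM
            opt.step()
    return model


@torch.no_grad()
def locate_joints(model, video_frames):
    """video_frames: (T, 3, H, W) picture stream from one monocular video."""
    model.eval()
    pred = model(video_frames.unsqueeze(0))    # add batch dim -> (1, T, J, 3)
    return pred.squeeze(0)                     # per-frame 3D joint coordinates
```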

Description

Technical Field

[0001] The invention relates to the fields of three-dimensional human body posture recognition, computer vision, pattern recognition, and human-computer interaction, and in particular to a three-dimensional human body joint point positioning method for monocular color video based on a convolutional neural network and a long short-term memory network.

Background Technique

[0002] Pose estimation is an important field of computer vision research. Its main task is to enable a computer to automatically perceive where a person is in a scene and judge what the person is doing. Applications include intelligent surveillance, patient monitoring, and various human-computer interaction systems. The goal of human pose estimation is to automatically infer the pose parameters (for example, joint point coordinates) of the various parts of the human body from an unknown video (for example, a sequence of image frames). Through these pose parameters, the human body's movements ...

Claims


Application Information

IPC(8): G06K 9/00; G06K 9/46; G06K 9/62; G06N 3/04; G06N 3/08
CPC: G06N 3/084; G06V 40/23; G06V 20/46; G06V 10/44; G06N 3/045; G06F 18/214
Inventor: 聂琳, 王可泽, 林木得, 成慧, 王青
Owner SUN YAT SEN UNIV