Real-time human face attitude estimation method based on depth video streaming

A technique involving depth video streams and facial poses, applied in the field of recognition, which can address problems such as the need for manual intervention, reduced accuracy, and the susceptibility of the collected data to noise interference

Publication Date: 2013-07-10 (Status: Inactive)
SOUTHEAST UNIV
Cites: 4 | Cited by: 11

AI Technical Summary

Problems solved by technology

[0004] 1) The collected data is susceptible to noise interference
Traditional two-dimensional images and videos are easily affected by lighting, which can completely change the gray values and texture information of certain regions of an image. The sample set used in the training phase therefore cannot cover all possible situations, so the accuracy...




Detailed Description of the Embodiments

[0022] Preferred embodiments of the present invention are described in detail below in conjunction with the accompanying drawings, so that the advantages and features of the present invention can be more easily understood by those skilled in the art and the protection scope of the present invention can be defined more clearly.

[0023] Refer to Figure 1 to Figure 3. Figure 1 is a structural schematic diagram of a preferred embodiment of the real-time face pose estimation method based on a depth video stream according to the present invention; Figure 2 is a schematic diagram of slice samples and test selection; Figure 3 is a schematic diagram of a slice with too much white space.

[0024] The present invention provides a real-time face pose estimation method based on a depth video stream. The method comprises a sampling and training stage and a real-time estimation stage. In the sampling and training stage, the steps include: obtaining the depth-of-field images of the face at each angle of th...
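The paragraph above is truncated after the first step of the sampling and training stage, so the sketch below fills in the remaining steps from the Abstract. It is a minimal illustration under stated assumptions, not the patented implementation: the slice size, the number of slices per image, the choice to slice the integral image, and the use of scikit-learn's RandomForestClassifier as a stand-in for the (unspecified) supervised learner are all introduced here for illustration.

```python
# Minimal sketch of the sampling-and-training stage, assuming depth images
# arrive as NumPy arrays keyed by a numeric pose-angle label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SLICE_SIZE = 32        # illustrative slice edge length in pixels (not from the patent)
SLICES_PER_IMAGE = 50  # illustrative number of random slices per image

def integral_image(depth_img):
    """Summed-area table of a depth image."""
    return depth_img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

def random_slices(img, rng, n=SLICES_PER_IMAGE, size=SLICE_SIZE):
    """Yield n random square crops from img (assumed larger than `size`)."""
    h, w = img.shape
    for _ in range(n):
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        yield img[y:y + size, x:x + size]

def build_training_set(depth_images_by_angle, rng):
    """depth_images_by_angle: {angle_label: [depth_img, ...]} -> (X, y)."""
    X, y = [], []
    for angle, images in depth_images_by_angle.items():
        for img in images:
            for sl in random_slices(integral_image(img), rng):
                X.append(sl.ravel())   # flatten each slice into a feature vector
                y.append(angle)
    return np.asarray(X), np.asarray(y)

def train_pose_classifier(depth_images_by_angle, seed=0):
    """Train the stand-in supervised classifier on randomly sampled slices."""
    rng = np.random.default_rng(seed)
    X, y = build_training_set(depth_images_by_angle, rng)
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    return clf.fit(X, y)
```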



Abstract

The invention discloses a real-time face pose estimation method based on a depth video stream. The method comprises two stages: a sampling and training stage and a real-time estimation stage. The sampling and training stage comprises the following steps: obtaining depth-of-field images of the human face at all angles; randomly sampling the depth-of-field images of each angle to obtain a training sample set; and training with a supervised learning method to obtain a classifier. The real-time estimation stage comprises the following steps: first extracting the real-time face depth-of-field images from the depth video stream output by the acquisition equipment and converting them into integral images; randomly sampling slices from the integral images and classifying the samples with the trained classifier to obtain a plurality of estimation results; and removing abnormal results, performing a weighted average on the remainder, and obtaining the final face pose. The method avoids the influence of factors such as illumination on the final result, and offers good real-time performance and precision.
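As a companion to the abstract, the following is a minimal sketch of the real-time estimation stage: convert the face depth image into an integral image, draw random slices, classify each slice with the trained classifier, discard abnormal results, and take a weighted average. The confidence-based weights and the median/MAD rule for rejecting abnormal results are illustrative assumptions; the abstract does not specify how the weighting or the outlier removal is done.

```python
# Minimal sketch of the real-time estimation stage, under the same assumptions
# as the training sketch (flattened integral-image slices, numeric angle labels).
import numpy as np

def integral_image(depth_img):
    """Summed-area table of a depth image (same helper as in the training sketch)."""
    return depth_img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

def estimate_pose(depth_img, clf, n_slices=50, slice_size=32, seed=0):
    """One pose estimate for a single face depth image extracted from the stream."""
    rng = np.random.default_rng(seed)
    ii = integral_image(depth_img)
    h, w = ii.shape
    slices = []
    for _ in range(n_slices):
        y = rng.integers(0, h - slice_size + 1)
        x = rng.integers(0, w - slice_size + 1)
        slices.append(ii[y:y + slice_size, x:x + slice_size].ravel())
    X = np.asarray(slices)
    preds = clf.predict(X).astype(np.float64)    # one angle estimate per slice
    weights = clf.predict_proba(X).max(axis=1)   # classifier confidence as the weight
    # Remove abnormal results: keep estimates within 2 MADs of the median
    # (the abstract only says abnormal results are removed; this rule is an assumption).
    med = np.median(preds)
    mad = np.median(np.abs(preds - med))
    keep = np.abs(preds - med) <= 2.0 * mad + 1e-9
    # Weighted average of the surviving estimates gives the final pose.
    return float(np.average(preds[keep], weights=weights[keep]))
```

A classifier trained as in the earlier sketch can be passed in as `clf`; in practice this function would be called once per face depth image extracted from the depth video stream.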

Description

Technical Field

[0001] The present invention relates to a recognition method, and in particular to a real-time face pose estimation method based on a depth video stream.

Background Art

[0002] At present, users interact with computers mainly through the keyboard, mouse and touch screen, all of which rely on specific hardware input devices. Natural human-computer interaction methods have therefore become a focus of current research, for example body posture, face pose and facial expression analysis. In addition, face pose estimation has a very important application in face recognition: once the face pose has been estimated, the photo can be deformed according to that pose before recognition is performed, which can greatly improve the accuracy of face recognition.

[0003] Existing face pose estimation methods are all based on two-dimensional images and videos, and such methods still have the following problems:

[0004] 1) T...


Application Information

IPC(8): G06K9/66
Inventors: 姚莉 (Yao Li), 肖阳 (Xiao Yang)
Owner: SOUTHEAST UNIV