Real-time Face Pose Estimation Method Based on Depth Video Stream

A real-time face pose estimation technology based on depth video streams, applied in the field of recognition. It addresses problems of two-dimensional methods such as sensitivity to noise, accuracy degradation, and the inability to judge the face pose under occlusion, and achieves the effect of increased accuracy.

Inactive Publication Date: 2016-08-17
SOUTHEAST UNIV

AI Technical Summary

Problems solved by technology

[0004] 1) The collected data is susceptible to noise interference
Traditional two-dimensional images and videos are easily affected by lighting, which can completely change the gray values and texture information of certain regions of an image. The sample set used in the training phase therefore cannot cover all possible situations, and the accuracy rate decreases in the generalization phase.
[0005] 2) The accuracy rate drops seriously under the condition of missing features
When a feature is unrecognizable or occluded, this type of method either cannot judge the face pose at all or gives an extremely inaccurate result.
[0006] 3) Manual intervention is required during system operation
This type of method needs to initialize the face position when the system starts running, and it easily loses the target when the face moves quickly or is occluded.




Embodiment Construction

[0022] The preferred embodiments of the present invention are described in detail below in conjunction with the accompanying drawings, so that those skilled in the art can more easily understand the advantages and features of the invention and the scope of protection can be more clearly defined.

[0023] See Figures 1 to 3. Figure 1 is a schematic structural diagram of a preferred embodiment of the real-time face pose estimation method based on a depth video stream of the present invention; Figure 2 is a schematic diagram of slice sampling and test selection; Figure 3 is a schematic diagram of a slice with too many blank areas.

[0024] The present invention provides a real-time face pose estimation method based on a depth video stream. The method comprises a sampling-and-training stage and a real-time estimation stage. The sampling-and-training stage comprises: obtaining depth-of-field images of the face from various...
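The sampling-and-training stage described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the slice size, the blank-area rejection threshold (cf. Figure 3's "slice with too many blank areas"), the synthetic depth maps, and the choice of a random forest are all assumptions. The patent only specifies "a supervised learning method", and helper names such as `sample_slices` are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def sample_slices(depth_img, n_slices=20, size=32, max_blank=0.5):
    """Randomly crop square slices from a face depth image, rejecting
    slices whose blank (zero-depth) area exceeds max_blank."""
    h, w = depth_img.shape
    slices = []
    while len(slices) < n_slices:
        y0 = rng.integers(0, h - size)
        x0 = rng.integers(0, w - size)
        patch = depth_img[y0:y0 + size, x0:x0 + size]
        if np.mean(patch == 0) <= max_blank:  # keep slices that mostly cover the face
            slices.append(patch.ravel())
    return np.array(slices)

# Build a toy training set: synthetic depth maps (values in mm) labelled by
# yaw angle. Real training data would be depth-of-field face images captured
# at each pose angle.
X, labels = [], []
for angle in (-30, 0, 30):  # hypothetical pose classes
    img = rng.uniform(500.0, 1500.0, size=(96, 96))
    for s in sample_slices(img):
        X.append(s)
        labels.append(angle)

# Any supervised learner fits the patent's description; a random forest is
# one common choice for per-slice pose classification.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
```

Because each slice is classified independently, the later estimation stage can pool many per-slice predictions into one robust pose estimate.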



Abstract

The invention discloses a real-time face pose estimation method based on a depth video stream. The method comprises two stages: a sampling-and-training stage and a real-time estimation stage. The sampling-and-training stage comprises the following steps: obtaining depth-of-field images of the face at all angles; randomly sampling the depth-of-field images of all angles to obtain a training sample set; and training with a supervised learning method to obtain a classifier. The real-time estimation stage comprises the following steps: first extracting real-time face depth-of-field images from the depth video stream output by the acquisition device and converting them into integral images; randomly sampling slices of the integral images and classifying the samples with the trained classifier to obtain a number of estimation results; and removing abnormal results and taking a weighted average to obtain the final face pose. The method avoids the influence of factors such as illumination on the final result, and offers good real-time performance and accuracy.
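The estimation stage in the abstract rests on two standard building blocks: integral images, which make rectangle sums over depth slices O(1), and outlier removal before averaging the per-slice estimates. The sketch below is illustrative only; helper names (`integral_image`, `rect_sum`, `fuse_estimates`) and the 2-sigma outlier rule are assumptions, and since the patent's weights for the weighted average are not given here, a plain mean of the surviving estimates stands in.

```python
import numpy as np

def integral_image(depth_img):
    """Summed-area table: ii[y, x] = sum of depth_img[:y+1, :x+1]."""
    return depth_img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of depth_img[y0:y1, x0:x1] in O(1) via the integral image."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]  # add back the doubly subtracted corner
    return total

def fuse_estimates(angles, n_sigma=2.0):
    """Drop estimates farther than n_sigma standard deviations from the
    mean, then average the survivors (an unweighted mean stands in for
    the patent's weighted average)."""
    angles = np.asarray(angles, dtype=float)
    mu, sigma = angles.mean(), angles.std()
    if sigma == 0:
        return mu
    keep = angles[np.abs(angles - mu) <= n_sigma * sigma]
    return keep.mean()
```

For example, if nine slices vote for a yaw near 10 degrees and one occluded slice votes 100, `fuse_estimates` discards the stray vote before averaging, which is the "removing abnormal results" step of the abstract.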

Description

Technical field

[0001] The invention relates to a recognition method, in particular to a real-time face pose estimation method based on a depth video stream.

Background technique

[0002] At present, users interact with computers mainly through the keyboard, mouse, and touch screen, and this interaction must rely on specific hardware input devices. Natural human-computer interaction has therefore become a focus of current research, covering, for example, body pose, face pose, and facial expression analysis. Face pose estimation also has very important applications in face recognition: once the face pose has been estimated, the photo can be deformed according to the pose before recognition is performed, which can greatly improve the accuracy of face recognition.

[0003] Existing face pose estimation methods are based on two-dimensional images and videos. Such methods have the following problems:

[0004] 1) The collected data...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/66
Inventors: 姚莉 (Yao Li), 肖阳 (Xiao Yang)
Owner: SOUTHEAST UNIV