
Human face feature point positioning method and device thereof

A facial feature point positioning technology in the field of image processing. It addresses the problems of jerky feature-point movement, slow processing speed, and a large amount of calculation, achieving smooth movement, good visual quality, and improved processing speed.

Active Publication Date: 2014-12-03
SHENZHEN TENCENT COMP SYST CO LTD
Cites 4 · Cited by 34

AI Technical Summary

Problems solved by technology

However, performing face detection and facial feature point location on every frame of the video data makes the amount of calculation very large and the processing speed very slow, which cannot satisfy application scenarios that require high processing speed.
At the same time, detecting the face and locating its feature points independently in every frame ignores the correlation between consecutive frames and the limited movement range of the face, so the located feature points jump and shake between consecutive frames instead of moving smoothly, and the visual effect is poor.



Examples


First embodiment

[0028] Referring to Figure 1, it is a schematic diagram of the environment in which the facial feature point location method provided by the first embodiment of the present invention is applied. In this embodiment, the facial feature point location method is applied to an electronic device 1, which includes a screen 10 for outputting images. The electronic device 1 is, for example, a computer, a mobile electronic terminal, or another similar computing device.

[0029] The above facial feature point location method is described in detail below in conjunction with specific embodiments:

[0030] Figure 2 is the flowchart of the facial feature point location method provided by the first embodiment. The method includes the following steps:

[0031] Step S1: obtain video data, and use video tracking technology to obtain a real-time background image during playback of the video data...
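The text shown here does not specify which video tracking technique is used to maintain the background image, so the following is only a minimal sketch of one common approach, a running-average background model in OpenCV. The file name "input.mp4", the update rate alpha, and the overall loop structure are assumptions for illustration, not details taken from the patent.

```python
import cv2
import numpy as np

def update_background(frame, background, alpha=0.05):
    """Blend the current frame into a running background estimate (sketch of step S1)."""
    frame_f = frame.astype(np.float32)
    if background is None:
        return frame_f                      # the first frame initializes the background
    cv2.accumulateWeighted(frame_f, background, alpha)
    return background

cap = cv2.VideoCapture("input.mp4")         # hypothetical video source
background = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    background = update_background(frame, background)
    # background.astype(np.uint8) is the real-time background image used in later steps
cap.release()
```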

Second embodiment

[0052] With the facial feature point positioning method provided in the first embodiment, locating the coordinates of the facial feature points in the current frame of the video data requires performing the iterative calculation on the current frame. In practice, however, if the face does not move, the current frame may hardly change compared with the previous frame. In that case, performing the iterative calculation on the current frame to obtain the feature point coordinates adds unnecessary calculation and reduces the processing speed.

[0053] To further address the above issues, referring to Figure 7, the second embodiment of the present invention provides a facial feature point location method. Compared with the facial feature point location method in the first embo...
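The visible text of the second embodiment is cut off before any concrete decision rule is given, so the sketch below only illustrates the general idea described above: measure how much the current frame differs from the previous one and choose the number of fitting iterations accordingly, skipping the fit entirely when the frame is essentially unchanged. The thresholds and iteration counts are assumed placeholders, not values from the patent.

```python
import cv2
import numpy as np

def choose_iteration_count(current_frame, previous_frame,
                           low_thresh=2.0, high_thresh=10.0):
    """Pick an iteration number from the mean absolute pixel difference between frames."""
    diff = cv2.absdiff(current_frame, previous_frame)
    mean_diff = float(np.mean(diff))
    if mean_diff < low_thresh:   # frame is essentially unchanged
        return 0                 # reuse the previous frame's landmarks, skip fitting
    if mean_diff < high_thresh:  # small motion between frames
        return 3                 # a few refinement iterations
    return 10                    # large change: run the full iteration budget
```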

Third embodiment

[0058] With the facial feature point positioning method provided in the first or second embodiment, locating the coordinates of the facial feature points in the current frame of video data requires performing the iterative calculation on the current frame as a whole. In fact, since the moving range of the human face is limited, the iterative calculation only needs to be performed on the area covering that moving range. Performing it on the whole current frame therefore adds unnecessary calculation and reduces the processing speed.

[0059] To further solve the above problems, the third embodiment of the present invention provides a facial feature point location method. Compared with the method of the first or second embodiment, as shown in Figure 8 (in t...
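The third embodiment's text is also truncated here, so the sketch below only illustrates the general idea of restricting the iterative calculation to a region around the face's limited moving range: build a bounding box from the previous frame's landmarks, expand it by a motion margin, fit inside that region, and map the result back to full-frame coordinates. The margin value and the fit_landmarks() fitter are hypothetical stand-ins, not details from the patent.

```python
import numpy as np

def landmark_roi(prev_landmarks, frame_shape, margin=40):
    """Bounding box of the previous landmarks, expanded by an assumed motion margin."""
    h, w = frame_shape[:2]
    xs, ys = prev_landmarks[:, 0], prev_landmarks[:, 1]
    x0 = max(int(xs.min()) - margin, 0)
    y0 = max(int(ys.min()) - margin, 0)
    x1 = min(int(xs.max()) + margin, w)
    y1 = min(int(ys.max()) + margin, h)
    return x0, y0, x1, y1

def locate_in_roi(frame, prev_landmarks, fit_landmarks, iterations):
    """Run the iterative fit inside the ROI only, then map back to frame coordinates."""
    x0, y0, x1, y1 = landmark_roi(prev_landmarks, frame.shape)
    roi = frame[y0:y1, x0:x1]
    init = prev_landmarks - np.array([x0, y0])    # shift initial coordinates into the ROI
    local = fit_landmarks(roi, init, iterations)  # hypothetical iterative fitter
    return local + np.array([x0, y0])             # shift the result back to the full frame
```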



Abstract

A method for locating facial feature points comprises the following steps. An image tracking step receives video data containing a plurality of face images and obtains a real-time background image from the video data using a video tracking technique. A data calculating step calculates the video data difference between the current face image and the real-time background image. A process setting step sets an iteration number according to the video data difference. A coordinate requesting step obtains the facial feature coordinates of the previous face image (the frame preceding the current face image) to serve as initial facial feature coordinates. A localization step obtains the facial feature coordinates of the current image by performing an iterative calculation, based on the initial facial feature coordinates, for the set iteration number.
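To make the five steps of the abstract easier to follow, here is a hedged end-to-end sketch of how they could fit together in code. The background-update rate, the difference thresholds, and the fit_landmarks() function are all assumptions for illustration; the abstract does not specify them.

```python
import cv2
import numpy as np

def process_video(path, fit_landmarks, initial_landmarks):
    """Yield per-frame landmark coordinates following the five steps of the abstract."""
    cap = cv2.VideoCapture(path)
    background = None
    landmarks = initial_landmarks               # coordinates from an initial detection
    alpha = 0.05                                # assumed background update rate
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_f = frame.astype(np.float32)
        if background is None:
            background = frame_f.copy()
        # Step 2: video data difference between the current frame and the background
        diff = float(np.mean(cv2.absdiff(frame_f, background)))
        # Step 3: set the iteration number from the difference (thresholds assumed)
        iterations = 0 if diff < 2.0 else (3 if diff < 10.0 else 10)
        # Steps 4-5: previous landmarks serve as the initial value for the iterative fit
        if iterations > 0:
            landmarks = fit_landmarks(frame, landmarks, iterations)
        cv2.accumulateWeighted(frame_f, background, alpha)   # Step 1: keep tracking the background
        yield landmarks
    cap.release()
```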

Description

Technical field

[0001] Embodiments of the present invention relate to the technical field of image processing, and in particular to a method and device for locating facial feature points in a video image.

Background technique

[0002] The technology of locating multiple parts of a face in an image (that is, the facial feature points, including the face outline, forehead, eyebrows, nose, eyes, mouth, etc.) is a foundational technology with a wide range of practical value. For example, in an image containing a human face, props that improve interactivity and entertainment, such as hats, glasses, and masks, can be pasted at the corresponding positions according to the coordinates of the located facial feature points. Facial feature points in played video data can also be located.

[0003] At present, the usual method of locating facial feature points in video data is to detect the face and locate the feature points in every frame of the video data, so...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00
CPC: G06V40/168; G06V40/172
Inventor: 何金文, 龙彦波
Owner: SHENZHEN TENCENT COMP SYST CO LTD