
Eye-positioning method

A human eye positioning technology, applied to eye acquisition/recognition, instruments, and character and pattern recognition, etc. It can solve the problems of inaccurate face positioning and complex computation.

Publication Date: 2018-12-18 (status: Inactive)
HARBIN UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

The common disadvantage of these methods is that they are often inaccurate for images with cluttered backgrounds and tilted faces.
The template matching method is accurate, but the computation is complicated because the fractal dimension must be calculated many times.


Examples


Specific Embodiment 1

[0012] The human eye positioning method of this embodiment is based on template matching: after the eyes are roughly located, the face region is determined in order to verify the hypothesized eye pairs. The human eye positioning method is realized through the following steps:

[0013] Step 1. Preliminary work: first obtain the template and the feature space used for determining the eyes, and determine the face used in verification; principal component analysis is used in the verification process, and the K-L transform is used to obtain the eigenfaces (see the eigenface sketch after these steps);

[0014] Step 2. Detect possible eye points in the picture; these eye points appear as valleys in the image space;

[0015] Step 3. Combine all candidate eye points on the image according to the set criterion. For each candidate pair of eyes, select a binocular window from the original image and match it with the template from step 1; pairs that satisfy the matching conditions are taken as candidate eye pairs;

[0016] Step 4. Verification: project the face region determined by each candidate eye pair onto the eigenface space to obtain the vector coefficients, reconstruct the image from those coefficients, and verify the correctness of the presumed eye pair by comparing the original image with the reconstruction.
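As a rough illustration of the K-L transform mentioned in Step 1, the sketch below builds eigenfaces from a stack of cropped grayscale face windows with a plain NumPy PCA. The window size, the component count, and the function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def build_eigenfaces(samples, num_components=16):
    """K-L transform (PCA) over flattened grayscale face windows.

    samples: array of shape (n_samples, h, w), assumed to be already cropped
             and resized to one common window size.
    Returns the mean face and the top num_components eigenfaces.
    """
    n, h, w = samples.shape
    X = samples.reshape(n, h * w).astype(np.float64)
    mean_face = X.mean(axis=0)
    # SVD of the centered data yields the eigenvectors of the covariance
    # matrix (the eigenfaces) without forming the covariance explicitly.
    _, _, vt = np.linalg.svd(X - mean_face, full_matrices=False)
    return mean_face, vt[:num_components]

def project(window, mean_face, eigenfaces):
    """Coefficients of a face window in the eigenface space (reused in Step 4)."""
    return eigenfaces @ (window.reshape(-1).astype(np.float64) - mean_face)
```

The mean face and eigenfaces computed here are what the later verification step projects candidate face regions onto.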

Specific Embodiment 2

[0017] Different from Embodiment 1, in the human eye positioning method of this embodiment the process of obtaining the template and feature space for determining the eyes, described in step 1, is as follows. The size of the eye template used in this method is fixed; although the size of the face, and therefore of the eye window, differs from image to image, once the eye positions are given the eye window can be determined and the input eye window scaled to the template size as required for matching. The eye template is generated by averaging multiple face samples: select a standard ID photo, manually outline the face area as a face sample, and then obtain the eyes from the face sample; alternatively, locate the eyes of the standard photo automatically or manually, determine the eye window and the face area from the distance between the two eyes, give the eye point manually, and intercept the eye window and face area by t...
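A minimal sketch of that template generation and fixed-size matching follows, assuming OpenCV is available for resizing; the template dimensions and the normalized correlation score are assumptions, since the excerpt does not name the exact matching measure.

```python
import cv2
import numpy as np

TEMPLATE_SIZE = (32, 16)  # (width, height) of the fixed binocular template; illustrative

def make_eye_template(eye_windows):
    """Average several manually cropped binocular eye windows into one fixed-size template."""
    resized = [cv2.resize(w, TEMPLATE_SIZE).astype(np.float64) for w in eye_windows]
    return np.mean(resized, axis=0)

def match_score(eye_window, template):
    """Scale a candidate binocular window to the template size and score it
    with a normalized correlation coefficient (an assumed matching criterion)."""
    w = cv2.resize(eye_window, TEMPLATE_SIZE).astype(np.float64)
    w -= w.mean()
    t = template - template.mean()
    denom = np.linalg.norm(w) * np.linalg.norm(t)
    return float((w * t).sum() / denom) if denom > 0 else 0.0
```

Candidate windows whose score exceeds a chosen threshold would be kept as the candidate eye pairs of step 3.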

Specific Embodiment 3

[0018] The difference from Embodiment 1 or Embodiment 2 is that in the human eye positioning method of this embodiment the process of detecting possible eye points in a picture, described in step 2, is as follows. The possible eye points are candidate eyeballs. The eyeball is the part of the human face with the lowest grayscale and stands in obvious contrast to its surroundings, so on the grayscale function f(x, y) of the image space a trough appears at the eye region, and this trough area can be detected with an average gradient operator. The mean and variance of the input image are first normalized to eliminate the influence of illumination. To reduce the number of detected points, the image is then binarized; on the binary image the eyes and hair are black, and the background or the person's clothes may also be black, but because the grayscale contrast between the eyes and the surrounding eye sockets is large, the gradient changes near the eyes are large. ...
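A compact sketch of that detection stage is given below, under stated assumptions: the thresholds are illustrative, and a per-pixel gradient magnitude stands in for the average gradient operator described above.

```python
import numpy as np

def detect_candidate_eye_points(gray, dark_thresh=0.35, grad_thresh=0.15):
    """Rough valley detection for candidate eyeballs (illustrative thresholds).

    1. Normalize mean and variance to reduce the influence of illumination.
    2. Binarize so that only dark regions (eyes, hair, possibly clothes) remain.
    3. Keep dark pixels with a large gradient magnitude, since the contrast
       between the eyes and the surrounding eye sockets produces strong gradients.
    """
    img = gray.astype(np.float64)
    img = (img - img.mean()) / (img.std() + 1e-8)             # mean/variance normalization
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # rescale to [0, 1]

    dark = img < dark_thresh                                  # binarization: keep dark regions

    gy, gx = np.gradient(img)
    strong = np.hypot(gx, gy) > grad_thresh                   # strong local contrast

    ys, xs = np.nonzero(dark & strong)
    return list(zip(xs.tolist(), ys.tolist()))                # (x, y) candidate eye points
```

Hair and dark clothing still survive this filter; the pairing and template matching of step 3 and the verification of step 4 are what discard those false candidates.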



Abstract

An eye-positioning method is provided. Existing human eye location methods are often inaccurate for images with cluttered backgrounds and tilted faces; the template matching method has high accuracy, but it needs to calculate the fractal dimension many times, so the computation is complex. The method of the invention first obtains a template and a feature space for determining the eyes and determines the face used in verification; the K-L transform is used to obtain the eigenfaces; possible eye points are detected in a picture; all candidate eye points on the image are combined, binocular windows are selected from the original image and matched with the template of step 1, and those satisfying the matching conditions are determined as candidate eye pairs; the face region determined by each pair of candidate eyes is projected onto the eigenface space to obtain the vector coefficients, the image is reconstructed from the vector coefficients, and the correctness of the presumed eye pairs is verified by comparing the original image with the reconstructed image. The invention does not need to scale the input picture down in proportion many times to obtain objects of each size, and greatly reduces the computational complexity in two respects.
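To make the verification described in the abstract concrete, here is a small reconstruction-error sketch built on the eigenface routine shown earlier; the normalized error measure and its threshold are assumptions, since the abstract only states that the original and reconstructed images are compared.

```python
import numpy as np

def verify_eye_pair(face_region, mean_face, eigenfaces, max_error=0.25):
    """Reconstruction-based check of a hypothesized eye pair.

    face_region: grayscale face window (already scaled to the training window
                 size) determined by a candidate eye pair.
    mean_face, eigenfaces: output of the eigenface construction sketched earlier.
    The threshold value is illustrative, not taken from the patent.
    """
    x = face_region.reshape(-1).astype(np.float64) - mean_face
    coeffs = eigenfaces @ x                 # vector coefficients in the eigenface space
    reconstruction = eigenfaces.T @ coeffs  # image reconstructed from those coefficients
    error = np.linalg.norm(x - reconstruction) / (np.linalg.norm(x) + 1e-8)
    return error <= max_error               # small error: likely a real face, eye pair accepted
```

A genuine face region lies close to the eigenface subspace, so its reconstruction error is small, while a window produced by a spurious candidate pair reconstructs poorly and is rejected.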

Description

Technical field:

[0001] The invention relates to a human eye positioning method.

Background technique:

[0002] Eyes are the most prominent organ of the human face and contain many useful features. By extracting the eyes from a given image, the face needed for recognition can be obtained from the positional relationship between the eyes and the face, and other facial features can then be extracted. Therefore, eye positioning is often the first step in a face recognition system and is extremely important for a high-performance automatic face recognition system. Existing eye positioning methods often require a large amount of computation. For example, after the input image is binarized, the largest black area is taken as the hair region and the lower boundary of this region is taken as the direction of the two eyes; the approximate eye positions are obtained by symmetry, and the eyeball, eye corners, and upper and lower eyelids can be extracted in combination with the edge map to improve the recognit...
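For contrast, the prior-art procedure mentioned above can be pictured roughly as follows; the threshold, the offset below the hair line, and the symmetric placement are simplified assumptions used only to illustrate that route.

```python
import cv2
import numpy as np

def rough_eye_line_from_hair(gray, dark_value=60):
    """Prior-art style estimate: binarize, take the largest dark blob as hair,
    and use its lower boundary row as the direction of the two eyes."""
    # Dark pixels (hair, eyes, possibly clothes) become foreground.
    _, binary = cv2.threshold(gray, dark_value, 255, cv2.THRESH_BINARY_INV)

    num, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    if num <= 1:
        return None
    # Label 0 is the background; the largest remaining component is taken as hair.
    hair = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    hair_bottom = stats[hair, cv2.CC_STAT_TOP] + stats[hair, cv2.CC_STAT_HEIGHT]

    # Place the approximate eyes a little below the hair line, symmetric
    # about the vertical center line of the image (an assumed heuristic).
    eye_row = min(hair_bottom + gray.shape[0] // 10, gray.shape[0] - 1)
    cx, offset = gray.shape[1] // 2, gray.shape[1] // 6
    return (cx - offset, eye_row), (cx + offset, eye_row)
```

The edge-map refinement of the eyeball, eye corners, and eyelids would come on top of this estimate, which is where much of the computation in the existing methods is spent.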


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00
CPC: G06V40/171; G06V40/193; G06V40/197
Inventors: 崔志斌, 陈宝远
Owner: HARBIN UNIV OF SCI & TECH