
Depth-auto-encoder-based human eye detection and positioning method

A technology combining deep autoencoders with human eye detection, applied to instruments, character and pattern recognition, and eye acquisition/recognition. It addresses the problems that existing detectors handle target deformation, viewing-angle change, and occlusion poorly and that they are difficult to run in real time; by avoiding feature-pyramid construction and non-maximum-suppression operations, it achieves fast human eye detection and positioning and improves detection speed.

Active Publication Date: 2015-12-30
INST OF AUTOMATION CHINESE ACAD OF SCI
View PDF · 3 Cites · 25 Cited by

AI Technical Summary

Problems solved by technology

Although the traditional method achieves good detection results against simple backgrounds and can run in real time on ordinary computers, it cannot handle target detection in complex backgrounds, nor targets with deformation, viewing-angle changes, or occlusions, and it is difficult to run in real time on embedded and mobile platforms.
In addition, current target detection methods based on deep convolutional neural networks achieve high detection accuracy and cope well with complex backgrounds, deformation, and viewing-angle changes. However, their enormous computational cost makes it difficult to meet real-time requirements even with the help of parallel computing.

Method used



Examples


Embodiment Construction

[0019] To make the object, technical solution, and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.

[0020] The invention proposes a target detection and positioning method based on a deep autoencoder and applies it to the detection and positioning of human eyes. In this method, small image patches randomly cut from the training images, together with the corresponding patches cut at the same positions from the label maps, are used to train the deep autoencoder, yielding a mapping from image patches to label patches. The learned deep autoencoder is then used to generate a label map for the image under test, and the position of the human eye is finally determined by binarizing the label map and applying coordinate projection. The key steps involved in the method of the present...
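The inference stage described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the patented implementation: `autoencoder` stands in for the trained network (any callable mapping a patch to a label patch), and the patch size, stride, and threshold are assumed values chosen for the sketch.

```python
import numpy as np

def detect_eyes(image, autoencoder, patch=16, stride=8, thresh=0.5):
    """Sketch of the label-map inference stage: slide a window over the
    image, map each patch to a predicted label patch, accumulate the
    predictions into a full label map, then binarize and project.
    `autoencoder` is a hypothetical callable: (patch, patch) -> (patch, patch).
    """
    H, W = image.shape
    label_map = np.zeros((H, W))
    counts = np.zeros((H, W))
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            pred = autoencoder(image[y:y+patch, x:x+patch])
            label_map[y:y+patch, x:x+patch] += pred
            counts[y:y+patch, x:x+patch] += 1
    label_map /= np.maximum(counts, 1)   # average overlapping predictions
    binary = label_map > thresh
    if not binary.any():
        return None
    # Coordinate projection: sum the binary map along each axis and take
    # the peak of each projection as the eye position (single-eye case).
    row_proj = binary.sum(axis=1)
    col_proj = binary.sum(axis=0)
    return int(row_proj.argmax()), int(col_proj.argmax())
```

Because every pixel of the label map is produced directly by the autoencoder, no feature pyramid or non-maximum suppression is needed, which is the source of the speed advantage claimed in the summary.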



Abstract

The invention discloses a deep-autoencoder-based human eye detection and positioning method. The method comprises: for every training image with calibrated human-eye rectangle positions, generating a binary label map from the calibrated rectangles; randomly selecting small image patches from the training images and training several autoencoders layer by layer with supervision to construct a deep autoencoder, whose layers are initialized with the weights of the individual autoencoders; randomly selecting original image patches and label patches at the same positions in the original images and the label maps, the label patches serving as supervision information and the image patches as input to fine-tune the deep autoencoder; and, for an image under test, generating small patches with a sliding window, obtaining the corresponding label patches with the deep autoencoder, combining them into a label map of the image under test, binarizing the label map, and obtaining the position of the human eye by coordinate projection or contour searching.
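The first two data-preparation steps of the abstract, building the binary label map from the calibrated rectangles and cutting image/label patch pairs at identical positions, can be sketched as follows. The function names, the `(x, y, w, h)` box convention, and the patch size are assumptions made for illustration; the patent does not specify them.

```python
import numpy as np

def make_label_map(shape, eye_boxes):
    """Abstract step 1: build a binary label map for a training image,
    setting pixels inside each calibrated eye rectangle (x, y, w, h) to 1."""
    label = np.zeros(shape, dtype=np.uint8)
    for x, y, w, h in eye_boxes:
        label[y:y+h, x:x+w] = 1
    return label

def sample_pairs(image, label, patch=16, n=8, rng=None):
    """Abstract step 3: randomly cut patch pairs at the SAME positions in
    the image and its label map; the label patches later serve as the
    supervision signal when fine-tuning the deep autoencoder."""
    rng = rng or np.random.default_rng(0)
    H, W = image.shape
    pairs = []
    for _ in range(n):
        y = rng.integers(0, H - patch + 1)
        x = rng.integers(0, W - patch + 1)
        pairs.append((image[y:y+patch, x:x+patch],
                      label[y:y+patch, x:x+patch]))
    return pairs
```

Sampling the two patches at the same coordinates is what lets the network learn a dense patch-to-label mapping rather than a per-window classification.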

Description

technical field [0001] The invention relates to the fields of pattern recognition and machine learning, in particular to image target detection. More specifically, the present invention relates to a method for human eye detection and localization based on a deep autoencoder. Background technique [0002] The explosive growth of biometric applications and the strong demand for porting biometric algorithms to embedded and mobile platforms make rapid human eye detection and positioning increasingly important. The traditional target detection algorithm constructs a feature pyramid of the image, extracts windows by sliding over the pyramid, classifies the extracted windows, and finally obtains the target position through a non-maximum suppression operation. Although this method achieves good detection results against simple backgrounds and can run in real time on ordinary computers, it cannot handle target detection in comp...
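For context, the non-maximum suppression step of the traditional pipeline described above, which the proposed label-map method avoids entirely, is the standard greedy procedure sketched below. This is textbook NMS, not anything from the patent; the box format `(x1, y1, x2, y2)` and the IoU threshold are the usual conventions.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    box and discard remaining boxes whose IoU with it exceeds the threshold.
    `boxes` is an (N, 4) array of (x1, y1, x2, y2); returns kept indices."""
    order = np.argsort(scores)[::-1]          # indices, best score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection rectangle of box i with every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]       # suppress overlapping boxes
    return keep
```

The cost of classifying every pyramid window plus this suppression pass is what makes the traditional pipeline hard to run in real time on embedded hardware, motivating the label-map formulation.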

Claims


Application Information

IPC(8): G06K9/00, G06K9/66
CPC: G06V40/18, G06V40/193, G06V30/194
Inventors: Wang Liang (王亮), Huang Yongzhen (黄永祯), Tang Wei (唐微)
Owner INST OF AUTOMATION CHINESE ACAD OF SCI