
Eye and mouth state recognition method based on convolutional neural network

A convolutional-neural-network-based state recognition technology, applied in the field of image recognition, which addresses the problems of recognition accuracy that cannot break through its bottleneck, a limited range of applicable scenarios, and low detection efficiency. The method is easy to port and promote, has a wide range of applications, and improves robustness.

Pending Publication Date: 2017-03-08
TIANJIN POLYTECHNIC UNIV

AI Technical Summary

Problems solved by technology

The first type of method uses multiple template matching; its detection efficiency is low and its real-time performance is poor.
The second type of method judges the eye state from the gray-level projection curve of the iris region; it places strict requirements on illumination and is applicable to few scenarios.
The third type of method detects eye opening and closing using LBP features combined with an SVM classifier; it has clear limitations for drivers wearing sunglasses or changing posture, and its robustness is poor.
The fourth type of method performs eye state recognition based on multi-feature fusion; it requires several classifiers for decision fusion, so its real-time performance is poor.
[0005] A convolutional neural network has stronger expressive power for features and avoids the manual feature selection process. Before convolutional neural networks were proposed, for lack of comparable technical means, the field of image recognition was confined to the "extract features first, then perform pattern recognition" framework; recognition accuracy could not break through the bottleneck of that framework, and progress was slow.
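For contrast with the end-to-end learning described above, the following is a minimal sketch of the older "extract features first, then pattern recognition" pipeline (LBP features feeding an SVM, as in the third type of method). It assumes scikit-image and scikit-learn are available; the LBP radius, histogram binning, and SVM settings are illustrative assumptions, not values from the patent.

```python
# Illustrative LBP + SVM eye-state baseline: a traditional two-stage pipeline.
# All parameters here are assumptions for demonstration only.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_patch, points=8, radius=1):
    """Hand-crafted feature step: uniform LBP histogram of a grayscale eye patch."""
    lbp = local_binary_pattern(gray_patch, points, radius, method="uniform")
    # Uniform LBP with P points produces P+2 distinct codes.
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def train_baseline(eye_patches, labels):
    """Pattern-recognition step: a classifier trained on the fixed features.
    eye_patches: list of grayscale arrays; labels: 0 = closed, 1 = open (hypothetical data)."""
    feats = np.stack([lbp_histogram(p) for p in eye_patches])
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(feats, labels)
    return clf
```

Because the features are fixed by hand, any weakness in them (for example, sensitivity to sunglasses) caps the accuracy regardless of the classifier, which is the bottleneck the CNN-based method aims to remove.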




Embodiment Construction

[0030] In order to enable the examiners to further understand the structure, features, and other objectives of the present invention, the preferred embodiments are described in detail below with reference to the accompanying drawings. The described preferred embodiments are intended only to illustrate the technical solution of the present invention, not to limit it.

[0031] The process flow of the present invention is shown in Figure 1. First, the face region of interest is detected using Haar features combined with the AdaBoost algorithm (or another detection method), facial feature points are then located on the preliminary face detection result by a combination of random forest and linear regression, and the eye and mouth regions are extracted. Next, following the basic structure of a convolutional neural network (convolutional layers, downsampling layers, and fully connected layers) and the LeNet-5 network structure, the network is optimized by convolution over local ...
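A minimal sketch of the front end of this flow, assuming OpenCV's stock Haar cascades. The random-forest-plus-linear-regression landmark step is not reproduced here: the eye crops come straight from a cascade detector and the mouth crop from a fixed face-box ratio, so this only illustrates the data flow into the network, not the patented landmark method. The 32×32 patch size mirrors the LeNet-5 input size and is likewise an assumption.

```python
# Sketch of the front-end pipeline: Haar + AdaBoost face detection, then
# cropping eye and mouth regions to feed a state-recognition CNN.
# The cascade files are OpenCV's bundled models; the fixed-ratio mouth crop
# is an assumption standing in for the random-forest landmark step.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_eye_and_mouth(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    regions = []
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]
        # Eye patches from the cascade detector (landmark step approximated).
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
            regions.append(("eye", cv2.resize(face[ey:ey + eh, ex:ex + ew], (32, 32))))
        # Mouth patch taken from the lower portion of the face box (assumption).
        mouth = face[int(0.65 * h):h, int(0.25 * w):int(0.75 * w)]
        regions.append(("mouth", cv2.resize(mouth, (32, 32))))
    return regions  # each patch would then be classified by the state-recognition CNN
```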



Abstract

The invention relates to an eye and mouth state recognition method based on a convolutional neural network. An SR-Net (state-recognition network) designed by the method learns from a large number of samples of different eye and mouth states. Face state recognition can thus be treated as eye state recognition plus mouth state recognition, and the eye and mouth states can be recognized and classified more accurately. Because a convolutional neural network is used, hand-crafted feature extraction is avoided, and recognition of eye and mouth states is highly robust. The method improves the recognition rate when sunglasses are worn, raising the average accuracy of eye state recognition to more than 98.41%; the average recognition rate for eye states without glasses is 98.92%, and the average recognition rate for mouth states is 99.33%.
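The page does not disclose SR-Net's layer dimensions, so the following PyTorch sketch is only a hypothetical LeNet-5-style reconstruction of the structure named in the description (convolutional, downsampling, and fully connected layers ending in a two-way open/closed output); every layer size and the class name are assumptions.

```python
# Hypothetical LeNet-5-style state-recognition network ("SR-Net" is the patent's
# name; the layer sizes here are assumptions, not the published architecture).
import torch
import torch.nn as nn

class SRNetSketch(nn.Module):
    def __init__(self, num_states: int = 2):  # e.g. open vs. closed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                  # downsampling layer
            nn.Conv2d(6, 16, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(      # fully connected layers
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),       # assumes 32x32 grayscale input
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_states),
        )

    def forward(self, x):                     # x: (N, 1, 32, 32) eye or mouth patches
        return self.classifier(self.features(x))

# Two separate instances could be trained, one on eye patches and one on mouth
# patches, matching the idea of treating face state as eye state plus mouth state.
eye_net, mouth_net = SRNetSketch(), SRNetSketch()
```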

Description

Technical field

[0001] The invention relates to an eye and mouth state recognition method based on a convolutional neural network. The method adapts to illumination changes and to occlusion by glasses, belongs to the technical field of image recognition, and can be applied to determining a driver's fatigue state.

Background technique

[0002] Eye and mouth state recognition can be regarded as equivalent to recognition of the face and facial state. It is an important topic in the field of image recognition and has a direct impact on technologies such as information security and automatic driving. According to a report of the National Center for Statistics and Analysis of the United States, fatigue driving is one of the major causes of traffic accidents, so research on driver fatigue detection technology is of great significance for preventing such accidents. In recent years, with the improvement of computer hardware, the fatigue detecti...


Application Information

IPC(8): G06K9/00; G06N3/02; G06K9/32
CPC: G06N3/02; G06V40/165; G06V40/171; G06V10/25
Inventor: 耿磊, 梁晓昱, 肖志涛, 张芳, 吴骏, 苏静静
Owner: TIANJIN POLYTECHNIC UNIV