
Facial expression recognition method of improved MobileNet model

A facial expression recognition technology, applied in the field of facial expression recognition with the MobileNet model, which can solve problems such as the complexity of network models, their large number of parameters, and the resulting difficulty for mobile terminals and embedded devices to meet the required hardware.

Active Publication Date: 2020-10-30
SICHUAN UNIV

AI Technical Summary

Problems solved by technology

[0004] However, with the continuous development of deep neural network models, their shortcomings have gradually emerged. The complexity of these network models and their large number of parameters make them applicable only in certain specific settings, and mobile terminals and embedded devices can hardly meet their hardware requirements. The high hardware demands of complex network models therefore limit their application scenarios.




Embodiment Construction

[0016] The present invention will be further described below in conjunction with the accompanying drawings:

[0017] The specific facial expression recognition method of the improved MobileNet model is as follows:

[0018] First, the input facial expression image is preprocessed according to the preprocessing flow shown in figure 2. In this preprocessing step, the type of the input image is first checked to decide whether conversion to a single-channel grayscale image is needed: if the image is already a single-channel grayscale image, proceed directly to the next step; otherwise, convert it. Face detection is then performed on the output of the previous step to determine the face region. Finally, the image is cropped according to the detected face region and resized to a single-channel grayscale image of size 48*48, which completes the preprocessing.
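
As a concrete illustration, the following is a minimal sketch of this preprocessing step, assuming OpenCV is used for grayscale conversion, face detection, cropping, and resizing. The Haar-cascade detector and the helper name preprocess_face are illustrative choices, not prescribed by the patent.

```python
# Minimal preprocessing sketch, assuming OpenCV; the Haar cascade and the helper
# name preprocess_face are illustrative, not specified by the patent.
import cv2

_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_face(image):
    """Return a 48*48 single-channel grayscale face crop, or None if no face is found."""
    # Convert to a single-channel grayscale image only if the input is not one already.
    if image.ndim == 3:
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    else:
        gray = image

    # Detect the face region on the grayscale image.
    faces = _face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None

    # Crop to the first detected face region and resize to 48*48.
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]
    return cv2.resize(face, (48, 48))
```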

[0019] Then the preprocessed image is input into the network model built as shown in Table 1 for...
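
Table 1 itself is not reproduced on this page. As a rough sketch of the kind of layer the model stacks, the block below follows the description in the abstract: a depthwise separable convolution whose depthwise stage has a linear output (its activation function is dropped), with the nonlinearity kept only after the pointwise convolution. The use of PyTorch, batch normalization, and the class name LinearDepthwiseSeparable are assumptions for illustration, not the patent's exact configuration.

```python
# Sketch of the modified depthwise separable block: the depthwise convolution has a
# linear output (no nonlinearity), while the 1x1 pointwise convolution keeps a ReLU.
# PyTorch, BatchNorm, and the channel counts are assumptions; the patent's Table 1
# defines the actual layer configuration.
import torch
import torch.nn as nn

class LinearDepthwiseSeparable(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # Depthwise 3x3 convolution: one filter per input channel.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.bn_dw = nn.BatchNorm2d(in_channels)
        # Pointwise 1x1 convolution combines channels, followed by a nonlinearity.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn_pw = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.bn_dw(self.depthwise(x))              # linear output: no activation here
        x = self.relu(self.bn_pw(self.pointwise(x)))   # nonlinearity only after pointwise
        return x
```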



Abstract

According to the improved MobileNet network model provided by the invention, the network is further simplified by combining the characteristics of facial expression recognition while keeping the lightweight overall structure of MobileNet, so that the network accepts a 48 * 48 single-channel grayscale picture. In order to reduce the amount of network computation, the depthwise separable convolution layers of the MobileNetV1 model are retained. Meanwhile, in order to avoid the information loss that may be caused by introducing a nonlinear activation function after a depthwise convolution layer, the nonlinear activation function after the depthwise convolution layer is abandoned, and the linear output introduced in MobileNetV2 is adopted. The network model uses a linear support vector machine to classify facial expressions. Finally, compared with MobileNetV1 and MobileNetV2, the number of parameters of the network model is greatly reduced. The model is evaluated on the CK+ dataset and achieves good recognition performance on the test set.
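
A linear support vector machine used as the classifier of a network is typically realized as a final fully connected layer trained with a multi-class hinge loss instead of softmax cross-entropy. The sketch below illustrates that idea only; the class name, feature_dim, and num_classes=7 (seven basic expressions) are assumptions, not values taken from the patent.

```python
# Sketch of a linear-SVM classification head: the final fully connected layer is
# trained with a multi-class hinge loss rather than softmax cross-entropy.
# The names, margin, and num_classes=7 are illustrative assumptions.
import torch
import torch.nn as nn

class LinearSVMHead(nn.Module):
    def __init__(self, feature_dim, num_classes=7):
        super().__init__()
        self.fc = nn.Linear(feature_dim, num_classes)

    def forward(self, features):
        # Raw class scores (margins); no softmax is applied.
        return self.fc(features)

# Training would use a multi-class hinge (SVM) loss, e.g. PyTorch's MultiMarginLoss:
# head = LinearSVMHead(feature_dim=1024)
# criterion = nn.MultiMarginLoss(margin=1.0)
# loss = criterion(head(features), targets)
```

Adding L2 weight decay on the fully connected layer brings this closer to the standard linear SVM objective.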

Description

Technical field

[0001] The invention relates to the problem of static facial expression recognition in the field of computer vision, and in particular to a facial expression recognition method based on an improved MobileNet model.

Background technique

[0002] Facial expression recognition is a hot topic in the field of computer vision. As a direct expression of human emotion, facial expressions are a form of non-verbal communication. At present, the application fields of facial expression recognition technology are very wide, including human-computer interaction (HCI), security, robot manufacturing, medical treatment, communication, automobiles and so on. Automatic facial expression recognition systems are necessary in emerging applications such as human-computer interaction (HCI), online distance education, interactive games, and intelligent transportation.

[0003] The focus of facial expression recognition is the extraction of facial expression features. For the ext...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06K9/62; G06N3/04
CPC: G06V40/174; G06V40/168; G06N3/045; G06F18/2411
Inventor: 何小海, 王韦祥, 周欣, 卿粼波, 王正勇, 吴小强, 吴晓红, 滕奇志
Owner: SICHUAN UNIV