A real-time face detection method based on a deep convolutional neural network

A neural network and deep convolution technology, applied in the field of real-time face detection based on deep convolutional neural networks, which addresses the problems of inefficient training and testing, poor adaptability to extreme conditions, and high network complexity, achieving a balance of speed and accuracy, ease of implementation, and enhanced resistance to interference.

Active Publication Date: 2019-06-14

AI Technical Summary

Problems solved by technology

Although cascaded neural networks outperform traditional methods in both performance and speed, they still suffer from problems such as high network complexity, inefficient training and testing, frequent false and missed detections, weak generalization ability, and poor adaptability to extreme conditions.



Examples


Embodiment 1

[0044] To overcome the defects of the prior art, the present invention discloses a real-time face detection method based on a deep convolutional neural network. As shown in Figure 1, the face detection method comprises the following steps:

[0045] Step 1. Fuse the data set information, create the face data, and divide the face data into a training set, a test set, and a validation set in proportion;

[0046] Step 2. Label the data set obtained in step 1 and convert its ground-truth labels into txt files, each txt file having the same name as its matching picture;

[0047] Step 3. Perform data augmentation on the data labeled in step 2;

[0048] Step 4. Construct an end-to-end, non-cascaded deep convolutional neural network. The deep convolutional neural network includes a backbone and two feature extraction branches, which together contain 26 convolutional layers and 5 max pooling layers (see the illustrative sketch after Step 5);

[0049] Step 5. Put the data processed in step 3 into the convolutional neural network constructed in step 4 for training, and optimize the loss function of the whole model using stochastic gradient descent;
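The following is a minimal, illustrative PyTorch sketch of Steps 4 and 5: a single-pass (non-cascaded) network with a shared backbone and two feature-extraction branches, trained with stochastic gradient descent. The layer counts, channel widths, output format, and placeholder loss are assumptions for illustration and do not reproduce the patented 26-convolution architecture or its loss function.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """3x3 convolution + BatchNorm + LeakyReLU, a common detection building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )

class FaceDetector(nn.Module):
    def __init__(self, num_outputs=5):  # assumed per-cell output: x, y, w, h, confidence
        super().__init__()
        # Backbone: stacked conv blocks interleaved with the 5 max-pooling layers of Step 4.
        self.backbone = nn.Sequential(
            conv_block(3, 32),    nn.MaxPool2d(2),
            conv_block(32, 64),   nn.MaxPool2d(2),
            conv_block(64, 128),  nn.MaxPool2d(2),
            conv_block(128, 256), nn.MaxPool2d(2),
            conv_block(256, 512), nn.MaxPool2d(2),
        )
        # Two feature-extraction branches operating on the shared backbone features.
        self.branch_a = nn.Sequential(conv_block(512, 256), nn.Conv2d(256, num_outputs, 1))
        self.branch_b = nn.Sequential(conv_block(512, 256), nn.Conv2d(256, num_outputs, 1))

    def forward(self, x):
        feats = self.backbone(x)
        return self.branch_a(feats), self.branch_b(feats)

def train_one_epoch(model, loader, criterion, optimizer, device="cpu"):
    """One pass over the training data, optimized with stochastic gradient descent (Step 5)."""
    model.train()
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        out_a, out_b = model(images)
        loss = criterion(out_a, targets) + criterion(out_b, targets)  # placeholder loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Example wiring (the patent's actual multi-part loss is not reproduced here):
# model = FaceDetector()
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4)
# train_one_epoch(model, train_loader, nn.MSELoss(), optimizer)
```

Keeping the detector single-pass (one backbone, two parallel heads) rather than cascaded is what lets a single forward pass produce all detections, which is the source of the real-time behavior claimed for the method.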

Embodiment 2

[0052] On the basis of Embodiment 1, this embodiment discloses a preferred construction of the training data set. The method uses three existing standard data sets in the field of face detection: WIDER FACE, FDDB, and CelebA. WIDER FACE contains 32,203 images and 393,703 labeled faces; it is currently the most challenging benchmark and covers the main sources of difficulty fairly comprehensively: scale, pose, occlusion, expression, makeup, illumination, etc. FDDB contains 2,845 images and 5,171 labeled faces captured in unconstrained environments; its faces are relatively difficult, with varied expressions, double chins, illumination changes, clothing, exaggerated hairstyles, occlusion, low resolution, and defocus. CelebA is currently the largest and most complete data set in the field of face detection and is widely used in face-related computer vision training tasks; it contains 202,599 face pictures of 10,177 celebrity identities, and each picture is annotated with features, in...
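As a minimal sketch (not the patented tooling) of how the fused data from these three sets might be organized per Steps 1 and 2, the snippet below writes one txt label file per image, named after the image, and splits the fused image list into training, test, and validation subsets by a fixed ratio. The record format and the 8:1:1 split are assumptions for illustration.

```python
import os
import random

def write_label_file(txt_dir, image_name, boxes):
    """Write one txt file per image; the file shares its name with the matching picture (Step 2).
    Each line holds an assumed record: class index plus a normalized box (cx, cy, w, h)."""
    stem = os.path.splitext(image_name)[0]
    with open(os.path.join(txt_dir, stem + ".txt"), "w") as f:
        for cx, cy, w, h in boxes:
            f.write(f"0 {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}\n")  # class 0 = face

def split_dataset(image_names, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle the fused image list and divide it proportionally into train/test/validation (Step 1)."""
    names = list(image_names)
    random.Random(seed).shuffle(names)
    n_train = int(ratios[0] * len(names))
    n_test = int(ratios[1] * len(names))
    train = names[:n_train]
    test = names[n_train:n_train + n_test]
    val = names[n_train + n_test:]
    return train, test, val
```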

Embodiment 3

[0061] On the basis of Embodiment 1, this embodiment discloses a preferred structure for data augmentation. In practice, labeled data is precious, and its quantity may be insufficient to train a model that meets the requirements; in this situation data augmentation becomes particularly important. Moreover, data augmentation can effectively improve the generalization ability and robustness of the model, making its performance more stable and its results better. In the present invention, a total of 5 types of data augmentation are used:

[0062] (1) Color augmentation, including saturation, brightness, exposure, hue, and contrast. Augmenting color transformations allows the model to adapt better to uncontrollable factors such as weather and lighting in real scenes (see the illustrative sketch after this list).

[0063] (2) Scale transformation: in each training round, the size of the pictures fed to the model is randomly changed to an integer multiple of 32, with a t...
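A minimal sketch, assuming torchvision is available, of the two augmentations described above: color jitter covering saturation, brightness, hue, and contrast, and rescaling each training image to a randomly chosen integer multiple of 32. The jitter ranges and the 320-608 size window are illustrative assumptions, not values taken from the invention.

```python
import random
from torchvision import transforms

# Color augmentation for saturation, brightness, hue, and contrast.
# (torchvision's ColorJitter has no separate "exposure" knob; brightness stands in for it here.)
color_jitter = transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.5, hue=0.1)

def random_multiple_of_32(low=320, high=608):
    """Pick a training resolution that is an integer multiple of 32."""
    return 32 * random.randint(low // 32, high // 32)

def augment(pil_image):
    """Apply color jitter, then rescale to the randomly chosen multiple-of-32 size."""
    image = color_jitter(pil_image)
    size = random_multiple_of_32()
    return transforms.Resize((size, size))(image)
```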



Abstract

The invention discloses a real-time face detection method based on a deep convolutional neural network. The method comprises the steps of: fusing data set information, creating face data, and dividing the face data into a training set, a test set, and a validation set according to a ratio; labeling the data set obtained in the first step and converting its ground-truth labels into txt files; performing data augmentation on the data set obtained in step 2; constructing a deep convolutional neural network with an end-to-end, non-cascaded structure; putting the data processed in step 3 into the convolutional neural network constructed in step 4 for training, and optimizing the loss function of the whole model with stochastic gradient descent; and setting a category confidence threshold and inputting the test portion of the data set from step 5, together with actual video data, into the deep convolutional neural network for performance testing. The method balances the two advantages of speed and accuracy, adapts well to variations in face angle, illumination intensity, and degree of occlusion, and effectively improves the robustness of face detection and the generalization ability of the network.
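As a small illustrative sketch of the final performance-test step mentioned above, detections can be filtered by the chosen category confidence threshold before evaluation; the detection record format and the 0.5 threshold below are assumptions, not values specified by the invention.

```python
def filter_detections(detections, conf_threshold=0.5):
    """Keep only boxes whose face-class confidence meets the chosen threshold."""
    return [det for det in detections if det["score"] >= conf_threshold]

# Example: only the first (high-confidence) detection is kept.
# filter_detections([{"box": (10, 20, 110, 140), "score": 0.92},
#                    {"box": (300, 40, 330, 80), "score": 0.31}])
```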

Description

Technical Field

[0001] The invention relates to a target detection method in the fields of computer vision and deep learning, and in particular to a real-time face detection method based on a deep convolutional neural network.

Background

[0002] A face recognition system takes face recognition technology as its core. Face recognition is an emerging biometric technology and an advanced technology in the international scientific and technological field. It widely adopts the regional feature analysis method, integrates computer image processing technology with the principles of biostatistics, uses image processing to extract facial feature points from video, and uses biostatistics to analyze them and establish mathematical models; it has broad application prospects. Face detection is a key link in an automatic face recognition system. However, due to the complex variations in facial detail, differences in appearance such as face shape and skin color, and differences in expression such as the...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62
Inventors: 殷光强, 向凯, 王志国, 王春雨
Owner: 四川电科维云信息技术有限公司