Multi-channel network-based video human face detection and identification method

A face detection and recognition technology, applied in the field of deep-learning-based video face recognition, which can solve problems such as the spatio-temporal relationship between frames not being considered and the resulting low accuracy.

Active Publication Date: 2017-06-13
ENJOYOR COMPANY LIMITED
6 Cites, 48 Cited by

AI Technical Summary

Problems solved by technology

[0007] Existing methods mainly extract face information from a subset of frames in the video image and use deep learning for training, detection, and recognition. The spatio-temporal relationship between the frames of the video is not considered, which leads to lower accuracy.

Method used



Examples


Embodiment Construction

[0071] The present invention will be further described below in conjunction with the accompanying drawings.

[0072] Referring to Figure 1 and Figure 2, a video face detection and recognition method based on a multi-channel network comprises the following steps:

[0073] S1: Video preprocessing

[0074] Receive the video data collected by the monitoring equipment, decompose it into frame-by-frame images, and add time information to each frame image. Specifically, the first frame image of the received video is image 1, and, in time order, the t-th frame image of the video is image t. In the following description, I_t denotes the t-th frame image and I denotes the set of frame images of the same video. After the video preprocessing is completed, the decomposed images are passed to the face target detection module in chronological order.
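As a minimal sketch of this preprocessing step, assuming OpenCV (cv2) is used for frame extraction; the function name and the per-frame dictionary layout are illustrative and not taken from the patent:

    import cv2

    def preprocess_video(video_path):
        """S1 sketch: decompose a surveillance video into frames and tag each with its time index t."""
        capture = cv2.VideoCapture(video_path)
        frames = []  # I: the set of frame images from the same video
        t = 0
        while True:
            ok, image = capture.read()
            if not ok:
                break
            t += 1
            # I_t: the t-th frame image, stored together with its time information
            frames.append({"t": t, "image": image})
        capture.release()
        # frames are handed to the face target detection module in chronological order
        return frames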

[0075] S2: Target face detection and pose coefficient calculation ...
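The body of this step is truncated above. Purely as a hedged sketch of what target face detection with a per-face pose score could look like, using a stock OpenCV Haar cascade as a stand-in detector; the patent's actual detector and pose coefficient calculation are not reproduced here:

    import cv2

    # Assumption: OpenCV's bundled Haar cascade stands in for the patent's face detector.
    _face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def detect_faces(frame):
        """Detect face regions in one frame I_t and attach an illustrative pose score."""
        gray = cv2.cvtColor(frame["image"], cv2.COLOR_BGR2GRAY)
        boxes = _face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        faces = []
        for (x, y, w, h) in boxes:
            # Placeholder pose coefficient: near-square boxes are treated as closer to frontal.
            pose_coefficient = min(w, h) / max(w, h)
            faces.append({"t": frame["t"], "box": (int(x), int(y), int(w), int(h)),
                          "pose_coefficient": pose_coefficient})
        return faces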



Abstract

The invention discloses a multi-channel network-based video human face detection and identification method. The method comprises the following steps: S1, video preprocessing: adding time information to each frame image; S2, detecting target human faces and calculating pose coefficients; S3, correcting human face pose: performing pose adjustment on the m human faces obtained in step S2; S4, extracting human face features based on a deep neural network; and S5, comparing human face features: for an input human face, obtaining its feature vector by means of step S4, measuring the matching degree between this feature vector and the vectors in a feature library by cosine distance, and adding to the candidate classes every class whose cosine distance to the face to be identified is greater than a set threshold phi; if the cosine distances between the feature of the face to be identified and the central features of all classes are all smaller than the threshold phi, it is considered that the database does not store this person's information and the identification ends. A multi-channel network-based video human face detection and identification method with relatively high accuracy is thereby provided.
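A minimal sketch of the feature comparison described in step S5, assuming the feature vectors are NumPy arrays and the feature library maps a class (person) identifier to its central feature vector; names such as identify and feature_library are illustrative, and only the cosine-distance-versus-threshold rule comes from the abstract:

    import numpy as np

    def cosine_distance(a, b):
        """Cosine match score as used in the abstract: larger means a better match."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(face_feature, feature_library, phi):
        """Match an input face feature against class-center features.

        Classes whose cosine distance to the input exceeds the threshold phi become
        candidate classes; if no class exceeds phi, the database is considered not
        to contain this person and identification ends (returns None).
        """
        scores = {cls: cosine_distance(face_feature, center)
                  for cls, center in feature_library.items()}
        candidates = {cls: s for cls, s in scores.items() if s > phi}
        if not candidates:
            return None  # all scores below phi: person not stored in the database
        return max(candidates, key=candidates.get)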

Description

Technical field

[0001] The invention relates to the field of video face detection and recognition, in particular to a video face recognition method based on deep learning.

Background technique

[0002] Video surveillance is an important part of a security system. With the development of video sensor technology and its supporting technologies, from the initial analog monitoring systems, through the subsequent digital-analog monitoring systems, to the IP monitoring systems now in use, the range of video surveillance applications has grown ever wider; in particular, public security departments deploy large numbers of video surveillance systems for public security management and suspect tracking.

[0003] The rapid development of video surveillance systems has produced a large amount of surveillance video data. In public security management and suspect tracking, one of the main ways these video data are processed is to find the video files tha...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V40/16, G06V40/165, G06V40/168, G06F18/22, G06F18/214, G06F18/25
Inventor: 钱小鸿, 车志聪, 吴越, 陈涛, 李建元
Owner: ENJOYOR COMPANY LIMITED