
Video face recognition method based on aggregation adversarial network

A video face recognition and aggregation network technology, applied to character and pattern recognition, instruments, and computer components. It addresses the low accuracy and low efficiency of existing video face recognition and achieves higher recognition efficiency and improved recognition performance.

Active Publication Date: 2020-08-14
JIANGNAN UNIV

AI Technical Summary

Problems solved by technology

[0006] In order to solve the problems of low efficiency and low precision in existing video face recognition technology, the present invention provides a video face recognition method. During recognition, multiple low-quality frames of a video sequence are aggregated into a single high-quality frontal face image, and the quality of the generated frontal image is improved through adversarial learning during aggregation, so that video face recognition can be performed accurately.
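Paragraph [0006] describes fusing several low-quality frames into one high-quality frontal face image before recognition. The following is a minimal PyTorch sketch of such an aggregation step, assuming a simple per-frame encoder, temporal average pooling across frames, and a decoder; the layer sizes and the pooling choice are illustrative assumptions, not the architecture claimed in the patent.

```python
# Minimal sketch of an aggregation network that fuses T low-quality frames into
# one frontal face image. The encoder/decoder layout and temporal average
# pooling are illustrative assumptions, not the patented architecture.
import torch
import torch.nn as nn

class AggregationNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-frame encoder: RGB frame -> feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: fused feature map -> single aggregated frontal image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, frames):                    # frames: (B, T, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w))      # encode each frame
        feats = feats.reshape(b, t, *feats.shape[1:]).mean(dim=1) # fuse across time
        return self.decoder(feats)                # (B, 3, H, W) aggregated image

# Example: fuse an 8-frame 112x112 sequence into a single image
agg = AggregationNet()
video = torch.randn(2, 8, 3, 112, 112)
print(agg(video).shape)  # torch.Size([2, 3, 112, 112])
```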



Examples


Embodiment 1

[0045] This embodiment provides a video face recognition method based on an aggregation adversarial network, see Figure 1. The method includes:

[0046] Step 1. Obtain the training set, including the video sequence dataset V and the corresponding static image dataset S:

[0047] Step 1.1. Obtain the training video sequence dataset, denoted as V = {v_1, v_2, ..., v_i, ..., v_N}, where v_i represents the video sequence of the i-th class, i = 1, 2, ..., N, and N is the number of classes of video sequences;

[0048] In practical applications, N represents the number of different people appearing in V, and the video sequences corresponding to the same person are called a class.

[0049] Step 1.2. Obtain the static image dataset corresponding to V, denoted as S = {s_1, s_2, ..., s_i, ..., s_N}, where s_i denotes the static image corresponding to the i-th class;

[0050] In practical applications, S can be obtained by shooting with a high-definition camera, but in some actual video sur...
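Step 1 pairs each class's video sequence v_i with a static image s_i of the same identity. Below is a hedged sketch of how such paired training data could be organized; the directory layout, file naming, and frame-sampling strategy are assumptions introduced for illustration only.

```python
# Illustrative sketch of a paired (video frames, static image, identity) dataset.
# Directory layout, file naming and frame sampling are assumptions for
# demonstration, not the patent's data-collection procedure.
import os
import random
import torch
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms as T

class VideoStillPairs(Dataset):
    """Returns (frames of v_i, static image s_i, class label i) per identity."""
    def __init__(self, video_root, still_root, frames_per_clip=8, size=112):
        self.ids = sorted(os.listdir(video_root))   # one sub-folder per identity
        self.video_root, self.still_root = video_root, still_root
        self.frames_per_clip = frames_per_clip
        self.tf = T.Compose([T.Resize((size, size)), T.ToTensor()])

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        pid = self.ids[idx]
        frame_dir = os.path.join(self.video_root, pid)
        names = sorted(os.listdir(frame_dir))
        picks = random.sample(names, min(self.frames_per_clip, len(names)))
        frames = torch.stack([
            self.tf(Image.open(os.path.join(frame_dir, n)).convert("RGB"))
            for n in picks
        ])                                          # (T, 3, H, W) video frames v_i
        still = self.tf(Image.open(
            os.path.join(self.still_root, pid + ".jpg")).convert("RGB"))
        return frames, still, idx                   # idx serves as the class label
```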



Abstract

The invention discloses a video face recognition method based on an aggregation adversarial network, and belongs to the technical field of video face recognition. The method adopts an aggregation adversarial network constructed from an aggregation network, a discrimination network and a recognition network. The aggregation network and the discrimination network form an adversarial pair, so that through competition the generated image becomes closer to the static image of the target set. A perceptual loss is computed in the high-dimensional feature space of the recognition network, so the generated image also becomes perceptually closer to the corresponding target-set static image, improving the performance of the aggregation network. The discrimination network adopts a softmax multi-dimensional output, so it can judge not only the authenticity of an image but also its identity class; the identity of the generated image is therefore closer to the ground truth, making subsequent recognition more accurate and more efficient.
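The abstract describes a discrimination network with a multi-class softmax output that judges both authenticity and identity, together with a perceptual loss computed in the recognition network's feature space. The sketch below shows one common way to combine these ideas, using an (N+1)-way classifier whose extra class means "fake" and an L2 feature-matching term; the exact loss weights, network bodies, and the use of an extra fake class are assumptions, not the patent's specification.

```python
# Sketch of an (N+1)-way softmax discriminator plus a perceptual (feature-space)
# loss from a recognition network. The loss form, weights and the extra "fake"
# class are illustrative assumptions; the patent states only the general idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Outputs N+1 logits: classes 0..N-1 are identities, class N means 'fake'."""
    def __init__(self, num_ids):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, num_ids + 1)

    def forward(self, x):
        return self.head(self.body(x))

def discriminator_loss(D, real, fake, labels, num_ids):
    # Real static images should be classified as their true identity,
    # generated (aggregated) images as the extra "fake" class.
    fake_label = torch.full_like(labels, num_ids)
    return (F.cross_entropy(D(real), labels) +
            F.cross_entropy(D(fake.detach()), fake_label))

def generator_loss(D, recog, fake, real, labels, w_adv=1.0, w_perc=10.0):
    # Adversarial term: the aggregation network tries to make D assign the
    # true identity (rather than "fake") to the generated image.
    adv = F.cross_entropy(D(fake), labels)
    # Perceptual term: match recognition-network features of generated and
    # target static images in the high-dimensional feature space.
    perc = F.mse_loss(recog(fake), recog(real))
    return w_adv * adv + w_perc * perc
```

Here `recog` stands for any fixed face-recognition embedding network used only for feature extraction, and the weights w_adv and w_perc are placeholders.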

Description

Technical field

[0001] The invention relates to a video face recognition method based on an aggregation adversarial network, and belongs to the technical field of video face recognition.

Background technique

[0002] Video face recognition technology, as the name suggests, performs face recognition on video. With the continuing development of technology and demand, video face recognition has been applied in many fields, such as intelligent security, video surveillance and public security investigation.

[0003] Video face recognition differs from face recognition based on a single image: its query set is a video sequence, while its target set is usually a collection of high-definition face images. The identity of the person in the video is determined by extracting face features from the video sequence and matching them against the target set.

[0004] However, in the most common video surveillance scenes for video face recognition, ...
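Paragraph [0003] states that recognition is performed by extracting face features from the query and matching them against the target set of high-definition images. A brief sketch of cosine-similarity gallery matching is given below; the embedding network `embed` is a placeholder for whatever recognition network is used.

```python
# Illustrative gallery matching: embed the aggregated query face and every
# target-set image, then return the identity with the highest cosine similarity.
import torch
import torch.nn.functional as F

@torch.no_grad()
def identify(embed, query_img, gallery_imgs, gallery_ids):
    q = F.normalize(embed(query_img.unsqueeze(0)), dim=1)   # (1, D) query feature
    g = F.normalize(embed(gallery_imgs), dim=1)             # (M, D) gallery features
    sims = (q @ g.t()).squeeze(0)                           # cosine similarities
    best = sims.argmax().item()
    return gallery_ids[best], sims[best].item()
```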

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00
CPC: G06V40/168; G06V40/172; G06V20/40; G06V20/46; Y02T10/40
Inventor: 陈莹, 金炜
Owner: JIANGNAN UNIV