
A Lip Recognition Method and Device Based on Dual Discriminator Generative Adversarial Network

A dual-discriminator and recognition-method technology, applied in biological neural network models, character and pattern recognition, instruments, etc., to achieve the effects of improving conversion quality, reducing the angle range, and improving accuracy

Active Publication Date: 2021-09-28
NAT UNIV OF DEFENSE TECH
Cites: 12 | Cited by: 0

AI Technical Summary

Problems solved by technology

However, in the feature extraction stage, most methods use only simple data preprocessing, such as random cropping, horizontal flipping, and increasing contrast. Such preprocessing can alleviate the overfitting problem only to a certain extent; it cannot adequately remove the effect of speaker state, such as face deflection, on feature extraction.
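The preprocessing steps named above correspond to standard image augmentations. A minimal sketch using torchvision (the library choice and the parameter values are ours, not the patent's) might look like this:

```python
# Minimal sketch of the simple preprocessing the passage criticizes:
# random cropping, horizontal flipping, and contrast perturbation.
# Crop size and jitter strength are illustrative values only.
from torchvision import transforms

lip_augment = transforms.Compose([
    transforms.RandomCrop(88),               # random cropping of the mouth region
    transforms.RandomHorizontalFlip(p=0.5),  # horizontal flipping
    transforms.ColorJitter(contrast=0.3),    # contrast adjustment
    transforms.ToTensor(),
])
```

As the passage notes, such appearance-level perturbations do not compensate for head deflection, which is what the adversarial angle conversion described below targets.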



Embodiment Construction

[0059] In one embodiment, as shown in Figure 1, a lip recognition method based on a dual-discriminator generative adversarial network is provided, comprising the following steps (a hedged code sketch of the pipeline follows the step list):

[0060] Step 101: extract face pictures at different angles from the video, and obtain a multi-angle lip data set according to the different head-deflection angles in the face pictures;

[0061] Step 102: obtain a generator data set from the multi-angle lip data set, and extract an identity-discriminator data set, an angle-discriminator data set, and an angle-classification data set from the multi-angle lip data set;

[0062] Step 103: train on the generator data set, the identity-discriminator data set, and the angle-discriminator data set to obtain an adversarial network data model, and train on the angle-classification data set to obtain an angle classifier;

[0063] Step 104: use the angle classifier to perform lip recognition on the video to be recognized to obtain a first lip im...
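Steps 101-104 amount to training a generator against two discriminators (one for identity/realism, one for deflection angle) plus a separate angle classifier. Below is a minimal, hedged PyTorch sketch of one adversarial update; the module interfaces, the loss form, and the assumption that both discriminators output raw logits are illustrative choices, not details published in the patent text.

```python
# Hedged sketch of the dual-discriminator training step (steps 101-103).
# Generator G, discriminators D_id / D_ang, and the optimizers are
# hypothetical modules; the patent does not disclose their exact
# architectures or loss weights.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def train_step(G, D_id, D_ang, opt_G, opt_D, lips, angles, frontal_lips):
    """One adversarial update: G converts multi-angle lip images toward a
    0-degree view; D_id judges whether a lip image looks real (identity
    preserved), D_ang judges whether its deflection angle looks frontal."""
    fake_frontal = G(lips, angles)

    # --- update both discriminators: real frontal lips -> 1, generated -> 0 ---
    opt_D.zero_grad()
    real_id, fake_id = D_id(frontal_lips), D_id(fake_frontal.detach())
    real_ang, fake_ang = D_ang(frontal_lips), D_ang(fake_frontal.detach())
    d_loss = (bce(real_id, torch.ones_like(real_id)) + bce(fake_id, torch.zeros_like(fake_id))
              + bce(real_ang, torch.ones_like(real_ang)) + bce(fake_ang, torch.zeros_like(fake_ang)))
    d_loss.backward()
    opt_D.step()

    # --- update the generator: fool both discriminators ---
    opt_G.zero_grad()
    gen_id, gen_ang = D_id(fake_frontal), D_ang(fake_frontal)
    g_loss = bce(gen_id, torch.ones_like(gen_id)) + bce(gen_ang, torch.ones_like(gen_ang))
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```

The angle classifier of step 103 would be trained separately on the angle-classification data set with an ordinary supervised objective (for example, cross-entropy over deflection-angle bins); step 104 then uses it at inference time.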



Abstract

The present application relates to a lip language recognition method and device based on a dual-discriminator generative adversarial network. The method includes: extracting face pictures at different angles from a video, and obtaining a multi-angle lip data set according to the different head-deflection angles in the face pictures; obtaining a generator data set, an identity-discriminator data set, and an angle-discriminator data set from the multi-angle lip data set, and training an adversarial network data model from them; using the adversarial network data model to perform lip recognition on the video to be recognized and convert it into a 0° lip image; and extracting a lip feature vector from the 0° lip image, modeling and classifying the lip feature vector to obtain a lip classification result, which is output to recognize the spoken content. Embodiments of the present invention produce a visual effect close to the real environment, can guide the model to adapt well to the actual application environment, and further improve the accuracy of the lip language recognition model.
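As a hedged illustration of the recognition flow summarized above (converting the lips in the video to be recognized into a 0° lip image, extracting a lip feature vector, and classifying it), the following sketch uses placeholder model names; none of these interfaces are taken from the patent itself:

```python
# Illustrative inference pipeline: all models here (angle_classifier, generator,
# feature_extractor, classifier) are hypothetical stand-ins for the trained
# components described in the abstract and in step 104.
import torch

@torch.no_grad()
def recognize(frames, angle_classifier, generator, feature_extractor, classifier):
    """frames: batch of cropped lip images taken from the video to be recognized."""
    angles = angle_classifier(frames).argmax(dim=1)   # predicted head-deflection angle per frame
    frontal = generator(frames, angles)               # converted 0-degree lip images
    features = feature_extractor(frontal)             # lip feature vectors
    logits = classifier(features)                     # model and classify the feature vectors
    return logits.argmax(dim=1)                       # lip classification result
```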

Description

Technical field

[0001] The present application relates to the field of artificial intelligence, and in particular to a lip recognition method and device based on a dual-discriminator generative adversarial network.

Background technique

[0002] Lip recognition is a complex task combining computer vision and natural language processing. It can be used to automatically infer the text content contained in visual and auditory information, and it has a wide range of applications, such as recovering speech from silent surveillance videos or movies. In recent years, the development of lip language recognition has been driven mainly by two factors. The first is the rapid development of deep learning, a technology derived from neuroscience, which has achieved great success in image processing, language models, and other fields. The second is the introduction of large data sets, which provide a large amount of training data and complex environmental variation for lip recog...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/00, G06K9/62, G06T17/00, G06N3/04
CPC: G06T17/00, G06V40/171, G06V40/20, G06N3/045, G06F18/214
Inventors: 刘丽, 张成伟, 张雪毅, 薛桂香, 赵雨
Owner: NAT UNIV OF DEFENSE TECH