
Lip reading method based on multi-granularity spatiotemporal feature perception of event camera

A multi-granularity spatiotemporal feature technology, applied in the field of lip reading, which can solve the problems of low video temporal resolution, redundant visual information, and high equipment power consumption, while achieving good recognition ability, improving accuracy, and avoiding the loss of spatiotemporal information.

Pending Publication Date: 2022-08-09
UNIV OF SCI & TECH OF CHINA
Cites: 0 | Cited by: 0

AI Technical Summary

Problems solved by technology

[0005] In order to overcome the shortcomings of the above-mentioned prior art, the present invention proposes a lip reading method based on multi-granularity spatiotemporal feature perception of an event camera, so as to perform lip reading more accurately from event stream signals, thereby solving the problems of lip reading based on traditional RGB cameras: low temporal resolution of the video, a large amount of visually redundant information, poor performance under extreme lighting conditions, and high power consumption of the devices during actual deployment.



Examples


Embodiment Construction

[0042] In this embodiment, the process of a lip reading method based on multi-granularity spatiotemporal feature perception of an event camera is shown in figure 1 and specifically follows the steps below:

[0043] Step 1. Event camera-based lip-reading data collection and preprocessing:

[0044] Volunteers are recruited and lip reading data is collected with an event camera. The collected data is segmented into word-level samples, and the spatial extent of each sample is cropped to a size of H×W, where H and W are the height and width, respectively. The event data contained in the i-th sample is {(x_ik, y_ik, t_ik, p_ik) | k = 1, 2, ..., n_i}, where x_ik, y_ik, t_ik, and p_ik respectively denote the abscissa, ordinate, timestamp, and polarity of the k-th event in the i-th sample, and n_i denotes the total number of events contained in the i-th sample. The shooting of the i-th sample is repeated multiple times, and all captured samples are recorded under the word label w_i, where w_i ∈ {1, 2, ..., m_v, ..., V}, V is the nu...
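For illustration only, the following Python sketch shows one possible way to hold such a word-level event sample and crop it to an H×W lip region. The NumPy layout, the helper name crop_sample, the sensor resolution, and the crop offsets are assumptions for this sketch, not details taken from the patent.

```python
# Minimal sketch of the word-level event sample described in step 1.
# Assumptions (not specified in the patent text): events are held in a
# NumPy structured array, and the lip-region offsets are already known.
import numpy as np

# One event = (x, y, t, p): column, row, timestamp, polarity.
EVENT_DTYPE = np.dtype([("x", np.int16), ("y", np.int16),
                        ("t", np.int64), ("p", np.int8)])

def crop_sample(events: np.ndarray, x0: int, y0: int, H: int, W: int) -> np.ndarray:
    """Keep only events inside an H x W window anchored at (x0, y0),
    then shift coordinates so the cropped window starts at (0, 0)."""
    keep = ((events["x"] >= x0) & (events["x"] < x0 + W) &
            (events["y"] >= y0) & (events["y"] < y0 + H))
    cropped = events[keep].copy()
    cropped["x"] -= x0
    cropped["y"] -= y0
    return cropped

# Example: an i-th sample with n_i synthetic events, cropped to 96 x 96 around the lips.
n_i = 10_000
rng = np.random.default_rng(0)
sample = np.zeros(n_i, dtype=EVENT_DTYPE)
sample["x"] = rng.integers(0, 346, n_i)            # sensor width, illustrative
sample["y"] = rng.integers(0, 260, n_i)            # sensor height, illustrative
sample["t"] = np.sort(rng.integers(0, 1_000_000, n_i))
sample["p"] = rng.choice([-1, 1], n_i)
lip_events = crop_sample(sample, x0=125, y0=82, H=96, W=96)
```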



Abstract

The invention discloses a lip reading method based on multi-granularity spatiotemporal feature perception of an event camera. The method comprises the following steps: 1. a lip reading technical scheme based on an event camera is provided for the first time; 2. according to the characteristics of the event stream signal, the original asynchronous signal stream data is converted into event frames with multiple temporal resolutions; 3. a two-stream network is constructed to extract spatial and temporal features of different granularities, with fine temporal features extracted by the high-temporal-resolution branch and complete spatial features extracted by the low-temporal-resolution branch; and 4. a sequence model is constructed to decode the feature sequence, converting the multi-granularity spatiotemporal features extracted by the feature extraction network into the probabilities of the words corresponding to the event stream signal. The event-camera-based lip reading scheme can overcome the problems that arise when a traditional camera is used for lip reading: low video temporal resolution, a large amount of visually redundant information, poor performance under extreme illumination conditions, and high power consumption of the equipment during actual deployment.
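To make steps 2-4 concrete, the following is a minimal, illustrative PyTorch sketch rather than the patent's actual network: the asynchronous event stream is binned into event frames at a high and a low temporal resolution, a small two-stream encoder extracts fine temporal and complete spatial features, and a GRU-based sequence model decodes the fused features into word probabilities. All layer sizes, bin counts, and module names here are placeholder assumptions.

```python
# Illustrative sketch of steps 2-4; NOT the patent's exact architecture.
import torch
import torch.nn as nn

def events_to_frames(events, num_bins, H, W):
    """Accumulate an asynchronous event list into `num_bins` event frames.
    `events` is an (N, 4) tensor of (x, y, t, p); more bins = finer temporal
    resolution, fewer bins = denser, more complete spatial frames."""
    x, y, t, p = events[:, 0].long(), events[:, 1].long(), events[:, 2], events[:, 3]
    t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9)          # timestamps -> [0, 1]
    b = (t_norm * num_bins).clamp(max=num_bins - 1).long()       # temporal bin index
    frames = torch.zeros(num_bins, 2, H, W)                      # 2 channels: ON / OFF polarity
    c = (p > 0).long()
    frames.index_put_((b, c, y, x), torch.ones_like(t_norm), accumulate=True)
    return frames

class TwoStreamLipReader(nn.Module):
    """High-temporal-resolution branch for fine temporal features,
    low-temporal-resolution branch for complete spatial features,
    followed by a GRU sequence decoder over the fused features."""
    def __init__(self, num_words, feat=64):
        super().__init__()
        self.branch = nn.Sequential(                 # shared per-frame 2D encoder (illustrative)
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.gru = nn.GRU(2 * feat, 128, batch_first=True)
        self.classifier = nn.Linear(128, num_words)

    def forward(self, high_frames, low_frames):
        # high_frames: (T_hi, 2, H, W), low_frames: (T_lo, 2, H, W)
        f_hi = self.branch(high_frames)                          # (T_hi, feat) fine temporal features
        f_lo = self.branch(low_frames)                           # (T_lo, feat) complete spatial features
        f_lo = f_lo.mean(dim=0, keepdim=True).expand_as(f_hi)    # broadcast spatial context over time
        fused = torch.cat([f_hi, f_lo], dim=1).unsqueeze(0)      # (1, T_hi, 2*feat)
        _, h = self.gru(fused)
        return self.classifier(h[-1]).softmax(dim=-1)            # word probabilities

# Usage with random events: 96 x 96 crop, 16 fine bins vs. 4 coarse bins.
events = torch.rand(5000, 4)
events[:, 0] = (events[:, 0] * 96).floor()        # x
events[:, 1] = (events[:, 1] * 96).floor()        # y
events[:, 3] = torch.sign(events[:, 3] - 0.5)     # polarity in {-1, +1}
hi = events_to_frames(events, num_bins=16, H=96, W=96)
lo = events_to_frames(events, num_bins=4, H=96, W=96)
probs = TwoStreamLipReader(num_words=100)(hi, lo)
```

In this sketch the two branches share a single encoder for brevity; the abstract describes branches dedicated to fine temporal and complete spatial features, so separate encoders per branch would be a closer match to the described method.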

Description

Technical Field

[0001] The invention belongs to the field of lip reading, and in particular relates to a lip reading method based on multi-granularity spatiotemporal feature perception of an event camera.

Background Technique

[0002] Lip reading technology aims to decode the textual content of a speaker's speech from visual information about the movement of the speaker's lips. It has important applications in health care, auxiliary speech recognition in noisy environments, public security, and human-computer interaction, and has attracted great attention from academia and industry over the past 40 years. Lip reading is a very challenging task, which is embodied in the following five aspects: 1. video from a traditional RGB camera has low temporal resolution and contains a lot of visually redundant information such as the background; 2. different speakers' pronunciation habits and facial appearances differ from speaker to speaker; 3. words with similar pronunciations are visually ambiguous; 4. vide...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06V40/20; G06V10/82; G06N3/04; G06N3/08
CPC: G06V40/20; G06V10/82; G06N3/08; G06N3/047; G06N3/048; G06N3/045
Inventor: 查正军, 曹洋, 王洋, 吴枫, 谭赣超
Owner: UNIV OF SCI & TECH OF CHINA