
Dynamic vision sensor-oriented brain-like gesture sequence identification method

A vision sensor and sequence recognition technology, applied in the fields of neural learning methods, character and pattern recognition, instruments, etc., which can solve problems such as the lack of biological interpretability and the inability to find extreme points.

Active Publication Date: 2021-04-02
ZHEJIANG LAB +1

AI Technical Summary

Problems solved by technology

[0004] In recent years, biologically inspired SNN-based supervised learning algorithms have emerged, which can be classified as threshold-based or membrane-potential-based plasticity rules. Although membrane-potential-based methods have shown better learning performance than threshold-based methods, such methods still lack biological interpretability, and the rule sometimes fails to find the required extreme points.
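For illustration, the sketch below shows a minimal membrane-potential-driven supervised update of the family discussed above (in the spirit of Tempotron-style rules): on an error, the weights are adjusted using the inputs at the time the membrane potential peaked. The values of tau_m, v_th, eta and the single-spike error criterion are assumptions made for the sketch; this is not the learning rule proposed by the invention.

```python
import numpy as np

def lif_forward(x_spikes, w, tau_m=20.0, v_th=1.0, dt=1.0):
    """Integrate input spike trains (shape T x N) with weights w; return output spikes
    and the membrane potential trace."""
    v, spikes, potentials = 0.0, [], []
    for x_t in x_spikes:
        v = v * (1.0 - dt / tau_m) + np.dot(w, x_t)   # leaky integration of weighted inputs
        potentials.append(v)
        fired = v >= v_th
        spikes.append(float(fired))
        if fired:
            v = 0.0                                   # reset after a spike
    return np.array(spikes), np.array(potentials)

def potential_driven_update(w, x_spikes, should_fire, eta=0.05, v_th=1.0):
    """On an error, adjust the weights using the inputs at the time the membrane
    potential peaked, pushing that peak toward (or away from) the firing threshold."""
    spikes, potentials = lif_forward(x_spikes, w, v_th=v_th)
    fired = spikes.any()
    t_peak = int(np.argmax(potentials))
    if should_fire and not fired:         # missed spike: potentiate
        w = w + eta * x_spikes[t_peak]
    elif not should_fire and fired:       # unwanted spike: depress
        w = w - eta * x_spikes[t_peak]
    return w

# Toy usage: one neuron, 50 time steps, 20 input channels of random spikes
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, 20)
x = (rng.random((50, 20)) < 0.1).astype(float)
w = potential_driven_update(w, x, should_fire=True)
```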



Examples


Embodiment 1

[0145] The three most similar gestures in the GESTURE-DVS data set are selected, gesture sequences are randomly formed from them, and the sequences are recognized according to the recognition method described above. The recognition results are shown in Figure 3; as can be seen from the figure, the present invention can successfully recognize each gesture in the gesture sequence.
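A sketch of how such test sequences can be spliced together from per-gesture event streams is shown below; the loader for the GESTURE-DVS files and the gesture labels are not reproduced here, so the synthetic streams in the usage example are placeholders.

```python
import numpy as np

def concat_gesture_streams(gesture_streams, gap=0.1):
    """gesture_streams: list of (t, x, y, p) event arrays, one per gesture;
    returns one stream with timestamps shifted so the gestures play back to back."""
    out, offset = [], 0.0
    for ev in gesture_streams:
        ev = ev.copy()
        ev[:, 0] += offset - ev[:, 0].min()   # start this gesture at the current offset
        out.append(ev)
        offset = ev[:, 0].max() + gap         # short gap before the next gesture
    return np.vstack(out)

def random_sequence(gesture_pool, length, rng=None):
    """Randomly draw `length` gestures (with replacement) and splice them into one stream."""
    rng = rng or np.random.default_rng(0)
    picks = rng.integers(0, len(gesture_pool), length)
    return concat_gesture_streams([gesture_pool[i] for i in picks]), picks

# Toy usage with synthetic per-gesture streams (placeholders for real DVS recordings)
rng = np.random.default_rng(1)
pool = [np.column_stack([np.sort(rng.uniform(0, 1, 200)),
                         rng.integers(0, 128, 200),
                         rng.integers(0, 128, 200),
                         rng.integers(0, 2, 200)]) for _ in range(3)]
stream, labels = random_sequence(pool, length=5)
```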

[0146] Further analysis of the model performance shows that the present invention has the following advantages:

[0147] (1) Noise robustness: in Figure 4, the first row shows the reconstructed frames of the original event stream, the second row the time-plane-based method, the third row the denoised-time-plane-based method, and the fourth row the spiking spatiotemporal-plane-based method proposed in the present invention. As shown in the fourth row of Figure 4, the feature extraction method proposed in the present invention can not only effectively filter the noise events in the event stream, b...
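For context, the denoised-time-plane baseline mentioned above typically relies on a background-activity filter of the kind sketched below, which keeps an event only if a spatial neighbour fired recently. The 3x3 window and the 10 ms time constant are illustrative assumptions, and this is the baseline-style filter, not the spiking spatiotemporal-plane method of the invention.

```python
import numpy as np

def background_activity_filter(events, h=128, w=128, dt=0.01):
    """Keep an event only if one of its 8 spatial neighbours (or the pixel itself)
    produced an event within the last dt seconds; isolated noise events are dropped."""
    last = np.full((h, w), -np.inf)           # timestamp of the most recent event per pixel
    kept = []
    for t, x, y, p in events:
        xi, yi = int(x), int(y)
        neigh = last[max(0, yi - 1):yi + 2, max(0, xi - 1):xi + 2]
        if (t - neigh).min() <= dt:           # some neighbour was active recently
            kept.append((t, x, y, p))
        last[yi, xi] = t
    return np.array(kept)
```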



Abstract

The invention relates to a dynamic vision sensor-oriented brain-like gesture sequence identification method. The method comprises the following steps: address-event representation (AER) data capture, spatiotemporal convolution, spike pooling, spatiotemporal feature learning, and gesture sequence identification using the learned DoubleSTS model. The method has the characteristics of high noise robustness, high precision, high efficiency, rapid convergence, time sensitivity, and brain-like processing, and each gesture in the gesture sequence can be successfully recognized.
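Read literally, the abstract describes a feed-forward pipeline; the sketch below strings the named stages together on a toy event stream. The array shapes, kernel, spike threshold, and OR-pooling window are assumptions made for illustration, and the learned DoubleSTS model itself is not reproduced.

```python
import numpy as np

def events_to_voxels(events, h=32, w=32, t_bins=10):
    """Bin (t, x, y, p) AER events into a T x H x W spike-count volume (polarity ignored)."""
    vol = np.zeros((t_bins, h, w))
    t = events[:, 0]
    t_idx = np.clip(((t - t.min()) / (t.ptp() + 1e-9) * t_bins).astype(int), 0, t_bins - 1)
    for ti, x, y in zip(t_idx, events[:, 1].astype(int), events[:, 2].astype(int)):
        vol[ti, y % h, x % w] += 1
    return vol

def spatiotemporal_conv(vol, kernel, threshold=0.2):
    """Valid 3D convolution over (t, y, x) followed by a spike threshold."""
    kt, kh, kw = kernel.shape
    T, H, W = vol.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[t, y, x] = np.sum(vol[t:t + kt, y:y + kh, x:x + kw] * kernel)
    return (out >= threshold).astype(float)   # emit a spike where the response crosses threshold

def spike_pool(spikes, size=2):
    """Spatial OR-pooling of binary spike maps: any spike in the window makes the output fire."""
    T, H, W = spikes.shape
    H2, W2 = H // size, W // size
    pooled = spikes[:, :H2 * size, :W2 * size].reshape(T, H2, size, W2, size)
    return pooled.max(axis=(2, 4))

# Toy usage on random (t, x, y, polarity) events
rng = np.random.default_rng(0)
events = np.column_stack([np.sort(rng.uniform(0, 1, 500)),
                          rng.integers(0, 32, 500),
                          rng.integers(0, 32, 500),
                          rng.integers(0, 2, 500)])
features = spike_pool(spatiotemporal_conv(events_to_voxels(events), np.ones((3, 3, 3)) / 9.0))
print(features.shape)   # pooled spike feature maps, to be fed to the sequence classifier
```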

Description

Technical field

[0001] The invention relates to a brain-like gesture sequence recognition method, in particular to a brain-like gesture sequence recognition method oriented to dynamic vision sensors.

Background technique

[0002] In recent years, brain-like computing has gradually become a major research hotspot. In the field of neuromorphic vision, by simulating the biological retina, researchers have developed a series of neuromorphic vision sensors, also known as silicon retinas, event cameras, and so on. Unlike traditional cameras that output frame images, neuromorphic vision sensors respond only to dynamic changes in the scene and encode light intensity changes into asynchronous spatiotemporal event stream data, which has the advantages of low power consumption, low latency, and high dynamic range. Because this output differs fundamentally from frame images, most existing methods cannot be directly used to process this kind of event stream data. SNNs are expected to enable low-power asynchronous event information integr...
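As a rough illustration of the encoding described above, the sketch below emits ON/OFF events wherever the per-pixel log intensity changes by more than a contrast threshold between two frames. The threshold value is an assumption, and a real sensor operates asynchronously per pixel with refractory periods and noise rather than on frame pairs.

```python
import numpy as np

def dvs_like_events(frame_prev, frame_curr, t, threshold=0.2):
    """Emit (t, x, y, polarity) events where log intensity changed by more than threshold.
    Polarity 1 = brightness increase (ON), 0 = decrease (OFF)."""
    d = np.log(frame_curr + 1e-6) - np.log(frame_prev + 1e-6)
    ys, xs = np.nonzero(np.abs(d) >= threshold)
    return [(t, int(x), int(y), 1 if d[y, x] > 0 else 0) for y, x in zip(ys, xs)]

# Toy usage: a bright square moving one pixel to the right between two frames
f0 = np.zeros((8, 8)); f0[2:5, 2:5] = 1.0
f1 = np.zeros((8, 8)); f1[2:5, 3:6] = 1.0
events = dvs_like_events(f0, f1, t=0.001)
```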


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/46; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/113; G06V10/449; G06N3/045; G06F18/253
Inventor: 唐华锦, 董峻妃, 潘纲
Owner: ZHEJIANG LAB