
Convolution neural network training method, gesture recognition method, device and apparatus

A convolutional neural network training technology, applied in the field of convolutional neural network training methods, gesture recognition methods, and devices, which solves the problems of high processing complexity and low processing efficiency, achieving simplified complexity, reduced data volume, and improved processing efficiency.

Active Publication Date: 2019-02-19
GCI SCI & TECH

AI Technical Summary

Problems solved by technology

[0004] In implementing embodiments of the present invention, the inventors found that in the prior art, gestures are the various postures and actions produced by the human hand, or by the hand and arm in combination. Recognizing and tracking gestures in a high-dimensional observation space requires processing a large amount of gesture feature information. During early-stage gesture recognition training, or during later gesture recognition, the gesture feature data are often excessive, resulting in high processing complexity and low processing efficiency.



Examples


Embodiment 1

[0053] Referring to Figure 1, a schematic flowchart of the convolutional neural network training method provided in Embodiment 1 of the present invention:

[0054] A training method for a convolutional neural network, comprising:

[0055] S11. Obtain the gesture image to be trained;

[0056] S12. Segment and extract the gesture image using Mask R-CNN target detection to obtain the key point coordinates corresponding to each gesture in the gesture image;

[0057] S13. Mark each key point according to its visibility to obtain marked feature information, wherein the feature information includes the key point coordinates and the corresponding visibility flag;

[0058] S14. For each gesture image, perform dimensionality reduction on the marked feature information based on a manifold learning algorithm, and obtain a feature point distribution image after dimensionality reduction;

[0059] S15. For each feature point distribution image, ...
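Steps S12–S14 above can be illustrated with a minimal sketch. The patent does not name a specific manifold learning algorithm, so Isomap is used here as one concrete stand-in, and the keypoint data are synthetic; all counts and names are illustrative assumptions.

```python
# Hedged sketch of steps S12-S14: per-image keypoints with visibility flags
# (the "marked feature information") are reduced to a 2-D feature point per
# gesture image via a manifold learning algorithm (Isomap as a stand-in).
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# Assumed setup: 40 gesture images, 21 hand keypoints each, (x, y) coordinates
n_images, n_keypoints = 40, 21
coords = rng.uniform(0, 256, size=(n_images, n_keypoints, 2))
visible = rng.integers(0, 2, size=(n_images, n_keypoints, 1))  # visibility flag

# S13: marked feature information = coordinates + visibility flag per keypoint
features = np.concatenate([coords, visible], axis=-1)

# S14: flatten each image's marked features and reduce dimensionality
flat = features.reshape(n_images, -1)            # shape (40, 63)
embedding = Isomap(n_neighbors=5, n_components=2).fit_transform(flat)

print(embedding.shape)  # one 2-D feature point per gesture image
```

The 2-D points produced here play the role of the "feature point distribution image" that the subsequent steps label and feed into training.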

Embodiment 2

[0089] Referring to Figure 2, a schematic flowchart of the convolutional neural network-based gesture recognition method provided by Embodiment 2 of the present invention, including:

[0090] S21. Obtain the trained convolutional neural network, wherein the trained convolutional neural network is obtained by training an initial convolutional neural network according to the feature point distribution images and the corresponding gesture instruction labels; wherein each gesture instruction label is obtained by gesture semantic annotation according to the combination of corresponding feature points in the feature point distribution image; wherein the feature point distribution image is obtained by dimensionality reduction, based on the manifold learning algorithm, of the feature information identified in the gesture image to be trained, wherein the feature information includes key point coordinates obtained by segmenting the gestur...
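The recognition flow in Embodiment 2 can be sketched as follows. A new gesture image passes through the same keypoint extraction and dimensionality reduction as training, and the trained network then maps the resulting feature point to a gesture instruction label. All names here are illustrative, and a nearest-neighbor lookup stands in for the trained convolutional neural network.

```python
# Hedged sketch of the recognition step: map one reduced feature point to a
# gesture instruction label. A 1-NN lookup over the training distribution
# stands in for the trained CNN; labels and points are toy examples.
import numpy as np

def recognize(feature_point, train_points, train_labels):
    """Return the gesture instruction label for one reduced feature point."""
    dists = np.linalg.norm(train_points - feature_point, axis=1)
    return train_labels[int(np.argmin(dists))]

# Toy "trained" feature point distribution with semantically annotated labels
train_points = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
train_labels = ["fist", "open_palm", "point"]

print(recognize(np.array([0.9, 1.1]), train_points, train_labels))  # open_palm
```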

Embodiment 3

[0097] Referring to Figure 3, a schematic structural diagram of the convolutional neural network training device provided in Embodiment 3 of the present invention:

[0098] A training device for a convolutional neural network, comprising:

[0099] Gesture acquisition module 31, used to acquire gesture images to be trained;

[0100] The coordinate acquisition module 32 is used to segment and extract the gesture image using Mask R-CNN target detection to obtain the key point coordinates corresponding to each gesture in the gesture image;

[0101] The feature information acquisition module 33 is configured to mark each key point according to its visibility to obtain the marked feature information, wherein the feature information includes the key point coordinates and the corresponding visibility flags;

[0102] A dimensionality reduction module 34, configured to perform dimensionality reduction on the identified feature information based on a manif...
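The modules 31–34 above can be organized as a simple class, shown here as an illustrative sketch only: module names follow the text, while the internals are placeholders rather than the patent's implementation.

```python
# Illustrative sketch of the training-device modules 31-34 as a Python class.
# The Mask R-CNN and manifold-learning internals are placeholder stubs.
class CNNTrainingDevice:
    def acquire_gestures(self, images):      # gesture acquisition module 31
        return list(images)

    def extract_keypoints(self, image):      # coordinate acquisition module 32
        return [(0, 0)]                      # placeholder keypoint coordinates

    def mark_visibility(self, keypoints):    # feature information module 33
        return [(x, y, 1) for (x, y) in keypoints]

    def reduce_dimensions(self, features):   # dimensionality reduction module 34
        return features[:1]                  # placeholder reduction

device = CNNTrainingDevice()
marked = device.mark_visibility(device.extract_keypoints(None))
print(marked)  # [(0, 0, 1)]
```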



Abstract

The invention discloses a training method for a convolutional neural network. The method includes: first, obtaining gesture images to be trained; segmenting and extracting the gesture images using Mask R-CNN target detection to obtain the coordinates of the key points corresponding to each gesture in the gesture image; for each key point, marking it according to its visibility to obtain marked feature information, wherein the feature information comprises the key point coordinates and the corresponding visibility flag; for each gesture image, reducing the dimensionality of the marked feature information based on a manifold learning algorithm to obtain a reduced-dimensionality feature point distribution image; for each feature point distribution image, obtaining a gesture instruction label after gesture semantic labeling according to the combination of corresponding feature points in the feature point distribution image; and training the initial convolutional neural network according to the feature point distribution images and the corresponding gesture instruction labels to obtain the trained convolutional neural network, which simplifies processing complexity and improves processing efficiency.

Description

Technical Field

[0001] The invention relates to the technical field of information processing, in particular to a convolutional neural network training method, gesture recognition method, and device.

Background

[0002] At present, human-computer interaction technology has gradually shifted from being computer-centered to user-centered, becoming an interactive technology with multiple channels and multiple media. Gesture is a natural, intuitive, and easy-to-learn means of human-computer interaction. From traditional mouse and keyboard input, to current infrared and wireless input, to using the human hand directly as the computer's input device, communication between human and machine no longer requires an intermediate medium, and the user can simply define an appropriate gesture to control surrounding machines. This makes human-computer interaction more convenient and rich. At present, the more active human-computer interaction technologies mainly include speech recognition,...


Application Information

IPC(8): G06K9/00; G06K9/62
CPC: G06V40/28; G06V40/113; G06F18/2413
Inventor 杜翠凤周冠宇温云龙杨旭周善明张添翔叶绍恩梁晓文
Owner GCI SCI & TECH