Computer vision-based dynamic gesture recognition method

A computer-vision dynamic gesture recognition technology, applied in the field of image processing. It addresses problems such as stringent background-color requirements, cumbersome recognition steps, and long processing time, and achieves fast recognition speed and simple steps while relaxing restrictive background requirements.

Active Publication Date: 2018-03-16
XIDIAN UNIV

Problems solved by technology

However, this method requires image preprocessing and places high demands on the background color. Detection and recognition of the gesture are carried out in two steps: the position of the gesture is obtained first, and the current gesture is then classified to obtain its state. The recognition procedure is therefore cumbersome and time-consuming.



Examples


Embodiment 1

[0035] As a natural and intuitive means of communication, gestures have promising application prospects: controlling smart devices in virtual reality with prescribed gestures; serving as a sign-language interpreter to ease communication for deaf-mute people; and enabling autonomous vehicles to recognize traffic-police gestures automatically. At present, vision-based gesture recognition technology generally follows the traditional approach: first segment the gesture, then classify it. This approach demands high-quality images and struggles with gestures against complex backgrounds, which has limited the development of gesture recognition applications. The present invention addresses this status quo with research and innovation, and proposes a computer-vision-based dynamic gesture recognition method, see figure 1, including the following steps:

[0036] (1) Collect gesture images: Divide the collected gesture images into a train...

Embodiment 2

[0054] This computer-vision-based dynamic gesture recognition method is the same as in Embodiment 1. Clustering the manually labeled ground-truth boxes in step (2) of the present invention specifically includes the following steps:

[0055] (2a) Read the manually labeled ground-truth box data of the training-set and test-set samples;

[0056] (2b) Set the number of cluster centers and run the k-means clustering algorithm, clustering according to the distance measure d(box, centroid) given by the following formula to obtain the prior boxes:

[0057] d(box,centroid)=1-IOU(box,centroid)

[0058] Here, centroid denotes the randomly selected cluster-center box, box denotes any other ground-truth box, and IOU(box, centroid) denotes the similarity between the two boxes, i.e. their overlap ratio, computed as the area of the intersection of the center box and the other box divided by the area of their union.

[0...
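The anchor-clustering step of (2a)-(2b) can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes boxes are compared by width and height only (both anchored at the origin), as is common for prior-box clustering, and the function names are hypothetical.

```python
import numpy as np

def iou_wh(box, centroid):
    """IoU between two boxes compared by (width, height) only,
    i.e. both boxes anchored at the origin."""
    w = min(box[0], centroid[0])
    h = min(box[1], centroid[1])
    inter = w * h
    union = box[0] * box[1] + centroid[0] * centroid[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster labeled (w, h) boxes with distance d = 1 - IoU
    to obtain k prior boxes."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # assign each box to the nearest centroid under d = 1 - IoU
        d = np.array([[1 - iou_wh(b, c) for c in centroids] for b in boxes])
        assign = d.argmin(axis=1)
        # recompute each centroid as the mean of its assigned boxes
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids
```

Using IoU-based distance instead of Euclidean distance keeps large and small boxes on an equal footing, since the loss of a large box's squared error would otherwise dominate the clustering.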

Embodiment 3

[0061] This computer-vision-based dynamic gesture recognition method is the same as in Embodiments 1-2. Constructing the convolutional neural network in step (3) of the present invention includes the following steps:

[0062] (3a) Based on the GoogLeNet convolutional neural network, use simple 1×1 and 3×3 convolution kernels to construct a convolutional neural network containing G convolutional layers and 5 pooling layers; in this example G is taken as 25.
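As a rough illustration of how the 5 pooling layers determine the final prediction grid: assuming stride-2 pooling and size-preserving ("same"-padded) convolutions (both assumptions; neither is stated in the source), each pooling stage halves the spatial resolution.

```python
def output_grid(input_size, num_pools=5, pool_stride=2):
    """Spatial size of the feature map after num_pools stride-2 poolings,
    assuming the convolutional layers preserve spatial size."""
    size = input_size
    for _ in range(num_pools):
        size //= pool_stride
    return size

# e.g. a hypothetical 416x416 input reduced by 5 stride-2 pools
grid = output_grid(416)  # 416 -> 208 -> 104 -> 52 -> 26 -> 13
```

The resulting S×S grid is what the loss function below sums over (the S² term).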

[0063] (3b) Train the constructed convolutional network according to the loss function of the following formula:

[0064]

[0065] Here, the first term of the loss function is the coordinate loss of the center point of the predicted target box, where λ_coord is the coordinate-loss coefficient, 1 ≤ λ_coord ≤ 5, taken as 3 in this example; this ensures that the predicted location of the gesture is accurate. S² denotes the number of grid cells the image is divided into, and B denotes the num...
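The center-coordinate term described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's full loss: it covers only the first term, uses hypothetical array shapes, and assumes the standard one-box-per-object responsibility mask.

```python
import numpy as np

def coord_center_loss(pred_xy, true_xy, obj_mask, lambda_coord=3.0):
    """First term of the loss: squared error on predicted box-center
    coordinates, counted only where a box is responsible for an object.

    pred_xy, true_xy: arrays of shape (S*S, B, 2) holding (x, y) centers.
    obj_mask: (S*S, B) indicator, 1 where box j in cell i matches an object.
    lambda_coord: coordinate-loss weight (1 <= lambda_coord <= 5; 3 here).
    """
    sq_err = ((pred_xy - true_xy) ** 2).sum(axis=-1)   # (S*S, B)
    return lambda_coord * (obj_mask * sq_err).sum()
```

Weighting this term with λ_coord > 1 makes localization errors cost more than the (far more numerous) no-object confidence terms, so the network does not drive all predictions toward "no object".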


Abstract

The invention discloses a computer-vision-based dynamic gesture recognition method aimed at gesture recognition under complicated backgrounds. The method is realized through the following steps: acquiring a gesture data set and labeling it manually; clustering the ground-truth boxes of the labeled image set to obtain trained prior boxes; constructing an end-to-end convolutional neural network capable of predicting target position, size, and category simultaneously; training the network to obtain weights; loading the weights into the network; inputting a gesture image for recognition; processing the obtained position coordinates and category information with non-maximum suppression to obtain the final recognition result image; and recording recognition information in real time to obtain a dynamic gesture interpretation result. The method overcomes the prior-art defect that hand detection and category recognition are carried out in separate steps, greatly simplifies the gesture recognition process, improves recognition accuracy and speed, strengthens the robustness of the recognition system, and realizes a dynamic gesture interpretation function.
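The non-maximum suppression step mentioned in the abstract can be sketched as standard greedy NMS. This is a generic illustration, not the patent's exact procedure; the box format (x1, y1, x2, y2) and the IoU threshold value are assumptions.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes.
    Returns indices of kept boxes, highest score first."""
    order = scores.argsort()[::-1]          # candidates by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))                 # keep the current best box
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of box i with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # drop boxes that overlap the kept box too strongly
        order = rest[iou <= iou_thresh]
    return keep
```

Because the network predicts a box per grid cell and prior box, the same gesture is typically detected several times; NMS collapses these duplicates to one box per hand before the result is recorded.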

Description

Technical field

[0001] The invention belongs to the technical field of image processing and further relates to image target recognition technology, in particular to a computer-vision-based dynamic gesture recognition method. It can be used for position detection and state recognition of gestures in images, providing more accurate information for downstream gesture recognition applications such as sign-language translation and game interaction.

Background technique

[0002] In recent years, with the development of related disciplines such as computer vision and machine learning, human-computer interaction technology has gradually been shifting from "computer-centric" to "human-centric". The natural user interface, which uses the human body itself as the communication medium, provides operators with a more intuitive and comfortable interactive experience, encompassing face recognition, gesture recognition, and body-posture recognition. Among them, gestures in daily life, as a nat...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62; G06N3/04
CPC: G06V40/28; G06N3/045; G06F18/23; G06F18/2415
Inventors: 王爽, 焦李成, 方帅, 王若静, 杨孟然, 权豆, 孙莉, 侯彪, 马晶晶, 刘飞航
Owner XIDIAN UNIV