First-person-view gesture recognition method based on dynamic image and video subsequence

A technology based on dynamic images, applied in the field of first-person perspective gesture recognition, which can solve the problems of a complex feature-extraction process and low precision, and achieve the effects of improved recognition accuracy, improved precision, and improved processing speed.

Inactive Publication Date: 2018-12-18
SICHUAN UNIV


Problems solved by technology

[0006] The purpose of the present invention is to provide a first-person perspective gesture recognition method based on dynamic images and video subsequences.


Examples


Embodiment 1

[0085] As shown in Figures 1-5, the implementation is as follows:

[0086] Step 1: Collect color video and depth video, and preprocess the video to obtain video data;

[0087] Step 2: Construct dynamic images and extract video representative sequences based on video data;

[0088] Step 3: Input dynamic images and video representative sequences into the fine-tuned neural network model to obtain features, and input the features into the SVM classifier to obtain category probabilities;

[0089] Step 4: Fuse category probabilities to complete gesture recognition.
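The pipeline ends with a late fusion of the category probabilities produced by the two streams (dynamic image and video representative sequence). A minimal sketch of one plausible fusion rule, a weighted average — an assumption, since this excerpt does not spell out the patent's exact fusion scheme:

```python
import numpy as np

def fuse_probabilities(p_dynamic, p_sequence, weight=0.5):
    """Late fusion of the two streams' category probabilities.

    `weight` balances the dynamic-image stream against the
    video-representative-sequence stream. A simple weighted
    average is assumed here; the excerpt does not give the
    patent's exact fusion rule.
    """
    p_dynamic = np.asarray(p_dynamic, dtype=float)
    p_sequence = np.asarray(p_sequence, dtype=float)
    fused = weight * p_dynamic + (1.0 - weight) * p_sequence
    return int(np.argmax(fused)), fused
```

With `weight=0.5` this reduces to an equal vote between the two SVM outputs; the recognized gesture is the class with the largest fused probability.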

[0090] Fine-tuning of neural network modules for dynamic image features:

[0091] After the network is built, it is trained. Since the sample data for gesture recognition is insufficient, a fine-tuning approach is used: a model pre-trained on a large-scale data set such as ImageNet is selected, and the pre-trained model is then fine-tuned on the new data set. The pre-training model contai...
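As a toy illustration of the freeze-and-retrain idea behind fine-tuning (not the patent's actual network): the backbone below is a fixed projection standing in for the ImageNet-pretrained layers, and only a softmax classification head is trained on the small new data set.

```python
import numpy as np

def pretrained_backbone(x, W_frozen):
    # Stand-in for the ImageNet-pretrained layers:
    # W_frozen is never updated during fine-tuning.
    return np.maximum(x @ W_frozen, 0.0)  # ReLU features

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fine_tune_head(X, y, W_frozen, n_classes, lr=0.5, epochs=200):
    """Retrain only the classification head on the small gesture
    data set; the pretrained weights W_frozen stay fixed."""
    F = pretrained_backbone(X, W_frozen)
    W_head = np.zeros((F.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                     # one-hot labels
    for _ in range(epochs):
        P = softmax(F @ W_head)
        W_head -= lr * F.T @ (P - Y) / len(X)    # cross-entropy gradient
    return W_head
```

In a real implementation the backbone would be a deep CNN whose early layers are frozen (or given a very small learning rate) while the final layers are retrained on the gesture data.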

Embodiment 2

[0096] As shown in Figures 1-5, the implementation is as follows:

[0097] The preprocessing in step 1 includes conventional preprocessing of the depth video and preprocessing of the color video after gesture segmentation. The preprocessing includes converting video frames to grayscale to obtain a grayscale image, applying dilation and hole filling to the grayscale image, and converting the grayscale image to a binary image.
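The preprocessing chain above (grayscale, dilation, hole filling, binarization) can be sketched with NumPy; a production version would use OpenCV (`cv2.cvtColor`, `cv2.dilate`, `cv2.floodFill`), so treat this as an illustrative stand-in.

```python
import numpy as np

def to_grayscale(frame):
    # frame: H x W x 3 color array; standard luma weights.
    return frame[..., :3] @ np.array([0.299, 0.587, 0.114])

def binarize(gray, thresh=127):
    return (gray > thresh).astype(np.uint8)

def dilate(binary, iters=1):
    # 4-neighbour binary dilation (cross structuring element).
    out = binary.copy()
    for _ in range(iters):
        p = np.pad(out, 1)
        out = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
               | p[1:-1, :-2] | p[1:-1, 2:])
    return out

def fill_holes(binary):
    # Flood the background inward from the image border; background
    # pixels the flood never reaches are holes and become foreground.
    bg = np.zeros_like(binary)
    border = np.zeros(binary.shape, dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    bg[border & (binary == 0)] = 1
    while True:
        grown = (dilate(bg) & (binary == 0)).astype(binary.dtype)
        if np.array_equal(grown, bg):
            break
        bg = grown
    return (binary | (bg == 0)).astype(np.uint8)
```

Hole filling here is the classic flood-fill trick: anything in the background that is not reachable from the border must be enclosed by the hand region, so it is filled in.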

[0098] Step 2 includes the following steps:

[0099] Step 2.1: Weight and sum each frame of video data to obtain a dynamic image;

[0100] Step 2.2: The video data adopts the extraction method based on the frame difference to obtain the video representative sequence.
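Step 2.1's weighted frame sum can be sketched as follows. The particular weights a_t = 2t - T - 1 are the common approximate-rank-pooling choice for building dynamic images; that choice is an assumption, since the text only states that the frames are weighted and summed.

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a clip (T x H x W, or T x H x W x C) into one image
    by a per-frame weighted sum. The weights a_t = 2t - T - 1 follow
    the common approximate-rank-pooling recipe -- an assumption, as
    the text only says the frames are weighted and summed."""
    frames = np.asarray(frames, dtype=float)
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    alpha = 2 * t - T - 1            # negative for early frames, positive for late
    di = np.tensordot(alpha, frames, axes=(0, 0))
    # rescale to [0, 255] so the result can be fed to an image CNN
    di = (di - di.min()) / (np.ptp(di) + 1e-8) * 255.0
    return di
```

Early frames get negative weights and late frames positive ones, so the resulting single image encodes the direction of motion across the clip.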

[0101] The frame-difference-based extraction method adopted in step 2.2 proceeds as follows:

[0102] Step a: Calculate the frame difference and obtain the maximum value of the frame difference;

[0103] Step b: Determine whether the two frames corresponding to the maximum frame diffe...
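Steps a-b above can be sketched as follows. Since step b is truncated in this excerpt, the split-at-maximum rule below is only one plausible reading of the frame-difference criterion, not the patent's confirmed procedure.

```python
import numpy as np

def frame_differences(frames):
    # Mean absolute difference between consecutive frames (length T-1).
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))
    return diffs.mean(axis=tuple(range(1, frames.ndim)))

def split_at_max_difference(frames):
    """Split the clip at the largest inter-frame difference.

    One plausible reading of (truncated) steps a-b: the maximum
    frame difference marks a boundary between subsequences. The
    patent's full selection criterion is not shown in this excerpt.
    """
    frames = np.asarray(frames, dtype=float)
    d = frame_differences(frames)
    k = int(np.argmax(d))            # boundary between frames k and k+1
    return frames[:k + 1], frames[k + 1:]
```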



Abstract

The invention discloses a first-person-view gesture recognition method based on a dynamic image and a video subsequence, and relates to the field of first-person-view gesture recognition. The method comprises the steps of: 1, collecting color video and depth video, and preprocessing the video to obtain video data; 2, constructing a dynamic image and extracting a video representative sequence from the video data; 3, inputting the dynamic image and the video representative sequence into a fine-tuned neural network model to obtain features, and inputting the features into an SVM classifier to obtain category probabilities; 4, fusing the category probabilities to complete gesture recognition. The invention solves the problems of the complicated feature-extraction process and low precision of deep-learning-based gesture recognition, and achieves the effects of simplifying the gesture recognition process and improving recognition precision.

Description

technical field

[0001] The invention relates to the field of first-person perspective gesture recognition, in particular to a first-person perspective gesture recognition method based on dynamic images and video subsequences.

Background technique

[0002] In recent years, popular neural network structures have included the convolutional neural network (CNN), the recurrent neural network (RNN), and the long short-term memory network (LSTM). CNNs are often used in the field of visual images, while RNNs and LSTMs are often used in sequence problems such as natural language processing. A convolutional neural network generally includes an input layer, several convolutional layers, several pooling layers, and several fully connected layers, and has the characteristics of local connectivity and parameter sharing. Owing to these two characteristics, the computational complexity and the number of parameters of a convolutional neural network are greatly reduced, making it more and more widely and convenie...


Application Information

IPC(8): G06K9/00; G06K9/62
CPC: G06V40/28; G06F18/2411; G06F18/254
Inventor 杨震群魏骁勇吕华富于超王泽荣张世西
Owner SICHUAN UNIV