
Dynamic gesture tracking method based on convolutional neural network

A technology combining convolutional neural networks and dynamic gestures, applied in the field of convolutional-neural-network-based dynamic gesture tracking. It addresses the problems of poor tracking performance and insufficient real-time tracking, achieving good generalization ability, sufficient real-time performance, and a strong tracking effect.

Active Publication Date: 2019-11-22
HARBIN UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0003] The purpose of the present invention is to propose a dynamic gesture tracking method based on a convolutional neural network, in order to address the problems of poor tracking performance and insufficient real-time tracking caused by skin-color interference during gesture tracking in complex scenes.



Examples


Specific Embodiment 1

[0020] Specific Embodiment 1: This embodiment is described with reference to Figures 1 and 2. The convolutional-neural-network-based dynamic gesture tracking method described in this embodiment includes the following steps:

[0021] Step 1: Treat dynamic gesture tracking in complex backgrounds as a vision task;

[0022] Step 2: Select gesture image samples, apply filtering, and then build a gesture training set;

[0023] Step 3: Determine the YOLOv3-gesture gesture detection network structure;

[0024] Step 4: Use the planning area detection framework to complete dynamic gesture tracking;

[0025] Step 5: Train the YOLOv3-gesture model to obtain a dynamic gesture tracking model;

[0026] Step 6: Use the obtained model to complete dynamic gesture tracking.

[0027] The obtained dynamic gesture tracking model is then tested on new samples to obtain their detection results.

[002...
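The six steps above amount to a detect-then-restrict tracking loop. Below is a minimal Python sketch of that loop; `detect` and `plan_region` are hypothetical stand-ins for the trained YOLOv3-gesture model and the planning-area computation detailed in Embodiment 3 below, not identifiers taken from the patent.

```python
def track(frames, detect, plan_region):
    """Sketch of Steps 4 and 6: frame-by-frame detection restricted to a
    planning area around the previous frame's prediction.

    frames      : iterable of numpy-style HxWxC image arrays
    detect      : callable, image -> (bx, by, bw, bh) box or None
    plan_region : callable, (box, frame_shape) -> (sx, sy, sw, sh)
    """
    boxes, region = [], None
    for frame in frames:
        if region is None:
            search, (ox, oy) = frame, (0, 0)        # full-frame detection
        else:
            sx, sy, sw, sh = region
            search = frame[sy:sy + sh, sx:sx + sw]  # detect inside planning area
            ox, oy = sx, sy
        box = detect(search)
        if box is None:
            region = None                           # target lost: rescan full frame
        else:
            bx, by, bw, bh = box
            box = (bx + ox, by + oy, bw, bh)        # map back to frame coordinates
            region = plan_region(box, frame.shape)  # planning area for frame t+1
        boxes.append(box)
    return boxes
```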

Specific Embodiment 2

[0032] Specific Embodiment 2: This embodiment further describes Embodiment 1. It differs from Embodiment 1 in the detailed steps of Step 3: first, keep the residual modules of Darknet-53, add a 1×1 convolution kernel after each residual module, and use a linear activation function in the first convolution layer; then adjust the number of residual network layers in each module.

[0033] This implementation structurally addresses the shortcomings of the traditional Darknet-53 network, which is overly complex and redundant for detecting a single object class such as gestures. The specific implementation steps are as follows:

[0034] 1. Keep the residual modules of Darknet-53, add a 1×1 convolution kernel after each residual module to further reduce the output dimension, and use a linear activation function in the first convolution layer to avoid the problem of low-dimensional convolution layer fe...
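As a reading aid, here is a minimal PyTorch sketch of the modified residual stage, assuming the standard Darknet-53 block layout (1×1 reduce, 3×3 expand, skip connection); all class and function names are illustrative, and the per-stage block counts are not taken from the patent.

```python
import torch.nn as nn

class GestureResidual(nn.Module):
    """Modified Darknet-53 residual module: the first 1x1 convolution keeps a
    linear (identity) activation, as the embodiment describes, to avoid
    feature loss in the low-dimensional layer; the 3x3 convolution keeps
    the usual LeakyReLU."""
    def __init__(self, ch):
        super().__init__()
        self.reduce = nn.Sequential(                 # 1x1 reduction, linear activation
            nn.Conv2d(ch, ch // 2, 1, bias=False),
            nn.BatchNorm2d(ch // 2))
        self.expand = nn.Sequential(                 # 3x3 expansion, LeakyReLU
            nn.Conv2d(ch // 2, ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(ch),
            nn.LeakyReLU(0.1))

    def forward(self, x):
        return x + self.expand(self.reduce(x))

def stage(ch, n_blocks, out_ch):
    """One backbone stage: a (reduced) stack of residual modules followed by
    the extra 1x1 convolution that lowers the output dimension."""
    return nn.Sequential(*[GestureResidual(ch) for _ in range(n_blocks)],
                         nn.Conv2d(ch, out_ch, 1))  # assumed linear as well
```

Leaving the 1×1 reduction without a nonlinearity is the "linear activation" the embodiment describes; whether the appended dimension-reducing 1×1 convolution also stays linear is an assumption here.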

Specific Embodiment 3

[0036] Specific Embodiment 3: This embodiment further describes Embodiment 1. It differs from Embodiment 1 in the specific steps of using the planning area detection framework to complete dynamic gesture tracking in Step 4: first, assume that a gesture target Object_1 is detected in the t-th frame image; the YOLOv3-gesture network then outputs a prediction box X_1 with center coordinates (b_x, b_y), width b_w, and height b_h. Upon entering the (t+1)-th frame, a planning area is generated near the center point of the frame-t prediction for detection; that is, at frame t+1, the input to the YOLOv3-gesture network is the planning area S*, whose width S_w and height S_h are determined by the predicted box width b_w and height b_h. Then, taking the center point of the frame-t prediction as the origin, the formula for the upper-left corner vertex (S_x, S_y) is as fol...
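Since the formula is cut off above, the patent's exact expression for (S_x, S_y) cannot be reproduced. The sketch below assumes the common choice of scaling the predicted box by a factor k around its center and clipping to the image bounds; the factor k and the clipping are assumptions, not values from the patent. It matches the `plan_region` signature used in the tracking-loop sketch under Embodiment 1.

```python
def plan_region(box, frame_shape, k=2.0):
    """Hypothetical planning-area computation for frame t+1.

    box         : (bx, by, bw, bh), the frame-t prediction; (bx, by) is the
                  box center, (bw, bh) its width and height
    frame_shape : (H, W, ...) shape of the frame
    k           : assumed scale factor relating (S_w, S_h) to (b_w, b_h)
    """
    bx, by, bw, bh = box
    H, W = frame_shape[:2]
    sw, sh = k * bw, k * bh                  # S_w, S_h derived from b_w, b_h
    sx = max(0, int(bx - sw / 2))            # upper-left corner S_x
    sy = max(0, int(by - sh / 2))            # upper-left corner S_y
    sw = min(int(sw), W - sx)                # clip the area to the image
    sh = min(int(sh), H - sy)
    return sx, sy, sw, sh
```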



Abstract

The invention discloses a dynamic gesture tracking method based on a convolutional neural network, relates to the technical field of computer vision, and aims to solve problems such as poor tracking performance and insufficient real-time tracking caused by skin-color interference during gesture tracking in complex scenes. The method comprises the following steps: step 1, taking dynamic gesture tracking under a complex background as a visual task; step 2, selecting gesture image samples for filtering processing, and then making a gesture training set; step 3, determining the YOLOv3-gesture gesture detection network structure, wherein the structural formula of the YOLOv3-gesture detection network is shown in the specification; step 4, completing dynamic gesture tracking by using a planning area detection framework; step 5, training the YOLOv3-gesture model to obtain a dynamic gesture tracking model; and step 6, completing dynamic gesture tracking by utilizing the obtained model. When skin-color interference is encountered during gesture tracking in a complex scene, the tracking effect is strong and the real-time tracking performance is sufficient.

Description

Technical Field

[0001] The invention relates to the technical field of computer vision, in particular to a dynamic gesture tracking method based on a convolutional neural network.

Background Technique

[0002] Gesture-based human-computer interaction is the most natural mode of human-computer interaction and has attracted increasing attention from researchers in recent years. In dynamic gesture interaction, the trajectory of the hand is one of the important components of a gesture command, so gesture tracking is an important link. Although gesture tracking algorithms have been widely applied in virtual reality and HCI systems, the growing demands on robustness and real-time performance mean that gesture tracking remains a challenging problem in vision-based research.

Contents of the Invention

[0003] The purpose of the present invention is to propose a dynamic gestur...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00, G06N3/04
CPC: G06V40/28, G06N3/045
Inventors: 李东洁, 李东阁, 杨柳, 徐东昊
Owner: HARBIN UNIV OF SCI & TECH