Gesture recognition method of small quantity of training samples based on RGB-D (red, green, blue and depth) data structure

An RGB-D training-sample technology applied in the field of gesture recognition. It solves the problem that existing methods cannot predict gestures from a small amount of sample data, achieving good robustness and a good recognition effect.

Inactive Publication Date: 2014-01-22
BEIJING JIAOTONG UNIV

Problems solved by technology

Therefore, when there are only a small number of training samples, the difficulty faced by gesture recognition is how to extract effective features from depth and color information.
[0005] However, no existing method predicts gesture performance from a small amount of RGB-D sample data.


Image

Smart Image Click on the blue labels to locate them in the text.
Viewing Examples
Smart Image
  • Gesture recognition method of small quantity of training samples based on RGB-D (red, green, blue and depth) data structure
  • Gesture recognition method of small quantity of training samples based on RGB-D (red, green, blue and depth) data structure
  • Gesture recognition method of small quantity of training samples based on RGB-D (red, green, blue and depth) data structure


Embodiment Construction

[0019] The method of the present invention will be further described below in conjunction with the accompanying drawings.

[0020] The gesture recognition method of the present invention is composed of a feature extraction unit, a training unit and a recognition unit.
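The division of labor among the three units can be illustrated with a deliberately simplified sketch. The patent's actual units extract sparse 3-D SIFT features and learn a dedicated model; here a nearest-class-mean classifier (a common baseline when training samples are scarce) stands in, and all function names and toy data are hypothetical:

```python
import numpy as np

def extract_features(seq):
    """Feature extraction unit (stand-in): one descriptor per RGB-D
    sequence, here simply the temporal mean of the frames, flattened."""
    return np.asarray(seq, dtype=np.float32).mean(axis=0).ravel()

def train(samples):
    """Training unit (stand-in): with few samples per gesture, store one
    mean prototype per class (nearest-class-mean learning).
    `samples` maps gesture label -> list of frame sequences."""
    return {label: np.mean([extract_features(s) for s in seqs], axis=0)
            for label, seqs in samples.items()}

def recognize(model, seq):
    """Recognition unit (stand-in): label of the nearest prototype
    by Euclidean distance."""
    d = extract_features(seq)
    return min(model, key=lambda lbl: np.linalg.norm(d - model[lbl]))

# Toy data: one training sequence per gesture class.
samples = {'wave': [np.full((3, 4, 4), 0.9)],
           'fist': [np.full((3, 4, 4), 0.1)]}
model = train(samples)
print(recognize(model, np.full((3, 4, 4), 0.85)))  # wave
```

Because each class is summarized by a single prototype, this baseline degrades gracefully with very few samples, which is exactly the regime the patent targets.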

[0021] As shown in Figure 1, the specific steps of the feature extraction unit in the present invention are as follows:

[0022] Step (1). A pyramid is built for each frame of the input image sequence: a grayscale image pyramid and a depth image pyramid. The grayscale pyramid is obtained by converting the RGB image to grayscale, and the depth pyramid is computed from the depth map. The first layer of each pyramid is the original image, and the nth layer is obtained by downsampling the (n-1)th layer.
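Step (1) can be sketched in NumPy as follows. This is an assumption-laden illustration: 2x2 block averaging stands in for a proper Gaussian blur-and-subsample, and the frame sizes are arbitrary:

```python
import numpy as np

def downsample(img):
    """Halve each spatial dimension by 2x2 block averaging
    (a simple stand-in for Gaussian-blur-then-subsample)."""
    h, w = img.shape[:2]
    h2, w2 = h // 2 * 2, w // 2 * 2          # crop to even size
    img = img[:h2, :w2].astype(np.float32)
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def build_pyramid(img, levels=3):
    """Level 0 is the original image; level n downsamples level n-1."""
    pyr = [img.astype(np.float32)]
    for _ in range(1, levels):
        pyr.append(downsample(pyr[-1]))
    return pyr

gray = np.random.rand(480, 640).astype(np.float32)   # grayscale frame
depth = np.random.rand(480, 640).astype(np.float32)  # aligned depth map
gray_pyr = build_pyramid(gray)
depth_pyr = build_pyramid(depth)
print([p.shape for p in gray_pyr])  # [(480, 640), (240, 320), (120, 160)]
```

The same routine serves both pyramids because, per step (1), they differ only in the source channel (grayscale-converted RGB vs. the depth map).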

[0023] Step (2). For the depth map pyramid at time t, use a corner detector (such as Harris, Shi-Tomasi, etc.) to detect the corner points in each layer of th...
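Step (2) names Harris and Shi-Tomasi as example detectors. As a hedged illustration, the Harris corner response can be computed for one pyramid layer in plain NumPy; the window size, the constant k, and the threshold below are illustrative defaults, not the patent's parameters:

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris response R = det(M) - k * trace(M)^2, where M is the
    structure tensor summed over a win x win window."""
    Iy, Ix = np.gradient(img.astype(np.float32))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_sum(a):
        # window sums via a 2-D summed-area table
        c = np.pad(np.cumsum(np.cumsum(a, 0), 1), ((1, 0), (1, 0)))
        return (c[win:, win:] - c[:-win, win:]
                - c[win:, :-win] + c[:-win, :-win])

    Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
    R = Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
    return np.pad(R, win // 2)  # restore the input shape

def detect_corners(img, thresh_ratio=0.01, max_pts=100):
    """Return (row, col) coordinates of the strongest corner responses."""
    R = harris_response(img)
    ys, xs = np.where(R > thresh_ratio * R.max())
    order = np.argsort(R[ys, xs])[::-1][:max_pts]
    return [(int(ys[i]), int(xs[i])) for i in order]

# A white square on black: strong responses appear near its four corners.
img = np.zeros((50, 50), dtype=np.float32)
img[15:35, 15:35] = 1.0
corners = detect_corners(img)
```

Applying `detect_corners` to every layer of the depth pyramid at time t yields the per-layer corner points the step describes; in practice a library detector (e.g. an OpenCV Harris or Shi-Tomasi implementation) would replace this sketch.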


Abstract

The invention discloses a gesture recognition method for a small quantity of training samples based on an RGB-D data structure. The method is implemented by a feature extraction unit, a training unit and a recognition unit: the feature extraction unit extracts three-dimensional sparse SIFT (scale-invariant feature transform) features from aligned RGB-D image sequences obtained by an RGB-D camera; the training unit learns models from a small quantity of gesture training samples; and the recognition unit recognizes input continuous gestures. The method can be applied with any camera or device that provides RGB-D data, such as Microsoft's Kinect, ASUS's Xtion PRO or Leap's Leap Motion, and achieves real-time recognition speed, so it can be used for human-computer interaction, sign language interpretation, smart home, game development and virtual reality.

Description

Technical field

[0001] The invention relates to a gesture recognition method, which can be applied to human-computer interaction, sign language translation, smart home, game development and virtual reality.

Background technique

[0002] In traditional gesture recognition, gestures are usually captured by an ordinary camera, and features are then extracted from the RGB video stream. In monocular gesture recognition, since only RGB images are available, a large number of training samples is usually required to achieve good recognition results; in multi-view vision, calibrating multiple cameras and building 3D models both require complex calculations and cannot run in real time.

[0003] In recent years, more and more companies have developed RGB-D cameras, which are characterized by the ability to provide RGB images and depth images in real time. For example, in 2010, Microsoft released a camera (Kinect) capable of collecting RGB-D ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/54
Inventors: Wan Jun, Ruan Qiuqi, An Gaoyun
Owner: BEIJING JIAOTONG UNIV