
Depth information static gesture segmentation method

A depth-information-based gesture segmentation technology, applied in image analysis, image enhancement, and instruments. It addresses problems such as complex gestures, difficult gesture segmentation, and over-segmentation, achieving a simple method, an accurate gesture-area image, and robustness to uneven lighting.

Active Publication Date: 2016-08-24
Assignee: 江苏思远集成电路与智能技术研究院有限公司

AI Technical Summary

Problems solved by technology

[0004] The main technical problems of the above gesture segmentation methods are that complex gestures and interference from skin-like regions, other targets, and noise make gesture segmentation difficult to achieve, and that over-segmentation is prone to occur.



Examples


Embodiment 1

[0047] The gesture images in this embodiment come from the American Sign Language (ASL) dataset, which includes 60,000 color images and 60,000 depth images collected with Kinect.

[0048] As shown in Figure 1, this embodiment selects a depth image with a length of 184 and a width of 178 pixels and the corresponding color image. The segmentation steps of the depth information static gesture segmentation method are as follows:

[0049] 1. Convert the depth image to an equal-sized depth grayscale image

[0050] Adjust the depth value of each pixel in the depth image to a grayscale value of 0 to 255 to obtain a depth grayscale image. The specific steps are:

[0051] (1) Find the maximum depth value dmax among the pixels of the depth image.

[0052] Take the maximum value of each of rows 1 to 178 of the image matrix, then select the largest of these 178 row maxima, 3277, as the value of dmax.

[0053] (2) Use formula (1) to convert the depth ima...
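Formula (1) is truncated in this extract. A common choice for this kind of depth-to-grayscale conversion is a linear normalization that maps depth values 0..dmax to gray levels 0..255; the sketch below (NumPy) assumes that form, and the patent's exact formula may differ:

```python
import numpy as np

def depth_to_gray(depth, dmax=None):
    """Linearly map raw depth values to an 8-bit grayscale image.

    Assumes formula (1) is a plain linear normalization
    gray = round(255 * depth / dmax); the patent's exact formula
    is truncated in this extract.
    """
    depth = np.asarray(depth, dtype=np.float64)
    if dmax is None:
        # Per paragraph [0052]: take the maximum of each row, then the
        # maximum of those row maxima (i.e. the global maximum).
        dmax = depth.max(axis=1).max()
    gray = np.round(255.0 * depth / dmax).astype(np.uint8)
    return gray, dmax

# Toy 2x3 "depth image" whose global maximum is 3277, as in the text.
toy = np.array([[0, 1638, 3277],
                [100, 2000, 3000]])
gray, dmax = depth_to_gray(toy)
print(dmax)      # 3277.0
print(gray[0])   # [  0 127 255]
```

The result is an equal-sized grayscale image in which nearer and farther surfaces occupy distinct gray-level bands, which is what the later thresholding step exploits.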

Embodiment 2

[0078] The gesture images in this embodiment come from the American Sign Language (ASL) dataset, which includes 60,000 color images and 60,000 depth images collected with Kinect.

[0079] In this embodiment, a depth image with a length of 184 and a width of 178 and the corresponding color image are selected. The segmentation steps of the depth information static gesture segmentation method are as follows:

[0080] 1. Convert the depth image to an equal-sized depth grayscale image

[0081] This step is the same as in Embodiment 1.

[0082] 2. Determine the grayscale of the gesture area in the depth grayscale image

[0083] This step is the same as in Embodiment 1.

[0084] 3. Convert the depth grayscale image into a binary image

[0085] With the gesture-area grayscale d set to 54 and the threshold T set to 5, use formula (6) to judge, from the relationship between the gray value of the pixel at (x, y) in the depth grayscale image and d, the depth grayscale i...
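Formulas (6) and (8) are truncated in this extract. Reading from context, a plausible binarization rule keeps pixels whose gray value lies within T of the gesture grayscale d (d = 54 with T = 5 here, and T = 15 in Embodiment 3). A hedged NumPy sketch under that assumption:

```python
import numpy as np

def binarize_gesture(gray, d=54, T=5):
    """Set pixels whose gray value is within T of the gesture
    grayscale d to 1 (foreground) and all others to 0.

    Assumes the truncated formulas (6)/(8) are a band threshold
    |gray - d| <= T; the patent's exact rule may differ.
    """
    # Cast to a signed type so the subtraction cannot wrap around.
    return (np.abs(gray.astype(np.int16) - d) <= T).astype(np.uint8)

toy = np.array([[50, 54, 60],
                [48, 59, 70]], dtype=np.uint8)
print(binarize_gesture(toy, d=54, T=5))
# [[1 1 0]
#  [0 1 0]]
```

Widening T (as in Embodiment 3) admits a thicker depth band around the hand, trading some background tolerance for a more complete gesture region.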

Embodiment 3

[0096] The gesture images in this embodiment come from the American Sign Language (ASL) dataset, which includes 60,000 color images and 60,000 depth images collected with Kinect.

[0097] In this embodiment, a depth image with a length of 184 and a width of 178 and the corresponding color image are selected. The segmentation steps of the depth information static gesture segmentation method are as follows:

[0098] 1. Convert the depth image to an equal-sized depth grayscale image

[0099] This step is the same as in Embodiment 1.

[0100] 2. Determine the grayscale of the gesture area in the depth grayscale image

[0101] This step is the same as in Embodiment 1.

[0102] 3. Convert the depth grayscale image into a binary image

[0103] With the gesture-area grayscale d set to 54 and the threshold T set to 15, use formula (8) to judge, from the relationship between the gray value of the pixel at (x, y) in the depth grayscale image and d, the depth grayscale ...



Abstract

A depth information static gesture segmentation method includes the steps of converting a depth image into a depth grayscale image of the same size, determining the grayscale of the gesture area in the depth grayscale image, converting the depth grayscale image into a binary image, smoothing the binary image to obtain a mask image, determining a brightness component image, and segmenting the gesture area. The segmented gesture-area image is accurate, and no over-segmentation occurs. The influence of non-uniform illumination, racial skin-tone differences, other body parts, and similarly colored backgrounds on gesture segmentation is avoided. The method is simple and fast, and provides a technical foundation for human-computer interaction tasks such as gesture recognition, control, and medical surgery.
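The six steps listed in the abstract can be sketched end to end. The smoothing and brightness-component steps are not detailed in this extract, so the sketch below substitutes a 3x3 majority vote for the smoothing and a BT.601 luma for the brightness component; both are stand-ins, not the patent's stated method:

```python
import numpy as np

def segment_gesture(depth, color, d=54, T=5):
    """Hedged end-to-end sketch of the pipeline in the abstract.

    Steps 4 and 5 are not detailed in this extract; a 3x3 majority
    vote and a BT.601 luma are used as stand-ins and may differ
    from the patent's actual smoothing and brightness steps.
    """
    depth = np.asarray(depth, dtype=np.float64)
    # 1. Convert the depth image to an equal-sized depth grayscale image.
    gray = np.round(255.0 * depth / depth.max()).astype(np.uint8)
    # 2-3. Binarize around the gesture-area grayscale d with threshold T
    #      (assuming the band-threshold reading of formulas (6)/(8)).
    binary = (np.abs(gray.astype(np.int16) - d) <= T).astype(np.uint8)
    # 4. Smooth the binary image into a mask (stand-in: 3x3 majority vote
    #    computed from the padded neighborhood sum).
    h, w = binary.shape
    p = np.pad(binary, 1)
    neigh = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    mask = (neigh >= 5).astype(np.uint8)
    # 5. Brightness component (stand-in: BT.601 luma of the color image).
    luma = (0.299 * color[..., 0] + 0.587 * color[..., 1]
            + 0.114 * color[..., 2]).astype(np.uint8)
    # 6. Segment the gesture area: keep brightness only inside the mask.
    return np.where(mask == 1, luma, 0)
```

The depth image does the heavy lifting here: because the hand is the object closest to the camera, it occupies a narrow depth band, which is why a simple band threshold followed by smoothing can isolate it regardless of skin tone or background color.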

Description

Technical Field

[0001] The invention belongs to the technical field of image processing and pattern recognition, and in particular relates to image segmentation.

Background Technique

[0002] As the key technology of a gesture recognition system, the quality of gesture image segmentation directly affects the subsequent gesture recognition process. Gesture segmentation is the process of extracting meaningful gesture regions from images containing gestures: features that significantly distinguish the gesture from uninteresting regions are selected, and gesture regions are separated from non-gesture regions. Commonly used features are grayscale, texture, color, and edge information. Gesture segmentation is a branch of image segmentation, and its process involves many image processing techniques, such as image morphology processing, edge detection, region detection, and gesture location extraction.

[0003] At present, gesture segmentation methods at home and abroad mainl...

Claims


Application Information

IPC(8): G06K9/00, G06T7/00
CPC: G06T2207/10004, G06V40/107
Inventors: 马苗, 陈祖雪, 郭敏, 陈昱莅, 裴炤
Owner: 江苏思远集成电路与智能技术研究院有限公司