A Hand Region Segmentation Method with Deep Fusion of Saliency Detection and Prior Knowledge

A hand region segmentation technology fusing saliency detection and prior knowledge, applicable to computer vision, character and pattern recognition, instrumentation, etc. It addresses problems such as reliance on a single technical means, poor algorithm robustness, and unusability in real scenes, and achieves the effect of eliminating background interference.

Inactive Publication Date: 2019-05-07
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

For example, in the article "Research on Gesture Interaction Technology for Home Service Robots", Yang Wenji uses multi-scale color contrast and texture contrast to obtain a saliency map of the image, then fuses the saliency map with empirical knowledge such as a skin color probability map and object property measurements, and applies threshold segmentation to obtain the final hand region detection result. This method achieves very high segmentation accuracy against a relatively simple background, but because it relies too heavily on region-level contrast, it performs poorly in complex backgrounds. In the article "Saliency-guided improvement for hand posture detection and recognition", Chuang Yuelong proposed a saliency detection method that does not incorporate any prior information and is mainly used for rough localization of the hand; the resulting saliency map is then fused with a skin color probability map to estimate the hand region. This method can eliminate the interference of large near-skin-color background areas, but it cannot exclude faces, which are also highly salient, near-skin-color regions.
In short, compared with traditional methods, the existing schemes that introduce saliency detection into hand region segmentation greatly improve accuracy and the ability to overcome some background interference. However, because most of them carry out saliency detection and prior-knowledge-based detection (such as skin color detection) separately and only fuse the results at the end, the resulting algorithms lack robustness, have difficulty fully overcoming interference from faces and other complex backgrounds, and still cannot be used in real scenes.
[0005] In short, traditional hand region segmentation methods rely on a single technical means and depend too heavily on skin color detection. In the presence of interference factors such as uneven illumination, these methods can hardly achieve accurate hand region segmentation.
Existing hand region detection methods fail to make full use of prior knowledge and are relatively weak at overcoming complex background interference, especially near-skin-color background interference.



Examples


Embodiment

[0076] A hand region segmentation method that deeply fuses saliency detection and prior knowledge, as shown in Figure 9. The specific steps include:

[0077] (1) Perform SLIC superpixel segmentation on the original image to obtain N regions R1, R2, ..., RN. The original image is shown in Figure 1.
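A minimal sketch of step (1), assuming scikit-image's SLIC implementation; the input file name, segment count and compactness below are illustrative choices, not values given in the patent.

```python
# Step (1) sketch: SLIC superpixel segmentation of the original image.
# "hand.jpg", n_segments and compactness are assumed, illustrative values.
import numpy as np
from skimage import io, segmentation

image = io.imread("hand.jpg")                       # original RGB image
labels = segmentation.slic(image, n_segments=200,   # roughly N superpixel regions
                           compactness=10, start_label=1)
N = labels.max()                                    # number of regions actually produced
regions = [np.argwhere(labels == k) for k in range(1, N + 1)]  # pixel coordinates of R1..RN
```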

[0078] (2) Carry out a preliminary detection of the hand region using a hand region detection method that fuses saliency detection based on a color spread measure with skin color prior knowledge, including:

[0079] a. In the RGB color space, quantize each color channel to t different values, so that the total number of colors is reduced to t³;
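A minimal sketch of step a, assuming the quantization is a uniform binning of each RGB channel; the value t = 12 and the helper name quantize_colors are illustrative, not taken from the patent.

```python
# Step a sketch: quantize each RGB channel to t levels so that at most t**3
# distinct colors remain. t = 12 is an assumed, illustrative choice.
import numpy as np

def quantize_colors(image, t=12):
    """Return the quantized image and a per-pixel color index in [0, t**3)."""
    levels = (image.astype(np.int32) * t) // 256               # channel value 0..255 -> level 0..t-1
    index = levels[..., 0] * t * t + levels[..., 1] * t + levels[..., 2]
    quantized = ((levels * 256 + 128) // t).astype(np.uint8)   # representative value of each level
    return quantized, index
```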

[0080] b. Calculate the saliency of each color separately; for any color i, i ∈ {1, 2, ..., t³}, its saliency S_e(i) is given by formulas (I) and (II):

[0081] S_e(i) = P_skin(i) · exp(−E(i) / σ_e)   (I)

[0082]

[0083] In formulas (I) and (II), the param...
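The excerpt is truncated before the parameters of formulas (I) and (II) are defined, so the following only illustrates the arithmetic of formula (I); the array names p_skin and spread (for E) and the value of sigma_e are assumptions.

```python
# Formula (I) sketch: S_e(i) = P_skin(i) * exp(-E(i) / sigma_e) for every
# quantized color i. p_skin, spread and sigma_e are assumed inputs; their
# exact definitions are not visible in this excerpt.
import numpy as np

def color_saliency(p_skin, spread, sigma_e=0.25):
    """Per-color saliency over the t**3 quantized colors.

    p_skin : (t**3,) array, skin-color probability of each color.
    spread : (t**3,) array, color spread measure E(i) of each color.
    """
    return p_skin * np.exp(-spread / sigma_e)
```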



Abstract

The invention relates to a hand region segmentation method that deeply fuses saliency detection and prior knowledge. The method fuses a pixel-level saliency map of the hand region with a region-level saliency map, so that the overall hand region detection algorithm has higher robustness and accuracy. Using an introduced Bayesian framework, the confidence that each pixel belongs to the hand region is obtained, and, combined with related techniques such as threshold segmentation, a high-accuracy hand region segmentation result is finally produced. The invention overcomes the disadvantage that traditional hand region methods can only be applied to relatively simple backgrounds without near-skin-color interference; it can still obtain an accurately segmented hand region image under various kinds of interference such as non-uniform illumination, near-skin-color backgrounds and face noise, and therefore has broad application prospects.
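The abstract does not give the exact Bayesian formulation, so the following is only an illustrative sketch of how a pixel-level saliency map and a skin-color probability map might be combined into a per-pixel hand confidence and then thresholded; the naive Bayes product, the prior and the threshold are assumptions, not the patent's method.

```python
# Illustrative fusion sketch (not the patent's exact Bayesian framework):
# combine a saliency map and a skin-color probability map into a per-pixel
# confidence of belonging to the hand, then threshold it into a binary mask.
import numpy as np

def fuse_and_segment(saliency, p_skin, prior=0.5, thresh=0.5):
    like_hand = saliency * p_skin * prior                          # evidence for "hand"
    like_bg = (1.0 - saliency) * (1.0 - p_skin) * (1.0 - prior)    # evidence for "background"
    confidence = like_hand / (like_hand + like_bg + 1e-12)         # normalized confidence
    return confidence, confidence > thresh
```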

Description

technical field

[0001] The invention relates to a hand region segmentation method that deeply fuses saliency detection and prior knowledge, and belongs to the technical fields of computer vision, image processing and pattern recognition.

Background technique

[0002] Vision-based gesture recognition refers to using various cameras to continuously capture the shape and displacement of the hand, forming sequence frames of model information, which are then converted into corresponding instructions to control certain operations. This technology has been widely used in many scenarios such as human-computer interaction, robot control and virtual reality. Common gesture recognition technologies usually involve hand region segmentation, hand shape feature extraction, hand tracking and gesture recognition. Among them, hand region segmentation eliminates the interference of other elements in the image and accurately segments the human hand region, an...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00; G06K9/46
CPC: G06V40/107; G06V40/28; G06V10/462
Inventor: 杨明强, 张庆锐, 郑庆河, 张鑫鑫
Owner: SHANDONG UNIV