Multi-information fusion based gesture segmentation method under complex scenarios

A multi-information fusion technology for complex scenes, applied in the field of gesture segmentation, which addresses problems such as restricted user freedom, unmet real-time requirements, and the loss of real-time performance that comes with more complex algorithms

Active Publication Date: 2015-01-28
ZHEJIANG UNIV
Cites: 5 · Cited by: 28

AI Technical Summary

Problems solved by technology

[0004] 2) User freedom is limited.
[0005] 3) Real-time requirements cannot be met. To cope with complex scenes, many researchers have proposed more sophisticated segmentation algorithms, but the added complexity comes at the cost of real-time performance.
[0006] Faced with the above technical difficulties, the usual practice is for researchers to choose a gesture segmentation method suited to the system they have developed and its experimental environment, an approach that lacks generality.



Examples


Embodiment

[0092] In this embodiment, a video sequence (640×480 pixels, 30 fps) captured by a Logitech C710 webcam is processed. The video was shot at random in an indoor scene that contains a complex background, background objects with skin-like color, lighting changes, and other body parts such as the user's face and arms. Figure 1 is a schematic diagram of the overall processing flow of the invention; this embodiment includes the following steps:

[0093] Step 1: Image preprocessing. Each frame of the video image sequence is smoothed by averaging the pixel values within a 3×3 window, which removes some of the noise present in the image. The kernel used for the filtering is:

[0094] h = 1 / (hsize.width · hsize.height) × ones(hsize.height, hsize.width); for the 3×3 window used here, h = (1/9) · [[1, 1, 1], [1, 1, 1], [1, 1, 1]].
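As a concrete illustration of this smoothing step, the sketch below applies the same normalized 3×3 box kernel with OpenCV in Python; the use of cv2.filter2D here is an assumption for illustration, not the patent's own implementation.

```python
import cv2
import numpy as np

# Normalized 3x3 box kernel: ones scaled by 1/(hsize.width * hsize.height) = 1/9
kernel = np.ones((3, 3), np.float32) / 9.0

def preprocess(frame):
    """Smooth a frame by averaging pixel values over each 3x3 window."""
    return cv2.filter2D(frame, -1, kernel)  # equivalent to cv2.blur(frame, (3, 3))
```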



Abstract

The invention discloses a multi-information fusion based gesture segmentation method under complex scenarios. For an input video image sequence, the method first performs image preprocessing and then pre-detection, screening out gesture-like regions that are easily misjudged as part of the gesture; it then performs skin color detection based on components from multiple color spaces; it combines the skin color detection information with foreground detection using a mixed Gaussian modeling method improved with spatial and temporal information; and it finally fuses the multiple detection results through a validation-and-complementarity mechanism to obtain the gesture segmentation result. Because the segmentation process adapts to different complex scenarios, user freedom is no longer restricted and real-time requirements are met, so the method can be applied well to human-computer interaction.
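To make the pipeline in the abstract more concrete, here is a minimal sketch assuming OpenCV in Python: fixed YCrCb/HSV skin-color thresholds, OpenCV's stock MOG2 background subtractor standing in for the patent's spatio-temporally improved mixed Gaussian model, and a simple AND fusion standing in for its validation-complementary mechanism. All thresholds and helper names below are illustrative assumptions, not the patent's parameters.

```python
import cv2
import numpy as np

# Stock Gaussian-mixture background subtractor (stand-in for the patent's
# spatio-temporally improved mixed Gaussian model).
bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def skin_mask(frame_bgr):
    """Skin-color mask from two color spaces (assumed threshold ranges)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask_ycrcb = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask_hsv = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))
    return cv2.bitwise_and(mask_ycrcb, mask_hsv)

def segment_gesture(frame_bgr):
    """Fuse skin-color and foreground cues into a rough gesture mask."""
    smoothed = cv2.blur(frame_bgr, (3, 3))        # 3x3 averaging (preprocessing)
    skin = skin_mask(smoothed)                    # multi-color-space skin cue
    foreground = bg_subtractor.apply(smoothed)    # per-frame foreground cue
    fused = cv2.bitwise_and(skin, foreground)     # naive fusion of the two cues
    return cv2.medianBlur(fused, 5)               # remove small speckles
```

Feeding successive frames of a video sequence through segment_gesture yields one binary gesture mask per frame; the patent's actual fusion logic is richer than the single AND used here.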

Description

technical field
[0001] The invention relates to a gesture segmentation method, in particular to a gesture segmentation method based on multi-information fusion in complex scenes, which can be used in fields such as gesture recognition, human-computer interaction, and mobile device control.
Background technique
[0002] With the increasingly wide and rapid application of computers in modern society, the demand for human-computer interaction technology keeps growing. Gestures are among the most natural interaction methods and the ones that best match human behavioral habits, and gesture-based interaction is an important research direction in the field of human-computer interaction. Gesture segmentation is usually the first and most critical step in an interactive system, and its quality directly affects the accuracy of subsequent feature extraction and recognition. In order to obtain satisfactory gesture segmentation result...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00, G06K9/66
CPC: G06T2207/10016, G06V40/28, G06V10/56
Inventor: 于慧敏, 盛亚婷
Owner: ZHEJIANG UNIV