201 results about "Hand region" patented technology

Human-machine interaction method and device based on gaze tracking and gesture recognition

The invention discloses a human-computer interaction method and device based on gaze tracking and gesture recognition. The method comprises the following steps: face region detection, hand region detection, eye localization, fingertip localization, screen localization and gesture recognition. A straight line is determined between an eye and a fingertip; the point where this line intersects the screen is transformed into the logical coordinates of the mouse on the screen, while mouse clicks are simulated by detecting a pressing motion of the finger. The device comprises an image acquisition module, an image processing module and a wireless transmission module. First, images of the user are captured in real time by a camera and analyzed by an image processing algorithm, which converts the position the user points to on the screen and changes of gesture into logical screen coordinates and control commands for the computer; the processing results are then transmitted to the computer through the wireless transmission module. The invention provides a natural, intuitive and simple human-computer interaction method that enables remote operation of computers.
Owner:SOUTH CHINA UNIV OF TECH
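The pointing geometry described above reduces to a line-plane intersection. A minimal sketch, assuming a calibrated metric coordinate frame in which the screen lies in the plane z = 0 (the patent does not specify the frame, so this convention is an assumption):

```python
def screen_intersection(eye, fingertip):
    """Intersect the eye->fingertip ray with the screen plane z = 0.

    eye, fingertip: (x, y, z) tuples in the same metric frame.
    Returns the (x, y) screen-plane coordinates of the pointed-at spot.
    """
    ex, ey, ez = eye
    fx, fy, fz = fingertip
    dz = fz - ez
    if dz == 0:
        raise ValueError("line is parallel to the screen plane")
    t = -ez / dz  # ray parameter at which z reaches the screen plane
    return (ex + t * (fx - ex), ey + t * (fy - ey))

# Example: eye 60 cm from the screen, fingertip halfway and slightly offset.
point = screen_intersection((0.0, 0.0, 0.6), (0.05, -0.02, 0.3))
```

Mapping the returned metric point to mouse pixel coordinates would then be a per-setup calibration step.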

Three-dimensional gesture estimation method and three-dimensional gesture estimation system based on depth data

The invention discloses a three-dimensional gesture estimation method and system based on depth data. The method comprises the following steps: S1, performing hand region-of-interest (ROI) detection on the captured data to acquire hand depth data, wherein S1 comprises (1) detecting the hand ROI from the single palm skeleton point when skeleton point information is available, and (2) detecting the hand ROI based on skin color when skeleton point information is unavailable; S2, preliminarily estimating the global three-dimensional orientation of the hand, wherein S2 comprises S21, extracting features, and S22, regressing the global hand orientation with a classifier R1; and S3, performing joint pose estimation of the three-dimensional gesture, wherein S3 comprises S31, estimating the pose with a classifier R2, and S32, correcting the pose. The method first combines the two detection modes to segment the hand ROI data, then completes the global hand orientation estimate with a regression algorithm based on the segmented ROI data, and finally uses these results to aid a regression algorithm that produces the three-dimensional gesture estimate. The method and system have the advantages of a simple algorithm and high practical value.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
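Step S1(1), cropping a hand ROI around the single palm skeleton point, can be sketched as a windowed depth-band filter. The window half-width and depth band below are illustrative assumptions, not values from the patent:

```python
def hand_roi(depth, palm_rc, half=2, band=120):
    """Keep depth pixels near the palm point; zero out the rest.

    depth:   2D list of millimetre depths (0 = no reading).
    palm_rc: (row, col) of the palm skeleton point.
    half:    half-width of the square crop window, in pixels.
    band:    max |depth - palm depth| in mm still counted as hand.
    """
    r0, c0 = palm_rc
    d0 = depth[r0][c0]
    rows, cols = len(depth), len(depth[0])
    roi = []
    for r in range(max(0, r0 - half), min(rows, r0 + half + 1)):
        roi.append([depth[r][c] if abs(depth[r][c] - d0) <= band else 0
                    for c in range(max(0, c0 - half), min(cols, c0 + half + 1))])
    return roi
```

The skin-color fallback of S1(2) would replace the depth-band test with a chrominance test on the color image.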

Dynamic gesture sequence real-time recognition method, system and device

The invention discloses a dynamic gesture sequence real-time recognition method, system and device. The method comprises the steps of: separately acquiring a color image and a depth image containing the object to be recognized; performing human body region detection and segmentation on the acquired color and depth images to obtain a human body region; detecting and segmenting the hand regions within the human body region; dynamically tracking the hands using an illumination-invariant skin color model with an elliptical boundary based on a Gaussian distribution; performing spatio-temporal gesture sequence detection on the hand tracking results by matching gesture trajectories against static postures, so as to obtain a dynamic gesture sequence; and modeling and classifying the dynamic gesture sequence. By exploiting depth information and the illumination-invariant elliptical-boundary skin color model, the method, system and device improve the robustness of gesture recognition, achieve a good recognition effect, and can be widely applied in the fields of artificial intelligence and computer vision.
Owner:盈盛资讯科技有限公司 (Yingsheng Information Technology Co., Ltd.)
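The illumination-invariant elliptical-boundary skin model can be sketched as a point-in-ellipse test in the Cb-Cr chrominance plane (luminance is discarded, which is what gives the illumination invariance). The ellipse centre and axes below are illustrative placeholders; in practice they would be fit to skin samples under the Gaussian model:

```python
def is_skin(cb, cr, centre=(110.0, 155.0), axes=(20.0, 15.0)):
    """True if chrominance (cb, cr) falls inside the skin ellipse.

    centre, axes: hypothetical ellipse parameters; real values come from
    fitting a Gaussian to labelled skin pixels.
    """
    dx = (cb - centre[0]) / axes[0]
    dy = (cr - centre[1]) / axes[1]
    return dx * dx + dy * dy <= 1.0
```

Applying the test per pixel yields the skin mask from which the hand regions are segmented and tracked.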

Vision-based static gesture recognition method

The invention provides a vision-based static gesture recognition method, comprising the following steps: S1, gesture image preprocessing: separating the hand region from the background according to the skin-color characteristics of the human body, and obtaining the gesture contour through image filtering and morphological operations; S2, gesture feature extraction: extracting Hu invariant moment features, gesture region features and Fourier descriptor parameters to form a feature vector; and S3, gesture recognition with a multi-layer perceptron classifier that has self-organizing and self-learning abilities, effectively resists noise, handles incomplete patterns, and generalizes across patterns. The method first preprocesses and binarizes the original gesture image according to the skin-color characteristics of the human body. The extracted gesture feature parameters form three groups, namely the Hu invariant moments, the gesture region features and the Fourier descriptors, which together constitute the feature vector. This feature set achieves a good recognition rate.
Owner:HARBIN INST OF TECH SHENZHEN GRADUATE SCHOOL
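The first of the Hu invariant moments named in step S2 can be computed directly from a binary gesture mask. A minimal pure-Python sketch; a full feature vector would add the remaining six Hu moments, the region features and the Fourier descriptors:

```python
def first_hu_moment(mask):
    """phi_1 = eta_20 + eta_02 for a binary 2D mask (list of 0/1 rows).

    phi_1 is invariant to translation and scale of the shape.
    """
    # raw moments m00, m10, m01 give the area and centroid
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    xc, yc = m10 / m00, m01 / m00
    # second-order central moments about the centroid
    mu20 = mu02 = 0.0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            mu20 += (x - xc) ** 2 * v
            mu02 += (y - yc) ** 2 * v
    # normalised central moments: eta_pq = mu_pq / m00 ** ((p+q)/2 + 1)
    return (mu20 + mu02) / m00 ** 2
```

Because phi_1 is unchanged when the mask is shifted or scaled, the same gesture yields the same feature wherever the hand appears in the frame.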

Hand gesture recognition method based on switching Kalman filtering model

The invention discloses a hand gesture recognition method based on a switching Kalman filtering model (S-KFM). The method comprises the steps that: a hand gesture video database is established and pre-processed; the image backgrounds of the video frames are removed, and the two hand regions and the face region are separated out with a skin color model; morphological operations are performed on the three regions, their centroids are calculated, and the position vectors between the face and each hand and between the two hands are obtained; the optical flow field is calculated to obtain the optical flow vectors at the centroids of the two hands; a coding rule is defined, and the two optical flow vectors and three position vectors of each frame are coded to build a gesture feature chain code library; an S-KFM graph model is established, in which the feature chain code sequence serves as the observation signal and the gesture meaning sequence serves as the output signal; optimal parameters are learned with the feature chain code library as the S-KFM training samples; and for a gesture video to be recognized, the same steps yield its feature chain code, which is fed into the S-KFM for inference to produce the final gesture recognition result.
Owner:XIAN TECHNOLOGICAL UNIV
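The coding rule can be sketched as an 8-direction quantization of each centroid optical-flow vector. The 8-way convention below (code 0 = east, counting counter-clockwise) is a common chain-code choice assumed here; the patent's actual rule also codes the three position vectors:

```python
import math

def flow_code(vx, vy):
    """Map a 2D flow vector to an 8-direction chain code 0..7.

    Each code covers a 45-degree sector centred on a compass direction:
    0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE (y up).
    """
    angle = math.atan2(vy, vx) % (2 * math.pi)
    return int((angle + math.pi / 8) // (math.pi / 4)) % 8

def encode_sequence(vectors):
    """Chain-code a whole sequence of per-frame flow vectors."""
    return [flow_code(vx, vy) for vx, vy in vectors]
```

A frame sequence thus becomes a short symbol string, which is what the S-KFM consumes as its observation signal.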

Multi-posture fingertip tracking method for natural man-machine interaction

The present invention discloses a multi-posture fingertip tracking method for natural man-machine interaction. The method comprises the following steps: S1, acquiring RGBD data, including depth and color information, with a Kinect 2; S2, detecting the hand region: converting the color image into a color space that is largely insensitive to brightness in order to detect skin-colored regions, then detecting and excluding the face region with a face detection algorithm to obtain the hand region, and calculating the center point of the hand; S3, identifying and detecting a specific gesture from the depth information combined with an HOG feature and an SVM classifier; and S4, predicting the current tracking window from the fingertip positions of preceding frames combined with the identified hand region, and then detecting and tracking the fingertip in two modes, a depth-based fingertip detection mode and a shape-based fingertip detection mode. The method is mainly used for detecting and tracking a single fingertip in motion and under various postures while maintaining relatively high precision and real-time performance.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
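The window prediction in step S4 can be sketched as a constant-velocity extrapolation from the two preceding fingertip positions; the window half-size below is an illustrative assumption:

```python
def predict_window(prev2, prev1, half=20):
    """Predict the next fingertip search window from two past positions.

    prev2, prev1: (x, y) fingertip positions in the two preceding frames.
    half:         half-size of the square search window, in pixels.
    Returns (x_min, y_min, x_max, y_max) of the predicted window.
    """
    vx, vy = prev1[0] - prev2[0], prev1[1] - prev2[1]  # frame-to-frame motion
    cx, cy = prev1[0] + vx, prev1[1] + vy              # extrapolated centre
    return (cx - half, cy - half, cx + half, cy + half)
```

Restricting the depth-based and shape-based detectors to this window is what keeps the tracker real-time.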

Convolutional neural network-based human hand image region detection method

The invention discloses a convolutional neural network-based human hand image region detection method. The method comprises the following steps of: carrying out feature extraction on images with a convolutional neural network and training a weak classifier; for images whose angles are annotated, segmenting each image with the classifier to obtain a plurality of candidate regions; modeling each candidate region with the convolutional neural network to obtain an angle estimation model, and using the angle annotation to rotate the candidate region to a canonical attitude; modeling each candidate region again with the convolutional neural network to obtain a classification model; for a test image, first segmenting it with the weak classifier to obtain candidate regions, then for each candidate region estimating its angle with the angle estimation model and rotating it to the canonical attitude; and inputting the canonically oriented candidate regions into the classification model to obtain the position and angle of the human hand in the image. By adopting convolutional neural network-based coding and classification the method improves classification precision, and by exploiting the angle model it achieves rotation invariance and very high hand region detection precision.
Owner:INST OF SOFTWARE - CHINESE ACAD OF SCI
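The "rotate to a canonical attitude" step amounts to rotating each candidate region by the estimated angle about its centre. A coordinate-level sketch of that rotation (the CNN stages themselves are omitted):

```python
import math

def rotate_about(pt, centre, angle_deg):
    """Rotate point (x, y) about centre by angle_deg, counter-clockwise.

    Applying this to every pixel coordinate of a candidate region (with
    the negated estimated angle) brings the region to its canonical
    attitude before classification.
    """
    a = math.radians(angle_deg)
    x, y = pt[0] - centre[0], pt[1] - centre[1]
    return (centre[0] + x * math.cos(a) - y * math.sin(a),
            centre[1] + x * math.sin(a) + y * math.cos(a))
```

In an image pipeline this per-point map is usually realised as a single affine warp of the candidate patch.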

Interactive behavior recognition method and device, computer equipment and storage medium

Pending · CN110674712A · Flexible and accurate identification · Improve portability · Image enhancement · Image analysis · Human body · Medicine
The invention relates to an interactive behavior recognition method and device, computer equipment and a storage medium. The method comprises the steps of: obtaining a to-be-detected image; performing human body posture detection on the to-be-detected image with a preset detection model to obtain human body posture information and hand position information; tracking the human body posture according to the posture information to obtain human body motion trajectory information; performing target tracking on the hand position according to the hand position information to obtain a hand region image; performing article identification on the hand region image with a preset classification and identification model to obtain an article identification result; and obtaining a first interactive behavior recognition result from the human body motion trajectory information and the article identification result. The method can improve the recognition precision of interactive behaviors and achieves better portability.
Owner:SUNING CLOUD COMPUTING CO LTD