
73 results about "Gesture segmentation" patented technology

Gesture recognition method and gesture recognition control-based intelligent wheelchair man-machine system

The invention discloses a gesture recognition method and a gesture-recognition-control-based intelligent wheelchair man-machine system, relating to the fields of computer vision, man-machine systems and control. The system comprises a video acquisition module, a separator, a query module, a tracking module, a gesture preprocessing module, a feature extraction module, a gesture recognition module and a control module. In the method, the hand is tracked by combining the Camshift tracking algorithm with Kalman filtering, and the gesture is segmented and recognized by combining Hu moments with a support vector machine (SVM). The method eliminates the influence of skin-color interference, occlusion and complex surrounding environments on gesture segmentation, so the hand is tracked accurately and recognized quickly and reliably. When applied to the intelligent wheelchair man-machine system, gesture commands are recognized quickly and accurately and the wheelchair is controlled safely, extending the range of activity and improving the quality of life of elderly and disabled people.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
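The Kalman-filtering half of the tracker described above can be illustrated with a minimal sketch. This is not the patent's implementation: it assumes a one-dimensional constant-position model with illustrative noise parameters `q` and `r`, smoothing a single coordinate of the hand centroid that Camshift would supply each frame.

```python
class Kalman1D:
    """Toy 1-D Kalman filter for smoothing one coordinate of the
    hand centroid reported by a tracker such as Camshift."""

    def __init__(self, q=1e-3, r=0.25):
        self.x = 0.0          # state estimate (e.g. centroid x-position)
        self.p = 1.0          # estimate variance
        self.q = q            # process noise (assumed value)
        self.r = r            # measurement noise (assumed value)

    def step(self, z):
        # Predict: variance grows by the process noise.
        self.p += self.q
        # Update: blend prediction and measurement by the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= 1.0 - k
        return self.x

kf = Kalman1D()
for _ in range(50):
    estimate = kf.step(10.0)   # feed a steady centroid measurement
```

Fed a steady measurement, the estimate converges toward it; a full 2-D tracker would run the same recursion per coordinate and add a velocity term for fast hand motion.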

Gesture recognition system and method adopting action segmentation

The invention provides a gesture recognition system and method adopting action segmentation, relating to the fields of machine vision and man-machine interaction. The gesture recognition method comprises the following steps. First, head movements are detected and head posture changes are calculated. A segmentation signal is then sent according to the posture estimation information, and the start and end points of gesture segmentation are judged. If the signal indicates active gesture action segmentation, gesture video frame sequences are captured within the time interval of gesture execution, and preprocessing and feature extraction are performed on the gesture frame images. If the signal indicates automatic action segmentation, the video frame sequences are acquired in real time and segmentation points are found automatically by analyzing the motion change rule of adjacent gestures; view-independent features are then extracted from the segmented effective elementary gesture sequences, and a classification result is obtained by a gesture recognition algorithm that eliminates spatial and temporal disparities. The method greatly reduces the redundant information of continuous gestures and the computational cost of the recognition algorithm, and improves gesture recognition accuracy and real-time performance.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
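The automatic action-segmentation step can be sketched as a simple motion-energy rule: cut the continuous stream where the change between adjacent frames falls below a threshold. The function names and the threshold value are illustrative assumptions, not taken from the patent.

```python
def motion_energy(prev, curr):
    """Sum of absolute pixel differences between two frames,
    each given as a flat list of intensities."""
    return sum(abs(a - b) for a, b in zip(prev, curr))

def segmentation_points(frames, threshold=5):
    """Indices where inter-frame motion drops below the threshold,
    i.e. candidate start/end points of elementary gestures."""
    return [i for i in range(1, len(frames))
            if motion_energy(frames[i - 1], frames[i]) < threshold]

frames = [[0, 0], [0, 0], [9, 9], [0, 0], [0, 0]]   # still, move, still
```

On this toy stream the cuts land on the still frames that bracket the motion, which is exactly where a gesture would be assumed to start and end.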

A method and a system for detecting key points of a three-dimensional gesture based on a neural network

The invention discloses a neural-network-based three-dimensional gesture key point detection method, which comprises the following steps: acquiring a gesture data set including gesture region information and two-dimensional and three-dimensional gesture key point positions; training a gesture segmentation network that detects the gesture region in an RGB image containing a gesture; truncating and up-sampling or down-sampling the gesture region detected by the segmentation network; training a two-dimensional gesture key point detection network that detects multiple two-dimensional gesture key points in the gesture region image; converting the absolute coordinates of the three-dimensional gesture key points into relative coordinates; and training a 2D-to-3D gesture key point mapping network that maps multiple 2D gesture key points into 3D space to form 3D gesture key points. The invention can quickly and effectively detect three-dimensional gesture key points from RGB images containing gestures.
Owner:SOUTH CENTRAL UNIVERSITY FOR NATIONALITIES
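The absolute-to-relative coordinate conversion mentioned in the abstract is straightforward to sketch: each 3-D key point is expressed relative to a root joint. Taking the first joint (e.g. the wrist) as the root is an assumption here; the patent does not name the root.

```python
def to_root_relative(keypoints, root=0):
    """Express absolute 3-D key points (x, y, z) relative to a chosen
    root joint, as is commonly done before training a 2D-to-3D
    mapping network so the network is translation-invariant."""
    rx, ry, rz = keypoints[root]
    return [(x - rx, y - ry, z - rz) for (x, y, z) in keypoints]
```

After conversion the root joint sits at the origin and all other joints encode offsets from it.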

Gesture segmentation recognition method capable of detecting non-gesture modes automatically and gesture segmentation recognition system

Active · CN102982315A · Reduce the amount of manual calibration · Improve accuracy · Character and pattern recognition · Threshold model · Recognition system
The invention discloses a gesture segmentation recognition method capable of detecting non-gesture modes automatically, and a corresponding system. The method comprises multiple steps: first, a gesture recognition model is trained on heterogeneous data acquired by a camera and a sensor, the gesture recognition model is used to construct a threshold model, and the two together constitute a gesture segmentation model; second, the gesture segmentation model is used to automatically detect non-gesture modes in an input continuous action sequence; third, the detected non-gesture modes are used to train a non-gesture recognition model; and fourth, the gesture segmentation model is extended with the non-gesture recognition model and used for segmentation and recognition of the input continuous action sequence. Because the method represents non-gesture modes well, the probability that non-gesture modes are misjudged as gesture modes is reduced, and the accuracy of the gesture segmentation algorithm is improved.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
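The role of the threshold model can be sketched as a rejection rule: an input sequence is labelled with the best-scoring gesture model only if that score beats the threshold model's score; otherwise it is rejected as a non-gesture. The dictionary-of-scores interface below is an assumption for illustration, not the patent's interface.

```python
def classify_with_threshold(gesture_scores, threshold_score):
    """gesture_scores: mapping gesture name -> model score
    (e.g. a log-likelihood). Returns the best gesture if it beats
    the threshold model's score, otherwise None (the input is
    rejected as a non-gesture pattern)."""
    best = max(gesture_scores, key=gesture_scores.get)
    if gesture_scores[best] > threshold_score:
        return best
    return None
```

This is the standard threshold-model idea: without it, the highest-scoring gesture model always "wins", even on motion that is not a gesture at all.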

Humanoid mechanical arm control method based on Kinect sensor

The invention discloses a humanoid mechanical arm control method based on a Kinect sensor. The method comprises the first step of collecting data through the Kinect sensor; the second step of preprocessing the collected data and then performing gesture segmentation with relevant algorithms; the third step of performing gesture recognition with a DBN neural network; the fourth step of converting recognized gestures into instructions in a fixed format; the fifth step of using the TCP protocol for communication and sending the instructions to the server side; the sixth step of having the server receive and recognize the instructions and obtain control parameters through kinematic calculation; and the seventh step of having the server control the motion of the mechanical arm according to the control parameters. The method takes into account cost, accuracy in actual operation and response speed; it avoids the high cost of data-glove control and the specialized knowledge demanded by traditional keyboard-based human-computer interaction, and offers humanized operation, fast response, high accuracy and very good robustness.
Owner:SOUTH CHINA UNIV OF TECH
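Step four, converting recognized gestures into fixed-format instructions, might look like the following sketch. The command strings and gesture names are invented for illustration; the patent does not specify the instruction format.

```python
def make_instruction(gesture):
    """Map a recognized gesture name to a fixed-format command string.
    The resulting bytes would then be sent to the server side over TCP
    (e.g. socket.create_connection((host, port)).sendall(cmd.encode())).
    All gesture names and command strings here are hypothetical."""
    table = {
        "open_palm": "ARM:STOP",
        "fist": "ARM:GRIP",
        "point": "ARM:MOVE",
    }
    return table.get(gesture, "ARM:NOOP")
```

A fixed format keeps the server-side parser trivial: it can dispatch on the command string alone before running the kinematics calculation.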

Depth information based sign language recognition method

The invention discloses a depth-information-based sign language recognition method. The method comprises: (1) recognition of a single gesture: dividing a sign into a hand shape and a motion track; segmenting the hand with depth-information-based multi-threshold gesture segmentation and obtaining the hand-shape feature value with an improved SURF algorithm; obtaining the motion-track feature value from angular-velocity- and distance-based motion characteristics; and performing gesture recognition with the extracted hand-shape and motion-track feature values as inputs to a BP neural network; and (2) correction of a gesture sequence: according to the recognized gestures, performing automatic reasoning correction with a Bayesian algorithm on gestures that were not correctly recognized or that are ambiguous. Because the gesture segmentation uses depth information obtained by a Kinect camera, the method overcomes the illumination interference that affects conventional vision-based gesture segmentation and improves the naturalness of human-computer interaction. The improved SURF algorithm reduces the amount of calculation and increases recognition speed.
Owner:SHANDONG UNIV
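The angular-velocity- and distance-based motion features can be sketched for a 2-D trajectory as per-step distances and heading-angle changes; dividing by the frame interval would turn these into speeds and angular velocities. The function name and exact feature choice are illustrative, not from the patent.

```python
import math

def trajectory_features(points):
    """Per-step distances and heading-angle changes of a 2-D hand
    trajectory given as (x, y) tuples. Dividing each by the frame
    interval yields speed and angular velocity respectively."""
    dists, dangles = [], []
    prev_angle = None
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(math.hypot(x1 - x0, y1 - y0))
        angle = math.atan2(y1 - y0, x1 - x0)   # heading of this step
        if prev_angle is not None:
            dangles.append(angle - prev_angle)
        prev_angle = angle
    return dists, dangles
```

For a right-angle turn the heading change is pi/2, which is the kind of discriminative value such track features rely on.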

A monocular static gesture recognition method based on multi-feature fusion

The invention discloses a monocular static gesture recognition method based on multi-feature fusion. The method comprises the following steps. Gesture image collection: an RGB image containing a gesture is collected by a monocular camera. Image preprocessing: human skin color information is used for skin color segmentation; morphological processing combined with the geometric characteristics of the hand separates the hand from the complex background; and a distance transformation operation locates the palm center and removes the arm region, yielding a binary gesture image. Gesture feature extraction: the perimeter-to-area ratio, Hu moments and Fourier descriptor features of the gesture are calculated to form the gesture feature vector. Gesture recognition: the input gesture feature vectors are used to train a BP neural network for static gesture classification. By combining skin color information with the geometrical characteristics of the hand, and using morphological processing and the distance transformation operation, accurate gesture segmentation is achieved under monocular vision. Combining multiple gesture features and training a BP neural network yields a gesture classifier with strong robustness and high accuracy.
Owner:SOUTH CHINA UNIV OF TECH
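The distance-transform step for locating the palm center can be illustrated with a brute-force version on a small binary mask: the palm center is the foreground pixel farthest from any background pixel. A real implementation would use a linear-time distance transform (e.g. OpenCV's `cv2.distanceTransform`); the brute force here is just for clarity.

```python
def palm_center(mask):
    """Return the foreground pixel farthest from the background in a
    binary mask (rows of 0/1), i.e. the distance-transform maximum,
    which approximates the palm center of a segmented hand."""
    h, w = len(mask), len(mask[0])
    background = [(r, c) for r in range(h) for c in range(w)
                  if not mask[r][c]]
    best, best_d2 = None, -1
    for r in range(h):
        for c in range(w):
            if mask[r][c]:
                d2 = min((r - br) ** 2 + (c - bc) ** 2
                         for br, bc in background)
                if d2 > best_d2:
                    best, best_d2 = (r, c), d2
    return best
```

On a square blob the maximum falls on the blob's middle; on a real hand mask it falls inside the palm rather than the fingers or arm, which is what makes it useful for arm removal.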

Gesture recognition system based on infrared image and method thereof

The invention discloses a gesture recognition system based on infrared images. The system comprises an infrared illumination module, a front end image processing module and a digital image gesture segmentation and tracking module. The front end image processing module comprises an infrared video acquisition unit and an FPGA control and output unit; the infrared video acquisition unit comprises a CMOS optical sensing chip and a lens assembly, and the FPGA control and output unit comprises an image quality assessment module, a reference voltage adjustment module and an output module. The image quality assessment module determines whether the hand area and the area surrounding the hand present the same or a similar pixel value in the digital image. If they do, the reference voltage of the CMOS optical sensing chip is adjusted; if they do not, the digital image is output directly. The invention also discloses a gesture recognition method for the system. Prior-art recognition is unstable under low or variable illumination; the disclosed system and method solve this problem and increase both the quality of the infrared digital video data and the stability of the system.
Owner:SOUTH CHINA UNIV OF TECH
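The image-quality assessment described above reduces to comparing the hand region's pixel values with those of its surroundings: when they are too close, the hand cannot be segmented and the sensor's reference voltage is adjusted. A sketch with an assumed tolerance value:

```python
def regions_similar(hand_pixels, surround_pixels, tol=10):
    """True if the mean intensities of the hand region and its
    surroundings differ by at most tol, i.e. the two regions present
    'the same or similar pixel value' and the CMOS reference voltage
    should be adjusted. tol is an assumed margin, not from the patent."""
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(hand_pixels) - mean(surround_pixels)) <= tol
```

In the described control loop, a True result triggers the reference-voltage adjustment and a fresh capture, while a False result lets the frame pass straight to the output module.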

Gesture segmentation method and system based on global expectation-maximization algorithm

The present invention discloses a gesture segmentation method and system based on a global expectation-maximization algorithm. The method comprises: establishing a Gaussian model of skin color; substituting the pixel values of all pixel points of a to-be-segmented image into the Gaussian model to obtain the skin-color similarity of every pixel; according to the depth information of the image and the skin-color similarity of its pixels, obtaining a four-dimensional space model consisting of all points in three-dimensional space together with their skin-color similarity; dividing the four-dimensional space model into a plurality of sub-spaces; constructing a loss function that evaluates the hypersurface fitting effect in each sub-space and minimizing it with gradient descent to obtain the four-dimensional hypersurface of each sub-space; and finally obtaining the maximum of each sub-space's hypersurface by following the gradient ascent direction. The method produces a comparable mathematical description that serves as a basis for two-model fusion, providing a new foundation for fusing data of different modalities.
Owner:HUAZHONG NORMAL UNIV
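The first step, scoring each pixel against a Gaussian skin-color model, can be sketched for a single chroma channel. The single-channel form and the parameter values are simplifications; the patent presumably models color channels jointly.

```python
import math

def skin_similarity(value, mean=120.0, var=25.0):
    """Gaussian likelihood of a single chroma value under an assumed
    skin-color model; higher means more skin-like. mean and var are
    illustrative defaults, not values from the patent."""
    return (math.exp(-((value - mean) ** 2) / (2.0 * var))
            / math.sqrt(2.0 * math.pi * var))
```

Applied per pixel, this yields the skin-color similarity map that is then combined with depth to form the four-dimensional space model.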