151 results about "Gesture classification" patented technology

The actual gesture classification is based on the low-level characteristics of the individual angular data streams. Whenever new low-level characteristics are detected, the gesture classifier is activated. Using the set of current angular characteristics, a gesture label is determined from a built-in gesture model.
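In code, this event-driven lookup might resemble the following minimal sketch; the feature extraction, model contents, and gesture labels are all illustrative inventions, not taken from the patent:

```python
# Hypothetical built-in gesture model: maps quantized angular characteristics
# (sign of net angular motion on the dominant axis) to a gesture label.
GESTURE_MODEL = {
    (1, 0): "swipe_right",
    (-1, 0): "swipe_left",
    (0, 1): "swipe_up",
    (0, -1): "swipe_down",
}

def extract_characteristics(angles):
    """Reduce a raw angular data stream (list of (x, y) angles) to a coarse
    low-level feature tuple keeping only the dominant axis of motion."""
    dx = angles[-1][0] - angles[0][0]
    dy = angles[-1][1] - angles[0][1]
    sign = lambda v: (v > 0) - (v < 0)
    if abs(dx) >= abs(dy):
        return (sign(dx), 0)
    return (0, sign(dy))

def classify(angles):
    """Activated whenever new characteristics are detected; returns a label."""
    return GESTURE_MODEL.get(extract_characteristics(angles), "unknown")
```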

Gesture recognition method based on acceleration sensor

The invention discloses a gesture recognition method based on an acceleration sensor. The method comprises the following steps: automatically collecting gesture acceleration data; preprocessing; calculating the similarity between all gesture samples to obtain a similarity matrix; extracting gesture templates and constructing a gesture dictionary from them; and carrying out sparse reconstruction and gesture classification on the gesture samples to be recognized using an MSAMP (modified sparsity adaptive matching pursuit) algorithm. The invention combines the compressed sensing technique with the traditional DTW (dynamic time warping) algorithm, which improves the adaptability of gesture recognition to different gesture habits, and adopts multiple preprocessing methods to improve the practicability of the method. Additionally, the invention discloses an algorithm for automatically collecting gesture acceleration data, which eliminates the extra operations of traditional gesture collection and improves the user experience. No special sensor is required: the method can run on any terminal equipped with an acceleration sensor, giving good hardware adaptability and further enhancing its practicability. The coordinate system is unified and can adapt to multiple different gesture habits.
Owner:BEIJING UNIV OF POSTS & TELECOMM
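The DTW component the patent builds on is a standard algorithm; a minimal sketch of the distance it computes between two 1-D acceleration sequences follows (the function name is illustrative, and real use would compare multi-axis data):

```python
def dtw_distance(a, b):
    """Dynamic time warping alignment cost between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Because DTW warps the time axis, two gestures performed at different speeds can still match with a small distance, which is what makes it suitable for differing gesture habits.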

Egocentric vision in-air handwriting and in-air interaction method based on a cascaded convolutional neural network

The invention discloses an egocentric-vision in-air handwriting and in-air interaction method based on a cascaded convolutional neural network. The method comprises the steps of S1: obtaining training data; S2: designing a deep convolutional neural network for hand detection; S3: designing a deep convolutional neural network for gesture classification and fingertip detection; S4: cascading the first-level and second-level networks, cropping a region of interest from the foreground bounding rectangle output by the first-level network to obtain a foreground region containing the hand, and using that foreground region as the input of the second-level convolutional network for fingertip detection and gesture identification; S5: judging the identified gesture and, if it is a single-finger gesture, outputting its fingertip position and then carrying out temporal smoothing and interpolation between points; and S6: using the fingertip coordinates sampled over multiple continuous frames to carry out character identification. The invention provides a complete in-air handwriting and in-air interaction algorithm that achieves accurate and robust fingertip detection and gesture classification, thereby realizing egocentric-vision in-air handwriting and in-air interaction.
Owner:SOUTH CHINA UNIV OF TECH
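The cascading step (S4) amounts to cropping the second network's input out of the first network's bounding box; a minimal sketch follows, where the padding ratio is an assumption of this example rather than a value from the patent:

```python
def crop_roi(image, box, expand=0.1):
    """Crop an image (list of pixel rows) to box = (x, y, w, h), padded on
    each side by `expand` times the box size, clamped to the image bounds.
    The padded crop is what would feed the second-stage network."""
    x, y, w, h = box
    px, py = int(w * expand), int(h * expand)
    x0, y0 = max(0, x - px), max(0, y - py)
    x1 = min(len(image[0]), x + w + px)
    y1 = min(len(image), y + h + py)
    return [row[x0:x1] for row in image[y0:y1]]
```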

Vehicle detection method and system based on SSD (Single Shot MultiBox Detector) and vehicle gesture classification

The invention discloses a vehicle detection method and system based on an SSD (Single Shot MultiBox Detector) and vehicle gesture (i.e., pose) classification. The method comprises the following steps: dividing the vehicle gestures according to the angle between the vehicle head and the horizontal axis; adding a vehicle gesture classification task to the original SSD network model; combining the vehicle detection loss with the vehicle gesture classification loss to form a multi-task loss; replacing the softmax loss of the original SSD model with a focal loss; carrying out joint optimization of the vehicle gesture classification task and the vehicle detection task; training to obtain a detection model; and using the detection model to carry out vehicle detection on a picture to be detected, realizing multiscale and multi-angle vehicle detection. In the method, the deep learning target detector SSD is used for vehicle detection, the vehicle gesture classification task assists the vehicle detection task through joint training, and the focal loss is added to solve the problem of unbalanced vehicle samples, so the accuracy and stability of the system are improved.
Owner:HUAZHONG UNIV OF SCI & TECH
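The focal loss that replaces the softmax loss is a standard formulation; a minimal binary sketch follows, using the commonly cited defaults for alpha and gamma rather than values from the patent:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for predicted probability p of the positive class
    and ground-truth label y in {0, 1}. The (1 - pt)**gamma factor
    down-weights easy, well-classified examples, which is how it addresses
    class imbalance."""
    pt = p if y == 1 else 1.0 - p          # probability of the true class
    a = alpha if y == 1 else 1.0 - alpha   # class-balance weight
    return -a * (1.0 - pt) ** gamma * math.log(pt)
```

With gamma = 0 and alpha = 1 the expression reduces to the ordinary cross-entropy; larger gamma shrinks the contribution of easy examples more aggressively.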

A monocular static gesture recognition method based on multi-feature fusion

The invention discloses a monocular static gesture recognition method based on multi-feature fusion. The method comprises the following steps. Gesture image collection: an RGB image containing a gesture is collected by a monocular camera. Image preprocessing: human skin color information is used for skin color segmentation; morphological processing combined with the geometric characteristics of the hand separates the hand from the complex background; and a distance transform operation locates the palm center and removes the arm region, yielding a binary gesture image. Gesture feature extraction: the perimeter-to-area ratio, Hu moments, and Fourier descriptor features of the gesture are calculated and assembled into a gesture feature vector. Gesture recognition: the gesture feature vectors are used to train a BP neural network that performs the static gesture classification. The invention combines skin color information with the geometric characteristics of the hand and achieves accurate gesture segmentation under monocular vision using morphological processing and the distance transform. By combining multiple gesture features and training a BP neural network, a gesture classifier with strong robustness and high accuracy is obtained.
Owner:SOUTH CHINA UNIV OF TECH
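Of the three features, the perimeter-to-area ratio is the simplest to illustrate; a minimal sketch on a binary gesture image follows, counting the perimeter as exposed pixel edges (one reasonable convention among several, and an assumption of this example):

```python
def perimeter_area_ratio(mask):
    """Perimeter-to-area ratio of the foreground in a binary image given as
    a grid of 0/1 values. Each foreground pixel side touching background
    (or the image border) contributes one unit of perimeter."""
    h, w = len(mask), len(mask[0])
    area, perim = 0, 0
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            area += 1
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if ny < 0 or ny >= h or nx < 0 or nx >= w or not mask[ny][nx]:
                    perim += 1
    return perim / area
```

The ratio is scale-sensitive but shape-discriminative: an open hand with spread fingers has a much larger ratio than a fist of the same area.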

Ultrasonic gesture recognition method and system

The invention provides an ultrasonic gesture recognition method. In the method, ultrasonic signals and situation information describing the current situation are collected at the same time; gesture features are extracted from the collected ultrasonic signals, and the probability that the gesture features belong to each of various preset gestures is obtained from a gesture classification model trained in advance; the probability that each gesture occurs in the current situation is determined from the collected situation information; from these two probabilities, the probability that the gesture features belong to each preset gesture in the current situation is calculated, and the gesture with the largest probability is recognized as the gesture corresponding to the collected ultrasonic signals. By fusing gesture signals with situation information, the method filters out users' misoperation gestures, corrects and recognizes wrong gestures, and reduces invalid or even wrong responses, thereby improving the accuracy and robustness of gesture recognition and the man-machine interaction experience.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
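The fusion of the two probabilities can be sketched as a simple product-and-renormalize rule; the gesture names and prior values below are made up for illustration, and the patent may combine the probabilities differently:

```python
def fuse(classifier_probs, context_prior):
    """Combine P(gesture | ultrasonic signal) with P(gesture | situation)
    by elementwise product, renormalize, and return the argmax gesture
    together with the fused posterior."""
    joint = {g: classifier_probs[g] * context_prior.get(g, 0.0)
             for g in classifier_probs}
    total = sum(joint.values())
    posterior = {g: p / total for g, p in joint.items()}
    return max(posterior, key=posterior.get), posterior
```

Note how the context prior can overturn the classifier: a gesture that is slightly favored by the signal but implausible in the current situation loses to a situation-appropriate alternative.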

Depth motion map-scale invariant feature transform-based gesture recognition method

The invention relates to a gesture recognition method based on the depth motion map (DMM) and the scale-invariant feature transform (SIFT). The method mainly comprises three parts. For motion data acquisition, the original depth images provided by the Kinect somatosensory sensor are adopted as the input of the gesture recognition system. For human body gesture feature construction, a DMM-SIFT-based extraction method is adopted, and the extracted features are reduced in dimension through supervised locally linear embedding (SLLE) to represent the gesture motion characteristics. For gesture classifier recognition, a discriminant-based support vector machine is adopted for the sample training and modeling of the depth image sequence features, and unknown gestures are classified and predicted. The method adapts to different lighting environments, is robust, and can recognize gesture sequences efficiently in real time, so it can be applied to real-time gesture recognition for man-machine interaction.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
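A common way to form a depth motion map is to accumulate absolute frame-to-frame depth differences over the sequence; the sketch below follows that convention (one of several DMM variants in the literature, so an assumption here), with nested lists standing in for Kinect depth frames:

```python
def depth_motion_map(frames):
    """Accumulate |frame[t+1] - frame[t]| over a depth image sequence,
    producing one 2-D map that summarizes where motion occurred. SIFT
    features would then be extracted from this map."""
    h, w = len(frames[0]), len(frames[0][0])
    dmm = [[0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                dmm[y][x] += abs(cur[y][x] - prev[y][x])
    return dmm
```

Because depth is geometry rather than intensity, the resulting map is largely unaffected by lighting, which matches the robustness the abstract claims.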

Gesture recognition-based mechanical arm pose control system

The invention discloses a gesture recognition-based mechanical arm pose control system. The system comprises a smart wristband module, a remote client module, a Bluetooth communication module, a data processing module, a simulation module and a mechanical arm execution module, which are connected successively. The smart wristband is wirelessly connected with a PC through Bluetooth and transmits the electromyographic signals acquired by the smart wristband module to the remote client module; the remote client module forwards the signals to the data processing module on receipt; the data processing module filters and denoises the signals, classifies the gestures, and solves the joint angles using forward and inverse kinematics after noise reduction; two smart wristbands are used to obtain the joint angles of the operator's arms; the remote client also transfers the joint angle signals and an operation instruction signal back to the smart wristband module; the gesture action signal is transmitted to a simulated mechanical arm in the simulation module; the simulated mechanical arm transmits the signal to a working mechanical arm; and the working mechanical arm executes the command according to the signal.
Owner:ZHEJIANG UNIV OF TECH
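The forward-kinematics half of the joint-angle computation is standard; a sketch for a planar two-joint arm follows, with link lengths that are arbitrary illustrative values rather than the system's real dimensions:

```python
import math

def forward_kinematics(theta1, theta2, l1=0.3, l2=0.25):
    """End-effector (x, y) position of a planar 2-link arm, given the
    shoulder angle theta1 and the elbow angle theta2 (radians). Each link
    contributes its length rotated by the accumulated joint angles."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Inverse kinematics runs the other way, from a desired (x, y) back to joint angles, and generally has multiple or no solutions, which is why the two are used together.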

Gesture recognition method based on multichannel electromyographic signal correlation

The invention provides a gesture recognition method based on multichannel electromyographic signal correlation. The method comprises the following steps: first, de-noising the electromyographic signals acquired by each channel; detecting the active segment according to the signal amplitude; then performing structured processing on the active-segment signal, turning it into a time-correlated format by stacking a plurality of overlapping continuous time windows; and finally building CRNet, a hybrid CNN + RNN neural network, as the gesture recognition classifier, whose input is the structured signal and whose output is a gesture classification probability; the trained classifier is then used to perform gesture recognition. The method uses only several myoelectric sensors to collect the original signals, with no extra complex equipment, so it is convenient to operate and adapts well to the environment. It effectively removes noise from the signal, and the classifier reduces the required computing resources and improves recognition efficiency, making the method well suited to engineering applications.
Owner:BEIHANG UNIV +1
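The structuring step above, slicing an active segment into overlapping time windows, can be sketched in a few lines; the window length and stride below are illustrative, not the patent's values:

```python
def to_windows(signal, width=4, stride=2):
    """Slice a 1-D signal into overlapping windows of `width` samples,
    advancing by `stride` samples each time. Stacking the windows gives
    the time-correlated input format for a CNN+RNN classifier."""
    return [signal[i:i + width]
            for i in range(0, len(signal) - width + 1, stride)]
```

A stride smaller than the width makes consecutive windows share samples, which is what preserves temporal correlation across the stack.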

Gesture recognition method based on fused skin color region segmentation and machine learning algorithm and application thereof

The invention discloses a gesture recognition method based on fused skin color region segmentation and a machine learning algorithm, and an application thereof. The method comprises the following steps: after capturing and preprocessing a gesture image, using the Otsu adaptive threshold algorithm to segment the skin color region in the YCbCr color space; after segmentation, segmenting the gesture by setting a gesture-region decision condition, and extracting Hu moment features and the fingertip count from the gesture contour as feature vectors; and using an SVM classifier to classify and recognize six kinds of commonly used static gestures. In the method, the gesture can be accurately located and segmented through the skin-color-based gesture decision condition; the extracted Hu moment features and fingertip count of the gesture contour provide more accurate feature vectors for gesture classification; and the mature SVM classifier is used to classify and recognize the gesture, guaranteeing the gesture recognition rate.
Owner:XINJIANG UNIVERSITY COLLEGE OF SCIENCE & TECHNOLOGY
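Otsu's adaptive threshold, the segmentation step named above, picks the gray level that maximizes between-class variance; a compact sketch over a 0-255 histogram follows (a generic implementation, not the patent's exact YCbCr pipeline):

```python
def otsu_threshold(pixels):
    """Return the threshold t maximizing the between-class variance of the
    0-255 pixel values split into background (<= t) and foreground (> t)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        # Between-class variance (up to a constant factor of 1/total**2).
        var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Because the threshold is recomputed from each image's own histogram, the segmentation adapts to lighting changes, which is the "adaptive" in the abstract.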

Millimeter wave sensor gesture recognition method based on convolutional neural network

The invention discloses a millimeter wave sensor gesture recognition method based on a convolutional neural network. The method comprises the steps of: (1) having the millimeter wave sensor emit frequency-modulated continuous wave signals while various gestures are performed in front of the sensor, and obtaining the time-domain echo signals of the gestures at the receiving channel; (2) obtaining a micro-Doppler time-frequency diagram; (3) acquiring time-frequency diagram sample sets for the different gestures; (4) preprocessing the data in the training sample set, inputting the pictures as training data into the established convolutional neural network, and carrying out supervised learning to obtain the parameters of each layer; and (5) initializing the network with the trained per-layer parameters to obtain an image recognition network with a gesture classification function. Because gesture classification and recognition are performed by the convolutional neural network, manual intervention is avoided, the network can learn deep features of each kind of action, the generalization ability and adaptability are strong, and gesture recognition precision and speed are improved.
Owner:SOUTHEAST UNIV
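The micro-Doppler time-frequency diagram of step (2) is typically built with a short-time Fourier transform; a toy sketch follows, with illustrative window and hop sizes and no real radar parameters:

```python
import cmath

def stft_magnitude(signal, win=8, hop=4):
    """Return a list of per-frame DFT magnitude spectra (positive
    frequencies only). Stacking the frames column-by-column gives the
    time-frequency map that would be fed to the CNN."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        spec = []
        for k in range(win // 2 + 1):
            s = sum(seg[n] * cmath.exp(-2j * cmath.pi * k * n / win)
                    for n in range(win))
            spec.append(abs(s))
        frames.append(spec)
    return frames
```

In a real pipeline the frames would be windowed (e.g. with a Hann taper) and computed with an FFT; the direct DFT above is only for clarity.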