
62 results for "Fingertip detection" patented technology

The fingertip detection approach consists of two stages. First, based on grid sampling and analysis of the sampled hand contour, the fingertip is detected roughly. Then, the fingertip location is localized precisely based on circle feature matching.
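A minimal sketch of this two-stage idea, assuming a binary hand mask as input: a coarse pass samples the contour on a grid and takes the point farthest from the hand centroid, and a refinement pass fits a circle to the contour segment around that point. The sampling step, the window size, and this particular circle-based refinement rule are assumptions, not the patented procedure itself.

```python
import cv2
import numpy as np

def detect_fingertip(mask, step=8, window=10):
    """Return a rough and a refined fingertip estimate from a binary hand mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2)
    m = cv2.moments(contour)
    center = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    # Stage 1: sample the contour on a coarse grid and take the sampled point
    # farthest from the hand centroid as the rough fingertip.
    sampled = contour[::step]
    rough_idx = int(np.argmax(np.linalg.norm(sampled - center, axis=1)))
    rough = sampled[rough_idx]

    # Stage 2: fit a circle to the contour segment around the rough point and
    # use its centre as the refined fingertip location (assumed refinement rule).
    i = rough_idx * step
    segment = contour[max(0, i - window): i + window].astype(np.float32)
    (cx, cy), _ = cv2.minEnclosingCircle(segment)
    return tuple(rough), (cx, cy)

if __name__ == "__main__":
    # Synthetic mask: a vertical "finger" attached to a square "palm".
    mask = np.zeros((200, 200), np.uint8)
    cv2.rectangle(mask, (60, 100), (140, 180), 255, -1)   # palm
    cv2.rectangle(mask, (95, 30), (110, 100), 255, -1)    # finger
    print(detect_fingertip(mask))
```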

Egocentric-vision in-air handwriting and in-air interaction method based on a cascaded convolutional neural network

The invention discloses an egocentric-vision in-air handwriting and in-air interaction method based on a cascaded convolutional neural network. The method comprises the steps of: S1, obtaining training data; S2, designing a deep convolutional neural network for hand detection; S3, designing a deep convolutional neural network for gesture classification and fingertip detection; S4, cascading the first-level and second-level networks, cropping a region of interest from the foreground bounding box output by the first-level network to obtain a foreground region containing the hand, and using this foreground region as the input of the second-level convolutional network for fingertip detection and gesture recognition; S5, judging the recognized gesture and, if it is a single-finger gesture, outputting its fingertip and performing temporal smoothing and interpolation between points; and S6, using fingertip coordinates sampled over consecutive frames to perform character recognition. The invention provides a complete in-air handwriting and in-air interaction algorithm that achieves accurate and robust fingertip detection and gesture classification, thereby enabling egocentric-vision in-air handwriting and in-air interaction.
Owner:SOUTH CHINA UNIV OF TECH
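The two-stage cascade described in this abstract can be sketched in PyTorch as below. The layer counts, crop size, gesture classes, and the names HandDetector, GestureTipNet, and cascade are assumptions for illustration; the patent does not specify the architectures.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HandDetector(nn.Module):
    """First-level network: regresses a normalized hand bounding box (x1, y1, x2, y2)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.box = nn.Linear(32, 4)

    def forward(self, x):
        return torch.sigmoid(self.box(self.features(x).flatten(1)))

class GestureTipNet(nn.Module):
    """Second-level network: gesture-class logits plus a normalized fingertip (x, y) in the crop."""
    def __init__(self, num_gestures=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.cls = nn.Linear(32, num_gestures)
        self.tip = nn.Linear(32, 2)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.cls(f), torch.sigmoid(self.tip(f))

def cascade(frame, detector, second_net, crop_size=96):
    """Detect the hand, crop the region of interest, then classify the gesture and locate the tip."""
    _, _, H, W = frame.shape
    box = detector(frame)[0]                                   # normalized (x1, y1, x2, y2)
    x1, y1 = int(box[0] * W), int(box[1] * H)
    x2, y2 = max(x1 + 1, int(box[2] * W)), max(y1 + 1, int(box[3] * H))
    roi = F.interpolate(frame[:, :, y1:y2, x1:x2], size=(crop_size, crop_size))
    logits, tip = second_net(roi)
    gesture = int(logits.argmax(dim=1))
    # Map the fingertip from crop coordinates back to frame coordinates.
    tip_x = x1 + float(tip[0, 0]) * (x2 - x1)
    tip_y = y1 + float(tip[0, 1]) * (y2 - y1)
    return gesture, (tip_x, tip_y)

if __name__ == "__main__":
    frame = torch.rand(1, 3, 240, 320)                          # one dummy egocentric frame
    print(cascade(frame, HandDetector(), GestureTipNet()))
```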

Hybrid neural network-based gesture recognition method

The invention discloses a gesture recognition method based on a hybrid neural network. For a gesture image to be recognized and the gesture-image training samples, a pulse-coupled neural network is first used to detect noise points, which are then processed by a composite denoising algorithm. A cellular neural network next extracts edge points in the gesture image, connected regions are formed from the extracted edge points, and curvature-based fingertip detection is performed on each connected region to obtain candidate fingertip points; interference from the face region is eliminated to obtain the gesture region. The gesture region is then partitioned according to gesture shape features, Fourier descriptors that retain phase information are computed from the contour points of the partitioned gesture region, and the first several Fourier descriptors are selected as gesture features. A BP neural network is trained on the gesture features of the training samples, and the gesture features of the image to be recognized are input into the BP neural network for recognition. By combining several types of neural networks, the method improves the accuracy of gesture recognition.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
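Two pieces of this pipeline lend themselves to a short sketch: the phase-preserving Fourier descriptors and the BP (feed-forward) classifier, assuming the hand contour has already been extracted. The descriptor count, the normalization by the first harmonic, and the network sizes are assumptions; the denoising and edge-extraction networks are omitted.

```python
import numpy as np
import torch
import torch.nn as nn

def fourier_descriptors(contour_xy, k=16):
    """First k Fourier descriptors of a closed contour, scale-normalized but
    keeping phase information (only the DC term is dropped)."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]              # contour as a complex signal
    coeffs = np.fft.fft(z)
    coeffs = coeffs[1:k + 1] / (np.abs(coeffs[1]) + 1e-8)     # normalize by the first harmonic
    return np.concatenate([coeffs.real, coeffs.imag]).astype(np.float32)

class BPNetwork(nn.Module):
    """Plain feed-forward (back-propagation) classifier over the descriptors."""
    def __init__(self, in_dim=32, hidden=64, num_gestures=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_gestures))

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # dummy circular contour
    feats = torch.from_numpy(fourier_descriptors(contour)).unsqueeze(0)
    print(BPNetwork()(feats).shape)                              # logits, one per gesture class
```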

First-person-view fingertip detection method based on a convolutional neural network and a heat map

The invention discloses a first-person-view fingertip detection method based on a convolutional neural network and a heat map, which comprises the following steps: collecting gesture images, annotating the position of the gesture bounding box and the fingertip coordinates, cropping the original gesture image with the bounding box, updating the fingertip position accordingly, and generating a fingertip heat map; designing a gesture-detection convolutional neural network, extracting gesture features, and training the network to convergence with the uncropped images and the bounding boxes; designing a fingertip heat-map regression convolutional neural network, extracting fingertip features, and training the network to convergence with the cropped images and the heat maps; and splitting an input first-person-view video into frames, using the trained gesture-detection network to obtain the gesture bounding box, cropping out the gesture region and feeding it to the heat-map regression network to predict the fingertip heat map, and obtaining the fingertip coordinates from the heat map. The fingertip position can be detected accurately against complex backgrounds and under different lighting conditions.
Owner:SOUTH CHINA UNIV OF TECH
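The heat-map half of this method can be illustrated with a short sketch: rendering a Gaussian heat map centered on the annotated fingertip as the regression target, and reading a coordinate back out of a predicted heat map with an argmax. The heat-map size and the Gaussian sigma are assumptions; the convolutional networks themselves are omitted.

```python
import numpy as np

def make_heatmap(x, y, size=64, sigma=2.0):
    """Gaussian heat map peaked at the fingertip (x, y), in heat-map pixels."""
    xs, ys = np.meshgrid(np.arange(size), np.arange(size))
    return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))

def heatmap_to_coord(heatmap):
    """Recover the fingertip location as the heat map's argmax."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(x), int(y)

if __name__ == "__main__":
    hm = make_heatmap(20.0, 37.0)
    print(heatmap_to_coord(hm))   # -> (20, 37)
```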

Multi-posture fingertip tracking method for natural human-machine interaction

The present invention discloses a multi-posture fingertip tracking method for natural human-machine interaction. The method comprises the following steps: S1, acquiring RGB-D data, including depth information and color information, with a Kinect v2; S2, detecting the hand region: converting the color image into a color space that is largely insensitive to brightness in order to detect the skin-color region, detecting the human face with a face detection algorithm so that the face region can be excluded and the hand region obtained, and calculating the center point of the hand; S3, recognizing and detecting a specific gesture from the depth information in combination with HOG features and an SVM classifier; and S4, predicting the current tracking window from the fingertip positions in preceding frames combined with the recognized hand region, and then detecting and tracking the fingertip using two modes, namely a depth-based fingertip detection mode and a shape-based fingertip detection mode. The method is mainly used for detecting and tracking a single fingertip in motion and under various postures, while ensuring relatively high precision and real-time performance.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
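Two steps of this pipeline can be sketched briefly: skin segmentation in a brightness-insensitive color space (YCrCb is assumed here; the abstract only states that such a space is used) and a constant-velocity prediction of the next tracking window from the previous fingertip positions. The thresholds and the window half-size are assumptions.

```python
import cv2
import numpy as np

def skin_mask(bgr):
    """Threshold the Cr/Cb channels so the result reacts little to brightness."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def predict_window(prev_tips, half=40):
    """Predict the next search window from the last two fingertip positions."""
    (x1, y1), (x2, y2) = prev_tips[-2], prev_tips[-1]
    cx, cy = 2 * x2 - x1, 2 * y2 - y1            # constant-velocity extrapolation
    return (cx - half, cy - half, cx + half, cy + half)

if __name__ == "__main__":
    frame = np.full((120, 160, 3), (80, 120, 180), np.uint8)   # skin-like BGR patch
    print(int(skin_mask(frame).mean()))
    print(predict_window([(100, 100), (108, 96)]))
```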

Fingertip detection method in a complex environment

The invention provides a fingertip detection method for complex environments. The method comprises: step 1, calculating the dense optical flow corresponding to the scene information and constructing a skin-color filter to obtain the hand region; step 2, modeling the hand region under various gestures with equal-area blocks, calculating the centroid of the hand region, calculating the distances from all sampled contour points to the centroid as well as the average centroid distance, determining an extended centroid distance from the number of fingertips detected so far, drawing a circle with the centroid as the center and the extended centroid distance as the radius, removing the contour points inside the circle and the wrist region (the longest run of continuous pixels on the circle), searching outside the circle for contour points whose centroid distance is a local maximum and marking them as fingertips, and comparing the number of fingertips detected in the current round with the number detected in the previous round to decide whether to continue the detection. The method is highly robust and can detect fingertips correctly while the hand moves freely in front of the camera in a complex environment, thereby improving the accuracy and effectiveness of fingertip detection.
Owner:SOUTH CHINA UNIV OF TECH
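The centroid-distance test at the heart of step 2 can be sketched as follows: contour points whose distance to the centroid is a local maximum and exceeds the extended centroid distance are kept as fingertips. The radius factor and the star-shaped dummy contour are assumptions; the optical-flow, skin-filter, and wrist-removal steps are omitted.

```python
import numpy as np

def fingertips_from_contour(contour_xy, radius_factor=1.2):
    """Contour points that are local maxima of the centroid distance outside the circle."""
    center = contour_xy.mean(axis=0)
    dist = np.linalg.norm(contour_xy - center, axis=1)
    radius = radius_factor * dist.mean()              # "extended centroid distance"
    is_peak = (dist > np.roll(dist, 1)) & (dist > np.roll(dist, -1)) & (dist > radius)
    return contour_xy[is_peak]

if __name__ == "__main__":
    # Star-shaped dummy contour: five lobes stand in for five fingers.
    t = np.linspace(0, 2 * np.pi, 500, endpoint=False)
    r = 1.0 + 0.5 * np.cos(5 * t)
    contour = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
    print(len(fingertips_from_contour(contour)))      # -> 5
```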

Augmented assembly teaching system based on fingertip features and control method thereof

The invention provides an augmented assembly teaching system based on fingertip features and a control method thereof. The system comprises an image acquisition module, an image preprocessing module, a hand-region segmentation module, a fingertip detection and tracking module, and a virtual-component model space registration module. The method includes: collecting images of the finger and the interaction plane; preprocessing the acquired images; segmenting the hand region and extracting its edges; performing fingertip detection based on curvature computation and least-squares fitting, and tracking the fingertip with a method that combines Kalman filtering and particle filtering; calibrating the image acquisition equipment, performing computer rendering, and carrying out spatial registration of the virtual component model; and letting the fingertip interact with the virtual component to complete the insertion and assembly. The method uses fingertips as a new form of computer input to interact with virtual objects, dispensing with the inconvenience of a physical handheld marker, and, when the motion is nonlinear, combines Kalman filtering and particle filtering to improve the positioning accuracy and real-time performance for the target object.
Owner:JIANGSU UNIV
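The tracking side of this system can be sketched with a constant-velocity Kalman filter over the fingertip position; the particle-filter component and the curvature-based detection are omitted here, and the noise covariances are assumptions.

```python
import cv2
import numpy as np

def make_tip_tracker():
    """Constant-velocity Kalman filter: state (x, y, vx, vy), measurement (x, y)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    return kf

if __name__ == "__main__":
    kf = make_tip_tracker()
    for x, y in [(100, 100), (104, 98), (108, 96)]:            # detected fingertips per frame
        kf.predict()
        est = kf.correct(np.array([[x], [y]], np.float32))     # corrected state after each frame
    print(est[:2].ravel())                                     # filtered fingertip position
```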

Fingertip detection method based on three-dimensional K-curvature

The invention discloses a fingertip detection method based on three-dimensional K-curvature. The method comprises two steps. In the first step, the hand region is extracted by color-based region growing on the point cloud: the point cloud data acquired by an RGB-D sensor is first filtered, color-based region-growing segmentation is then applied to the filtered point cloud, and finally a skin-color detection algorithm is used to obtain the point cloud of the hand region. In the second step, fingertip points are extracted with a three-dimensional K-curvature algorithm: after the hand region is obtained, the hand point cloud is filtered to remove spatially scattered outlier points, the idea of the K-curvature algorithm is then applied to the point cloud data to determine fingertip candidate points, and the candidates are clustered to obtain the fingertip points. The method detects fingertip points reliably at different positions, against different backgrounds, and under different lighting conditions for several common gestures, such as those representing the numbers 1 to 5. The distance error between the detected fingertip points and the actual fingertip points is only about 5 mm, so the method achieves good precision and robustness.
Owner:NANJING UNIV OF POSTS & TELECOMM
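The K-curvature candidate test can be sketched on an ordered sequence of 3-D contour points: a point is a fingertip candidate when the angle between the vectors to its k-th neighbours on either side is small. The choice of k, the angle threshold, and the star-shaped dummy contour are assumptions; the point-cloud filtering, region growing, and clustering steps are omitted.

```python
import numpy as np

def k_curvature_candidates(points, k=10, max_angle_deg=60.0):
    """Indices of points whose K-curvature angle is below the threshold."""
    n = len(points)
    idx = np.arange(n)
    v1 = points[(idx - k) % n] - points          # vector to the k-th previous point
    v2 = points[(idx + k) % n] - points          # vector to the k-th next point
    cosang = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-8)
    angles = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return idx[angles < max_angle_deg]

if __name__ == "__main__":
    # Dummy 3-D contour: a flat star with five sharpened lobes in the z = 0 plane.
    t = np.linspace(0, 2 * np.pi, 500, endpoint=False)
    r = 1.0 + 0.8 * np.cos(5 * t) ** 9
    pts = np.stack([r * np.cos(t), r * np.sin(t), np.zeros_like(t)], axis=1)
    print(k_curvature_candidates(pts))           # candidate indices clustered at the lobe tips
```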