62 results about "Leap Motion" patented technology

Gesture recognition method and device and Leap Motion system

The invention relates to a gesture recognition method and device and a Leap Motion system. The gesture recognition method comprises the following steps: S1, a plurality of gesture operations for controlling a three-dimensional body are stored in a gesture library; S2, hand motion data is collected in real time and feature extraction is carried out on the collected data; S3, whether the hand motion belongs to the translation type of operation is judged from the extracted feature parameters, the method proceeding to step S5 if it does and to step S4 if it does not; S4, the operation type to which the hand motion belongs is determined through a preset algorithm; S5, operations of the corresponding type are searched for in the gesture library, and the target operation is determined among them according to the feature parameters. With this technical scheme, a three-dimensional model operation gesture library suited to Leap Motion and a corresponding gesture recognition method can be constructed, the accuracy of gesture recognition is improved, and the consistency and stability of model transformation in three-dimensional operation are guaranteed.
Owner:TSINGHUA UNIV
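
For orientation, a minimal Python sketch of the S1-S5 flow described in the abstract above: a small gesture library, feature extraction from palm samples, a translation-vs-other decision, and a lookup of the target operation. The feature names, thresholds, and library entries are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch only; feature names, thresholds, and library entries are assumed.
import numpy as np

# S1: a small gesture library keyed by operation type.
GESTURE_LIBRARY = {
    "translation": ["pan_x", "pan_y", "pan_z"],
    "rotation": ["rotate_x", "rotate_y", "rotate_z"],
    "scaling": ["zoom_in", "zoom_out"],
}

def extract_features(palm_positions, palm_normals):
    """S2: reduce a short window of palm samples to feature parameters."""
    displacement = palm_positions[-1] - palm_positions[0]                  # net palm motion
    normal_change = np.linalg.norm(palm_normals[-1] - palm_normals[0])     # orientation change
    return {"displacement": displacement, "normal_change": normal_change}

def is_translation(features, disp_thresh=20.0, normal_thresh=0.2):
    """S3: large palm displacement with little orientation change."""
    return (np.linalg.norm(features["displacement"]) > disp_thresh
            and features["normal_change"] < normal_thresh)

def classify_other(features):
    """S4: stand-in for the patent's 'preset algorithm' for non-translation types."""
    return "rotation" if features["normal_change"] > 0.5 else "scaling"

def recognize(palm_positions, palm_normals):
    """S5: pick the operation type, then the target operation within it."""
    features = extract_features(palm_positions, palm_normals)
    op_type = "translation" if is_translation(features) else classify_other(features)
    candidates = GESTURE_LIBRARY[op_type]
    if op_type == "translation":
        axis = int(np.argmax(np.abs(features["displacement"])))            # dominant motion axis
        return candidates[axis]
    return candidates[0]

if __name__ == "__main__":
    # Fake a short rightward palm sweep (positions in mm, as Leap Motion reports them).
    positions = np.array([[0.0, 150.0, 0.0], [15.0, 150.0, 0.0], [40.0, 151.0, 0.0]])
    normals = np.array([[0.0, -1.0, 0.0]] * 3)
    print(recognize(positions, normals))   # -> "pan_x"
```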

Hand rehabilitation training method based on Leap Motion controller

The invention discloses a hand rehabilitation training method based on a Leap Motion controller. The method comprises the following steps: A, standard hand motion data is entered into a computer host; B, the patient performs rehabilitation training motions following the standard hand motions played on a display, while the patient's hand motion data is acquired in real time and transmitted to the computer host; C, the acquired hand motion data of the patient is processed, the effective data is extracted from it, and training characteristic data is then obtained by further extraction; D, the motions completed by the patient are evaluated. By combining virtual reality technology with Leap Motion interaction technology, the patient's hand motions are displayed in real time and the patient can watch his or her own training process in a virtual reality environment, which raises the patient's training enthusiasm, turns traditional passive training into active training, improves the rehabilitation training effect, and at the same time lowers the patient's rehabilitation cost.
Owner:TSINGHUA UNIV
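
A minimal sketch of steps B-D, assuming the Leap Motion controller has already delivered the patient's and the standard fingertip (or palm) trajectories as (T, 3) position arrays: the effective segment is isolated, both trajectories are resampled to a common length, and the repetition is scored against the standard motion. The thresholds and the scoring rule are assumptions for illustration.

```python
# Illustrative sketch only; trajectory format, thresholds, and scoring rule are assumed.
import numpy as np

def resample(trajectory, n_samples=50):
    """Linearly resample a (T, 3) trajectory so both sequences align in length."""
    trajectory = np.asarray(trajectory, dtype=float)
    old_t = np.linspace(0.0, 1.0, len(trajectory))
    new_t = np.linspace(0.0, 1.0, n_samples)
    return np.stack([np.interp(new_t, old_t, trajectory[:, d]) for d in range(3)], axis=1)

def extract_effective_segment(trajectory, speed_thresh=5.0):
    """Step C: keep only the frames where the hand is actually moving."""
    trajectory = np.asarray(trajectory, dtype=float)
    step_sizes = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    moving = np.where(step_sizes > speed_thresh)[0]
    if len(moving) == 0:
        return trajectory
    return trajectory[moving[0]: moving[-1] + 2]

def evaluate(patient_traj, standard_traj, max_error_mm=30.0):
    """Step D: RMSE against the standard motion, mapped linearly to a 0-100 score."""
    p = resample(extract_effective_segment(patient_traj))
    s = resample(standard_traj)
    rmse = np.sqrt(np.mean(np.sum((p - s) ** 2, axis=1)))
    return float(np.clip(100.0 * (1.0 - rmse / max_error_mm), 0.0, 100.0))

if __name__ == "__main__":
    # Synthetic demo: the "patient" trajectory is the standard one plus noise.
    t = np.linspace(0.0, np.pi, 60)
    standard = np.stack([40 * np.sin(t), 150 + 20 * t, np.zeros_like(t)], axis=1)
    patient = standard + np.random.default_rng(1).normal(0.0, 3.0, standard.shape)
    print(round(evaluate(patient, standard), 1))
```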

Complex dynamic gesture recognition method based on Leap Motion

The invention relates to a complex dynamic gesture recognition method based on Leap Motion, and belongs to the field of artificial intelligence and human-computer interaction. In the method, static gesture recognition and continuous track recognition are combined for complex dynamic gesture recognition: hand information during the user's teaching process is captured by a somatosensory sensor, a support vector machine and a learning-based feature vector extraction mode are adopted for static gesture learning, and the static gestures in the teaching process are all marked as instruction states. For a static gesture in the instruction state, the position of the tip of the distal bone of each finger and the central point of the palm is extracted, and continuous dynamic track information is generated for learning. The complex dynamic gesture is decomposed frame by frame, and the instruction is recognized after judging whether the complex dynamic gesture is an instruction gesture. The method greatly improves the accuracy of dynamic gesture recognition, lowers the requirement on the complexity of dynamic gestures, and makes the human-computer interaction process friendlier and more natural on the basis of visual acquisition equipment.
Owner:BEIJING UNIV OF TECH
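
An illustrative sketch of the static-pose-plus-trajectory idea, assuming hand frames (palm center and five fingertip positions) have already been read from the sensor: a support vector machine labels each static pose, and frames classified as the instruction state contribute their fingertip and palm-center points to a continuous track. The feature layout and class labels are assumptions.

```python
# Illustrative sketch only; frame format, feature layout, and labels are assumed.
import numpy as np
from sklearn.svm import SVC

def frame_features(palm_center, fingertips):
    """Fingertip positions relative to the palm center, flattened to one vector."""
    rel = np.asarray(fingertips, dtype=float) - np.asarray(palm_center, dtype=float)
    return rel.ravel()                       # 5 fingertips x 3 coords -> 15 features

def train_static_classifier(feature_vectors, labels):
    """Learn static poses; label 1 marks the 'instruction state' poses."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(np.asarray(feature_vectors), np.asarray(labels))
    return clf

def collect_trajectory(clf, frames):
    """Frame-by-frame decomposition: keep points only while in the instruction state."""
    trajectory = []
    for palm_center, fingertips in frames:
        if clf.predict([frame_features(palm_center, fingertips)])[0] == 1:
            trajectory.append(np.concatenate([palm_center, np.ravel(fingertips)]))
    return np.asarray(trajectory)

if __name__ == "__main__":
    # Synthetic demo: "open hand" frames (instruction state) vs. "fist" frames.
    rng = np.random.default_rng(0)
    open_hand = [(np.zeros(3), rng.normal(60.0, 5.0, (5, 3))) for _ in range(20)]
    fist = [(np.zeros(3), rng.normal(20.0, 5.0, (5, 3))) for _ in range(20)]
    X = [frame_features(p, f) for p, f in open_hand + fist]
    y = [1] * 20 + [0] * 20
    clf = train_static_classifier(X, y)
    print(collect_trajectory(clf, open_hand[:3]).shape)   # points kept from 3 frames
```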

Wearable computer

The invention discloses a wearable computer. The wearable computer comprises a leap motion module, which uses optical sensing to recognize a person's hand gestures and wirelessly sends the acquired user operation data to a head-mounted module; and the head-mounted module, which analyzes the received user operation data, determines the user's operation command, executes it, and displays feedback on the operation result through its optical imaging module. The head-mounted module can further interact wirelessly with an intelligent mobile terminal. The wearable computer integrates an intelligent sensor, a central processing unit, leap motion input and an optical imaging display module into a whole, so that a wearable computer of extremely small volume is achieved. It is convenient to carry, makes interaction between the user and the computer and information acquisition more convenient, and is easy to use. Meanwhile, the wearable computer has an anti-theft function, so that the owner can be informed in a timely manner and the position of the computer can be located through a GPS system, which makes it convenient to retrieve a stolen computer promptly.
Owner:XUCHANG UNIV
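
A minimal sketch of the data path between the two modules described above, assuming the recognized gesture is packaged as a small JSON message on the leap motion module side and parsed into a command on the head-mounted module side. The message fields and command names are hypothetical; the wireless transport itself is not shown.

```python
# Illustrative sketch only; message fields, command names, and thresholds are assumed.
import json

def pack_operation(gesture, confidence):
    """Leap motion module side: serialize a recognized gesture for wireless transfer."""
    return json.dumps({"gesture": gesture, "confidence": confidence}).encode("utf-8")

# Hypothetical mapping from recognized gestures to head-mounted module commands.
COMMANDS = {
    "swipe_left": "previous_page",
    "swipe_right": "next_page",
    "pinch": "select",
}

def handle_operation(payload, min_confidence=0.8):
    """Head-mounted module side: parse the message and resolve the user's command."""
    message = json.loads(payload.decode("utf-8"))
    if message["confidence"] < min_confidence:
        return None                          # ignore low-confidence recognitions
    return COMMANDS.get(message["gesture"])

if __name__ == "__main__":
    payload = pack_operation("swipe_right", 0.93)
    print(handle_operation(payload))         # -> "next_page"
```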

Man-machine interaction integrated device based on Leap Motion equipment

The invention discloses a man-machine interaction integrated device based on Leap Motion equipment. The device comprises a Kinect camera for acquiring skeleton image information and depth image information of a user; the Leap Motion equipment for acquiring hand or tool image information of the user; a camera, positioned on one side of the Kinect camera, for acquiring image information in the visual blind area of the Kinect camera; a processing terminal for receiving and processing the skeleton image information, the depth image information, the hand or tool image information and the image information from the visual blind area; and a display terminal for displaying an initial image and dynamically displaying the image information processed by the processing terminal to realize man-machine interaction. With this device, gestures or tools at long and short distances and of various kinds can be recognized, and the man-machine interaction enjoyment of 3D (three-dimensional) virtual reality is greatly improved.
Owner:GUANGZHOU MIDSTERO TECH CO LTD
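
An illustrative sketch of the per-frame source selection implied above, assuming each sensor delivers an optional record per frame: the Leap Motion stream is preferred at close range, the Kinect skeleton at longer range, and the auxiliary camera covers the Kinect's blind area. Record layouts and the distance threshold are assumptions.

```python
# Illustrative sketch only; record layouts and the distance threshold are assumed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FusedFrame:
    source: str     # which sensor the interaction data came from this frame
    data: dict      # the selected sensor's measurements

def fuse(kinect_skeleton: Optional[dict],
         leap_hand: Optional[dict],
         aux_image: Optional[dict],
         user_distance_mm: float,
         near_thresh_mm: float = 600.0) -> Optional[FusedFrame]:
    """Pick the stream best suited to the user's current distance from the sensors."""
    if user_distance_mm < near_thresh_mm and leap_hand is not None:
        return FusedFrame("leap_motion", leap_hand)      # fine-grained hand/tool data
    if kinect_skeleton is not None:
        return FusedFrame("kinect", kinect_skeleton)     # full-body skeleton and depth data
    if aux_image is not None:
        return FusedFrame("aux_camera", aux_image)       # cover the Kinect's blind area
    return None

if __name__ == "__main__":
    frame = fuse(kinect_skeleton=None,
                 leap_hand={"palm_position": (10.0, 180.0, 5.0)},
                 aux_image=None,
                 user_distance_mm=450.0)
    print(frame.source)                                  # -> "leap_motion"
```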