An image gesture action online detection and recognition method based on deep learning

A gesture detection and deep learning technology, applied in the field of gesture recognition, which solves problems such as recognition being limited to static images and achieves the effect of accurate detection results

Active Publication Date: 2019-06-14
北京汉迪移动互联网科技股份有限公司

AI Technical Summary

Problems solved by technology

The present invention uses a convolutional network to extract highly abstract gesture features in both the temporal and spatial dimensions, automatically combines these features through the convolutional network, and maps them to the corresponding gesture categories. This solves the problems that existing gesture recognition still remains at static image recognition and requires human intervention.
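As an illustration of how a convolutional network can combine temporal and spatial features and map them to gesture categories, the sketch below uses a generic 3D-convolution classifier in PyTorch; the layer sizes, class count, and framework choice are assumptions for illustration and are not the specific networks disclosed by the patent.

```python
import torch
import torch.nn as nn


class TinySpatioTemporalNet(nn.Module):
    """Illustrative 3D-CNN: convolutions over (time, height, width) extract
    spatio-temporal features that a linear layer maps to gesture classes."""

    def __init__(self, num_classes=10):  # class count is a placeholder
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # pool over time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clips):  # clips: (batch, 3, frames, height, width)
        feats = self.features(clips).flatten(1)
        return self.classifier(feats)

# Example: scores = TinySpatioTemporalNet()(torch.randn(1, 3, 16, 112, 112))
```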




Embodiment Construction

[0051] The present invention will be further described below in conjunction with drawings and embodiments.

[0052] The device implementing the method of the present invention is shown in Figure 2: video is captured by a monocular camera installed on the smart car or drone, and the video stream is transmitted to the server through a wireless transmission module. The server decodes the video stream, inputs it into the trained neural network model, and transmits the obtained result back to the smart car or drone.
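A minimal server-side sketch of this loop is given below, assuming OpenCV for decoding; the stream URL, clip length, and the run_gesture_model/send_result helpers are hypothetical placeholders for the trained networks and the wireless link back to the vehicle, not details disclosed by the patent.

```python
import cv2  # OpenCV assumed here for decoding the incoming video stream

CLIP_LEN = 16  # hypothetical number of frames buffered per model call


def run_gesture_model(frames):
    """Placeholder for the trained detection + recognition networks."""
    return "unknown gesture"  # a real deployment would return the predicted class


def send_result(label):
    """Placeholder for the wireless link that returns the result to the vehicle."""
    print("predicted gesture:", label)


def serve(stream_url="rtsp://example-device/stream"):  # hypothetical stream URL
    cap = cv2.VideoCapture(stream_url)  # decode the transmitted video stream
    clip = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        clip.append(frame)
        if len(clip) == CLIP_LEN:       # run the model on each buffered clip
            send_result(run_gesture_model(clip))
            clip = []
    cap.release()
```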

[0053] The present invention first trains the deep learning model and then deploys the trained model on a high-performance deep learning server to process the unprocessed video streams transmitted from clients such as smart cars or drones.

[0054] As shown in Figure 1, the embodiment of the method of the invention is as follows:

[0055] 1) Model training first: model training is divided into a gesture detection model and a gesture recognition...
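The remainder of this paragraph is cut off in the excerpt. Based on the abstract, training starts from all frames extracted from the gesture videos, each annotated with its local gesture region, before the detection network is trained. A minimal frame-extraction sketch is shown below; the file paths, naming scheme, and use of OpenCV are illustrative assumptions.

```python
import os

import cv2  # OpenCV assumed for reading the training videos


def extract_frames(video_path, out_dir):
    """Dump every frame of a training video so each frame can be annotated
    with its local gesture region before training the detection network."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"frame_{index:06d}.jpg"), frame)
        index += 1
    cap.release()
    return index  # number of frames written

# Example with hypothetical paths:
# extract_frames("gesture_clip.mp4", "frames/gesture_clip")
```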



Abstract

The invention discloses an image gesture action online detection and recognition method based on deep learning. The method comprises: extracting all frames of a gesture video as images, annotating each frame, and inputting the annotated images into a gesture detection network for training; inputting a gesture action video stream into the gesture detection network, obtaining in each frame the local gesture region labeled as a dynamic gesture, and thereby obtaining the image frames of the local gesture region; sampling these frames with a piecewise random sampling algorithm, extracting optical flow information and spatial feature maps, and inputting the feature maps into the respective gesture recognition networks for training; and inputting the gesture video stream to be detected into the gesture detection network, obtaining the predicted classification results of the recognition networks respectively, and taking the prediction with the maximum probability as the final result. The gesture actions in the video stream are classified without human intervention, the amount of computation is small, the recognition accuracy is high, the features of gesture actions are extracted more efficiently, and the method is more robust against complex backgrounds.
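Two steps of this pipeline lend themselves to a short illustration: the piecewise (segment-wise) random sampling of the gesture frames, and the final fusion that keeps the prediction with the highest probability across the two recognition networks. The sketch below is only one reading of those steps; the segment count, class names, and per-stream score dictionaries are assumptions, not values from the patent.

```python
import random


def piecewise_random_sample(frames, num_segments=8):
    """Split the gesture frames into equal segments and randomly pick one frame
    from each, giving a sparse sample spread over the whole action."""
    if not frames:
        return []
    num_segments = min(num_segments, len(frames))
    seg_len = len(frames) // num_segments
    picks = []
    for s in range(num_segments):
        start = s * seg_len
        end = len(frames) if s == num_segments - 1 else start + seg_len
        picks.append(frames[random.randrange(start, end)])
    return picks


def fuse_max_probability(spatial_scores, flow_scores):
    """Keep the class whose predicted probability is highest across the
    spatial (RGB) and temporal (optical-flow) recognition networks."""
    best_class, best_prob = None, -1.0
    for scores in (spatial_scores, flow_scores):
        for cls, prob in scores.items():
            if prob > best_prob:
                best_class, best_prob = cls, prob
    return best_class, best_prob

# Hypothetical per-stream softmax outputs:
# fuse_max_probability({"wave": 0.71, "fist": 0.20}, {"wave": 0.55, "fist": 0.40})
```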

Description

Technical Field

[0001] The present invention relates to a gesture recognition method, in particular to a method for online detection and recognition of image gesture actions based on deep learning, which learns highly abstract features through convolutional networks and has the characteristics of high accuracy and strong robustness.

Background Technique

[0002] In recent years, gesture recognition has mainly remained at segmenting static gestures against a single simple background and then analyzing the meaning of the gesture with commonly used recognition methods. In practical applications, however, a gesture is usually a continuous action performed against a complex background. How to segment and recognize gestures in such complex environments without supervision is an urgent problem to be solved.

[0003] At present, there are two types of gesture recognition methods: gesture recognition based on wearable input devices and visual gesture recognition. The wearable input device is easy...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62
Inventors: 李霖烨, 田秋红, 黎运泽, 康宇泽
Owner: 北京汉迪移动互联网科技股份有限公司