
An Action Recognition Method Based on Inductive Deep Learning

A deep learning technology for action recognition, applied in character and pattern recognition, instruments, and computing. It addresses the problems of large training-data requirements and poor performance, achieving good recognition ability and an improved recognition rate.

Active Publication Date: 2021-09-28

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to provide an action recognition method based on inductive deep learning, which solves the technical problems of large training-data requirements and poor performance when using deep learning for action recognition.



Examples


Embodiment 1

[0089] An action recognition method based on inductive deep learning (as shown in Figure 1), comprising the following steps:

[0090] Step 1: Use the camera to obtain a video sequence for action recognition;

[0091] Step 2: Preprocess the video sequence to obtain one or more information source sequences with different characteristics;

[0092] Step 3: Select one or more types of features and construct a deep learning network that extracts the selected types of features;

[0093] Step 4: Input the information source sequence into the deep learning network to extract features, and obtain the action type of the video sequence.
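
The patent leaves the implementation of these four steps open; the sketch below only wires them together so the data flow is explicit. All names (camera, preprocess, build_network, classify) are hypothetical stand-ins, not the inventors' API.

```python
# Minimal sketch of the four-step pipeline of Embodiment 1. All names
# are hypothetical stand-ins; the patent does not specify an API.
def recognize_action(camera, preprocess, build_network, feature_types):
    video = camera.capture()                 # Step 1: acquire video sequence
    sources = preprocess(video)              # Step 2: information source sequences
    network = build_network(feature_types)   # Step 3: feature-specific network
    return network.classify(sources)         # Step 4: action type of the sequence
```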

[0094] The present invention is described below from six aspects:

[0095] 1. Selection of information sources. For action recognition, grayscale images, RGB images, and depth images are selected as the three basic types of information source. At the same time, the binarized image and histogram are extracted ...
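
As a concrete illustration of aspect 1, the sketch below derives a grayscale image, a binarized image, and an intensity histogram from one RGB frame using OpenCV. The Otsu threshold and the 256-bin histogram are assumptions made for illustration, not values fixed by the patent.

```python
# Sketch: deriving grayscale, binarized, and histogram information
# sources from a single RGB frame, as described in aspect 1.
import cv2
import numpy as np

def extract_sources(rgb_frame: np.ndarray):
    # Grayscale image: one of the three basic information sources.
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2GRAY)

    # Binarized image: derived source; Otsu's method picks the threshold
    # automatically here (an assumed choice).
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Intensity histogram: 256-bin distribution of the grayscale frame.
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).flatten()

    return gray, binary, hist
```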

Embodiment 2

[0102] This embodiment describes the present invention in detail based on Embodiment 1.

[0103] Step 1: Use the camera to obtain a video sequence for action recognition. The video sequence is an RGB action sequence. The camera records in a variety of scenes; actions may occur indoors or outdoors, and in each scene the camera records the whole person.

[0104] Step 2: Preprocess the video sequence to obtain one or more information source sequences with different characteristics;

[0105] In this action recognition process, the video sequence is the RGB action sequence acquired by the camera, without depth information, so the information sources are the down-sampled RGB video stream and the position-corrected human joint point information stream;

[0106] Here, the purpose of downsampling the video sequence is to reduce the computational load of the subsequent network, which is an existing technique; the human joint point information stream is composed o...
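
A minimal sketch of the downsampling step follows, assuming simple strided frame and pixel skipping; the stride values are illustrative, and the joint-point stream (truncated above) is not reconstructed here.

```python
# Sketch: spatial and temporal downsampling of an RGB video stream to
# reduce the computation of the downstream network. The strides below
# are illustrative assumptions, not values fixed by the patent.
import numpy as np

def downsample_video(frames: np.ndarray,
                     temporal_stride: int = 2,
                     spatial_stride: int = 2) -> np.ndarray:
    """frames: (T, H, W, 3) uint8 RGB sequence."""
    # Keep every `temporal_stride`-th frame.
    frames = frames[::temporal_stride]
    # Nearest-neighbour spatial downsampling by strided slicing.
    return frames[:, ::spatial_stride, ::spatial_stride, :]
```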

Embodiment 3

[0134] This embodiment illustrates the process of training the deep learning network architecture and of using the trained architecture for recognition in the present invention.

[0135] Training process:

[0136] Step 1: Divide the data set into three parts: a training set, a validation set, and a test set;

[0137] Step 2: Use the training set to train each recognition structure separately;

[0138] Step 3: Use the validation set to validate each recognition structure separately, and verify the result of decision fusion;

[0139] Step 4: Use the test set to test the entire algorithm. If the test result meets the requirements, training ends; otherwise, return to Step 2 and train again.
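
The loop below sketches this train/validate/test cycle. The helper callables (train_one, validate, fuse_decisions, test_all) and the 0.9 accuracy requirement are hypothetical stand-ins for routines the patent leaves unspecified.

```python
# Sketch of the Embodiment 3 training loop. All helpers and the
# accuracy requirement are assumptions, not taken from the patent.
def train_until_acceptable(branches, train_set, val_set, test_set,
                           train_one, validate, fuse_decisions, test_all,
                           required_accuracy=0.9, max_rounds=10):
    for _ in range(max_rounds):
        for b in branches:                      # Step 2: train each
            train_one(b, train_set)             #   recognition structure
        per_branch = [validate(b, val_set) for b in branches]  # Step 3
        fused = validate(fuse_decisions(branches), val_set)    # Step 3
        if test_all(branches, test_set) >= required_accuracy:  # Step 4
            return per_branch, fused
    raise RuntimeError("accuracy requirement not met after max_rounds")
```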

[0140] Recognition process: the video sequence stream to be recognized is first regularized into a video sequence stream of a specified length, then input into the trained deep learning network architecture for recognition, and the classi...
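
The regularization of a variable-length stream to a specified length could look like the sketch below, which uses uniform temporal sampling; that strategy is an assumption, since the paragraph is truncated at this point.

```python
# Sketch: regularizing a variable-length frame sequence to a specified
# length before feeding it to the trained network. Uniform temporal
# sampling is an assumed strategy, not one fixed by the patent.
import numpy as np

def regularize_length(frames: np.ndarray, target_len: int) -> np.ndarray:
    """frames: (T, H, W, C); returns (target_len, H, W, C)."""
    t = frames.shape[0]
    # Uniformly spaced indices cover the whole clip, repeating frames
    # when t < target_len and skipping frames when t > target_len.
    idx = np.linspace(0, t - 1, num=target_len).round().astype(int)
    return frames[idx]
```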



Abstract

The invention discloses an action recognition method based on inductive deep learning, belonging to the field of artificial intelligence. First, a camera is used to obtain a video sequence for action recognition; the video sequence is then preprocessed to obtain one or more information source sequences with different characteristics; next, one or more types of features are selected and a deep learning network that extracts the selected types of features is constructed; finally, the information source sequences are input into the deep learning network to extract features, and the action type of the video sequence is obtained. The method requires a relatively small amount of data, trains faster, converges more easily, and achieves high recognition accuracy.

Description

Technical Field

[0001] The invention relates to the field of artificial intelligence, and in particular to an action recognition method based on inductive deep learning.

Background Technique

[0002] At present, in the field of artificial intelligence, the commonly used methods for action recognition fall into two categories. The first is based on traditional machine learning: its core is artificially constructed features, combined with trained classifiers, to achieve action recognition. The second is based on the currently popular deep learning methods. Deep learning is goal-oriented and driven by a large amount of training data; it can not only train classifiers but also learn features at the same time, which gives very good results.

[0003] However, using deep learning for action recognition still has the following problems:

[0004] 1) The data utilization rate is low, and a large amount of data is required to complete t...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00
CPC: G06V40/20; G06V40/28; G06V20/42
Inventors: 韩云, 吕小英
Owner: 西安立天信息技术有限公司