
Action recognition method and its neural network generation method, device and electronic equipment

A neural-network-based action recognition technology in the field of image recognition, which addresses the low action recognition ability of existing image recognition networks and achieves stable, accurate recognition.

Active Publication Date: 2022-03-25
BEIJING KUANGSHI TECH CO LTD

AI Technical Summary

Problems solved by technology

[0005] In view of this, the object of the present invention is to provide an action recognition method and its neural network generation method, device and electronic equipment, so as to solve the technical problem that image recognition neural networks in the prior art have low recognition ability for action recognition.

Method used


Image

  • Action recognition method and its neural network generation method, device and electronic equipment

Examples


Embodiment 1

[0060] An embodiment of the present invention provides a method for generating a neural network for action recognition, in which human body key point information is fused to produce deformable convolution kernels. As shown in Figure 1, the method includes:
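The generation pipeline described above (detect key points, derive kernel offset information from them, then deform an initial network's convolution kernels) can be sketched in plain Python. This is an illustrative sketch only, not the patented implementation; every function name, the stub detector, and the toy offset rule are assumptions introduced here.

```python
# Illustrative pipeline sketch: key points -> offset information -> deformed
# kernel sampling positions. All names and rules here are hypothetical.

def detect_keypoints(image):
    """Stand-in detector returning (x, y) positions of body key points.

    A real system would run a pose-estimation model on the target image.
    """
    return {"neck": (5, 2), "left_shoulder": (3, 3), "right_shoulder": (7, 3)}

def keypoints_to_offsets(keypoints, kernel_size=3):
    """Map key-point geometry to one integer (dy, dx) offset per kernel tap."""
    xs = [x for x, _ in keypoints.values()]
    ys = [y for _, y in keypoints.values()]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    # Toy rule: nudge every tap one step toward the key-point centroid's
    # quadrant. A real network would predict a distinct learned offset per
    # tap from a feature vector built on the key points.
    def step(v):
        return 1 if v > 0 else (-1 if v < 0 else 0)
    return [(step(cy), step(cx))] * (kernel_size * kernel_size)

def deform_network(initial_kernel_positions, offsets):
    """Apply per-tap offsets to the kernel's sampling positions."""
    return [(y + dy, x + dx)
            for (y, x), (dy, dx) in zip(initial_kernel_positions, offsets)]

# A 3x3 kernel's default sampling grid, centred at (0, 0).
grid = [(y, x) for y in (-1, 0, 1) for x in (-1, 0, 1)]
kps = detect_keypoints(image=None)
offsets = keypoints_to_offsets(kps)
deformed = deform_network(grid, offsets)
print(deformed[0])  # the top-left tap, shifted by the derived offset
```

The point of the sketch is only the data flow: key points drive the offsets, and the offsets re-position where each kernel tap samples.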

[0061] S11: Detect the target image to obtain the human body key point information.

[0062] The target image may be a dynamic video, a still picture, or the like, obtained by an image acquisition device such as an ordinary camera or a depth camera. The human body key point information may be position information of the human body key points and/or angle information between the human body key points.
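The "angle information between key points" mentioned above can be computed directly from key-point positions. A minimal sketch, assuming 2-D (x, y) positions; the helper name `joint_angle` and the shoulder/elbow/wrist example are illustrative, not taken from the patent text.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at key point b, formed by segments b->a and b->c.

    a, b, c are (x, y) key-point positions, e.g. shoulder, elbow, wrist.
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# A fully extended arm: shoulder, elbow, wrist on one line -> 180 degrees.
print(round(joint_angle((0, 0), (1, 0), (2, 0))))  # 180
# A right-angle bend at the elbow -> 90 degrees.
print(round(joint_angle((0, 0), (1, 0), (1, 1))))  # 90
```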

[0063] In this embodiment, the target image input to the image recognition neural network is first detected and recognized, so as to obtain human body key point information such as the positions of the human body key points and the angles between the human body key poi...

Embodiment 2

[0071] An embodiment of the present invention provides a method for generating a neural network for action recognition, in which human body key point information is fused to produce deformable convolution kernels. As shown in Figure 2, the method includes:

[0072] S21: Detect the target image using a human body pose estimation algorithm to obtain human body key point information.

[0073] The human body key point information includes position information of the human body key points and/or angle information between them, and may cover the positions and/or angles of multiple key points. The key points may be points at the joints of the human body, or points at key parts of the limbs. For example, the human body key points can be: top of head, neck, left shoulder, right shoulder, left elbow, right elbow, left ha...

Embodiment 3

[0089] This embodiment provides an application example based on the above-mentioned method for generating a neural network for action recognition. In an implementation manner, the initial convolutional neural network is a two-dimensional convolutional neural network.

[0090] Preferably, the action recognition method using the two-dimensional deformed convolutional neural network may include: first, detecting the target image to obtain human body key point information; next, generating a feature vector from the key point information; then, from the feature vector, obtaining a spatial-dimension offset vector and a time-dimension offset vector for the two-dimensional convolution kernels of the two-dimensional convolutional neural network; then, spatially deforming the two-dimensional convolution kernels according to the spatial-dimension offset vector. Then, the deformed convolutional neur...
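The spatial-dimension offsets described above shift where each convolution-kernel tap samples the input. The following sketch shows that mechanism for a single output location, using integer offsets to stay self-contained; real deformable convolutions use fractional offsets with bilinear interpolation, and this is an illustration of the general technique, not the patented network.

```python
def deformable_sample(image, center, offsets, weights):
    """One deformable-convolution output value at `center`.

    Each tap of a 3x3 kernel samples at its default grid position plus a
    per-tap (dy, dx) offset, then the weighted sum is taken. Integer
    offsets keep the sketch simple; real implementations interpolate.
    """
    grid = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    cy, cx = center
    total = 0.0
    for (gy, gx), (oy, ox), w in zip(grid, offsets, weights):
        total += w * image[cy + gy + oy][cx + gx + ox]
    return total

image = [[float(10 * r + c) for c in range(5)] for r in range(5)]
weights = [1.0 / 9.0] * 9     # a simple averaging kernel
no_shift = [(0, 0)] * 9       # zero offsets -> ordinary 3x3 convolution
shifted = [(1, 0)] * 9        # every tap sampled one row lower

print(round(deformable_sample(image, (2, 2), no_shift, weights), 6))  # 22.0
print(round(deformable_sample(image, (2, 2), shifted, weights), 6))   # 32.0
```

With zero offsets the result is the ordinary 3x3 average around (2, 2); with every tap shifted one row down, the same kernel effectively "looks" at a different region, which is how offsets derived from body key points could steer the receptive field toward the person.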



Abstract

The invention provides an action recognition method, a neural network generation method therefor, a device, and electronic equipment, relating to the technical field of image recognition. The neural network generation method for action recognition includes: detecting a target image to obtain human body key point information; obtaining convolution kernel offset information according to the key point information; and generating a deformed convolutional neural network from an initial convolutional neural network according to the convolution kernel offset information. This solves the technical problem that existing image recognition neural networks have low recognition ability for action recognition.

Description

Technical Field

[0001] The present invention relates to the technical field of image recognition, and in particular to an action recognition method and a neural network generation method, device and electronic equipment thereof.

Background

[0002] At present, action recognition, as an important basis for automatic video analysis, plays an important role in a series of application scenarios such as intelligent monitoring, new retail, human-computer interaction, and education.

[0003] For example, in a security monitoring scenario, if abnormal behaviors such as pickpocketing, lock picking, and fighting can be reliably identified, this helps reduce manual monitoring costs and maintain public order; in the new retail field, action recognition helps to better understand user behavior, automatically analyze customer preferences, and improve the user experience.

[0004] However, the current action recognition neural network mainly focuses on tr...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V40/20; G06V10/82; G06N3/04
CPC: G06V40/20; G06N3/045
Inventors: 张弛, 吴骞
Owner: BEIJING KUANGSHI TECH CO LTD