
Human body action recognition method

A human action recognition and image-processing technology, applied in character and pattern recognition, instruments, biological neural network models, etc., that addresses the problem of low action recognition accuracy.

Publication status: Inactive. Publication date: 2020-04-24.
SHANDONG SYNTHESIS ELECTRONICS TECH

AI Technical Summary

Problems solved by technology

[0003] Aiming at the defects of the prior art, the present invention provides a human action recognition method that solves the problem of low action recognition accuracy in large scenes with small targets and complex backgrounds, and at the same time achieves accurate action localization and action classification in continuous videos of arbitrary length.



Examples


Embodiment 1

[0070] This embodiment is aimed mainly at large scenes and small targets: by preprocessing the training and test data, it reduces the impact of complex backgrounds on the model's detection accuracy and improves the model's action recognition accuracy. At the same time, only a single three-dimensional convolutional deep learning model is used to detect and accurately localize actions in continuous videos of any length, reducing the amount of computation.
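As a rough illustration of this idea, the sketch below runs one 3D-convolutional classifier over a video of arbitrary length using a sliding clip window. The model architecture, clip length, stride, and class count are illustrative assumptions, not the patent's disclosed network.

```python
# Hypothetical sketch: sliding-window inference with a single 3D CNN over an
# arbitrarily long video. All hyperparameters here are assumptions.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Minimal 3D-convolutional classifier (stand-in for the patent's model)."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),  # (B, 3, T, H, W) -> (B, 16, T, H, W)
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        x = self.features(clips).flatten(1)              # (B, 16)
        return self.classifier(x)

def detect_actions(frames: torch.Tensor, model: nn.Module,
                   clip_len: int = 16, stride: int = 8):
    """Slide a fixed-length clip window over a (T, C, H, W) frame tensor and
    return (start_frame, end_frame, predicted_class) per window, giving a
    coarse temporal localization of each action."""
    model.eval()
    results = []
    with torch.no_grad():
        for start in range(0, frames.shape[0] - clip_len + 1, stride):
            clip = frames[start:start + clip_len]         # (T, C, H, W)
            clip = clip.permute(1, 0, 2, 3).unsqueeze(0)  # (1, C, T, H, W)
            pred = model(clip).argmax(dim=1).item()
            results.append((start, start + clip_len, pred))
    return results
```

Because a single model scores every window, no separate proposal network is needed, which matches the stated goal of reducing computation; the window stride trades localization granularity against cost.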

[0071] As shown in Figure 1, this embodiment includes the following steps:

[0072] Step 1: image preprocessing.

[0073] Decode the video and preprocess each frame. The preprocessing includes the following steps:

[0074] 1) Minimum Neighborhood Selection

[0075] For a two-dimensional image, the minimum neighborhood size is 9 pixels; that is, a pixel together with its 8 surrounding pixels is taken as the minimum filtering neighborhood. Within the neighborhood window of the pixel at (i, j), the selection of i ...
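A minimal sketch of this 3×3 minimum-neighborhood filter follows. The border handling (replicate padding) is an assumption, since the patent text is truncated at this point:

```python
# Each output pixel is the minimum over the pixel itself and its 8 neighbors
# (a 3x3 minimum filter). Border pixels are replicate-padded (an assumption).
# scipy.ndimage.minimum_filter(image, size=3) computes an equivalent result.
import numpy as np

def minimum_neighborhood_filter(image: np.ndarray, width: int = 3) -> np.ndarray:
    """Apply a width x width minimum filter to a 2-D grayscale image."""
    pad = width // 2
    padded = np.pad(image, pad, mode="edge")   # replicate border pixels
    out = np.empty_like(image)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            # Window centered at (i, j) in the original image.
            out[i, j] = padded[i:i + width, j:j + width].min()
    return out
```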



Abstract

The invention discloses a human body action recognition method. According to the method, firstly, minimum neighborhood construction and filtering preprocessing are carried out on an image; then image channel transformation, target contour enhancement, and differential image extraction are carried out; threshold processing and foreground image processing are carried out on the foreground image; and finally model training, or action recognition and action positioning, are carried out on the basis of a three-dimensional convolutional network. The method solves the problem that existing action recognition methods lose detection precision in large scenes, on small targets, and against complex backgrounds; at the same time it realizes action detection and action positioning in any continuous, unbounded video stream, improving the precision of human body motion recognition, the robustness across different application scenes, and the general applicability of the model.
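To make the middle of this pipeline concrete, here is a hedged sketch of the differential-image extraction and threshold processing steps named above, using OpenCV. The threshold value and the morphological cleanup step are illustrative assumptions, not parameters taken from the patent:

```python
# Frame differencing plus binary thresholding to obtain a foreground mask.
# Threshold value and morphological opening are assumptions for illustration.
import cv2
import numpy as np

def extract_foreground(prev_frame: np.ndarray, curr_frame: np.ndarray,
                       thresh: int = 25) -> np.ndarray:
    """Return a binary foreground mask from two consecutive BGR frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)             # differential image
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Remove speckle noise from the mask (assumed cleanup step).
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask
```

Feeding such masks (or masked frames) to the 3D convolutional network would suppress static, complex backgrounds so that motion features dominate, consistent with the problem the abstract says the preprocessing addresses.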

Description

Technical field

[0001] The invention relates to a human body action recognition method, belonging to the technical field of human body action recognition.

Background technique

[0002] Action recognition performs action classification by extracting action features from continuous video frames, helping to avert potentially dangerous behavior in practice; it has a wide range of practical applications and has therefore long been an active research direction in computer vision. Existing deep-learning-based action recognition methods achieve high classification accuracy in small scenes with large targets. However, in real-time monitoring with complex (noisy) backgrounds and small targets, existing human action recognition methods suffer from low recognition accuracy and large numbers of false positives and false negatives.

Contents of the invention

[0003] Aiming at the defects of the prior art, the present invention provides a human action recog...

Claims


Application Information

IPC (8): G06K9/00; G06K9/40; G06K9/62; G06N3/04
CPC: G06V40/20; G06V10/30; G06N3/045; G06F18/214
Inventors: 高朋, 许野平, 刘辰飞, 陈英鹏, 张朝瑞, 席道亮
Owner: SHANDONG SYNTHESIS ELECTRONICS TECH