
Human behavior recognition method based on multi-feature fusion

A multi-feature fusion recognition technology applied in the field of human behavior recognition. It addresses problems such as the inability to effectively capture temporal information, cumbersome operation, and poor recognition performance, with the effect of improving the accuracy, speed, and practicality of behavior recognition and enabling fast matching-based recognition.

Active Publication Date: 2020-05-05
深圳市烨嘉为技术有限公司

AI Technical Summary

Problems solved by technology

[0003] At present, human behavior recognition methods can be divided into sensor-based and vision-based approaches. Sensor-based behavior recognition requires wearing corresponding sensors on the joints of the human body, which is cumbersome, inflexible, and gives a poor user experience, so it can only be applied in limited scenarios. Vision-based behavior recognition can be further divided into recognition from single-frame images and recognition from video. Recognition from a single-frame image cannot effectively capture the temporal information of a behavior, so its recognition performance is poor; recognition from video can exploit the spatiotemporal information in the video, so its accuracy is relatively high. Current video behavior recognition mainly relies on joint skeleton features, recurrent neural network models, 3D convolutional network learning models, and the like. However, the mainstream deep learning methods have high algorithmic complexity and demanding hardware requirements. Moreover, because human behaviors are fairly random and unpredictable in duration and spatial posture, the posture and movement speed of the same behavior may differ, and the range of motion of different people also varies. Effective unsupervised or semi-supervised learning methods are still lacking, and these approaches remain extremely dependent on massive amounts of behavioral data.



Examples


Embodiment Construction

[0025] To facilitate the understanding of those skilled in the art, the present invention will be further described below in conjunction with the accompanying drawings.

[0026] As shown in Figure 1, the present invention discloses a multi-feature fusion human behavior recognition method comprising the following steps: using a camera to collect a human behavior video, extracting the foreground image of each frame, and performing hole filling and interference filtering to obtain a sequence of human silhouette images; calculating the similarity between adjacent frames in the image sequence to obtain, for each frame, a weight representing the behavior posture; obtaining an action energy map representing the behavior process through a weighted average of each frame in the human silhouette image sequence and its corresponding weight; extracting the Zernike moments, gray histogram and texture features of the action energy map to form a multi-dimensional feature fusion vector containing the spatiotemporal characteristics of the behavior; constructing feature vector template libraries for different standard behaviors; and, during recognition, extracting the feature vector of the behavior to be identified from the video to be identified, matching it one by one against the feature vectors of the standard behavior template library, and determining the behavior type from the matching results.
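As a hedged illustration of the preprocessing and energy-map steps described above, the minimal Python sketch below uses OpenCV and NumPy as assumed libraries. The patent does not name a concrete foreground-extraction algorithm, structuring-element size, or similarity measure; MOG2 background subtraction, a 5x5 elliptical kernel, and a Jaccard-based adjacent-frame similarity are illustrative assumptions, not the claimed implementation.

```python
import cv2
import numpy as np

def silhouette_sequence(video_path):
    """Per-frame binary human silhouettes via background subtraction plus
    morphological hole filling and interference (speckle) filtering.
    MOG2 and the 5x5 elliptical kernel are assumptions for illustration."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    silhouettes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = subtractor.apply(frame)
        fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)  # fill small holes
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)   # suppress isolated noise
        silhouettes.append((fg > 0).astype(np.float32))
    cap.release()
    return silhouettes

def action_energy_map(silhouettes):
    """Weight each frame by how much the silhouette changes relative to the
    previous frame (1 - Jaccard similarity is an assumed reading of the
    'similarity between adjacent frames' step), then take the weighted average."""
    weights = np.ones(len(silhouettes), dtype=np.float32)
    for i in range(1, len(silhouettes)):
        inter = np.logical_and(silhouettes[i] > 0, silhouettes[i - 1] > 0).sum()
        union = np.logical_or(silhouettes[i] > 0, silhouettes[i - 1] > 0).sum() + 1e-6
        weights[i] = 1.0 - inter / union  # larger pose change -> larger weight
    weights /= weights.sum()
    return sum(w * s for w, s in zip(weights, silhouettes))
```

Under this reading, frames in which the silhouette changes more contribute more to the action energy map, which is one plausible way to encode the temporal variation of a behavior in a single image.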



Abstract

The invention relates to a human behavior recognition method based on multi-feature fusion, which comprises the following steps: acquiring a human behavior video with a camera, extracting the foreground image of each frame, and carrying out hole filling and interference filtering to obtain a human silhouette image sequence; calculating the similarity between adjacent frames in the image sequence to obtain, for each frame, a weight representing the behavior posture; obtaining an action energy map representing the behavior process through a weighted average of each frame in the human silhouette image sequence and its corresponding weight; extracting the Zernike moments, gray histogram and texture features of the action energy map to form a multi-dimensional feature fusion vector containing the spatiotemporal characteristics of the behavior; constructing feature vector template libraries of different standard behaviors; and, during recognition, extracting the feature vector of the behavior to be identified from the video to be identified, matching it one by one against the feature vectors of the standard behavior template library, and determining the behavior type according to the matching results, thereby realizing accurate recognition of human behaviors. By constructing the action energy map to represent the temporal variation and spatial posture characteristics of human behaviors, the method can improve the accuracy and real-time performance of behavior recognition and has practical value.
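The feature-fusion and matching steps of the abstract can be sketched as follows, assuming mahotas for Zernike moments, scikit-image for gray-level co-occurrence (GLCM) texture statistics, and OpenCV for the histogram. The patent does not specify the texture descriptor, moment degree, histogram bin count, or matching metric, so these are illustrative choices rather than the claimed method.

```python
import cv2
import numpy as np
import mahotas
from skimage.feature import graycomatrix, graycoprops

def fused_feature_vector(energy_map):
    """Fuse Zernike moments, a gray histogram and texture descriptors of the
    action energy map into one vector. GLCM statistics stand in for the
    unspecified 'texture features'; degree and bin counts are assumptions."""
    img = cv2.normalize(energy_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Zernike moments: rotation-invariant shape description of the energy map
    zernike = mahotas.features.zernike_moments(img, radius=min(img.shape) // 2, degree=8)
    # Normalized 32-bin gray histogram
    hist = cv2.calcHist([img], [0], None, [32], [0, 256]).ravel()
    hist /= hist.sum() + 1e-6
    # Gray-level co-occurrence texture statistics
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = np.hstack([graycoprops(glcm, p).ravel()
                         for p in ("contrast", "homogeneity", "energy", "correlation")])
    return np.hstack([zernike, hist, texture])

def recognize(feature, template_library):
    """Match the query vector one by one against the standard-behavior templates;
    nearest Euclidean distance is an assumed matching criterion."""
    return min(template_library,
               key=lambda label: np.linalg.norm(feature - template_library[label]))
```

A template library would map each standard behavior label to the fused vector of its standard action energy map; nearest-distance matching then assigns the label of the closest template to the behavior to be identified.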

Description

Technical Field

[0001] The invention relates to a computer vision and image processing method, and in particular to a multi-feature fusion human behavior recognition method.

Background

[0002] Human behavior recognition is an important branch of computer vision. It refers to the use of pattern recognition, machine learning, and other methods to automatically analyze and recognize human actions in an unknown video. It can be widely applied in real-life fields such as intelligent security, traffic management, intelligent robots, intelligent care, and entertainment and leisure.

[0003] At present, human behavior recognition methods can be divided into sensor-based and vision-based approaches. Sensor-based behavior recognition requires wearing corresponding sensors on the joints of the human body, which is cumbersome, inflexible, and gives a poor user experience, so it can only be applied in limited scenarios. Vision-based behavior recognition can...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00; G06K9/40; G06K9/46
Inventor: 刁思勉, 钟震宇, 雷欢, 谭鹏辉, 李娜, 李锡康
Owner: 深圳市烨嘉为技术有限公司