
A Human Behavior Recognition Method Based on Depth Video Sequence

A depth-video-sequence recognition technology, applied in the field of computer pattern recognition, which solves the problems of reduced recognition rates, incorrectly obtained representation coefficients, and local descriptors that cannot accurately capture detailed information, so as to achieve strong dictionary expressiveness and an improved recognition rate.

Active Publication Date: 2018-03-09
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0006] Human behavior recognition methods based on surface normal vectors are currently the most popular, but they have two problems: (1) when the normal vectors are used to construct a descriptor, the information is extracted from the spatio-temporal neighborhood of a point at a single layer, so the descriptor cannot accurately represent local detail; (2) when classifying the video sequence of the behavior to be recognized, the atoms of an over-complete dictionary are used to represent it; if video sequences of different behaviors share similar features, those shared features are represented by the same atoms, and the incorrectly obtained representation coefficients reduce the recognition rate of the subsequent classification.



Examples


Embodiment Construction

[0011] As shown in figure 1, this human behavior recognition method based on a depth video sequence computes the four-dimensional normal vectors of all pixels in the video sequence and, by constructing a spatio-temporal pyramid model of the behavior sequence, extracts low-level features from pixels in different spatio-temporal layers. A group sparse dictionary is learned from the low-level features to obtain their sparse codes, and spatial average pooling together with temporal max pooling integrates the codes into high-level features that serve as the descriptor of the final behavior sequence.
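The embodiment gives no code, but its first step can be illustrated with a short sketch. It is a minimal reading of the four-dimensional normal computation, assuming the depth video is stored as a NumPy array of shape (T, H, W) and that the normal of the depth surface z = f(x, y, t) is taken as (-dz/dx, -dz/dy, -dz/dt, 1) before normalization; the function name and array layout are illustrative, not taken from the patent.

```python
import numpy as np

def depth_4d_normals(depth_video: np.ndarray) -> np.ndarray:
    """Estimate a 4D surface normal at every pixel of a depth video.

    depth_video : float array of shape (T, H, W) holding depth values z(x, y, t).
    Returns     : array of shape (T, H, W, 4) of unit-length 4D normals.
    """
    depth = depth_video.astype(np.float64)
    # np.gradient returns the derivatives along each axis in order (t, y, x).
    dz_dt, dz_dy, dz_dx = np.gradient(depth)
    ones = np.ones_like(depth)
    normals = np.stack([-dz_dx, -dz_dy, -dz_dt, ones], axis=-1)
    # Keep only the orientation by normalizing each normal to unit length.
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals
```

The low-level features would then be formed by aggregating these normals inside the cells of the spatio-temporal pyramid described above.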

[0012] The present invention constructs a spatio-temporal pyramid model that purposefully retains information from the multi-layer spatio-temporal neighborhood of each local descriptor. At the same time, because a group sparse dictionary is used to encode the low-level features, interference from similar information shared by different categories is avoided, making the learned dictionary more expressive and improving the recognition rate.
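The patent text does not specify how the group sparse dictionary is optimized. As a rough, non-authoritative stand-in, the sketch below learns one sub-dictionary per behavior class with scikit-learn and stacks them, so that each group of atoms is tied to a single class before coding; the class-wise layout, parameter values, and function names are assumptions for illustration, not the patent's formulation.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def learn_class_grouped_dictionary(features_per_class, atoms_per_class=64, alpha=1.0):
    """Learn one sub-dictionary per class and stack them into a grouped dictionary.

    features_per_class : list with one (n_samples_c, n_features) array per class.
    Returns the stacked dictionary, shape (n_classes * atoms_per_class, n_features).
    """
    sub_dicts = []
    for feats in features_per_class:
        learner = DictionaryLearning(n_components=atoms_per_class, alpha=alpha,
                                     max_iter=200, transform_algorithm="lasso_lars")
        learner.fit(feats)
        sub_dicts.append(learner.components_)  # (atoms_per_class, n_features)
    return np.vstack(sub_dicts)

def encode_low_level_features(features, dictionary, alpha=1.0):
    """Sparse-code low-level features (n_samples, n_features) against the dictionary."""
    return sparse_encode(features, dictionary, algorithm="lasso_lars", alpha=alpha)
```

A faithful implementation of the group sparse idea would add a group penalty (for example an l2,1 term over the per-class blocks of coefficients) so that each sample is encoded by atoms from as few groups as possible, which is what suppresses the between-category interference described above.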


Abstract

The invention discloses a human behavior recognition method based on a depth video sequence. The method computes the four-dimensional normal vectors of all pixels in the video sequence and, by constructing a spatio-temporal pyramid model of the behavior sequence, extracts low-level features from pixels in different spatio-temporal layers. A group sparse dictionary is learned from the low-level features to obtain their sparse codes, and spatial average pooling and temporal max pooling integrate the codes into high-level features that serve as the descriptor of the final behavior sequence. This descriptor effectively preserves the spatio-temporal multi-resolution information of human behavior, and by eliminating the similar content shared by different behavior categories, a sparse dictionary with stronger expressive power is obtained, effectively improving the behavior recognition rate.
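As a concrete reading of the pooling step in the abstract, the sketch below assumes the sparse codes have already been arranged by frame and spatial cell into an array of shape (T, n_cells, K); this layout and the function name are assumptions for illustration.

```python
import numpy as np

def pool_codes(codes: np.ndarray) -> np.ndarray:
    """Integrate per-cell sparse codes into one high-level descriptor.

    codes : array of shape (T, n_cells, K) -- sparse codes of the low-level
            features, grouped by frame (T) and spatial cell (n_cells).
    """
    per_frame = codes.mean(axis=1)      # spatial average pooling -> (T, K)
    descriptor = per_frame.max(axis=0)  # temporal max pooling    -> (K,)
    return descriptor

# e.g. 30 frames, 16 spatial cells, a 256-atom dictionary
rng = np.random.default_rng(0)
print(pool_codes(rng.random((30, 16, 256))).shape)  # (256,)
```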

Description

Technical Field

[0001] The invention belongs to the technical field of computer pattern recognition, and in particular relates to a human behavior recognition method based on depth video sequences.

Background

[0002] Vision is an important way for human beings to observe and understand the world. With the continuous improvement of computer processing power, we hope that computers can acquire part of the human visual function, helping or even replacing human eyes and brains in observing and perceiving the outside world. With the improvement of computer hardware and the emergence of computer vision technology, this expectation may become a reality. Human behavior recognition has long been a research hotspot in pattern recognition, computer vision, and artificial intelligence. The purpose of video-based human behavior recognition is to understand and recognize individual human actions, interactive movement...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00, G06K9/66
CPC: G06V40/20, G06F18/217
Inventor: 李承锦, 孙艳丰, 胡永利, 张坤
Owner: BEIJING UNIV OF TECH