Human body behavior recognition method based on depth video sequence

A depth-video-sequence recognition method in the field of computer pattern recognition. It addresses the problems of a low recognition rate, erroneous representation coefficients, and the inability of existing descriptors to accurately express the detailed information of local features, thereby improving the recognition rate through a dictionary with stronger expressive power.

Active Publication Date: 2015-01-21
BEIJING UNIV OF TECH
Cites: 1 · Cited by: 16

AI Technical Summary

Problems solved by technology

[0006] Human behavior recognition based on normal vectors is currently the most popular approach. It suffers from the following two problems: (1) when normal vectors are used to construct descriptors, because extraction is based only on the spatio-temporal neighborhood of a point at a single layer, the descriptors cannot accurately represent the detailed ...




Embodiment Construction

[0011] As shown in Figure 1, this human behavior recognition method based on depth video sequences first calculates the four-dimensional normal vectors of all pixels in the video sequence. It then builds a spatio-temporal pyramid model of the behavior sequence over different spatio-temporal domains to extract low-level features of the pixels at different layers. A group sparse dictionary is learned from these low-level features to obtain their sparse codes, and the codes are aggregated with spatial average pooling and temporal max pooling, yielding high-level features that serve as the descriptor of the final behavior sequence.
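As a rough illustration of the first step (a sketch under assumptions, not the patent's actual implementation), a four-dimensional normal vector per pixel can be estimated from the gradients of the depth function d(x, y, t), treating the depth video as a surface (x, y, t, d(x, y, t)) with normal n ∝ (-∂d/∂x, -∂d/∂y, -∂d/∂t, 1):

```python
import numpy as np

def normals_4d(depth_video):
    """Estimate 4D surface normals for a depth video.

    depth_video: array of shape (T, H, W) holding depth values d(x, y, t).
    Returns an array of shape (T, H, W, 4) with unit normals
    n = (-dd/dx, -dd/dy, -dd/dt, 1) / ||.||.
    """
    # np.gradient over a 3D array returns derivatives along axes 0, 1, 2,
    # i.e. time, vertical (y), horizontal (x) for this layout.
    dd_dt, dd_dy, dd_dx = np.gradient(depth_video.astype(float))
    ones = np.ones_like(depth_video, dtype=float)
    n = np.stack([-dd_dx, -dd_dy, -dd_dt, ones], axis=-1)
    # The trailing 1 guarantees a nonzero norm everywhere.
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# Tiny synthetic check: a planar depth ramp d = 2x has a constant normal
# proportional to (-2, 0, 0, 1).
video = np.tile(2.0 * np.arange(5), (4, 3, 1))  # shape (T=4, H=3, W=5)
n = normals_4d(video)
print(n.shape)  # (4, 3, 5, 4)
```

The array layout (T, H, W) and the function name are assumptions made for the sketch; the patent does not specify a storage order.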

[0012] The present invention constructs a spatio-temporal pyramid model that specifically retains information from multiple spatio-temporal layers of the local descriptors. At the same time, because a group sparse dictionary is used to encode the low-level features, the interference from similar content shared by different categories ...
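The group-sparse encoding step can be sketched as follows. This is purely illustrative: the solver (ISTA-style proximal gradient), the regularizer weight, and all names here are assumptions, not the patent's algorithm. It minimizes a least-squares fit with an ℓ2,1 group penalty, which drives whole groups of coefficients to zero together:

```python
import numpy as np

def group_sparse_code(x, D, groups, lam=0.1, n_iter=200):
    """Proximal-gradient solver for group-sparse coding:
        min_a 0.5 * ||x - D @ a||^2 + lam * sum_g ||a_g||_2
    x: (d,) signal; D: (d, k) dictionary; groups: index arrays partitioning range(k).
    """
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - step * grad
        for g in groups:  # block soft-thresholding, group by group
            norm_g = np.linalg.norm(z[g])
            z[g] = 0.0 if norm_g == 0 else max(0.0, 1 - step * lam / norm_g) * z[g]
        a = z
    return a

# Toy demo with an identity dictionary and two groups: each group is
# shrunk toward zero as a block, and the inactive group vanishes entirely.
D = np.eye(4)
groups = [np.array([0, 1]), np.array([2, 3])]
a = group_sparse_code(np.array([1.0, 1.0, 0.0, 0.0]), D, groups, lam=0.1)
```

For the identity dictionary the problem separates per group, so the solution is plain block shrinkage: the active group keeps its direction but loses magnitude, while the zero group stays exactly zero, which is the behavior that suppresses atoms shared across categories.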



Abstract

The invention discloses a human body behavior recognition method based on a depth video sequence. The method comprises the following steps: four-dimensional normal vectors of all pixel points in the video sequence are calculated; low-level features of the pixel points at different layers are extracted by building spatio-temporal pyramid models of the behavior sequence over different spatio-temporal domains; a group sparse dictionary is learned from the low-level features, so that their sparse codes are obtained; and the codes are aggregated through spatial average pooling and temporal max pooling, so that high-level features are obtained and serve as descriptors of the final behavior sequence. These descriptors effectively preserve the spatio-temporal multi-resolution information of human body behaviors; meanwhile, by eliminating similar content shared by different behavior categories, a sparse dictionary with stronger expressive power is obtained, so that the behavior recognition rate is effectively increased.
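The pooling stage in the abstract can be sketched as a minimal illustration, assuming the sparse codes are arranged as a (frames × spatial cells × dictionary atoms) array; that layout, and the function name, are assumptions rather than the patent's exact data structure:

```python
import numpy as np

def pool_codes(codes):
    """Aggregate sparse codes of shape (T, N_cells, K):
    spatial average pooling over the N_cells cells of each frame,
    then temporal max pooling over the T frames, giving a (K,) descriptor.
    """
    spatial = codes.mean(axis=1)  # (T, K): average over spatial cells per frame
    return spatial.max(axis=0)    # (K,): maximum response over time

# 2 frames, 3 spatial cells, 4 dictionary atoms.
codes = np.arange(24, dtype=float).reshape(2, 3, 4)
desc = pool_codes(codes)
print(desc)  # [16. 17. 18. 19.]
```

Averaging within a frame smooths over where in the frame an atom fires, while taking the maximum over time keeps the strongest response of each atom across the whole sequence.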

Description

Technical field

[0001] The present invention belongs to the technical field of computer pattern recognition, and specifically relates to a human behavior recognition method based on a depth video sequence.

Background technique

[0002] Vision is an important way for humans to observe and understand the world. With the continuous improvement of computer processing power, we hope that computers can acquire part of the human visual function, helping or even replacing the human eye and brain in observing and perceiving external things. With the improvement of computer hardware processing capabilities and the emergence of computer vision technology, these expectations of computers may become a reality. Human behavior recognition has always been a research hotspot in the fields of pattern recognition, computer vision, and artificial intelligence. The purpose of video-based human behavior recognition is to understand and recognize individual human actions, the interactive movement between pe...

Claims


Application Information

IPC(8): G06K9/00; G06K9/66
CPC: G06V40/20; G06F18/217
Inventor: 李承锦, 孙艳丰, 胡永利, 张坤
Owner: BEIJING UNIV OF TECH