
A Video Human Behavior Recognition Method Based on Salient Trajectory Spatial Information

A recognition method using trajectory spatial information, applied in the field of computer vision. It addresses the problems of redundant and inaccurate trajectories, achieving stronger expressive ability, smaller error, and an improved recognition effect.

Active Publication Date: 2020-06-30
SUN YAT SEN UNIV


Problems solved by technology

However, dense trajectories ignore the detection of human motion regions in videos, so redundant and inaccurate trajectories are easily extracted in complex scenes.



Examples


Embodiment 1

[0063] As shown in Figure 1, the present invention first preprocesses the video, then filters the video's dense trajectory features by computing saliency to obtain salient trajectories, and then uses the trajectories' spatial information to perform two-layer clustering on the salient trajectories. After clustering is complete, a visual dictionary is used to obtain the representation of the video, and finally multiple kernel learning is used for learning and classification.
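The pipeline in paragraph [0063] can be sketched as a runnable skeleton. Every stage below is a toy stand-in with assumed names and shapes, shown only to make the data flow concrete; it is not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def preprocess(video):
    # e.g. intensity normalization before feature extraction (assumed step)
    return video.astype(np.float32) / 255.0

def extract_dense_trajectories(frames, n=200):
    # stand-in: each trajectory is 15 (x, y) points plus a descriptor vector
    return [{"xy": rng.integers(0, 8, (15, 2)), "feat": rng.random(32)}
            for _ in range(n)]

def filter_by_saliency(trajs):
    # stand-in for the saliency test; here, arbitrarily keep every other track
    return trajs[::2]

def two_layer_cluster(trajs, k=4):
    # stand-in for spatial two-layer clustering: random cluster labels
    return rng.integers(0, k, len(trajs))

def encode(trajs, labels, k=4):
    # visual-dictionary style normalized histogram over clusters
    return np.bincount(labels, minlength=k) / max(len(trajs), 1)

video = rng.integers(0, 256, (15, 8, 8))
frames = preprocess(video)
salient = filter_by_saliency(extract_dense_trajectories(frames))
hist = encode(salient, two_layer_cluster(salient))
# `hist` would then be classified with multiple kernel learning
```

Each stage only passes a list of trajectories (or a histogram) forward, which is the shape of the method as described: filter, cluster, encode, classify.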

[0064] As shown in Figure 2, which includes the original video frame, the frame's combined dynamic-and-static saliency, and the original frame with the salient trajectories obtained by filtering on that combined saliency. In the present invention, the trajectory length is set to 15, and trajectories whose saliency is less than 1.4 times the average saliency of the 15 frames the trajectory spans are filtered out....
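The filtering rule above can be sketched directly. The 15-frame trajectory length and the 1.4x threshold come from the text; the per-frame saliency maps, coordinate layout, and function name are assumptions for illustration.

```python
import numpy as np

TRAJ_LEN = 15   # trajectory length from the text
RATIO = 1.4     # threshold multiplier from the text

def keep_trajectory(traj_xy, start, saliency_maps):
    """traj_xy: (TRAJ_LEN, 2) integer (x, y) pixel coordinates.
    start: index of the trajectory's first frame.
    saliency_maps: (num_frames, H, W) per-frame saliency.
    A trajectory is kept when its mean saliency along the track is at least
    RATIO times the mean saliency of the 15 frames it spans."""
    frames = saliency_maps[start:start + TRAJ_LEN]              # (15, H, W)
    traj_sal = frames[np.arange(TRAJ_LEN), traj_xy[:, 1], traj_xy[:, 0]]
    return traj_sal.mean() >= RATIO * frames.mean()

# Toy example: a track through a high-saliency blob is kept,
# a track through the flat background is discarded.
sal = np.full((15, 8, 8), 0.1)
sal[:, 2:5, 2:5] = 1.0                       # salient moving region
fg = np.tile([[3, 3]], (15, 1))              # track inside the blob
bg = np.tile([[7, 7]], (15, 1))              # track in the background
print(keep_trajectory(fg, 0, sal), keep_trajectory(bg, 0, sal))  # True False
```

Comparing against the 15-frame average (rather than a fixed threshold) makes the filter adapt to how salient each video segment is overall.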



Abstract

The present invention provides a video human behavior recognition method based on salient trajectory spatial information. It redefines the saliency of trajectories in a video, effectively removing trajectories belonging to the background and to non-moving parts of the human body while keeping those of the significantly moving foreground; the remaining trajectories have smaller error and stronger expressive ability. In addition, the method distinguishes the moving parts of different human body parts and interactive objects, and exploits the spatial and semantic relationships between them through multiple kernel learning to improve the algorithm's recognition effect.
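The multiple-kernel idea mentioned in the abstract can be illustrated as follows: each body-part/object channel gets its own kernel over its histogram representation, and the kernels are combined as a weighted sum. The exponential chi-square kernel and the fixed weights here are assumptions; in actual MKL the weights would be learned jointly with the classifier.

```python
import numpy as np

def chi2_kernel(X, Y, gamma=1.0):
    """Exponential chi-square kernel between histogram rows of X and Y."""
    d = ((X[:, None, :] - Y[None, :, :]) ** 2 /
         (X[:, None, :] + Y[None, :, :] + 1e-12)).sum(-1)
    return np.exp(-gamma * d)

def combined_kernel(parts_X, parts_Y, weights):
    """Weighted sum of one kernel per channel (e.g. per body-part cluster)."""
    return sum(w * chi2_kernel(Xp, Yp)
               for w, Xp, Yp in zip(weights, parts_X, parts_Y))

rng = np.random.default_rng(0)
parts = [rng.random((4, 8)) for _ in range(3)]   # 3 channels, 4 videos each
w = np.array([0.5, 0.3, 0.2])                    # kernel weights (sum to 1)
K = combined_kernel(parts, parts, w)             # (4, 4) Gram matrix
```

The resulting Gram matrix can be handed to any kernel classifier (e.g. an SVM with a precomputed kernel), which is how spatial and semantic channels are fused without concatenating their features.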

Description

Technical Field

[0001] The invention relates to the field of computer vision, and more specifically, to a video human behavior recognition method based on salient trajectory spatial information.

Background

[0002] With the advancement of society, the video information generated in daily life has shown explosive growth, and people urgently need to analyze this video content to obtain valuable information. Vision-based human behavior recognition is an important and difficult topic in the field of video analysis, widely used in intelligent surveillance, video retrieval, and animation synthesis. In recent years, many scholars have conducted in-depth research on it, and research datasets have shifted from recorded videos taken in a single surveillance scene to lifelike videos taken in complex natural scenes. Since videos often contain shadows and complex moving backgrounds, and are affected by factors such as camera shake, human action recogn...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00 G06K9/46 G06K9/62
CPC: G06V40/20 G06V20/40 G06V10/50 G06V10/462 G06F18/23213 G06F18/24137
Inventor: 衣杨 胡攀 邓小康 张念旭 谢韬 郑镇贤
Owner: SUN YAT SEN UNIV