
Coarse-to-fine video target behavior identification method

A behavior recognition method, applied in the fields of character and pattern recognition, instruments, biological neural network models, etc.; it addresses problems such as excessive storage consumption and computational cost.

Publication Date: 2019-08-23 (status: Inactive)
国网江西省电力有限公司超高压分公司 +1

AI Technical Summary

Problems solved by technology

However, methods of this type lack the guidance of semantic information and attention mechanisms, and they require large-scale video datasets for training, which increases storage consumption and computational cost.




Detailed Description of the Embodiments

[0022]The present invention will be further described below in conjunction with the accompanying drawings. The following examples will help those skilled in the art to further understand the present invention, but do not limit the present invention in any form. It should be noted that those skilled in the art can make several modifications and improvements without departing from the concept of the present invention. These all belong to the protection scope of the present invention.

[0023] A coarse-to-fine video target behavior recognition method, which clusters similar behavior categories into the same coarse-grained category and trains a separate fine-grained classifier for each coarse-grained category, so that each fine-grained classifier learns the discriminative properties that distinguish the similar behaviors within its group. The feature representations used by the fine-grained classifiers are weighted with body-part information, and the feature representations of these ...
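As a purely illustrative aid, the following minimal Python sketch shows one way the coarse classifier and the per-group fine-grained classifiers described above could be combined by probability fusion. The group assignments, the classifier objects and their predict_proba() interface are assumptions made for the example and are not taken from the patent text.

import numpy as np

def coarse_to_fine_predict(feature, coarse_clf, fine_clfs, groups):
    """feature    : 1-D feature vector for one video sample
    coarse_clf : classifier over the coarse-grained categories
    fine_clfs  : dict {coarse_id: classifier over that group's behaviors}
    groups     : dict {coarse_id: list of global fine-grained label ids}"""
    p_coarse = coarse_clf.predict_proba(feature[None, :])[0]   # P(coarse | x)
    n_labels = sum(len(v) for v in groups.values())
    p_final = np.zeros(n_labels)
    for cid, fine_clf in fine_clfs.items():
        p_fine = fine_clf.predict_proba(feature[None, :])[0]   # P(fine | coarse, x)
        for local_idx, global_label in enumerate(groups[cid]):
            # probability fusion: the coarse score gates the fine score
            p_final[global_label] = p_coarse[cid] * p_fine[local_idx]
    return int(np.argmax(p_final))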



Abstract

The invention discloses a coarse-to-fine video target behavior identification method. The method comprises the steps of: first obtaining human-body key points by using a pose estimation algorithm or annotation information in the video, and cropping and scaling the different body parts of the human body; taking a deep neural network as a feature extraction network and extracting feature vectors of the different part regions; iteratively training a classifier with the extracted part feature vectors and searching for the optimal coarse classification of the behaviors; for the coarse classifier and each fine-grained classifier, selecting different parts to be cascaded with the global feature vector and training each classifier individually; and carrying out probability fusion on the classification results of the coarse-grained classifier and the fine-grained classifiers to obtain the final behavior recognition result. The method constructs a coarse-to-fine behavior recognition framework and uses cascading to train classifiers of different granularities on the feature expressions of different body parts in a targeted manner, thereby effectively reducing the probability of misclassifying similar behaviors and improving overall behavior recognition accuracy.
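To make the pipeline in the abstract more concrete, here is a hypothetical Python sketch of the part-cropping and feature-extraction steps. It assumes the human-body key points have already been converted into per-part bounding boxes and uses a torchvision ResNet-18 backbone as the feature extraction network; the abstract only says "deep neural network" and does not name a specific model, so these choices are assumptions.

import torch
import torchvision.transforms as T
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
backbone = resnet18(weights=weights)
backbone.fc = torch.nn.Identity()      # keep the 512-d pooled feature vector
backbone.eval()
preprocess = weights.transforms()      # resize and normalize to the network's input

def part_features(frame, part_boxes):
    """frame      : H x W x 3 uint8 RGB array (one video frame)
    part_boxes : dict {part_name: (x1, y1, x2, y2)} derived from key points
    returns    : dict {part_name: 512-d torch feature vector}"""
    feats = {}
    with torch.no_grad():
        for name, (x1, y1, x2, y2) in part_boxes.items():
            crop = frame[y1:y2, x1:x2]                      # crop the body part
            tensor = preprocess(T.ToPILImage()(crop))       # scale to the input size
            feats[name] = backbone(tensor.unsqueeze(0))[0]  # per-part feature vector
    return feats

The per-part vectors can then be concatenated (cascaded) with a global feature vector before being fed to the coarse or fine-grained classifiers, as the abstract describes.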

Description

Technical Field

[0001] The invention relates to the field of behavior recognition in videos, and in particular to a coarse-to-fine video target behavior recognition method.

Background Technique

[0002] Unlike image-based recognition and detection, video-based analysis of content and human behavior is currently a difficult and challenging task in visual understanding. As the basis of video abnormal-behavior detection, relational reasoning and deep content understanding, video human behavior recognition has attracted wide attention from researchers.

[0003] At present, the more mature behavior recognition schemes can be divided into two categories according to application scenario and information source: (1) template matching based on background modeling. This kind of method first segments the moving target from the scene in the video, that is, from the input static image, and uses the frame difference method or a background modeling method to segment the foreground ...
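For reference, the frame-difference foreground segmentation mentioned in paragraph [0003] can be sketched in a few lines. OpenCV is an assumed dependency here and the threshold value is arbitrary; the patent text does not prescribe any particular implementation.

import cv2

def frame_difference_mask(prev_frame, curr_frame, thresh=25):
    """Return a binary foreground mask by differencing consecutive frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)                # per-pixel change
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return cv2.medianBlur(mask, 5)                          # suppress isolated noise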


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62
CPC: G06V40/20; G06N3/045; G06F18/241
Inventors: 周其平, 刘伟伟, 钟幼平, 赖韵宇, 李文旦, 章武文, 胡睿哲, 陈振刚, 刘成庆, 温舜茜
Owner: 国网江西省电力有限公司超高压分公司