Action recognition method based on adaptive context region selection

An action recognition and context-selection technology, applied in the fields of computer vision and action recognition, which addresses problems such as degraded performance of action recognition methods and reduced recognition accuracy, achieving effective utilization of context, reduced risk, and improved efficiency.

Active Publication Date: 2020-05-26
TONGJI UNIV

AI Technical Summary

Problems solved by technology

The candidate regions may contain information unrelated to the action. For example, a bicycle in the same picture will have an adverse effect on recognizing the action of a running person in that picture, which degrades the performance of the entire action recognition method and reduces its accuracy.



Examples


Embodiment

[0058] As shown in figure 2, the present invention provides an action recognition method based on adaptive context region selection. The main purpose of the present invention is to use the spatial position information of the person to adaptively select, from the candidate regions generated by the network, the context that helps recognize the person's action. The method mainly includes the following four steps:

[0059] Step A: For a given single image, use the first four convolution blocks of the ResNet deep learning model to extract the feature map of the entire image;
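A minimal sketch of what Step A could look like in PyTorch, assuming a torchvision ResNet-50 backbone; the exact ResNet variant and the cut-off point (conv1 through layer3) are assumptions, since this excerpt only says "the first four convolution blocks of the ResNet deep learning model".

```python
import torch
import torchvision

# Sketch of Step A: extract a whole-image feature map with the first four
# convolution blocks of a ResNet (assumed: torchvision ResNet-50 truncated
# after layer3, i.e. the conv1..conv4_x stages).
resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")
backbone = torch.nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
    resnet.layer1, resnet.layer2, resnet.layer3,
)
backbone.eval()

with torch.no_grad():
    image = torch.randn(1, 3, 600, 800)   # placeholder input image tensor
    feature_map = backbone(image)         # shape: (1, 1024, 38, 50)
print(feature_map.shape)
```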

[0060] Step B: Input the feature map of the entire image and the person bounding box n of the action to be recognized into the adaptive context selection algorithm; for each person, generate candidate region bounding boxes and select the top N of them, taking the spatial relationship with the person into account, as context regions;
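The excerpt does not spell out the exact adaptive selection criterion, so the sketch below is only illustrative: it ranks candidate boxes by a hypothetical spatial-relationship score (overlap with the person box minus a penalty on center distance) and keeps the top N. The function name, the score formula and the 0.5 weighting are all assumptions.

```python
import numpy as np

def select_context_regions(person_box, candidate_boxes, top_n=3):
    """Sketch of Step B: rank candidate regions by an assumed spatial score
    with respect to the person box and keep the top N as context regions."""
    px1, py1, px2, py2 = person_box
    pcx, pcy = (px1 + px2) / 2.0, (py1 + py2) / 2.0
    p_diag = np.hypot(px2 - px1, py2 - py1)

    scores = []
    for (x1, y1, x2, y2) in candidate_boxes:
        # Intersection-over-union with the person box.
        ix1, iy1 = max(px1, x1), max(py1, y1)
        ix2, iy2 = min(px2, x2), min(py2, y2)
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = (px2 - px1) * (py2 - py1) + (x2 - x1) * (y2 - y1) - inter
        iou = inter / union if union > 0 else 0.0
        # Normalised distance between the box centers.
        ccx, ccy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        dist = np.hypot(ccx - pcx, ccy - pcy) / p_diag
        scores.append(iou - 0.5 * dist)   # assumed weighting, not from the patent

    order = np.argsort(scores)[::-1][:top_n]
    return [candidate_boxes[i] for i in order]
```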

[0061] Step C: According to the person bounding box n and the selected bounding box o...
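Step C is truncated here; following abstract step S3, it extracts features for the person box and the selected context boxes and scores every action category for each. The sketch below assumes RoIAlign pooling over the Step A feature map (stride 16, 7x7 output) and a single linear classifier per branch; none of these design details are stated in the excerpt.

```python
import torch
import torchvision

num_classes = 10                                   # hypothetical number of actions
person_head = torch.nn.Linear(1024 * 7 * 7, num_classes)
context_head = torch.nn.Linear(1024 * 7 * 7, num_classes)

def score_regions(feature_map, person_box, context_boxes, spatial_scale=1 / 16.0):
    """Sketch of Step C / abstract step S3: pool region features from the
    whole-image feature map and score each action category per region."""
    def pool(boxes):
        rois = torch.tensor(boxes, dtype=torch.float32)
        rois = torch.cat([torch.zeros(len(boxes), 1), rois], dim=1)  # batch index 0
        feats = torchvision.ops.roi_align(
            feature_map, rois, output_size=(7, 7), spatial_scale=spatial_scale)
        return feats.flatten(start_dim=1)

    person_scores = person_head(pool([person_box]))      # (1, num_classes)
    context_scores = context_head(pool(context_boxes))   # (N, num_classes)
    return person_scores, context_scores
```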



Abstract

The invention relates to an action recognition method based on adaptive context region selection, used for recognizing the actions of persons in an image, comprising the following steps: S1) extracting an overall feature map of the image to be identified, using the first four convolution blocks of a ResNet model, together with the person bounding box n of the person whose action is to be identified; S2) adaptively selecting context region bounding boxes for each person in the image according to the feature map and the person bounding box n; S3) extracting features of the person bounding box n and the context region bounding boxes, and computing the scores of the person and of the context regions for each action category; S4) judging the action category of the person in the image according to these scores, thereby completing the recognition of the person's action. Compared with the prior art, the method has the advantages of high recognition precision, high recognition speed, and the like.
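The abstract does not state how the person and context scores are combined in S4. A minimal sketch under the assumption of simple additive fusion (person score plus the best context score per category, followed by an argmax) is given below; the fusion rule is an assumption, not taken from the patent.

```python
import torch

def predict_action(person_scores, context_scores):
    """Sketch of step S4 under an assumed fusion rule.
    person_scores: (1, num_classes); context_scores: (N, num_classes)."""
    fused = person_scores + context_scores.max(dim=0, keepdim=True).values
    return fused.argmax(dim=1).item()   # index of the predicted action category
```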

Description

technical field
[0001] The invention relates to the technical fields of computer vision and action recognition, in particular to an action recognition method based on adaptive context region selection.
Background technique
[0002] For decades, action recognition has been an important research branch in the field of computer vision. Its research scope covers both image and video data, and related technologies are widely used in human-computer interaction, information retrieval, security monitoring and other fields.
[0003] Traditional action recognition mostly relies on hand-crafted features. In recent years, thanks to the rapid development of deep learning, action recognition methods based on deep neural network feature learning and extraction have proliferated. According to the features they extract and utilize, these methods can be divided into three categories: global feature-based methods, local feature-based methods, and context...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/20; G06N3/045
Inventor: 梁爽, 马文韬
Owner: TONGJI UNIV