
Behavior recognition method based on feature mapping and multilayer time interactive attention

A behavior recognition technology based on feature mapping and multi-layer temporal interactive attention, applied in the fields of computer vision and video processing. It addresses problems such as insufficient modeling of temporal dynamic information and neglect of the interdependence between different frames, which lead to poor behavior recognition, achieving the effect of improved recognition accuracy.

Active Publication Date: 2021-05-07
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to address the deficiencies of the above-mentioned prior art by proposing a behavior recognition method based on feature mapping and multi-layer temporal interactive attention. The method solves the problem that, in the prior art, insufficient modeling of temporal dynamic information and neglect of the interdependence between different frames lead to poor behavior recognition ability.




Embodiment Construction

[0034] The specific steps of the present invention are further described below with reference to figure 1.

[0035] Step 1. Generate a training set.

[0036] The sample set is composed of RGB videos from a video data set containing N behavior categories, where N > 50; each category contains at least 100 videos, and each video is labeled with one behavior category. Each video in the sample set is preprocessed to obtain its corresponding RGB images, and the RGB images of all preprocessed videos form the training set. Here, preprocessing means sampling 60 frames of RGB images at equal intervals from each video, scaling each frame to 256×340, and then cropping it, yielding 60 RGB frames of size 224×224 per video.
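The preprocessing in step 1 (equal-interval sampling of 60 frames, then cropping the rescaled 256×340 frames to 224×224) can be sketched as follows. This is an illustrative sketch only: NumPy arrays stand in for decoded video frames, the rescale step is assumed to have already happened, and a center crop is used, since the patent text does not specify the crop position.

```python
import numpy as np

def sample_frame_indices(num_frames_total, num_samples=60):
    # Equal-interval sampling of frame indices across the whole video.
    return np.linspace(0, num_frames_total - 1, num_samples).astype(int)

def center_crop(frame, size=224):
    # Crop a (H, W, 3) frame to (size, size, 3) around its center.
    h, w = frame.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return frame[top:top + size, left:left + size]

# A toy 300-frame video whose frames are assumed already scaled to 256x340.
video = np.zeros((300, 256, 340, 3), dtype=np.float32)
idx = sample_frame_indices(len(video))
clip = np.stack([center_crop(video[i]) for i in idx])
```

After this step, each video is represented by a `(60, 224, 224, 3)` clip, which matches the "60 frames of RGB images of size 224×224" described above.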

[0037] Step 2. Get the depth feature map.

[0038] Input each frame of RGB image in each video in the training set to the Inception-v2 network in turn, and output the size of each frame image in each vide...
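Step 2 feeds each preprocessed frame through the Inception-v2 backbone to obtain per-frame depth feature maps. The sketch below uses a hypothetical stand-in for the backbone; the output shape of 1024 channels at 7×7 spatial resolution is a typical Inception-style final-stage feature map size, assumed here for illustration rather than taken from the patent.

```python
import numpy as np

def backbone(frame, channels=1024, spatial=7):
    # Hypothetical stand-in for a pretrained Inception-v2 forward pass:
    # maps one 224x224x3 frame to a (channels, spatial, spatial) feature map.
    rng = np.random.default_rng(0)
    return rng.standard_normal((channels, spatial, spatial))

clip = np.zeros((60, 224, 224, 3), dtype=np.float32)   # one preprocessed video
feature_maps = np.stack([backbone(f) for f in clip])   # (60, 1024, 7, 7)
```

Stacking the 60 per-frame maps gives one depth feature tensor per video, which the later steps (feature mapping matrix, temporal interaction attention) operate on.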



Abstract

The invention discloses a behavior recognition method based on feature mapping and multi-layer temporal interactive attention, solving the problem in the prior art that insufficient modeling of temporal dynamic information, together with neglect of the interdependence between different frames, leads to poor behavior recognition. The method comprises the following implementation steps: (1) generating a training set; (2) acquiring depth feature maps; (3) constructing a feature mapping matrix; (4) generating a temporal interaction attention matrix; (5) generating a temporal interaction attention weighted feature matrix; (6) generating a multi-layer temporal interaction attention weighted feature matrix; (7) acquiring a feature vector for the video; and (8) performing behavior recognition on the video. By constructing the feature mapping matrix and introducing multi-layer temporal interaction attention, the method improves the accuracy of behavior recognition in video.
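Steps (4) and (5) model the interdependence between frames with a temporal interaction attention matrix applied to per-frame features. The patent's exact formulation is not shown in this excerpt, so the following is a generic sketch of frame-to-frame attention in that spirit: an affinity matrix between all pairs of frames, normalized per row, then used to reweight the frame features.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_interaction_attention(X):
    """X: (T, D) matrix of per-frame feature vectors.

    A[i, j] scores the interdependence between frames i and j;
    the result is the attention-weighted feature matrix (T, D).
    """
    T, D = X.shape
    A = softmax(X @ X.T / np.sqrt(D), axis=-1)  # (T, T) frame-frame attention
    return A @ X                                # each frame aggregates all frames

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 1024))             # 60 frames, 1024-dim features
Y = temporal_interaction_attention(X)
```

Stacking several such layers, as in step (6), lets each frame's representation repeatedly incorporate context from every other frame rather than from its temporal neighbors only.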

Description

technical field

[0001] The invention belongs to the technical field of video processing, and further relates to a behavior recognition method based on feature mapping and multi-layer temporal interactive attention in the technical field of computer vision. The invention can be used for human behavior recognition in video.

Background technique

[0002] Video-based human behavior recognition occupies an important position in the field of computer vision and has broad application prospects; it has already been applied in autonomous driving, human-computer interaction, video surveillance and other fields. The goal of human behavior recognition is to judge the category of human behavior in a video, which is essentially a classification problem. In recent years, with the development of deep learning, behavior recognition methods based on deep learning have been widely studied.

[0003] South China University of Technology disclosed a human behavior recognition met...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/049, G06N3/084, G06V40/23, G06N3/047, G06F18/2415, G06F18/241
Inventor 同鸣金磊董秋宇边放
Owner XIDIAN UNIV