Human body behavior recognition method of non-local double-flow convolutional neural network model

A convolutional neural network recognition method, applied in the field of computer vision image and video processing. It addresses problems such as low recognition accuracy caused by complex background environments and diverse human behaviors, achieves good training results, alleviates network overfitting, and yields good classification and recognition performance.

Inactive Publication Date: 2020-02-21
SHANGHAI MARITIME UNIVERSITY

AI Technical Summary

Problems solved by technology

[0003] To address the low recognition accuracy caused by complex background environments, diverse human behaviors, and high inter-action similarity in behavior videos, the present invention provides a human behavior recognition method based on a non-local two-stream convolutional neural network model. The designed model combines a traditional CNN with a non-local feature extraction module, and an A-Softmax loss function is used in the final classification of the two-stream, two-branch model to enlarge the inter-class distance and reduce the intra-class distance.
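The non-local feature extraction module mentioned above can be sketched as the standard embedded-Gaussian non-local operation, in which each position's output aggregates features from every other position weighted by pairwise similarity. This is a minimal numpy illustration, not the patent's exact implementation; the weight matrices `W_theta`, `W_phi`, `W_g`, `W_out` stand in for learned 1x1 convolutions and are hypothetical names.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, W_theta, W_phi, W_g, W_out):
    """Embedded-Gaussian non-local operation (sketch).

    x: (N, C) feature map flattened to N spatial positions with C channels.
    Returns x plus a residual, as in the standard non-local block, so the
    module can be dropped into an existing CNN without changing its shape.
    """
    theta = x @ W_theta                       # (N, C') query embedding
    phi   = x @ W_phi                         # (N, C') key embedding
    g     = x @ W_g                           # (N, C') value embedding
    attn  = softmax(theta @ phi.T, axis=-1)   # similarity of every position pair
    y     = attn @ g                          # aggregate features from all positions
    return x + y @ W_out                      # residual connection
```

Because the attention spans all positions, the block captures long-range dependencies that stacked local convolutions reach only slowly, which is the stated motivation for inserting it into both streams.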

Method used


Image

Smart Image Click on the blue labels to locate them in the text.
Viewing Examples
Smart Image
  • Human body behavior recognition method of non-local double-flow convolutional neural network model
  • Human body behavior recognition method of non-local double-flow convolutional neural network model
  • Human body behavior recognition method of non-local double-flow convolutional neural network model


Embodiment Construction

[0018] The invention provides a human body behavior recognition method based on a non-local two-stream convolutional neural network model. The overall network adopts a two-stream architecture, briefly illustrated in Figure 1: the framework mainly comprises a spatial-stream CNN and a temporal-stream CNN, used to extract the spatial appearance information and the temporal motion information of video samples, respectively. The two streams use identical convolutional-layer and fully-connected-layer structures and settings, and share their weight parameters.
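The two-stream arrangement described in [0018] can be sketched as two parallel classifiers whose softmax scores are fused. This toy numpy version replaces each CNN stream with a dense map over pooled features; the dimensions (2048-d features, 101 classes as in UCF101) and the averaging fusion are illustrative assumptions, not specifics from the patent. Since the text says the two streams share weight parameters, the same `W1`/`W2` are reused for both.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def stream_forward(x, W1, W2):
    """Toy stand-in for one CNN stream: conv stacks replaced by dense layers."""
    h = np.maximum(x @ W1, 0.0)   # ReLU hidden layer
    return h @ W2                 # class logits

# Hypothetical sizes: 2048-d pooled features, 101 action classes.
D, H, K = 2048, 256, 101
W1 = rng.normal(size=(D, H)) * 0.01   # shared between both streams,
W2 = rng.normal(size=(H, K)) * 0.01   # per the shared-weights statement

rgb_feat  = rng.normal(size=(1, D))   # spatial-stream input (appearance)
flow_feat = rng.normal(size=(1, D))   # temporal-stream input (motion)

p_spatial  = softmax(stream_forward(rgb_feat,  W1, W2))
p_temporal = softmax(stream_forward(flow_feat, W1, W2))
p_fused = (p_spatial + p_temporal) / 2.0   # late fusion by score averaging
pred = int(p_fused.argmax())
```

Averaging the two streams' class distributions is the simplest late-fusion choice; a learned fusion layer would be a straightforward substitute.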

[0019] As shown in Figure 1, the input video sample set is preprocessed to obtain RGB frames and optical-flow images (a single RGB frame and consecutive optical-flow frames), which are divided into a training set and a test set and fed to the spatial-stream CNN and the temporal-stream CNN for training and testing, respectively. An appropriate input size is selected for the RGB frames and optical-flow images, and an appropriate ...
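The preprocessing step in [0019] pairs a single RGB frame with a stack of consecutive optical-flow frames. The sketch below shows the conventional two-stream input layout, where L flow frames each contribute a horizontal and a vertical displacement channel, giving a 2L-channel tensor. The paragraph above is truncated, so the values L=10 and 224x224 crops are assumptions borrowed from the common two-stream convention, not figures from the patent.

```python
import numpy as np

def stack_optical_flow(flow_x_frames, flow_y_frames):
    """Stack L consecutive optical-flow frames into a 2L-channel input.

    flow_x_frames, flow_y_frames: lists of L arrays of shape (H, W) holding
    horizontal and vertical flow components. Interleaving x/y per time step
    follows the usual two-stream convention (an assumption here).
    """
    channels = []
    for fx, fy in zip(flow_x_frames, flow_y_frames):
        channels.extend([fx, fy])
    return np.stack(channels, axis=0)   # (2L, H, W)

L, H, W = 10, 224, 224                                  # assumed sizes
rgb_input  = np.zeros((3, H, W))                        # single RGB frame
flow_input = stack_optical_flow([np.zeros((H, W))] * L,
                                [np.zeros((H, W))] * L)  # (20, 224, 224)
```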



Abstract

The invention relates to a human body behavior recognition method based on a non-local two-stream convolutional neural network model. Two branch networks are improved on the basis of the two-stream convolutional neural network model: a non-local feature extraction module is added to both the spatial-stream CNN and the temporal-stream CNN to extract more comprehensive and clearer feature maps. The method deepens the network to a certain extent, effectively alleviates network overfitting, extracts non-local features of a sample, and denoises the input feature map, thereby addressing the low recognition accuracy caused by complex background environments, diverse human behaviors, and high action similarity in behavior videos. In the loss layer, the method trains with an A-Softmax loss function: on the basis of the softmax function, an m-fold constraint is imposed on the classification angle, and the weight W and bias b of the fully connected layer are constrained, so that the inter-class distance of samples becomes larger and the intra-class distance smaller. This yields better recognition precision and, ultimately, a deep learning model with stronger discrimination capability.
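The A-Softmax constraints the abstract describes (per-class weight normalisation, zero bias, and an m-fold angular margin on the target class) can be sketched as follows. This is a minimal numpy illustration of the SphereFace-style formulation; the monotonic surrogate psi(theta) = (-1)^k cos(m*theta) - 2k keeps the margin term decreasing over [0, pi]. Function names and the choice m=4 are illustrative.

```python
import numpy as np

def a_softmax_logits(x, W, label, m=4):
    """A-Softmax logit adjustment (sketch).

    Per the patent's description: each class weight column is L2-normalised
    (||W_j|| = 1), the bias b is fixed to 0, and the angle theta between the
    feature and its ground-truth class weight is multiplied by m.
    """
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)   # ||W_j|| = 1, b = 0
    x_norm = np.linalg.norm(x)
    cos_theta = (x @ Wn) / x_norm                        # cosine to each class
    logits = x_norm * cos_theta
    # Replace the target-class logit with the angular-margin version.
    theta = np.arccos(np.clip(cos_theta[label], -1.0, 1.0))
    k = int(np.floor(m * theta / np.pi))
    psi = ((-1) ** k) * np.cos(m * theta) - 2 * k        # monotone surrogate
    logits[label] = x_norm * psi
    return logits

def a_softmax_loss(x, W, label, m=4):
    """Cross-entropy over the margin-adjusted logits."""
    z = a_softmax_logits(x, W, label, m)
    z = z - z.max()
    return -z[label] + np.log(np.exp(z).sum())
```

With m = 1 the surrogate reduces to plain cosine, so the loss equals ordinary softmax cross-entropy; larger m shrinks the target-class logit, which is exactly the pressure that pulls samples of one class together and pushes classes apart.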

Description

technical field

[0001] The invention relates to computer vision image and video processing technology, and in particular to a human body behavior recognition method based on a non-local two-stream convolutional neural network model.

Background technique

[0002] Research on human behavior recognition aims to endow computers with vision abilities similar to those of human beings, so that computers can obtain information through a visual system as humans do: analyzing and processing human actions in video, and classifying and understanding human behavior by automatically tracking the global and local information of that behavior. Owing to complex background environments in action videos, the variety of human actions, and the high similarity between actions, two similar types of action may be grouped into one category, resulting in low human action recognition accuracy. Therefore, human action recognition remains a challenging task in comput...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/20, G06N3/047, G06N3/045, G06F18/2415, G06F18/241
Inventors: 周云, 陈淑荣
Owner: SHANGHAI MARITIME UNIVERSITY