
A method and system for deep video behavior recognition

A deep video behavior recognition method and system, applied in character and pattern recognition, instruments, computing, and related fields. It addresses the problem that treating feature channels equally ignores their different learning capabilities and weakens the expressive power of CNN convolutional features, and it achieves the effects of good geometric information and privacy.

Active Publication Date: 2021-03-23
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

In addition, convolutional features are multi-channel, and different channels correspond to different feature detectors. Ignoring the differing learning capabilities of the feature channels and treating them all equally may reduce the expressive power of CNN convolutional features.

Method used



Examples


Embodiment 1

[0061] In one or more embodiments, a deep video behavior recognition method that fuses convolutional neural networks with a channel and spatiotemporal interest point attention model is disclosed. As shown in Figure 1, the dynamic image sequence representation of the depth video is used as the input to the CNNs, and the channel and spatiotemporal interest point attention models are embedded after the CNN convolutional layers to optimize and adjust the convolutional feature maps. Finally, global average pooling is applied to the adjusted convolutional feature maps of the input depth video to generate a feature representation of the behavior video, which is fed into an LSTM network to capture the temporal information of the human behavior and classify it.
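The patent text here does not pin down the CNN backbone or the exact form of the two attention models, so the following is only a minimal PyTorch sketch of the described pipeline, assuming a ResNet-18 backbone, an SE-style channel attention and a 1x1-convolution spatial mask as stand-ins for the channel and spatiotemporal interest point attention models, global average pooling over the adjusted feature maps, and an LSTM over per-segment features. The class names, reduction ratio, and hidden size are illustrative assumptions, not the patented design.

```python
# Minimal sketch of the Embodiment 1 pipeline (not the patented implementation):
# per-segment dynamic images -> CNN features -> channel + spatial attention ->
# global average pooling -> LSTM over segments -> behavior classification.
import torch
import torch.nn as nn
import torchvision.models as models


class ChannelAttention(nn.Module):
    """SE-style channel re-weighting; a stand-in for the channel attention model."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # squeeze -> (B, C) channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)


class SpatialAttention(nn.Module):
    """1x1-conv spatial saliency mask; a stand-in for the spatiotemporal
    interest point attention described in the patent."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))  # (B,1,H,W) mask broadcast over C


class DeepVideoBehaviorNet(nn.Module):
    def __init__(self, num_classes, hidden=512):
        super().__init__()
        backbone = models.resnet18(weights=None)                    # assumed backbone
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])   # conv feature maps
        self.channel_att = ChannelAttention(512)
        self.spatial_att = SpatialAttention(512)
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, dis):                     # dis: (B, T, 3, H, W) dynamic images
        b, t = dis.shape[:2]
        feats = self.cnn(dis.flatten(0, 1))     # (B*T, 512, h, w)
        feats = self.spatial_att(self.channel_att(feats))   # adjusted feature maps
        feats = feats.mean(dim=(2, 3)).view(b, t, -1)        # global average pooling
        out, _ = self.lstm(feats)               # temporal modelling over segments
        return self.classifier(out[:, -1])      # classify from the last time step


# Example: 2 videos, 8 dynamic-image segments each, 20 behavior classes.
model = DeepVideoBehaviorNet(num_classes=20)
logits = model(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 20])
```

In this sketch each video contributes T dynamic images and the last LSTM output is used for classification; the patented method may aggregate the temporal outputs differently.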

[0062] This embodiment proposes a dynamic image sequence (DIS) representation for the video, which divides the entire video into a group of short-term segments along the time axis and then encodes ea...
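The paragraph is truncated before it describes how each segment is encoded. As a hedged illustration only, the sketch below assumes approximate rank pooling (a fixed weighted sum of frames), a common way of building dynamic images; the patented per-segment encoding may differ.

```python
# Hedged sketch of a dynamic image sequence (DIS): split the depth video into
# short segments along the time axis and collapse each segment into one image.
# Approximate rank pooling is assumed here purely for illustration.
import numpy as np


def dynamic_image(frames: np.ndarray) -> np.ndarray:
    """Collapse a (T, H, W) segment into one (H, W) image via approximate
    rank pooling: frame t gets weight 2*t - T - 1 (t = 1..T)."""
    t = frames.shape[0]
    weights = 2.0 * np.arange(1, t + 1) - t - 1           # (T,)
    return np.tensordot(weights, frames, axes=(0, 0))     # (H, W)


def dynamic_image_sequence(video: np.ndarray, segment_len: int) -> np.ndarray:
    """Split a (T, H, W) depth video into non-overlapping segments of
    `segment_len` frames and encode each segment as a dynamic image."""
    n_segments = video.shape[0] // segment_len
    segments = video[:n_segments * segment_len].reshape(
        n_segments, segment_len, *video.shape[1:])
    return np.stack([dynamic_image(seg) for seg in segments])  # (N, H, W)


# Example: a 64-frame depth video split into 8-frame segments -> 8 dynamic images.
depth_video = np.random.rand(64, 224, 224).astype(np.float32)
dis = dynamic_image_sequence(depth_video, segment_len=8)
print(dis.shape)  # (8, 224, 224)
```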

Embodiment 2

[0160] In one or more embodiments, a deep video behavior recognition system that fuses convolutional neural networks with a channel and spatiotemporal interest point attention model is disclosed. The system includes a server comprising a memory, a processor, and a computer program that is stored in the memory and executable on the processor; when the processor executes the program, the deep video behavior recognition method described in Embodiment 1 is realized.

Embodiment 3

[0162] In one or more embodiments, a computer-readable storage medium is disclosed, on which a computer program is stored. When the program is executed by a processor, the deep video behavior recognition method fusing convolutional neural networks with the channel and spatiotemporal interest point attention model described in Embodiment 1 is carried out.



Abstract

The invention discloses a deep video behavior recognition method and system, comprising: taking the dynamic image sequence representation of the depth video as the input to CNNs, embedding channel and spatiotemporal interest point attention models after the CNN convolutional layers, and optimizing and adjusting the convolutional feature maps. Finally, global average pooling is applied to the adjusted convolutional feature maps of the input depth video to generate a feature representation of the behavior video, which is fed into an LSTM network to capture the temporal information of human behavior and perform classification. The method is evaluated on three challenging public human behavior datasets, and the experimental results show that it can extract discriminative spatiotemporal information and significantly improve the performance of video-based human behavior recognition. Compared with other existing methods, this method effectively improves the behavior recognition rate.

Description

Technical Field

[0001] The invention belongs to the technical field of video-based human behavior recognition, and in particular relates to a deep video behavior recognition method and system that integrates a convolutional neural network with a channel and spatiotemporal interest point attention model.

Background Technique

[0002] The statements in this section merely provide background information related to the present invention and do not necessarily constitute prior art.

[0003] Video-based human action recognition has attracted increasing attention in the field of computer vision in recent years due to its wide range of applications, such as intelligent video surveillance, video retrieval, and elderly monitoring. Although a great deal of research work has been carried out on the understanding and classification of human behavior in videos to improve the performance of action recognition, due to the interference caused by complex background environments, rich inter-behavior c...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00; G06K9/62
CPC: G06V40/20; G06V20/40; G06F18/2193
Inventors: 马昕, 武寒波, 宋锐, 荣学文, 田国会, 李贻斌
Owner: SHANDONG UNIV