
Behavior recognition and positioning method based on multi-task joint learning

A behavior recognition and localization method, applied in biometric recognition, character and pattern recognition, instruments, and similar fields, that addresses the problem of labeling data sets, expands training-data diversity, and saves labeling costs.

Active Publication Date: 2018-11-16
四川瞳知科技有限公司

AI Technical Summary

Problems solved by technology

[0008] The purpose of the present invention is to solve the above-mentioned problems in the prior art by proposing a behavior recognition and positioning method based on multi-task joint learning. The method combines a convolutional neural network in deep learning with multi-task joint learning, replacing a single-task convolutional neural network algorithm, to meet the needs of human behavior recognition and behavior positioning in video clips.




Embodiment Construction

[0070] Exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be understood that the implementations shown and described in the drawings are only exemplary, intended to explain the principle and spirit of the present invention, rather than limit the scope of the present invention.

[0071] Embodiments of the present invention provide a behavior recognition and positioning method based on multi-task joint learning, as shown in Figure 1, comprising the following steps S1-S5:

[0072] S1. Construct a multi-channel combined behavior recognition convolutional neural network.

[0073] As shown in Figure 2, in the embodiment of the present invention, the behavior recognition convolutional neural network includes an optical flow channel and an image channel, and the optical flow channel and the image channel respectively include independent first-layer networks, second-layer networks, third-layer n...
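The two-channel design described above can be illustrated with a minimal late-fusion sketch. This is an assumption for illustration only: the patent's exact layer structure and fusion strategy are not given in this excerpt, and the `fuse_two_stream` function, its `flow_weight` parameter, and the 0.5 default are all hypothetical. Each channel ends in its own classifier, and the per-channel class probabilities are averaged:

```python
import math

def softmax(logits):
    """Convert raw class scores to probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_two_stream(rgb_logits, flow_logits, flow_weight=0.5):
    """Late fusion of the image (RGB) channel and the optical-flow channel.

    Each channel is assumed to have its own independent network ending in a
    classifier; the fused behavior score is a weighted average of the
    per-channel probabilities. The 0.5 weight is a placeholder, not a value
    from the patent.
    """
    p_rgb = softmax(rgb_logits)
    p_flow = softmax(flow_logits)
    return [(1 - flow_weight) * r + flow_weight * f
            for r, f in zip(p_rgb, p_flow)]

# Example: three behavior classes, with the two channels disagreeing slightly.
fused = fuse_two_stream([2.0, 1.0, 0.1], [1.5, 2.5, 0.2])
print(fused)
```

Late score fusion is only one option; the independent per-layer networks in the excerpt would equally allow feature-level fusion deeper in the network.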



Abstract

The invention discloses a behavior recognition and positioning method based on multi-task joint learning. A convolutional neural network in deep learning is combined with multi-task joint learning to replace a single-task convolutional neural network algorithm, achieving human body behavior recognition and behavior positioning in video. The method improves the object detection deep network in Faster R-CNN and combines it with a behavior recognition deep network, so that the combined network is capable of multi-task joint learning, the two tasks promote each other, and the robustness and accuracy of the recognition algorithm are enhanced. At the same time, the method combines a video data set and a picture data set, enhancing the information diversity of the training set. In addition, labeling human body positions in a video data set would consume enormous effort, but the method can omit this labeling work through self-learning of the algorithm, greatly reducing the labeling workload.
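The multi-task joint learning the abstract describes can be sketched as a weighted sum of the positioning (detection) loss and the recognition (classification) loss, minimized over a shared backbone. The `joint_loss` function, its `recognition_weight` parameter, and the default weight of 1.0 below are illustrative assumptions, not the patent's actual formulation:

```python
def joint_loss(detection_loss, recognition_loss, recognition_weight=1.0):
    """Combine the behavior-positioning (detection) loss and the
    behavior-recognition (classification) loss into one training objective.

    In joint training, gradients from both loss terms flow back through the
    shared layers, which is how the two tasks can "promote each other".
    The 1.0 weight is a placeholder hyperparameter.
    """
    return detection_loss + recognition_weight * recognition_loss

# Example: one optimization step would minimize this combined value.
total = joint_loss(detection_loss=0.8, recognition_loss=0.5)
print(total)
```

Tuning the weight trades off the two tasks; a weight of 0 recovers the single-task detection network that the patent says it replaces.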

Description

technical field

[0001] The invention belongs to the technical fields of computer vision, machine learning and deep learning, and specifically relates to the design of a behavior recognition and positioning method based on multi-task joint learning.

Background technique

[0002] In the field of security, there is a great demand for human behavior detection and positioning, such as the detection of violent behavior. If behavior that endangers society and others can be detected in real time and measures taken, it will be of great significance to social stability. If the specific perpetrators of violent acts can be located in the video, then combined with the application of face recognition, it will be of great value for quickly solving cases. However, the current video surveillance system is mainly based on manpower and supplemented by computers: the content of the surveillance video is mainly identified manually, the workload is huge, and as the surveillan...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/04
CPC: G06V40/20, G06V40/10, G06N3/045, G06F18/241
Inventor: 郝宗波
Owner: 四川瞳知科技有限公司