
Multi-scene crowd density estimation method based on convolution network and multi-task learning

A multi-task learning and crowd density estimation technology, applied in the fields of computer vision and intelligent monitoring. It addresses the high cost of data labeling and model training and the assumption that training and test data follow the same distribution, thereby improving the efficiency of data utilization, reducing the number of models, and lowering labeling costs.

Active Publication Date: 2019-02-26
ARMY ENG UNIV OF PLA

AI Technical Summary

Problems solved by technology

This class of methods is more accurate, but its disadvantage is that it requires the training set and the test set to follow the same distribution.
In actual application scenarios, however, cameras at different locations have different backgrounds, and the regions where crowds concentrate and the crowd densities differ considerably between scenes. Whenever a camera is deployed in a new scene, it is therefore usually necessary to collect and annotate a large number of crowd images from that camera's scene and retrain the density-map regression network, or to transfer an existing model by fine-tuning. Both mechanisms incur additional data collection, annotation, and model-training costs for each deployment scene, so in the practical deployment of a large number of cameras, the cost of data labeling and model training is extremely high.



Examples


Embodiment 1

[0056] The present invention will be described in further detail below in conjunction with the accompanying drawings.

[0057] Referring to figures 1 to 3, the present invention is further described below.

[0058] The technical solution for realizing the object of the present invention is as follows: first, the generality of crowd density regression across arbitrary scenes is learned by a robust convolutional neural network, which produces a rough density estimate for crowd pictures in any scene; then, multi-task learning captures the distinct characteristics of the crowd distribution in each scene; finally, for the crowd pictures of each scene, these scene characteristics are used to correct and further refine the rough density map, improving the density estimation accuracy of each scene.
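As a rough illustration of the final correction stage, the sketch below models per-scene refinement as a simple affine calibration of the shared coarse density map. This is a hypothetical simplification: the patent learns scene characteristics jointly through a multi-task network, whereas here each scene merely fits a slope and intercept (`a_s`, `b_s`) by least squares. The class and method names are illustrative, not from the patent.

```python
import numpy as np

class SceneCalibrator:
    """Per-scene affine correction of the shared coarse density map:
    refined = a_s * coarse + b_s, with one (a_s, b_s) pair per scene."""

    def __init__(self, scene_ids):
        self.params = {s: (1.0, 0.0) for s in scene_ids}  # identity by default

    def fit_scene(self, scene, coarse_maps, true_maps):
        # Least-squares fit of a, b over the pixels of this scene's images.
        x = np.concatenate([m.ravel() for m in coarse_maps])
        y = np.concatenate([m.ravel() for m in true_maps])
        a, b = np.polyfit(x, y, 1)
        self.params[scene] = (a, b)

    def refine(self, scene, coarse_map):
        a, b = self.params[scene]
        return a * coarse_map + b

# A camera whose coarse maps systematically overestimate density:
rng = np.random.default_rng(1)
true = rng.random((8, 8))
coarse = 2.0 * true + 0.1
cal = SceneCalibrator(["scene_a"])
cal.fit_scene("scene_a", [coarse], [true])
print(np.allclose(cal.refine("scene_a", coarse), true))  # True
```

The appeal of this formulation is that the expensive coarse regressor stays shared across all cameras, while each new scene only needs enough labeled data to fit its own small correction.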

Embodiment 2

[0060] A multi-scene crowd density estimation method based on multi-task learning and a convolutional neural network, comprising the following steps:

[0061] (1) Rough density estimation step: an arbitrary-scene density map regression step, using a unified density map regression model to perform rough, overall crowd density map regression on video frames of any scene. The flow of the rough density estimation step is shown in figure 1.

[0062] In the rough density estimation step, the training data must first be prepared: the network supervision signal is generated from the annotated position information. The annotations are the coordinate positions (x, y) of all heads in the picture, and the supervision signal, a crowd density map, is generated from these head coordinates:

[0063] D(p) = Σ_i (1 / (2πσ²)) · exp(−((p_x − x_i)² + (p_y − y_i)²) / (2σ²))

[0064] where (x_i, y_i) is the coordinate position of the i-th head, and σ is the parameter of the Gaussian function.
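The supervision-signal construction above can be sketched as follows: each annotated head coordinate contributes a normalized 2-D Gaussian, so integrating the resulting density map approximately recovers the head count. Function and parameter names (`gaussian_density_map`, `sigma`) are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def gaussian_density_map(head_points, height, width, sigma=4.0):
    """Build the supervision-signal density map: place one normalized
    2-D Gaussian at each annotated head coordinate (x_i, y_i)."""
    density = np.zeros((height, width), dtype=np.float64)
    ys, xs = np.mgrid[0:height, 0:width]           # pixel grid p = (p_x, p_y)
    for x_i, y_i in head_points:
        g = np.exp(-((xs - x_i) ** 2 + (ys - y_i) ** 2) / (2.0 * sigma ** 2))
        density += g / (2.0 * np.pi * sigma ** 2)  # each head integrates to ~1
    return density

# Two annotated heads: the map should integrate to roughly 2.
dmap = gaussian_density_map([(10, 12), (30, 20)], height=40, width=48)
print(dmap.shape, round(float(dmap.sum()), 1))  # (40, 48) 2.0
```

Because each Gaussian is normalized, summing the density map gives (up to border truncation) the number of people, which is what makes the density map a useful regression target for counting.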

[0065] The overall process...

Embodiment 3

[0078] A multi-scene crowd density estimation system based on multi-task learning and a convolutional neural network, whose modules perform the following steps:

[0079] (1) Rough density estimation step: an arbitrary-scene density map regression step, using a unified density map regression model to perform rough, overall crowd density map regression on video frames of any scene. The flow of the rough density estimation step is shown in figure 1.

[0080] In the rough density estimation step, the training data must first be prepared: the network supervision signal is generated from the annotated position information. The annotations are the coordinate positions (x, y) of all heads in the picture, and the supervision signal, a crowd density map, is generated from these head coordinates:

[0081] D(p) = Σ_i (1 / (2πσ²)) · exp(−((p_x − x_i)² + (p_y − y_i)²) / (2σ²))

[0082] where (x_i, y_i) is the coordinate position of the i-th head, and σ is the parameter of the Gaussian function.

[0083] The overall process...



Abstract

The invention discloses a multi-scene crowd density estimation system and method based on a convolutional network and multi-task learning, comprising a crowd density map generation module, a cross-camera multi-scene learning module, and a per-scene density map calibration module. The first part of the framework is a robust density map generation module based on a convolutional neural network composed of three deep-fusion subnetworks. Each deep-fusion subnetwork has three convolution kernels of different sizes and numbers, so it can effectively grasp the commonality of the density estimation problem and estimate the density maps of multi-scene surveillance video frames with different backgrounds, illumination, and crowd densities in practical applications. The second part of the framework is data-distribution learning based on multi-task learning, which learns the distinct characteristics of the crowd distribution in each scene. In the third part of the framework, the general crowd density estimates from the first part are calibrated and fine-tuned using the per-scene crowd distribution characteristics learned by multi-task learning. This system can estimate the crowd density of multiple scenes and multiple cameras efficiently and accurately under real surveillance conditions.
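A shape-level sketch of the multi-column idea in the first part of the framework: three parallel convolutional columns with different kernel sizes, and hence different receptive fields, whose outputs are fused. This uses untrained random weights and a single layer per column purely to illustrate the structure; the kernel sizes (9, 7, 5) and all function names are assumptions, not taken from the patent.

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'same' 2-D convolution with zero padding (sketch-grade)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multi_column_features(img, kernel_sizes=(9, 7, 5), seed=0):
    """Run one conv + ReLU per column; columns with different kernel sizes
    see different receptive fields, and the outputs are fused by stacking."""
    rng = np.random.default_rng(seed)
    columns = []
    for k in kernel_sizes:
        kernel = 0.01 * rng.standard_normal((k, k))        # untrained weights
        columns.append(np.maximum(conv2d(img, kernel), 0.0))  # ReLU
    return np.stack(columns, axis=0)

feats = multi_column_features(np.ones((32, 32)))
print(feats.shape)  # (3, 32, 32)
```

The motivation for mixing kernel sizes is that head sizes vary with perspective and scene: small kernels respond to dense, distant heads while large kernels capture nearby ones, and fusing the columns lets a single regressor handle both.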

Description

technical field

[0001] The invention relates to computer vision and intelligent monitoring technology, in particular to a multi-scene crowd density estimation system based on a convolutional network and multi-task learning.

Background technique

[0002] In recent years, high-density crowd gatherings in cities have become increasingly frequent, and stampede incidents occur from time to time, seriously threatening urban public safety. Crowd control and early-warning technology for public places has therefore become a research focus in the fields of intelligent monitoring and urban security. Crowd density estimation refers to estimating crowd density through computer vision so as to provide early warning and support evacuation of high-density crowds; it has become an important technology in crowd control.

[0003] There are currently solutions based on unsupervised learning methods for people counting t...

Claims


Application Information

Patent Timeline
Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/08; G06V20/41; G06V20/53; G06N3/045
Inventor 唐斯琪潘志松李云波焦珊珊黎维刘桢王彩玲
Owner ARMY ENG UNIV OF PLA