
Multi-scene crowd density estimation method based on convolutional network and multi-task learning

A crowd density estimation technology based on multi-task learning, applied in the field of computer vision and intelligent monitoring. It addresses the high cost of data labeling and model training and the assumption that training and test data share the same distribution, thereby improving data utilization efficiency, reducing the number of models, and lowering labeling costs.

Active Publication Date: 2021-11-23
ARMY ENG UNIV OF PLA

AI Technical Summary

Problems solved by technology

This approach is more accurate, but it requires the training set and test set to follow the same distribution.

In actual application scenarios, however, cameras at different locations have different backgrounds, and both the areas where crowds concentrate and the crowd densities differ considerably between scenes. Whenever a camera is deployed in a new scene, it is therefore usually necessary either to collect and annotate a large number of crowd images from that camera's scene and retrain the density map regression network, or to transfer an existing model by fine-tuning. Both mechanisms incur additional data collection, annotation, and model training costs for every deployment scene, so in practical deployments with a large number of cameras the total cost of data labeling and model training is extremely high.



Examples


Embodiment 1

[0055] The present invention will be described in further detail below in conjunction with the accompanying drawings.

[0056] Referring to Figures 1 to 3, the present invention is further described below in conjunction with the accompanying drawings:

[0057] The technical solution to achieve the object of the present invention is as follows: first, a robust convolutional neural network learns the generality of crowd density regression across arbitrary scenes and produces a rough density estimate for a crowd image from any scene; then, multi-task learning captures the distinct crowd distribution characteristics of each scene; finally, for the crowd images of each scene, these scene characteristics are used to correct and further refine the rough density map, improving the density estimation accuracy for each scene.
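The coarse-then-refine idea above can be sketched minimally. The per-scene correction below (a learned scale and bias per scene) is a hypothetical stand-in for the multi-task calibration branch; the patent does not specify this exact form:

```python
import numpy as np

def refine_for_scene(coarse_density, scene_scale, scene_bias):
    """Hypothetical per-scene calibration of the scene-agnostic coarse
    estimate; the actual patented branch learns richer scene characteristics."""
    refined = scene_scale * coarse_density + scene_bias
    return np.clip(refined, 0.0, None)  # a density map cannot be negative

# Coarse map from the shared regressor (stage 1), here a constant stub
coarse = np.full((4, 4), 0.1)
# Stage 3: apply the (assumed) learned correction for this scene
refined = refine_for_scene(coarse, scene_scale=1.5, scene_bias=0.0)
print(round(float(refined.sum()), 2))  # 2.4
```

The clipping step reflects a general constraint of density maps rather than anything stated in the patent text.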

Embodiment 2

[0059] A multi-scene crowd density estimation method based on multi-task learning and convolutional neural network, including the following steps:

[0060] (1) Rough density estimation step: an arbitrary-scene density map regression step, using a unified density map regression model to perform rough, overall crowd density map regression on video frames of any scene. The flow of the rough density estimation step is shown in Figure 1.

[0061] In the rough density estimation step, the training data must first be prepared: the network supervision signal is generated from the annotated position information. The annotations are the coordinate positions (x, y) of all heads in the image, and the supervision signal, a crowd density map, is generated from these head coordinates:

[0062] D(x, y) = Σ_{i=1}^{N} 1/(2πσ²) · exp(−((x − x_i)² + (y − y_i)²) / (2σ²))

[0063] where (x_i, y_i) is the coordinate position of each annotated head, and σ is the parameter of the Gaussian function.
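As a sketch of this supervision-signal generation (standard practice in density-map crowd counting; the kernel normalization used here is an assumption, since the patent's exact formula is not reproduced on this page), the density map can be built by placing a normalized 2-D Gaussian at each annotated head position:

```python
import numpy as np

def gaussian_density_map(head_coords, height, width, sigma=4.0):
    """Generate a crowd density map from annotated head positions.

    Each head at (x_i, y_i) contributes a 2-D Gaussian normalized to unit
    mass, so the map sums to approximately the head count.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    density = np.zeros((height, width), dtype=np.float64)
    for x_i, y_i in head_coords:
        g = np.exp(-((xs - x_i) ** 2 + (ys - y_i) ** 2) / (2 * sigma ** 2))
        g /= 2 * np.pi * sigma ** 2  # normalize: each head contributes ~1
        density += g
    return density

# Example: three annotated heads in a 64x64 frame
dm = gaussian_density_map([(20, 20), (40, 30), (32, 50)], 64, 64)
print(round(float(dm.sum()), 2))  # ≈ 3.0, one unit of mass per head
```

Summing the resulting map gives an estimate of the crowd count, which is why this representation doubles as both supervision signal and counting output.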

[0064] The overall process...

Embodiment 3

[0077] A multi-scene crowd density estimation system based on multi-task learning and convolutional neural network, including the following steps:

[0078] (1) Rough density estimation step: an arbitrary-scene density map regression step, using a unified density map regression model to perform rough, overall crowd density map regression on video frames of any scene. The flow of the rough density estimation step is shown in Figure 1.

[0079] In the rough density estimation step, the training data must first be prepared: the network supervision signal is generated from the annotated position information. The annotations are the coordinate positions (x, y) of all heads in the image, and the supervision signal, a crowd density map, is generated from these head coordinates:

[0080] D(x, y) = Σ_{i=1}^{N} 1/(2πσ²) · exp(−((x − x_i)² + (y − y_i)²) / (2σ²))

[0081] where (x_i, y_i) is the coordinate position of each annotated head, and σ is the parameter of the Gaussian function.

[0082] The overall process...



Abstract

The invention discloses a multi-scene crowd density estimation system and method based on a convolutional network and multi-task learning. It comprises a crowd density map generation module, a cross-camera multi-scene learning module, and a per-scene density map calibration module. The first part of the framework is a robust density map generation module based on a convolutional neural network composed of three deep fusion sub-networks, each containing three networks with convolution kernels of different sizes and numbers. This allows the framework to capture the commonality of the density estimation problem and to produce relatively robust density map estimates for cross-camera, multi-scene surveillance video frames whose data distributions differ greatly in background, illumination, crowd density, and so on. The second part is per-scene data distribution learning based on multi-task learning, which captures the distinct crowd distribution characteristics of each scene. The third part uses these learned per-scene characteristics to calibrate and fine-tune the general crowd density estimates produced by the first part. In real surveillance scenarios, the system can efficiently and accurately estimate crowd density across multiple camera scenes.

Description

technical field

[0001] The invention relates to computer vision and intelligent monitoring technology, in particular to a multi-scene crowd density estimation system based on a convolutional network and multi-task learning.

Background technique

[0002] In recent years, high-density crowd gatherings in cities have become increasingly frequent, and stampede incidents occur from time to time, seriously threatening urban public safety. Crowd control and early-warning technology for public places has therefore increasingly become a research focus in the fields of intelligent monitoring and urban security. Crowd density estimation refers to estimating crowd density through computer vision technology, enabling early warning and evacuation of high-density crowds, and has become an important technology in crowd control.

[0003] There are currently solutions based on unsupervised learning methods for people counting t...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00, G06N3/04, G06N3/08
CPC: G06N3/08, G06V20/41, G06V20/53, G06N3/045
Inventor 潘志松唐斯琪李云波焦珊珊黎维刘祯王彩玲
Owner ARMY ENG UNIV OF PLA