Pedestrian flow monitoring method based on pedestrian detection and tracking

A pedestrian detection and crowd flow monitoring technology, applied in character and pattern recognition, image data processing, instruments, etc. It solves the problems of low detection accuracy and slow detection speed, and achieves the effects of high detection accuracy, fast detection speed, and fast pedestrian tracking.

Active Publication Date: 2013-12-04
ZHEJIANG UNIV +1
Cites: 2 · Cited by: 44

AI-Extracted Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to provide a pedestrian flow monitoring method based on pedestrian detection and tracking, addressing the defects of low detection accuracy and slow detection speed in current pedestrian detection methods, which is applied to the statisti...

Abstract

The invention discloses a pedestrian flow monitoring method based on pedestrian detection and tracking. The method includes: acquiring and decoding a camera video stream to obtain single-frame RGB images; performing pedestrian detection on each image frame to obtain a group of pedestrian positions; calculating similarity and matching the pedestrians of every two adjacent image frames so as to track the pedestrians and obtain the motion trajectory of each pedestrian; and setting detection lines in the monitoring video and judging pedestrian flow in different directions from the obtained pedestrian trajectories. The method is based on the latest developments of pedestrian detection in computer vision and is high in detection accuracy, fast in detection, and promising in development prospect. By combining the method with a fast similarity-based tracking method and using a sparse-to-dense multi-scale detection method, detection speed is further increased and fast pedestrian tracking is realized; detection and tracking speed can reach 10 FPS on a current common computer, reaching a practical level.

Application Domain

Image analysis; Character and pattern recognition

Technology Topic

Computer based; Traffic volume (+8)


Examples

  • Experimental program(1)

Example Embodiment

[0035] The present invention will be described in detail below with reference to the accompanying drawings, and the objects and effects of the present invention will become more apparent.
[0036] The pedestrian flow monitoring method based on pedestrian detection and tracking of the present invention is implemented on the people flow monitoring system shown in Figure 2. The flow monitoring system includes a video input device and a control center, which are connected through a LAN network port.
[0037] Video input device: The system requires one or more video input devices. A video input device can be a surveillance camera or an ordinary camera. The camera resolution must be higher than 320*240, the frame rate higher than 15 FPS, and the pixel depth not lower than RGB888. The camera is mounted three to five meters above the ground, with a shooting angle of thirty to sixty degrees diagonally downward. The placement and shooting angle of the camera must ensure that most people appear in the shooting area with little mutual occlusion between them. People in the shooting area are also required to be standing still or walking.
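As a hedged illustration only (the patent states hardware requirements, not code), the input requirements above could be checked with a small validation sketch. OpenCV is an assumption here, and the function name is hypothetical.

```python
# Hypothetical sketch: checking a video source against the input requirements
# stated above (resolution above 320x240, frame rate above 15 FPS, RGB888 depth).
import cv2  # assumes OpenCV as the capture library; the patent does not specify one

def source_meets_requirements(stream_url: str) -> bool:
    cap = cv2.VideoCapture(stream_url)
    if not cap.isOpened():
        return False
    width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()
    # RGB888 depth is assumed satisfied once frames decode to 3-channel 8-bit images.
    return width >= 320 and height >= 240 and fps >= 15
```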
[0038] Control center: The control center of this system can be realized by an ordinary or dedicated PC or server. The control center includes a video acquisition module, a pedestrian detection and tracking module, and a people flow statistics module, and can analyze people flow and display the people flow monitoring results.
[0039] As shown in Figure 1, the method includes the following steps:
[0040] Step 1: Obtain and decode the camera video stream to obtain a single frame image in RGB format.
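A minimal sketch of Step 1, assuming OpenCV is used for capture and decoding; the patent does not name a specific library, and the function below is illustrative.

```python
# Minimal sketch of Step 1: decode the camera video stream into single RGB frames.
import cv2

def read_rgb_frames(stream_url: str):
    cap = cv2.VideoCapture(stream_url)
    while True:
        ok, frame_bgr = cap.read()   # decode one frame from the video stream
        if not ok:
            break
        # OpenCV decodes to BGR; later steps expect RGB, so convert here.
        yield cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    cap.release()
```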
[0041] Step 2: Perform pedestrian detection on each frame of the image to obtain a set of pedestrian positions (bounding boxes). As shown in Figure 3, this step is achieved through the following sub-steps:
[0042] 2.1 Read the current frame image;
[0043] 2.2 Calculate the integral channel features of the image;
[0044] 2.3 Perform multi-scale detection from sparse to dense;
[0045] First, the scaling values of all scales to be detected are generated from the minimum scale, the maximum scale, and the number of scales by interpolating scaling values between scales; detection is then run on every N-th scale, and afterwards the N/2 scales near each detected pedestrian are also detected. This multi-scale detection method reduces detection time by 20% to 50% while keeping the detection accuracy unchanged. In actual scenes, N is chosen as 3 or 5. A hedged sketch of this strategy is given below.
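The sketch below illustrates the sparse-to-dense scale selection just described. `detect_at_scale` is a hypothetical placeholder for running the integral-channel-feature detector at one scale, and the interpretation of "the N/2 scales near the detected pedestrians" as the neighboring scale indices is an assumption.

```python
# Sketch of the sparse-to-dense multi-scale detection strategy.
import numpy as np

def sparse_to_dense_detect(image, detect_at_scale, s_min, s_max, n_scales, N=3):
    # Geometrically spaced scaling values between the minimum and maximum scale.
    scales = np.geomspace(s_min, s_max, n_scales)
    detections = []
    hit_indices = set()
    # Sparse pass: run the detector on every N-th scale only.
    for i in range(0, n_scales, N):
        dets = detect_at_scale(image, scales[i])
        if dets:
            detections.extend(dets)
            hit_indices.add(i)
    # Dense pass: also detect the N//2 scales on each side of a scale that fired.
    refine = set()
    for i in hit_indices:
        for j in range(i - N // 2, i + N // 2 + 1):
            if 0 <= j < n_scales and j % N != 0:
                refine.add(j)
    for j in sorted(refine):
        detections.extend(detect_at_scale(image, scales[j]))
    return detections
```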
[0046] 2.4 Combine the detection results at all scales using the NMS (non-maximum suppression) method;
[0047] 2.5 Calculate the difference between each detection result area and the background area;
[0048] The problem of the background being falsely detected as a pedestrian is solved by maintaining a background image. When no pedestrian is detected in 5 consecutive frames and the average absolute difference per pixel channel between these frames is less than 5, the current frame is set as the background. After that, if the average absolute difference per pixel channel between the bounding box of a detected pedestrian and the corresponding area of the background image is less than 5, the position is considered to be background falsely detected as a person, and this position is excluded.
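A minimal sketch of the background false-detection test described above, using the per-pixel, per-channel mean absolute difference and the threshold of 5 stated in the text; the variable and function names are illustrative only.

```python
# Sketch of the background false-detection filter.
import numpy as np

DIFF_THRESHOLD = 5  # average absolute difference per pixel channel, as stated above

def mean_abs_diff(region_a: np.ndarray, region_b: np.ndarray) -> float:
    # Cast to a signed type before subtracting to avoid unsigned wrap-around.
    return float(np.mean(np.abs(region_a.astype(np.int16) - region_b.astype(np.int16))))

def is_background_false_detection(frame: np.ndarray, background: np.ndarray, box) -> bool:
    x1, y1, x2, y2 = box
    # A detection whose region barely differs from the background image is discarded.
    return mean_abs_diff(frame[y1:y2, x1:x2], background[y1:y2, x1:x2]) < DIFF_THRESHOLD
```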
[0049] 2.6 Delete detection results whose difference from the background is small;
[0050] 2.7 Obtain the detection results, that is, a set of pedestrian positions (bounding boxes) and the corresponding confidence value for each position.
[0051] Step 3: By calculating the similarity, the pedestrians detected in adjacent frames are matched, so as to realize pedestrian tracking and obtain the movement trajectory of each person.
[0052] In this step, pedestrian tracking is realized by matching the pedestrian positions detected in the current frame with the pedestrian positions detected in the previous frame. The process is shown in Figure 4. The specific steps of pedestrian tracking are:
[0053] 3.1 For each position in the pedestrian list, calculate the similarity between it and all pedestrian positions in the current frame.
[0054] The pedestrian list is an array that records the information of all pedestrians detected in the current frame; each item in the array records information such as the position, number, bounding box, and detection confidence of a single pedestrian. The list is initially empty, and the tracking algorithm processes each frame in the video to obtain pedestrian information and continuously update this list.
[0055] The similarity calculation formula between two pedestrian locations is:
[0056] (similarity formula not reproduced in the source text)
[0057] The subscripts a and b denote two different pedestrian positions (bounding boxes); each pedestrian bounding box is a rectangular area in the image, represented by the upper-left and lower-right coordinates of the area. F denotes the integral value of the channel features within the pedestrian bounding box area, where the channels are chosen as the LUV color channels; for example, Fa is a three-dimensional vector whose first dimension is the integral value (accumulated sum) of the L color component over the whole area a, and whose second and third dimensions are the integral values of the U and V components over area a. C is the pixel coordinate of the center point of the bounding box; N, D, and M are parameters. When the similarity is greater than T, the two positions are considered to be the same person. Testing shows the best effect with N=1, D=50, M=5, and T=0.8.
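Because the similarity formula itself is not reproduced in the source text, the sketch below implements only one plausible reading of the description above (LUV integral features F, box centers C, parameters N, D, M, and threshold T); the exact combination of terms is an assumption, not the patent's formula.

```python
# Hedged sketch of a similarity consistent with the description: the score
# decreases with center-point distance (scaled by D) and with the relative
# difference of the LUV integral features (scaled by M).
import numpy as np

N, D, M, T = 1.0, 50.0, 5.0, 0.8  # parameter values reported in the text

def similarity(Fa, Fb, Ca, Cb) -> float:
    center_dist = np.linalg.norm(np.asarray(Ca, float) - np.asarray(Cb, float))
    Fa, Fb = np.asarray(Fa, float), np.asarray(Fb, float)
    feat_diff = np.sum(np.abs(Fa - Fb) / (np.abs(Fa) + 1e-6))
    return N / (1.0 + center_dist / D + feat_diff / M)

def same_person(Fa, Fb, Ca, Cb) -> bool:
    # Two positions are considered the same person when similarity exceeds T.
    return similarity(Fa, Fb, Ca, Cb) > T
```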
[0058] 3.2 For each person whose number is not -1 in the list, select the position with the highest similarity in the current frame as the matching position, and assign its number to this position.
[0059] The number is the unique identifier of a person; if a pedestrian position has the same number in different frames, the two positions are considered the same person. Numbers start from 0 and increase by 1 each time; when a new pedestrian is detected, the new number is assigned to this person. To handle false detections of people in non-pedestrian areas in a single frame, a newly detected position is first given the number -1, and a new pedestrian number is assigned only when a match is found in the next frame. To handle missed detections in a single frame, a previously detected pedestrian is considered to have left the monitoring area only when no matching position can be found in 5 consecutive frames, at which point the person is removed from the list.
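The following is a hedged bookkeeping sketch of the numbering rules just described (the -1 provisional state, assignment of a fresh number after one confirmed match, and removal after 5 consecutive missed frames). The `Track` class and its field names are illustrative, not from the patent.

```python
# Sketch of the pedestrian-list bookkeeping described above.
from dataclasses import dataclass, field

MAX_MISSED = 5  # frames without a match before a pedestrian is dropped

@dataclass
class Track:
    number: int          # -1 until a match is confirmed in the next frame
    box: tuple           # bounding box (x1, y1, x2, y2)
    missed: int = 0      # consecutive frames without a matching detection
    trajectory: list = field(default_factory=list)  # list of center points

class PedestrianList:
    def __init__(self):
        self.tracks = []
        self.next_number = 0

    def confirm(self, track: Track):
        # A provisional track matched again: assign a real pedestrian number.
        if track.number == -1:
            track.number = self.next_number
            self.next_number += 1

    def prune(self):
        # Drop pedestrians that have not matched for MAX_MISSED frames.
        self.tracks = [t for t in self.tracks if t.missed < MAX_MISSED]
```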
[0060] Through this step, the pedestrian detection results of each frame are used to update the pedestrian list for the current frame, so that people can be tracked through the video, the movement trajectory of each person can be obtained, and the number of people in each frame can be counted from the number of distinct numbers appearing in that frame.
[0061] Step 4: Set detection lines in the surveillance video, and judge the flow of people in different directions based on the pedestrian movement trajectory obtained in Step 3.
[0062] The detection line is usually set at an entrance or exit to determine the flow of people in and out. When a pedestrian enters the neighborhood of the detection line from one side of the line and then leaves the neighborhood from the other side, the pedestrian is judged to have crossed the detection line in that direction, so the number of people crossing the line from each direction can be obtained. Detection areas can also be set to meet different people-flow statistics needs.
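A hedged sketch of the crossing test described above, assuming a vertical detection line at x = line_x with a neighborhood of ±margin pixels; the orientation of the line and the width of the neighborhood are illustrative assumptions, not values from the patent.

```python
# Sketch of judging the crossing direction of one pedestrian trajectory.
def crossing_direction(trajectory, line_x, margin=20):
    """trajectory: list of (x, y) center points; returns 'left_to_right',
    'right_to_left', or None if the pedestrian did not cross the line."""
    entered_from = None
    for x, _ in trajectory:
        inside = abs(x - line_x) <= margin  # inside the line's neighborhood?
        if inside and entered_from is None:
            entered_from = 'left' if x < line_x else 'right'
        elif not inside and entered_from is not None:
            exited_to = 'left' if x < line_x else 'right'
            if exited_to != entered_from:
                return f"{entered_from}_to_{exited_to}"
            entered_from = None  # left the neighborhood on the same side
    return None
```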


