[0026] The technical solutions of the present invention will be clearly and completely described below in conjunction with the accompanying drawings of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
[0027] Figure 1 is a structural block diagram of the urban road traffic obstacle detection system of the present invention. As shown in Figure 1, the urban road traffic obstacle detection of the present invention is based on a fixed CCD image sensor for urban road traffic and includes the following steps:
[0028] Step S1: Obtain a video image sequence of urban road traffic through a fixed CCD image sensor;
[0029] Step S2: Use an improved Gaussian mixture background modeling method to establish a background, and extract a foreground image of the moving target;
[0030] Step S3: Extract the road area by using the background image obtained in step S2;
[0031] Step S4: Establish a meanshift tracking window for each target block and assign a unique ID number to perform multi-vehicle tracking based on the tracking linked list;
[0032] Step S5: Detect obstacles using a geometric-constraint obstacle detection method based on dynamic and static features;
[0033] Step S6: Analyze the impact of obstacles on traffic safety through changes in vehicle trajectory;
[0034] Step S7: Classify the detected obstacles and issue an alarm.
[0035] Step S11: Obtain the road traffic video image through the fixed CCD image sensor, and perform median filtering on the video image to eliminate interference noise;
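As an illustration only (the patent specifies no implementation), the acquisition and filtering of step S11 might look like the following OpenCV sketch; the video file name and the 3x3 kernel size are assumptions.

```python
import cv2

# Minimal sketch of step S11: read frames from the fixed camera stream and
# suppress impulse noise with a 3x3 median filter.
cap = cv2.VideoCapture("urban_road.avi")   # fixed CCD camera stream or file (assumed name)
ok, frame = cap.read()
while ok:
    denoised = cv2.medianBlur(frame, 3)    # median filtering removes salt-and-pepper noise
    # ... later steps (background modeling, tracking) consume 'denoised'
    ok, frame = cap.read()
cap.release()
```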
[0036] Figure 2 is a schematic diagram of foreground target extraction. As shown in Figure 2, in step S2, Gaussian mixture background extraction is performed and the foreground target is extracted. The specific steps are as follows:
[0037] Step S21: Establish a Gaussian mixture distribution model and update the background in real time;
[0038] In step S22, the current video image and the background image are differenced, and the foreground target is separated using the automatically obtained Otsu threshold;
[0039] The specific method is as follows:
[0040] The basic principle is this: let the image gray levels be $1 \sim M$, let $n_i$ be the number of pixels at the $i$-th level, and let $N$ be the total number of pixels. The probability of the $i$-th gray level is then $P_i = n_i / N$. Suppose the gray-level threshold is $k$; the image pixels are divided by gray level into two classes, the foreground target $C_0 = \{1, 2, \ldots, k\}$ and the background $C_1 = \{k+1, \ldots, M\}$.
[0041] The proportion of the foreground target part is
[0042] $$\omega_0 = \sum_{i=1}^{k} P_i = \omega(k)$$
[0043] The proportion of the background part is
[0044] $$\omega_1 = 1 - \omega(k)$$
[0045] The average gray level of the foreground target is
[0046] $$\mu(k) = \sum_{i=1}^{k} i \cdot P_i$$
[0047] The average gray level of the background is
[0048] $$\mu' = \mu_T - \mu(k)$$
[0049] The foreground target mean is
[0050] $$\mu_0 = \mu(k) / \omega(k)$$
[0051] The background mean is
[0052] $$\mu_1 = [\mu_T - \mu(k)] / [1 - \omega(k)]$$
[0053] The total mean is
[0054] $$\mu_T = \omega_0 \mu_0 + \omega_1 \mu_1$$
[0055] The between-class variance is
[0056] $$\sigma_B^2 = \omega_0 (\mu_0 - \mu_T)^2 + \omega_1 (\mu_1 - \mu_T)^2 = \omega_0 \omega_1 (\mu_1 - \mu_0)^2$$
[0057] For the target class and the background class in the image, the variance $D(k)$ of each pixel with respect to the corresponding class center $\mu_i$ is defined as the degree of dispersion of that class. The smaller the dispersion of each class, that is, the stronger its cohesion, the better the classification effect. The formulas are as follows:
[0058] $$D_0(k) = \sum_{i=1}^{k} (i - \mu_0)^2 \cdot \frac{P_i}{\omega_0}$$
[0059] $$D_1(k) = \sum_{i=k+1}^{M} (i - \mu_1)^2 \cdot \frac{P_i}{\omega_1}$$
[0060] In order to obtain an accurate classification result, both the between-class variance of $C_0$ and $C_1$ and the cohesion of each class are considered herein: $\sigma_B^2(k)$ should be as large as possible while $D_0(k)$ and $D_1(k)$ should be as small as possible. Therefore, the optimal threshold is defined as:
[0061] $$\text{Threshold} = \arg\max_{1 \le k \le M} \frac{\sigma_B^2(k)}{D_0(k) \, D_1(k)}$$
[0062] In addition, when there is no target in the traffic scene, the Otsu threshold will be very low. The segmented binary image then contains a large white area, misjudging the background as the target. Therefore, an improvement is made: when the threshold Threshold is lower than 20, set Threshold = 20.
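For concreteness, a minimal NumPy sketch of this modified Otsu criterion follows; the 0-255 gray-level range, the guard constants, and the function name are assumptions not stated in the patent.

```python
import numpy as np

def improved_otsu_threshold(gray, t_min=20):
    """Maximize sigma_B^2(k) / (D0(k) * D1(k)) over k, with a floor of 20."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                       # P_i, gray-level probabilities
    levels = np.arange(256, dtype=np.float64)
    mu_T = (levels * p).sum()                   # total mean
    best_k, best_score = 0, -np.inf
    for k in range(1, 255):
        w0 = p[:k + 1].sum()                    # omega(k), foreground proportion
        w1 = 1.0 - w0
        if w0 < 1e-6 or w1 < 1e-6:
            continue
        mu_k = (levels[:k + 1] * p[:k + 1]).sum()
        mu0 = mu_k / w0                         # foreground mean
        mu1 = (mu_T - mu_k) / w1                # background mean
        sigma_B2 = w0 * w1 * (mu1 - mu0) ** 2   # between-class variance
        d0 = ((levels[:k + 1] - mu0) ** 2 * p[:k + 1]).sum() / w0
        d1 = ((levels[k + 1:] - mu1) ** 2 * p[k + 1:]).sum() / w1
        if d0 < 1e-6 or d1 < 1e-6:
            continue
        score = sigma_B2 / (d0 * d1)            # criterion defined above
        if score > best_score:
            best_score, best_k = score, k
    return max(best_k, t_min)                   # improvement: floor of 20
```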
[0063] Step S23: Eliminate interference on the obtained targets, removing small interfering image blocks by erosion and dilation, and then perform connected-domain labeling to fill the target blocks and obtain relatively complete foreground targets;
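A sketch of steps S21-S23 is given below, using OpenCV's built-in MOG2 subtractor as a stand-in for the improved Gaussian mixture model (the patent's own update scheme is not reproduced); the kernel size and iteration count are assumptions.

```python
import cv2

# Steps S21-S23 (sketch): mixture-of-Gaussians background modeling,
# erosion/dilation to remove small interference blocks, and
# connected-component labeling of the remaining target blocks.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

def extract_targets(frame):
    fg = subtractor.apply(frame)                # foreground mask, background updated in real time
    fg = cv2.erode(fg, kernel)                  # erosion removes small interference blocks
    fg = cv2.dilate(fg, kernel, iterations=2)   # dilation restores target shape
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg)
    return stats[1:], centroids[1:]             # skip label 0, the background
```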
[0064] Figure 3 is a schematic diagram of road area extraction. As shown in Figure 3, in step S3, the road area is extracted from the background image obtained based on the Gaussian mixture distribution model. The specific steps are as follows:
[0065] Step S31: Select a point in the center of the background image and take its gray value;
[0066] Step S32: Compare the gray value of this candidate point with the gray values of the points 10 pixels above, below, left, and right of it. If the differences are within 3, use this point as the seed point; otherwise, select a candidate point again;
[0067] Step S33: After obtaining the seed point in step S32, take its gray value, and fill the road area through the eight-neighborhood seed gray-level filling algorithm;
[0068] Specific method:
[0069] First, a point in the center of the road in the background image obtained by Gaussian background modeling is selected as the seed point and its gray value is taken as the reference value. The pixels are then traversed in the eight surrounding directions with a set threshold: points whose gray value lies within a certain range above or below the reference value have their gray value set to 255, while points that do not satisfy the condition remain unchanged. Generally, the gray value changes markedly at the road edge, differing clearly from the road area, so the traversal stops at the road edge and the original gray values are kept there. After filling, the result is an image in which the road area is completely white and the other areas retain their original gray values.
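As an illustrative stand-in for the eight-neighborhood filling just described, OpenCV's floodFill with a fixed range around the seed value behaves equivalently; the file name, the centered seed, and the tolerance of 10 gray levels are assumptions.

```python
import cv2
import numpy as np

# Step S33 (sketch): flood-fill the road area from a central seed point,
# accepting pixels within +/-10 gray levels of the seed's reference value.
background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
seed = (background.shape[1] // 2, background.shape[0] // 2)   # image center, (x, y)
mask = np.zeros((background.shape[0] + 2, background.shape[1] + 2), np.uint8)
flags = 8 | cv2.FLOODFILL_FIXED_RANGE        # 8-connectivity, compare to the seed value
cv2.floodFill(background, mask, seed, newVal=255, loDiff=10, upDiff=10, flags=flags)
# The fill stops at road edges, where the gray value changes sharply; the
# road area is now 255 and other areas keep their original gray values.
```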
[0070] Step S34: Set the entire road area to gray value 255 and the remaining areas to 0 to obtain the preliminary road area;
[0071] Specific method:
[0072] This image is then traversed: pixels with gray values between a certain value (here 200) and 255 are set to 255, and the gray values of all other points are set to 0. Contour detection is then performed on the resulting image and the contours are filled with white, forming an image in which the road area is completely white and the other areas are completely black.
[0073] Step S35 performs contour filling on the road area obtained in step S34 to obtain the final road area as a template for judging whether the obstacle is in the road area.
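A sketch of steps S34-S35 under the same assumptions (the cutoff of 200 is taken from the text above; the file and variable names are illustrative):

```python
import cv2
import numpy as np

# Steps S34-S35 (sketch): binarize the flood-filled image so only the white
# road region survives, then fill its contour to form the road-area template.
filled = cv2.imread("filled_background.png", cv2.IMREAD_GRAYSCALE)   # output of step S33
_, binary = cv2.threshold(filled, 200, 255, cv2.THRESH_BINARY)       # ~200..255 -> 255, rest -> 0
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
template = np.zeros_like(binary)
cv2.drawContours(template, contours, -1, 255, thickness=cv2.FILLED)  # fill the road contour white
# 'template' is white (255) inside the road area and black (0) elsewhere.
```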
[0074] Figure 4 is a schematic diagram of multi-target tracking. As shown in Figure 4, in step S4, a tracking linked list is established for each detected target block to perform multi-target tracking. The specific steps are as follows:
[0075] In step S41, a block is created for each detected target and assigned an ID number to establish a tracking linked list;
[0076] In step S42, meanshift is used for tracking. When occlusion occurs, the Kalman filter algorithm is used to predict the target position as the input value for the next frame tracking;
[0077] Step S43: Use the nearest-neighbor judgment method to find, in the current frame, the same target as in the previous frame, and add it to the tracking linked list of the corresponding target (see the sketch below).
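A sketch of steps S42-S43 follows; the constant-velocity Kalman model, the initial window, and the occlusion flag are assumptions, since the patent does not detail them.

```python
import cv2
import numpy as np

# Steps S42-S43 (sketch): meanshift tracking of one target block, with a
# constant-velocity Kalman filter predicting the position under occlusion.
kf = cv2.KalmanFilter(4, 2)                   # state (x, y, vx, vy), measurement (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)

x, y, w, h = 100, 100, 40, 30                 # example initial target block
track_window = (x, y, w, h)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

def track(fg_mask, occluded):
    """One update; fg_mask is the binary foreground image of the frame."""
    global track_window
    if occluded:                              # occlusion: trust the Kalman prediction
        px, py = kf.predict()[:2].ravel()
        track_window = (int(px - w / 2), int(py - h / 2), w, h)
    else:                                     # normal case: meanshift on the mask
        _, track_window = cv2.meanShift(fg_mask, track_window, term)
        kf.predict()
        cx = track_window[0] + w / 2.0        # measured centroid feeds the filter
        cy = track_window[1] + h / 2.0
        kf.correct(np.array([[cx], [cy]], np.float32))
    return track_window
```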
[0078] Figure 5 is a schematic diagram of obstacle position-change detection. As shown in Figure 5, in step S5, the position change of each target block is detected. The specific steps are as follows:
[0079] Step S53: Obtain the centroid coordinates of each target block through step S4, and compare the centroid changes of the same target block at regular intervals. When the centroid change is almost zero, it is preliminarily judged to be a stationary obstacle;
[0080] Specific method: here the position of the same target is tracked in each frame of the image, and the change in this target's position over a period of time is detected; if it is almost unchanged, the target is preliminarily judged to be left behind. Suppose a target is $I_k$ and its centroid coordinate at time $t$ is $C_t(x, y)$. If after $l$ frames its centroid coordinate becomes $C_{t+l}(x, y)$ and the change is greater than $T$ (generally $T$ is 10 to 20), the target is moving and is not marked. If the centroid change is less than $T$, i.e. almost zero (allowing for micro-changes in the binary image extracted from the foreground target), the target is preliminarily determined to be an obstacle. Its centroid value is then taken to determine whether the location is in the road area; if it is, its centroid coordinates are recorded and marked in the video image.
[0081] $$D = \begin{cases} 0, & |C_t(x, y) - C_{t+l}(x, y)| > T \\ 1, & |C_t(x, y) - C_{t+l}(x, y)| \approx 0 \end{cases}$$
[0082] The target marked as 1 is initially judged as an obstacle, and the target marked as 0 is a moving target.
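A minimal sketch of this marking rule (the interval l is supplied by the caller; T = 15 sits in the stated 10-20 range):

```python
import math

# Step S53 (sketch): mark a target 1 (candidate obstacle) when its centroid
# barely moves over l frames, and 0 (moving target) otherwise.
def mark_target(c_t, c_t_plus_l, T=15):
    """c_t and c_t_plus_l are the (x, y) centroids of the same target."""
    change = math.hypot(c_t_plus_l[0] - c_t[0], c_t_plus_l[1] - c_t[1])
    return 1 if change < T else 0
```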
[0083] In step S54, it is judged whether the centroid of the detected stationary obstacle is in the road area. If it is, its influence on traffic is studied further; if it is not in the road area, its influence can be ignored.
[0084] The specific method is as follows: after an obstacle is preliminarily determined by detecting the change in the target centroid position, take the obstacle centroid coordinate and the four coordinates 5 pixels above, below, left, and right of it, and use these five coordinate values to sample the pixels at the same positions in the road-area template image. When the gray value of more than 3 of these pixels is 255, the target is judged to be in the road area, as sketched below.
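A sketch of this five-point test against the road-area template from step S35 (the function and variable names are assumptions):

```python
# Step S54 (sketch): sample the road template at the obstacle centroid and at
# the four points 5 pixels above, below, left, and right of it; more than 3
# white (255) samples means the obstacle lies in the road area.
def in_road_area(template, cx, cy, offset=5):
    points = [(cx, cy), (cx, cy - offset), (cx, cy + offset),
              (cx - offset, cy), (cx + offset, cy)]
    h, w = template.shape
    white = sum(1 for px, py in points
                if 0 <= px < w and 0 <= py < h and template[py, px] == 255)
    return white > 3
```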
[0085] Figure 6 is a schematic diagram of the obstacle threat model establishment. As shown in Figure 6, in step S6, the impact of obstacles on traffic safety is studied by analyzing the changes in vehicle trajectory and running speed. The specific steps are as follows:
[0086] Step S61: When a stationary obstacle is detected in the road area in step S5, the trajectory change of upstream vehicles is recorded;
[0087] The specific method is to calculate, every 10 frames, the slope of the line between the target centroid in the current frame and the first saved centroid of the corresponding target; the calculated slope values of the same target are then sorted to find the slope value with the largest change; the slope value with the largest change obtained in the previous step is compared with a set threshold. When it is greater than the threshold, it is judged that the vehicle changed its driving direction after encountering the obstacle, indicating that the obstacle has affected traffic safety. The calculation formula of the movement-direction-change discrimination parameter is
[0088] $$\overline{\Delta D} = \begin{cases} \dfrac{\Delta D - D_L}{D_H - D_L}, & D_L \le \Delta D = \left| \dfrac{\bar{y}_0 - \bar{y}_{10m}}{\bar{x}_0 - \bar{x}_{10m}} \right| \le D_H, \; m \ge 1 \\ 0, & \text{otherwise} \end{cases}$$
[0089] where $\overline{\Delta D}$ is the road safety threat discrimination parameter, $D_L$ and $D_H$ are the minimum and maximum thresholds of the condition, $\Delta D$ is the direction change value of the current vehicle target, $(\bar{x}_0, \bar{y}_0)$ is the centroid of the moving vehicle in the first frame in which it is detected, and $(\bar{x}_{10m}, \bar{y}_{10m})$ is its centroid after $m$ tens of frames. $\overline{\Delta D}$ is the normalized value: when the condition is met, its value ranges between 0 and 1; when the condition is not met, its value is zero.
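This discriminant transcribes directly into code; the threshold values standing in for $D_L$ and $D_H$ are illustrative assumptions, as the patent does not state them.

```python
# Direction-change discriminant (sketch): normalized slope magnitude of the
# line between the first saved centroid and the centroid after m*10 frames.
def direction_change(first, later, d_lo=0.2, d_hi=2.0):
    """first = (x0, y0); later = (x_{10m}, y_{10m}); d_lo/d_hi stand in for D_L/D_H."""
    dx = first[0] - later[0]
    if dx == 0:
        return 0.0                               # vertical line: treated as 'otherwise' here
    delta_d = abs((first[1] - later[1]) / dx)    # direction change value, delta-D
    if d_lo <= delta_d <= d_hi:
        return (delta_d - d_lo) / (d_hi - d_lo)  # normalized to [0, 1]
    return 0.0                                   # condition not met
```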
[0090] Step S62 detects changes in the speed of nearby vehicles;
[0091] The specific method is to calculate the relative distance between target centroid positions every 10 frames. Because the time interval is the same, the magnitude of the distance represents the magnitude of the vehicle speed, and comparing the distances gives the change in vehicle speed. The calculation formula of the speed-change discrimination parameter is
[0092] $$\overline{\Delta V} = \begin{cases} \dfrac{\Delta V - V_L}{V_H - V_L}, & V_L \le \Delta V = \sqrt{(\bar{x}_n - \bar{x}_{n-10})^2 + (\bar{y}_n - \bar{y}_{n-10})^2} \le V_H, \; n \ge 10 \\ 0, & \text{otherwise} \end{cases}$$
[0093] where $\overline{\Delta V}$ is the road safety threat discrimination parameter, $V_L$ and $V_H$ are the minimum and maximum thresholds of the condition, and $\Delta V$ is the speed change value of the current vehicle target; $\bar{x}_n$ and $\bar{x}_{n-10}$ respectively denote the x-coordinate of the moving vehicle's centroid in the current frame and ten frames earlier, and $\bar{y}_n$ and $\bar{y}_{n-10}$ the corresponding y-coordinates. Because the time interval is fixed, the distance change is used to measure the speed change. $\overline{\Delta V}$ is the normalized value: when the condition is met, its value ranges between 0 and 1; when the condition is not met, its value is zero.
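The speed discriminant transcribes the same way; the values standing in for $V_L$ and $V_H$ are again assumptions.

```python
import math

# Speed-change discriminant (sketch): normalized centroid displacement over a
# fixed 10-frame interval (distance stands in for speed at a fixed interval).
def speed_change(c_now, c_prev10, v_lo=2.0, v_hi=40.0):
    """c_now = (x_n, y_n); c_prev10 = (x_{n-10}, y_{n-10}); v_lo/v_hi stand in for V_L/V_H."""
    delta_v = math.hypot(c_now[0] - c_prev10[0], c_now[1] - c_prev10[1])
    if v_lo <= delta_v <= v_hi:
        return (delta_v - v_lo) / (v_hi - v_lo)  # normalized to [0, 1]
    return 0.0                                   # condition not met
```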
[0094] Step S63 combines the trajectory change and speed change to analyze the impact of obstacles on traffic safety.
[0095] In step S7, Zernike moments are used to classify obstacles so as to distinguish vehicles from leftover objects. The specific steps are as follows:
[0096] Step S71 establishes a vehicle binary image sample database for vehicle detection;
[0097] Step S72: Extract the Zernike moments of each sample and save them as an array to form a detection library;
[0098] Step S73: For an actually detected obstacle, take the binary image area of its minimum bounding rectangle and likewise extract its Zernike moments to match against the sample values in the detection library. If a certain degree of matching is satisfied, the obstacle is judged to be a vehicle; otherwise it is a leftover object (a sketch of this matching follows step S74);
[0099] In step S74, an alarm is issued.
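A sketch of steps S72-S73 using the mahotas library's Zernike moments (the patent names no library; the radius, degree, matching threshold, and sample file names are all assumptions):

```python
import cv2
import mahotas
import numpy as np

# Step S72 (sketch): build the detection library from vehicle binary samples.
# Step S73 (sketch): match an obstacle's minimum-bounding-rectangle image
# against the library; a close match is classified as a vehicle.
def zernike_signature(binary, radius=32, degree=8):
    resized = cv2.resize(binary, (2 * radius, 2 * radius))
    return mahotas.features.zernike_moments(resized, radius, degree=degree)

library = [zernike_signature(cv2.imread(f, cv2.IMREAD_GRAYSCALE))
           for f in ["vehicle1.png", "vehicle2.png"]]           # sample database

def classify_obstacle(obstacle_binary, threshold=0.1):
    sig = zernike_signature(obstacle_binary)
    best = min(np.linalg.norm(sig - s) for s in library)        # nearest sample distance
    return "vehicle" if best < threshold else "leftover object"
```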
[0100] The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited to this. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed by the present invention. It should be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.