Over-height vehicle approach detection method based on progressive growth of the image contour in a sensitive region
A technology relating to sensitive regions and vehicles, applied to instruments, character and pattern recognition, computer components, and other fields. It addresses problems such as slow computation speed, susceptibility to road driving conditions, and the inability to judge the distance of each oncoming vehicle individually.
Examples
Embodiment 1
[0037] As shown in Figure 2, the overhead transmission line channel spanning the road sags in a parabolic shape. The warning height A is set below its lowest point, and the monitoring camera is mounted to shoot parallel to the road surface. The in-lens setup is shown in Figure 3: the sensitive area is a rectangle whose lower marking line lies at height A, parallel to the road. In the schematic of Figure 3, the sensitive area is divided into 4 rows and n columns, i.e., 4×n blocks. The background of the monitoring picture is preset and its gray level recorded; the gray level of the camera picture is then compared against it at an interval of 10 frames. When no vehicle enters the picture, the background gray level remains essentially unchanged; when a vehicle enters the picture, the gray values of the sensitive area change. Embodiment 1: When a vehicle enters the screen, the att...
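The gray-level comparison described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the region layout `(top, bottom, left, right)` and the change threshold are assumptions introduced for the example.

```python
import numpy as np

def mean_gray(frame, region):
    """Mean gray level of the rectangular sensitive region.

    frame  : 2-D array of gray values
    region : (top, bottom, left, right) row/column bounds (hypothetical layout)
    """
    top, bottom, left, right = region
    return float(frame[top:bottom, left:right].mean())

def region_changed(background, frame, region, threshold=10.0):
    """Compare the current frame against the preset background (the patent
    samples every 10 frames) and report a change when the mean gray level of
    the sensitive region shifts by more than `threshold`, an assumed tuning
    parameter."""
    return abs(mean_gray(frame, region) - mean_gray(background, region)) > threshold

# Usage: a uniform background, then a frame with a dark object in the region.
background = np.full((120, 160), 200, dtype=np.uint8)
frame = background.copy()
frame[20:60, 40:120] = 50          # simulated vehicle edge entering the region
region = (10, 70, 30, 130)         # rows 10..70, cols 30..130
print(region_changed(background, frame, region))   # True
```

With no vehicle in the picture, `region_changed(background, background, region)` stays `False`, matching the observation that the background gray level remains essentially unchanged.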
Embodiment 2
[0044] Embodiment 2: When a non-vehicle disturbance enters the screen (see the attached Figure 6), take a landing insect as an example. When a flying insect lands on the lens, it also occludes the sensitive area and causes a frame difference in the captured image, even though no over-height vehicle has reached the sensitive area. The method of judging the trend of change of the region modules, and of thereby eliminating such false alarms, is described as follows:
[0045] Divide the sensitive area into 4×n sub-areas. Each sub-area comprises p×q pixels (p is the number of pixel rows in the sub-area, q the number of pixel columns; both p and q are positive integers).
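The partition in paragraph [0045] can be sketched as below. The 1-based `(i, j)` keys mirror the F(i, j) notation used in the next paragraph; the assumption that the area divides evenly into the grid is mine, not the patent's.

```python
import numpy as np

def partition_sensitive_area(area, n_rows=4, n_cols=8):
    """Split the sensitive-area array into n_rows x n_cols sub-blocks
    F(i, j), each of p x q pixels.  Returns the block dict plus p and q.
    Assumes the area dimensions divide evenly by the grid."""
    h, w = area.shape
    p, q = h // n_rows, w // n_cols          # pixels per sub-block
    blocks = {}
    for i in range(n_rows):
        for j in range(n_cols):
            # 1-based keys to match the F(i, j) naming in the text
            blocks[(i + 1, j + 1)] = area[i * p:(i + 1) * p, j * q:(j + 1) * q]
    return blocks, p, q

# Usage: a 40 x 160 sensitive area split into 4 x 8 blocks of 10 x 20 pixels.
area = np.zeros((40, 160), dtype=np.uint8)
blocks, p, q = partition_sensitive_area(area, n_rows=4, n_cols=8)
print(len(blocks), p, q)   # 32 10 20
```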
[0046] The sensitive sub-areas occupied by the edge of the landed insect are F(1,2) to F(10,4), and the pixels occupied within these sub-areas are X(F(1,2)), X(F(1,3)), ..., X(F(10,4)).
[0047] When the picture in the sensitive area changes, for example, after the fall...
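One way to read the "trend of region-module changes" test is: count the sub-blocks whose gray level differs from the background in each sampled frame, then require that count to grow across frames. An approaching over-height vehicle occupies progressively more sub-blocks, while a landed insect blocks a fixed set once and then holds steady. The sketch below is a hypothetical realization of that idea; the counting scheme and the `min_growth_steps` parameter are assumptions, not the patent's claimed procedure.

```python
def is_approaching_vehicle(changed_counts, min_growth_steps=3):
    """Trend test over the number of changed sensitive sub-blocks in
    successive sampled frames.  A vehicle climbing into the region makes
    the count increase frame after frame; a landed insect occludes a fixed
    set of sub-blocks, so the count stays flat and the alarm is rejected
    as false."""
    growth = sum(1 for a, b in zip(changed_counts, changed_counts[1:]) if b > a)
    return growth >= min_growth_steps

# Usage: progressive growth (vehicle) vs. a flat count (insect on the lens).
print(is_approaching_vehicle([2, 5, 9, 14, 20]))   # True
print(is_approaching_vehicle([12, 12, 12, 12]))    # False
```

Requiring several consecutive growth steps, rather than a single frame difference, is what suppresses the false alarm in Embodiment 2: the insect triggers one large frame difference but no sustained growth.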


