[0058] Referring to Figure 1, which illustrates the block diagram of the system architecture of the present invention, the image-based obstacle detection and warning system 10 has at least one image capturing unit 11, serving as the sensing element for capturing images, electrically connected to a processing device 14. The image capturing unit 11 may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor. The processing device 14 may optionally be electrically connected to at least one external signal capture unit 12 to receive external signals, such as a vehicle speed signal, a gear position signal, or a steering wheel angle, for reference. The processing device 14 has at least one arithmetic processing unit 144 electrically connected to at least one signal calculation unit 142. The arithmetic processing unit 144 may be a central processing unit (CPU), a microprocessor (μP), or a single-chip microcomputer serving as the processing core of the execution program; it receives and processes the external signals and has a built-in horizontal line detection algorithm, obstacle height and width detection algorithm, and obstacle distance algorithm. The signal calculation unit 142 uses a digital signal processor (DSP) connected to the image capturing unit 11 to convert the image and generate a warning signal. The processing device 14 is electrically connected to a warning device 16 and transmits the determined signal to the warning device 16 so that a warning is issued in a timely manner to remind the driver. The warning device 16 can display the distance between the obstacle and the vehicle and the state of the system through a display unit 162; an LED unit 164 indicates the system status, and a loudspeaker unit 166 sends out voice signals to inform the driver of the current obstacle and the approaching condition of the vehicle. The warning device 16 can use the display unit 162, the LED unit 164, or the loudspeaker unit 166 alone, or use them in combination. The display unit 162 can be any display-related product, such as a cathode ray tube (CRT), a liquid crystal display (LCD), or a plasma display panel (PDP); the LED unit 164 can use a light-emitting diode (LED) or an organic light-emitting diode (OLED); and the loudspeaker unit 166 can use a buzzer or a speaker alone, or the buzzer and speaker can be used interchangeably. The image-based obstacle detection and warning system 10 can be integrated into a parking guidance system (PGS) and a parking assistance system (PAS) to give drivers appropriate warnings.
[0059] The steps of the method of the present invention are described with reference to Figures 1 to 8. As shown in Figure 2, the image-based obstacle detection and warning method proceeds as follows. In steps S10 and S12, after the system is activated, the processing device 14 starts to capture a road image, then defines a horizontal line according to at least one lane line in the road image and defines a region of interest (ROI). The manner of setting the region of interest and its range can be referred to Figures 3A to 3D. Figure 3A shows the definition of the horizontal line. As shown in Figure 3B, the region of interest is preset with a vanishing-point horizontal signal height interval (Region Of Interest-Height, ROI-H) to detect the height of obstacles, and is also set with a ground interval (Region Of Interest-Ground, ROI-G) to detect the location of obstacles. As shown in Figure 3C, the image capturing unit 11 of the present invention adopts a camera with a general viewing angle of 130 degrees; its setting range may vary with the mounting position and elevation angle of the image capturing unit 11, because the moving vehicle shakes on uneven road surfaces during actual operation, or because the detection target may be the vertical angle between a wall and the ground, so the use of a single line segment is likely to cause the position to change. As shown in Figure 3D, the relationship among the angle, the focal length, and the preset horizontal position set by the image capturing unit 11, and how each value is obtained in actual operation, can be described by formula (1):
[0060] θ = tan⁻¹((a − b) / f)   (1);
[0061] Here, θ is the rotation angle of the central axis of the lens of the image capturing unit 11; b is the preset vanishing point position in the image plane; a is the detected vanishing point position in the image plane, so that (a − b) is the horizontal displacement; and f is the distance between the lens of the image capturing unit 11 and the image plane, that is, the focal length.
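A worked example of formula (1), assuming the focal length and vanishing-point positions are expressed in pixels; the numeric values are illustrative only and are not calibration values from the embodiment:

```python
import math

# Illustrative values only (not calibrated values from the embodiment).
f = 400.0   # focal length: distance from the lens to the image plane, in pixels
b = 320.0   # preset vanishing-point position in the image plane (pixel column)
a = 350.0   # vanishing-point position actually detected in the image plane

# Formula (1): rotation angle of the lens central axis, from the horizontal
# displacement (a - b) and the focal length f.
theta = math.atan((a - b) / f)
print(f"rotation angle: {math.degrees(theta):.2f} degrees")  # about 4.29 degrees
```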
[0062] After the horizontal line position is set, a detection band is set above and below the horizontal line, taking the vanishing point horizontal line as the baseline. The horizontal line detection area occupies about 5%–10% of the screen on each side (in the image capturing unit 11 of the present invention, 7.5% corresponds to 30 pixels), i.e., a range of about 30–35 pixels, to handle detection of the vertical angle between a wall and the ground.
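As a minimal worked example of this setting, assuming an image 400 pixels high so that 7.5% corresponds to the 30 pixels cited above (the function name and values are illustrative, not part of the disclosed embodiment):

```python
def horizon_band(horizon_row: int, image_height: int, ratio: float = 0.075):
    """Return the rows bounding the horizon detection band (+/- ratio of image height)."""
    half_band = int(round(image_height * ratio))      # e.g. 400 * 0.075 = 30 pixels
    return horizon_row - half_band, horizon_row + half_band

top, bottom = horizon_band(horizon_row=200, image_height=400)
print(top, bottom)   # 170 230 -> a band of +/-30 pixels around the horizon line
```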
[0063] As shown in Figures 4A to 4D and Figures 5A to 5D, the different shielding results produced by different distances are explained. The processing device 14 uses the principles of imaging geometry, together with the focal length obtained from the image capturing unit 11, to relate the obstacle distance to the pixel size in the image, and thereby sets the required preset threshold and the vanishing point horizontal detection range. The obstacle width setting is implemented as follows: with an obstacle distance of 3 meters and a width of 10 cm as the initial detection setting, a 10 cm obstacle detected at 3 meters corresponds to 10 pixels, which is taken as the default threshold; at 1.5 meters, 10 pixels can detect an obstacle of about 7.5 cm (the test obstacle recommended by ISO 17386 is a circular tube with a diameter of 75 mm and a height of 1000 mm). The obstacle height setting takes an obstacle distance of 3 meters and a height of 30 cm as the initial setting, so that an obstacle 30 cm high can be detected at a distance of 3 meters (the installation height of the camera is about 25 cm; 30 cm is higher than the camera and the vanishing point, so the line of sight to the vanishing point will be blocked). After the obstacle position is detected by the image capturing unit 11, the parameters of the image capturing unit 11 obtained by calibration and the geometric relationship between those parameters and distance are used to obtain the location of the obstacle.
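These settings follow from the usual pinhole-projection relation, in which the apparent width in pixels is approximately f·W/D; a minimal sketch under that assumption (the focal length below is chosen so that a 10 cm object at 3 m maps to about 10 pixels and is not a calibrated value from the embodiment):

```python
def object_width_in_pixels(width_m: float, distance_m: float, focal_px: float) -> float:
    """Pinhole projection: apparent width in pixels of an object of width_m at distance_m."""
    return focal_px * width_m / distance_m

FOCAL_PX = 300.0   # illustrative focal length in pixels (10 cm at 3 m ~= 10 px)
print(object_width_in_pixels(0.10, 3.0, FOCAL_PX))    # 10.0 px -> used as the preset threshold
print(object_width_in_pixels(0.075, 1.5, FOCAL_PX))   # 15.0 px -> the ISO 17386 75 mm tube
                                                      # exceeds the 10 px threshold at 1.5 m
```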
[0064] After the processing device 14 has defined the horizontal line and the region of interest, obstacle detection starts. In step S14, the processing device 14 determines whether the horizontal line signal exists. As shown in Figures 6A to 6C, the detection method of the present invention uses high-frequency edge information of the image, such as a Sobel mask or a Canny mask, to process horizontal and vertical edges for edge image processing, and uses dilation and erosion operations to strengthen the required edges. The horizontal edges are projected onto the horizontal axis and converted into a histogram. Figure 6A is a schematic diagram of the real-scene marking record of the present invention, and Figure 6B is a schematic diagram of the grayscale image marking record of the present invention, in which the obtained horizontal edges serve as the basis for judging whether the horizontal line signal in the height interval is blocked. As shown in Figure 6C, when an obstacle blocks the horizontal line, a histogram gap as shown in the figure is generated; the processing device 14 statistically determines whether histogram positions below the threshold exist, marks the areas that meet the obstacle condition, and records the height and width of the histogram gap for subsequent processing. Following step S14, the processing device 14 judges, according to the built-in horizontal line detection algorithm, whether the horizontal line signal in the height interval exists. When the horizontal line signal in the height interval exists, it is judged that there is no obstacle, and the procedure returns to step S10; when the horizontal line signal in the height interval does not exist, it is judged that an obstacle may be present, and the procedure continues to step S16. In step S16, it is determined whether the horizontal line signal in the height interval is completely shielded. If it is not completely shielded, the procedure proceeds to step S18 to detect the horizontal line signal blocked by the obstacle and, according to the obstacle height and width detection algorithm, record the bottom position and height information of each obstacle; when the horizontal line signal in the height interval is completely shielded, the processing device 14 performs step S22 according to the previously defined horizontal line, searching downward for the horizontal edge with the center of the height-interval horizontal line signal as the setting range. In step S20, the region of interest together with the preset threshold is used for edge detection to determine whether the height and width of the obstacle detected from the contour are greater than the preset threshold, and a comparative analysis is made between the previously recorded histogram gap height and width and the preset threshold. The judgment is based mainly on detecting the height of the obstacle, with the width of the obstacle as a supplementary criterion. Figure 7A illustrates the real-time detection state of the relationship between the obstacle and the preset threshold of the present invention. As shown in Figure 7B, when the height and width of the detected obstacle are greater than the preset threshold, it is determined that the obstacle has a volume large enough to hinder the movement of the vehicle.
Therefore, the processing device 14 performs step S28 according to the obstacle distance algorithm to estimate the distance of the obstacle in the ground interval. Figure 7C is a schematic diagram of an obstacle smaller than the preset threshold of the present invention. When the height and width of the obstacle are not greater than the preset threshold (because the detected target may be a marking line or the vertical angle between a wall and the ground), it is judged that the volume of the obstacle is not enough to hinder the movement of the vehicle, and the procedure returns to step S10. As in step S16, when the horizontal line signal of the height interval is completely shielded, the processing device 14 performs step S22 according to the previously defined horizontal line. Step S22 uses the center of the height-interval horizontal line signal as the setting range to search for the horizontal edge using the high-frequency image information. Step S24 marks and records the upper and lower edges of the horizontal line of the obstacle, and records the bottom position and height of the obstacle (please refer to Figure 8). Step S26 judges whether it is an obstacle according to the position and height of the horizontal edge. When the processing device 14 determines that it is an obstacle, it performs step S28 to estimate the distance of the obstacle in the ground interval; if the horizontal edge position and height are not determined to correspond to an obstacle, the procedure returns to step S10. When it is determined in step S20 that the height and width of the obstacle are greater than the threshold, or in step S26 that the horizontal edge position and height correspond to an obstacle, step S28 is performed to estimate the obstacle distance in the ground interval. When the obstacle distance estimate does not reach the set range, the procedure returns to step S10; when the obstacle distance estimate reaches the set range, the procedure proceeds to step S30 to issue a warning. In step S30, when the estimated obstacle distance reaches the set range, the warning device 16 starts to operate, and the procedure returns to step S10 and continues to perform obstacle detection.
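The threshold comparison of step S20 can be illustrated by a short sketch in which height is the primary criterion and width the supplementary one; the function name and the numeric thresholds below are illustrative assumptions, not values taken from the embodiment:

```python
def is_blocking_obstacle(height_px: int, width_px: int,
                         height_threshold: int = 30, width_threshold: int = 10) -> bool:
    """Step S20 sketch: height is the primary test, width the supplementary test."""
    if height_px <= height_threshold:
        # Too low: likely a marking line or the wall/ground junction, not an obstacle.
        return False
    return width_px > width_threshold

# A 40 x 15 px notch passes; a 12 x 25 px notch (e.g. a lane marking) does not.
print(is_blocking_obstacle(40, 15), is_blocking_obstacle(12, 25))
```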
[0065] Referring to Figure 9, step S14 of determining whether the horizontal line signal in the height interval exists includes steps S32 to S40 (please also refer to Figures 1 and 2). After the processing device 14 captures a road image in step S10, it performs step S32 to set the vanishing point horizontal line detection range, and then performs step S34 to carry out horizontal edge processing within that detection range. In step S36, the processed image is projected vertically onto the horizontal axis, and step S38 then performs histogram statistics. In step S40, it is judged whether any histogram value smaller than the preset threshold exists. If no such value exists, it is determined that there is no obstacle, and the procedure returns to step S10; if such a value exists, the processing device 14 determines that there may be an obstacle and proceeds to step S16 to determine whether the horizontal line signal of the height interval is completely shielded.
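Steps S32 to S40 can be sketched as follows, assuming OpenCV (cv2) and NumPy are available and using a vertical Sobel gradient for the horizontal-edge processing; the function name, thresholds, and kernel size are illustrative, not part of the disclosed embodiment:

```python
import cv2
import numpy as np

def horizon_gap_columns(gray: np.ndarray, band_top: int, band_bottom: int,
                        edge_threshold: float = 50.0, hist_threshold: float = 3.0) -> np.ndarray:
    """Steps S32-S40 sketch: find columns where the horizon-band signal is occluded.

    gray: single-channel road image; band_top/band_bottom: rows bounding the
    vanishing-point horizon detection range set in step S32.
    """
    band = gray[band_top:band_bottom, :]
    # Step S34: horizontal-edge processing (vertical gradient highlights horizontal edges).
    edges = np.abs(cv2.Sobel(band, cv2.CV_32F, dx=0, dy=1, ksize=3))
    binary = (edges > edge_threshold).astype(np.uint8)
    # Strengthen the required edges with dilation followed by erosion.
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.erode(cv2.dilate(binary, kernel), kernel)
    # Steps S36-S38: project vertically onto the horizontal axis and build the histogram.
    histogram = binary.sum(axis=0)
    # Step S40: columns whose histogram value falls below the threshold are candidate gaps.
    return np.where(histogram < hist_threshold)[0]
```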
[0066] Referring to Figure 10, step S18 of obstacle contour detection further includes steps S42 to S58 (please also refer to Figures 1 and 2). In step S42, the processing device 14 uses high-frequency edge information of the image, such as a Sobel mask or a Canny mask, to perform edge detection, using the previously defined horizontal line as the boundary between the upper and lower halves of the image: the portion above the center of the horizontal line is taken as the upper half of the image in step S44, and the portion below the center of the horizontal line is taken as the lower half of the image in step S54. For the upper half of the image in step S44, vertical edge estimation is performed in step S46 and horizontal edge estimation in step S48, the two results are subtracted in step S50, and the required edges are strengthened by dilation and erosion; for the lower half of the image in step S54, horizontal edge estimation is performed in step S56, followed by dilation and erosion in step S58 to enhance the image.
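Under the same assumptions (OpenCV and NumPy, with illustrative thresholds), the upper-half/lower-half edge processing of steps S42 to S58 might be sketched as:

```python
import cv2
import numpy as np

def obstacle_edge_maps(gray: np.ndarray, horizon_row: int):
    """Steps S42-S58 sketch: split at the horizon and build edge maps for both halves."""
    kernel = np.ones((3, 3), np.uint8)
    # Step S42: high-frequency edge information (Sobel gradients here; Canny is an alternative).
    horiz = (np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)) > 50).astype(np.uint8)
    vert = (np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)) > 50).astype(np.uint8)

    # Steps S44-S50: upper half -- keep mainly vertical structure by subtracting the
    # horizontal edges from the vertical edges, then reinforce with dilation/erosion.
    upper = cv2.subtract(vert[:horizon_row, :], horiz[:horizon_row, :])
    upper = cv2.erode(cv2.dilate(upper, kernel), kernel)

    # Steps S54-S58: lower half -- keep the horizontal (bottom) edges and reinforce them.
    lower = cv2.erode(cv2.dilate(horiz[horizon_row:, :], kernel), kernel)
    return upper, lower
```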
[0067] Referring to Figure 11, step S32 of setting the vanishing point horizontal line detection range further includes steps S60 to S76 (please also refer to Figures 1 and 9). In step S60, the lane line identification process is carried out using the image characteristics of the road lane lines, followed by high grayscale value identification in step S62, lane line edge feature identification in step S64, and lane width identification in step S66, so that the lane line feature points (not shown in the figure) are extracted from the road image. The lower half of the road image is divided into multiple regions from bottom to top using the ROI method, and the latest positions of the lane line feature points are updated in real time. At this time, if lane line feature points are detected on only one side of the road, an instant lane line can be established, i.e., the procedure enters step S68. If identified lane line feature points exist on both sides of the road, the processing device 14 can simultaneously establish two instant lane lines L1 and L2, i.e., the procedure enters step S72. In particular, many rural roads have no markings; to account for this situation, the procedure enters step S74 when no lane line feature points have been detected for a period of time.
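A loose sketch of the band-wise feature-point extraction of steps S60 to S66, assuming NumPy and sampling one row per band; the thresholds and marking-width bounds are illustrative assumptions, not values from the embodiment:

```python
import numpy as np

def lane_feature_points(gray: np.ndarray, num_bands: int = 8,
                        bright_threshold: int = 180,
                        min_width: int = 3, max_width: int = 30):
    """Steps S60-S66 sketch: scan the lower half of the image in bands (bottom to top)
    and keep bright runs whose width matches a plausible lane-marking width."""
    h, w = gray.shape
    lower = gray[h // 2:, :]
    band_h = lower.shape[0] // num_bands
    points = []
    for i in range(num_bands):                       # bottom-to-top band order
        row = lower.shape[0] - 1 - i * band_h
        bright = lower[row, :] > bright_threshold    # step S62: high grayscale value
        cols = np.flatnonzero(bright)
        if cols.size == 0:
            continue
        # Steps S64-S66: group consecutive bright columns and keep runs of marking width.
        runs = np.split(cols, np.where(np.diff(cols) > 1)[0] + 1)
        for run in runs:
            if min_width <= run.size <= max_width:
                points.append((h // 2 + row, int(run.mean())))
    return points
```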
[0068] In accordance with the above, and referring to Figures 1, 11, and 12, if the procedure enters step S68, it means that after step S66 the processing device 14 has established only a one-sided instant lane line; here the right instant lane line L1 is taken as an example. In step S70, a virtual instant lane line L2 is established from the obtained instant lane line L1 together with the preset standard lane width, which is equivalent to obtaining two instant lane lines as in step S72. The procedure can then proceed to step S76, in which the instantaneous vanishing point Px extending at the far end of the road is obtained from the instant lane line L1 and the virtual instant lane line L2. In addition, the virtual instant lane line can also be set as any line segment parallel to the instant lane line, so that the instantaneous vanishing point Px at which the two extend at the far end of the road can likewise be obtained.
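Step S76 amounts to intersecting the two instant lane lines; a minimal sketch, with each line represented by two image points (the coordinates below are illustrative only):

```python
def vanishing_point(line1, line2):
    """Intersection of two instant lane lines, each given as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = line1
    (x3, y3), (x4, y4) = line2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None                      # parallel lines: no finite vanishing point
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

# Right instant lane line L1 and a virtual left line L2 offset by a standard lane width.
L1 = ((400, 480), (330, 240))
L2 = ((100, 480), (250, 240))
print(vanishing_point(L1, L2))   # instantaneous vanishing point Px (about (304.5, 152.7))
```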
[0069] When neither the left lane line nor the right lane line is painted on the road, step S74 is used in order to obtain the lane line vanishing point Px: at least one virtual lane line is established by image processing. The procedure by which the processing device 14 establishes at least one virtual lane line is shown in Figure 13. First, in step S80, edge detection is performed and multiple virtual lane feature points are obtained from the edges of the image; the feature points can be obtained from images of vehicles ahead, roadside buildings, and the like arranged in parallel along the traffic lane. Then, in step S82, the Hough transform is used to analyze the lane feature points and perform feature conversion. Then, in step S84, straight line detection is performed to extract the corresponding multiple virtual lane line feature points and connect the successive virtual lane line feature points. In this way, one or more virtual instant lane lines are established, and the instantaneous vanishing point Px required in step S76 is formed by estimating the intersection point at which all virtual lane lines extend in the distance.
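A minimal sketch of steps S80 to S84 using OpenCV's Canny edge detector and probabilistic Hough transform; the parameter values are illustrative assumptions, not values from the embodiment:

```python
import cv2
import numpy as np

def virtual_lane_lines(gray: np.ndarray):
    """Steps S80-S84 sketch: edge detection, Hough transform, straight-line extraction."""
    # Step S80: edge detection on the road image.
    edges = cv2.Canny(gray, 50, 150)
    # Steps S82-S84: probabilistic Hough transform returns line segments (x1, y1, x2, y2)
    # that can serve as virtual lane lines when no markings are painted on the road.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```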
[0070] Referring to Figures 14A to 14D, and also to Figures 2, 8, and 10: as shown in Figure 14A, the detailed procedure of obstacle contour detection uses the horizontal line defined at the beginning to divide the screen into two parts, the upper half of the image above the horizontal line and the lower half of the image below the horizontal line. As shown in Figure 14B, the upper half of the image is mainly used to detect the height of the horizontal edges of the obstacle (as shown in Figure 8); as shown in Figure 14C, the upper half of the image is also used to find the height of the vertical edges of the obstacle (as shown in Figure 8); and as shown in Figure 14D, the lower half of the image is mainly used to detect the bottom position of the obstacle, i.e., to find the horizontal edge at the bottom of the obstacle (as shown in Figure 8). In the upper half of the image, the previously processed horizontal and vertical edges are evaluated for height, the required edges are strengthened by dilation and erosion operations respectively, and the horizontal edges are then subtracted from the vertical edges, leaving mainly the vertical edges. In the lower half of the image, the horizontal edges are strengthened by dilation and erosion to reinforce the bottom edge. When the height and distance recorded in the upper half and the lower half of the image are greater than the preset threshold, the edges of the possible obstacle are marked, projected onto the vertical axis, and counted into a histogram. The upper half of the image is scanned from top to bottom for the uppermost edge to mark and record the highest position of the obstacle and to calculate whether the height of the obstacle exceeds the preset threshold; the lower half of the image is scanned for the bottom edge to mark the bottom position of the obstacle and to calculate the distance to that bottom position. Finally, the distance and warning information are added back to the original image and displayed on the screen.
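Given upper-half and lower-half edge maps such as those sketched after step S58 above, the per-column scan for the highest edge and the obstacle bottom position might look like the following; this is a sketch under the assumption that the edge maps are binary NumPy arrays:

```python
import numpy as np

def obstacle_top_and_bottom(upper_edges: np.ndarray, lower_edges: np.ndarray, column: int):
    """Per-column scan sketch: topmost edge row in the upper half (obstacle height)
    and bottommost edge row in the lower half (used for distance estimation)."""
    top_rows = np.flatnonzero(upper_edges[:, column])
    bottom_rows = np.flatnonzero(lower_edges[:, column])
    top = int(top_rows[0]) if top_rows.size else None         # highest marked edge
    bottom = int(bottom_rows[-1]) if bottom_rows.size else None  # obstacle bottom position
    return top, bottom
```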
[0071] Finally, referring to Figure 15, the obstacle distance estimation formula (2) of the present invention is D = H·f / (Yv − Ov)   (2), where D is the distance between the image capturing unit 11 and the obstacle, H is the installation height of the image capturing unit 11, f is the focal length of the image capturing unit 11, Ov is the center point of the image, and Yv is the bottom position of the obstacle in the image. The processing device 14 can set corresponding warning modes according to different distances.
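A worked example of formula (2), with illustrative values chosen only to show the arithmetic (they are not calibration values from the embodiment):

```python
def obstacle_distance(H: float, f: float, Yv: float, Ov: float) -> float:
    """Formula (2): D = H * f / (Yv - Ov)."""
    return H * f / (Yv - Ov)

# Illustrative values: camera mounted 0.25 m high, focal length 300 px,
# obstacle bottom at image row 265 with the image center at row 240.
print(obstacle_distance(H=0.25, f=300.0, Yv=265.0, Ov=240.0))   # 3.0 metres
```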
[0072] The present invention proposes an image-based obstacle detection and warning system and method with high reliability and good recognition efficiency for detecting obstacles, which overcomes the misjudgment or over-sensitivity that may occur in existing obstacle recognition systems and further improves the reliability of the system. By using image processing and computer vision technology in place of existing radar or ultrasonic obstacle detection technology, the driver can directly and clearly see the positional relationship between the vehicle and the surrounding environment, which further increases convenience of use.
[0073] Although the foregoing embodiments of the present invention are disclosed as above, they are not intended to limit the present invention. Changes and modifications made without departing from the spirit and scope of the present invention all belong to the patent scope of the present invention; for the scope of the patent defined by the present invention, please refer to the appended claims.