Road network difference method and device, electronic equipment and storage medium

A road network differencing method and related technology, applied in the field of electronic maps, which solves problems such as erroneous difference results, interference with matching-based differencing, and poor accuracy of difference results, achieving the effect of improved generalization and accuracy.

Pending Publication Date: 2022-05-03
HANHAI INFORMATION TECH SHANGHAI


Problems solved by technology

[0004] The existing road network difference method comprises two sub-processes, matching and differencing, and thus belongs to the two-stage class of methods. It has the following disadvantages. Road network matching takes two roads, determines whether they match, and outputs a matching confidence; matching itself is relatively complicated and generally relies on features such as road shape, distance, topology, and road attributes. This creates a feature dependency: because matching generally uses road attributes and topology, the source data must contain these features, yet the source data is not always complete road network data produced by a map data vendor. It may instead be a road network extracted from satellite imagery, trajectories, and the like; such data has no road attributes and lacks complete topological relationships, which affects the subsequent matching and differencing;...

Method used

By extracting the skeleton map of the binary image and expanding each data point representing a road in the road vector data obtained after vectorization by a preset range, the expanded road vector data covers a larger extent than the road data of the actual road network, so that an accurate road intersection can be obtained in the subsequent intersection step and an accurate difference result can be obtained; that is, the accuracy of the difference result is improved.
The first road image and the second road image are stacked as independent channels to form a dual-channel image, and the dual-channel image is input into the semantic segmentation model. The model uses the reference image in the dual-channel image as a reference and performs semantic segmentation on the segmented image in the dual-channel image to obtain the semantic segmentation result, that is, the difference result image. Road network differencing is a binary classification task and the last layer is activated with the sigmoid function, so the final output feature map is a single channel: the network input is a two-channel image stacked from two images, and the output difference result image is a single-channel image. Compared with fusing the first road image and the second road image into one fused image, stacking them into a two-channel image reduces the masking interference between road network data from different sources.
The road network difference device provided by the embodiment of the present application works as follows. The road data interception module intercepts the road data of the area to be differentiated from the first road network data as the first road data, and intercepts the road data of the area to be differentiated from the second road network data as the second road data. The rasterization processing module performs vector-data rasterization on the first road data and the second road data respectively to obtain the first road image and the second road image. The semantic segmentation module inputs the first road image and the second road image into the semantic segmentation model, which performs semantic segmentation on the second road image relative to the first road image to obtain a difference result image. The vectorization processing module performs raster-data vectorization on the difference result image to convert it into a difference result corresponding to the first road data and the second road data. In the embodiment of the present application, since the semantic segmentation model is used to perform s...

Abstract

The embodiment of the invention discloses a road network difference method and device, electronic equipment and a storage medium, and the method comprises the steps: respectively intercepting road data of a to-be-differentiated region from first road network data and second road network data, and obtaining first road data and second road data; performing vector data rasterization processing on the first road data and the second road data to obtain a first road image and a second road image; inputting the first road image and the second road image into a semantic segmentation model, and performing semantic segmentation processing on the second road image relative to the first road image through the semantic segmentation model to obtain a difference result image; and performing raster data vectorization processing on the difference result image to convert the difference result image into difference results corresponding to the first road data and the second road data. According to the embodiment of the invention, the global features of the road network in the to-be-differentiated region are comprehensively considered, and the accuracy of road network difference can be improved.


Examples

  • Experimental program (3)

Example Embodiment

[0028] Embodiment 1
[0029] This embodiment provides a road network difference method, as shown in Figure 2. The method includes steps 210 to 240.
[0030] In step 210, the road data of the area to be differentiated is intercepted from the first road network data as the first road data, and the road data of the area to be differentiated is intercepted from the second road network data as the second road data.
[0031] Among them, the first road network data and the second road network data can be road network data from two different sources, or from different versions of the same source. Both are vector data. The area to be differentiated is the actual geographical area over which the road network difference is to be computed; it is determined in advance and can be expressed by longitude and latitude coordinates.
[0032] After determining the area to be differentiated, query the roads included in the area to be differentiated in the first road network data through the spatial index according to the longitude and latitude coordinates representing the area to be differentiated, truncate the roads intersecting with the frame of the area to be differentiated, and only retain the roads within the area to obtain the first road data. Similarly, the roads included in the area to be differentiated are queried in the second road network data through the spatial index, the roads intersecting with the frame of the area to be differentiated are truncated, and only the roads within the area are retained to obtain the second road data.
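The interception described above — querying roads by spatial index and truncating roads that cross the frame of the area — can be sketched for a single road segment. This is a minimal illustration using the standard Liang-Barsky line-clipping algorithm; the function name `clip_segment` and the planar coordinates are assumptions for illustration, not the embodiment's actual implementation.

```python
def clip_segment(p0, p1, bbox):
    """Liang-Barsky clipping of one road segment against the area frame.

    Roads crossing the frame are truncated so only the part inside is kept.
    bbox = (xmin, ymin, xmax, ymax); returns None if the segment lies
    entirely outside the area.
    """
    (x0, y0), (x1, y1) = p0, p1
    xmin, ymin, xmax, ymax = bbox
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0          # parametric range of the retained piece
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None    # parallel to this edge and outside it
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)   # segment entering the box
            else:
                t1 = min(t1, t)   # segment leaving the box
            if t0 > t1:
                return None       # entry after exit: fully outside
    return ((x0 + t0 * dx, y0 + t0 * dy), (x0 + t1 * dx, y0 + t1 * dy))
```

A whole road would be clipped by applying this to each consecutive pair of vertices and discarding the `None` pieces.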
[0033] In one embodiment of the present application, before the road data of the area to be differentiated is intercepted from the first road network data, it also includes: determining the size of the area to be differentiated according to the input image size of the semantic segmentation model and the length and width of the grid represented by each pixel in the input image; Determining the start longitude and latitude coordinates and the end longitude and latitude coordinates of the area to be differentiated according to the size of the area to be differentiated.
[0034] The size of the input image of the semantic segmentation model is generally fixed. Therefore, the size of the area to be differentiated must be determined from the size of the model's input image and the length and width of the grid cell represented by each pixel (i.e., the real-world length and width each pixel represents). In this way, when the road network difference concerns a specific large area, that large area can be divided into several smaller areas to be differentiated according to this size, and the starting and ending longitude and latitude coordinates of each area can be obtained. For example, when the large area is a city of size 100 × 100 and each area to be differentiated is 10 × 10, the city is divided into 100 areas to be differentiated. For each of the areas obtained by dividing the large area, the road network difference is then carried out according to the road network difference method of the embodiment of the application.
[0035] For example, the image size input to the u-net network is 512 × 512. Corresponding to the original road network, the length and width represented by each pixel must be determined; for example, if each pixel represents 0.5 m × 0.5 m in the real world, then 512 × 512 pixels represent a 256 m × 256 m area to be differentiated, so the length and width of the area are fixed. The bounding box of the area to be differentiated is uniquely determined by the longitude and latitude coordinates (x, y) of its upper left corner: after the upper-left (starting) longitude and latitude coordinates are determined, the ending longitude and latitude coordinates of the area can be determined from its length and width.
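The division of a large area into fixed-size areas to be differentiated can be sketched as follows. The 512-pixel input and 0.5 m-per-pixel resolution come from the example above; the function name and the use of planar metres instead of longitude/latitude are illustrative assumptions.

```python
def tile_area(x0, y0, width_m, height_m, input_px=512, m_per_px=0.5):
    """Split a large region into fixed-size areas to be differentiated.

    Each tile covers input_px * m_per_px metres per side (512 px * 0.5 m
    = 256 m), matching the segmentation model's input size. Coordinates
    are planar metres for simplicity; real data would use lon/lat.
    """
    side = input_px * m_per_px          # tile side length in metres
    tiles = []
    y = y0
    while y < y0 + height_m:
        x = x0
        while x < x0 + width_m:
            # (start corner, end corner) of one area to be differentiated
            tiles.append(((x, y), (min(x + side, x0 + width_m),
                                   min(y + side, y0 + height_m))))
            x += side
        y += side
    return tiles
```

For instance, a 1024 m × 512 m region at this resolution splits into 4 × 2 = 8 areas to be differentiated.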
[0036] After the area frame of the area to be differentiated is determined, all roads contained in the area can be queried from the original road networks (the first road network data and the second road network data) according to the spatial index. The roads intersecting the area frame are truncated, and only the parts within the area are retained, as shown in Figure 3.
[0037] In one embodiment of the present application, the road data of the area to be differentiated is intercepted from the first road network data as the first road data, and the road data of the area to be differentiated is intercepted from the second road network data as the second road data, including: expanding the preset distance outward from the four sides of the area to be differentiated to obtain the intercepted area; Intercepting the road data of the intercepted area from the first road network data as the first road data; Intercepting the road data of the intercepted area from the second road network data as the second road data.
[0038] During road network differencing, the offset of the road network and the lack of context information for pixels at the region boundary can cause a poor segmentation effect near the boundary. To avoid this, the overlap-tile strategy can be adopted: the four sides of the area to be differentiated are expanded outward by a preset distance to obtain the interception area; the road data of the interception area is intercepted from the first road network data and the second road network data respectively, yielding the first road data and the second road data; the difference result image is obtained through the semantic segmentation model; and the data of the area to be differentiated is then cut out of the vector data obtained by vectorizing the difference result image, as the difference result of the area. For example, if the longitude and latitude coordinates of the upper left and lower right corners of the area to be differentiated are (x1, y1, x2, y2), the area can be appropriately expanded by extending each of the four edges outward by a preset distance t, so that the interception area has corner coordinates (x1-t, y1-t, x2+t, y2+t). After the difference result image is obtained through the semantic segmentation model and vectorized, the resulting vector data covers (x1-t, y1-t, x2+t, y2+t); the segmentation result within the inner region (x1, y1, x2, y2) is taken as the difference result.
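The overlap-tile bookkeeping described above can be sketched as two small helpers (names are illustrative; coordinates are treated as planar for simplicity):

```python
def expand_bbox(x1, y1, x2, y2, t):
    """Overlap-tile strategy: grow the area to be differentiated by t on
    all four sides before intercepting road data, so that pixels at the
    region boundary keep their context."""
    return (x1 - t, y1 - t, x2 + t, y2 + t)

def crop_back(result_bbox, t):
    """After inference and vectorization, keep only the original inner
    area as the difference result."""
    x1, y1, x2, y2 = result_bbox
    return (x1 + t, y1 + t, x2 - t, y2 - t)
```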
[0039] In step 220, the first road data is rasterized by vector data to obtain the first road image, and the second road data is rasterized by vector data to obtain the second road image.
[0040] In GIS (Geographic Information System), the common data forms are vector data and raster data: a line in vector data is composed of an ordered sequence of point coordinates, while raster data is composed of a pixel array. The first road data is the data intercepted from the first road network data, and the second road data is the data intercepted from the second road network data; since the first and second road network data are vector data, the first and second road data are also vector data. In the embodiment of the application, the road network difference result is obtained by processing, through a neural-network-based semantic segmentation model, images representing the road data of the area to be differentiated in the two road networks. Therefore, vector-data rasterization must be performed on the first road data and the second road data respectively, converting the first road data into the first road image and the second road data into the second road image.
[0041] In one embodiment of the application, rasterizing the first road data to obtain the first road image and rasterizing the second road data to obtain the second road image includes: setting the pixel value of pixels corresponding to grid cells representing roads in the first road data to 1 and the pixel value of pixels corresponding to other grid cells to 0, obtaining the first road image; and setting the pixel value of pixels corresponding to grid cells representing roads in the second road data to 1 and the pixel value of pixels corresponding to other grid cells to 0, obtaining the second road image.
[0042] The grid in which the data representing the road is located is determined from the first road data according to the spatial index, the pixel value of the pixel corresponding to the grid representing the road is set to 1, and the pixel value of the pixel corresponding to other grids is set to 0, that is, the pixel representing the road is set as the foreground and the other pixel is set as the background. Similarly, the grid in which the data representing the road is located is determined from the second road data according to the spatial index, the pixel value of the pixel corresponding to the grid representing the road is set to 1, and the pixel value of the pixel corresponding to other grids is set to 0, that is, the pixel representing the road is set as the foreground and the other pixel is set as the background. When rasterizing the road data, setting the pixel value of the last corresponding pixel representing the road to 1 and setting the pixel value of the pixel corresponding to other grids to 0 is equivalent to masking the road, which is convenient for subsequent calculation and improves the accuracy of the road network difference results.
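The rasterization rule above (road pixels set to 1, all others to 0) can be sketched with a hypothetical `rasterize_roads` helper that burns road polylines, given here directly in pixel coordinates rather than longitude/latitude, into a binary mask:

```python
import numpy as np

def rasterize_roads(polylines, size=512):
    """Vector-to-raster conversion: pixels covered by a road become 1
    (foreground), all other pixels stay 0 (background).

    polylines: list of roads, each a list of (col, row) pixel coordinates.
    Real data would first be projected from lon/lat into pixel space.
    """
    img = np.zeros((size, size), dtype=np.uint8)
    for line in polylines:
        for (c0, r0), (c1, r1) in zip(line, line[1:]):
            # sample the segment densely enough to hit every pixel it crosses
            n = max(abs(c1 - c0), abs(r1 - r0), 1)
            for k in range(n + 1):
                c = round(c0 + (c1 - c0) * k / n)
                r = round(r0 + (r1 - r0) * k / n)
                if 0 <= r < size and 0 <= c < size:
                    img[r, c] = 1
    return img
```

This 0/1 masking is what lets the later loss function restrict itself to foreground (road) pixels.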
[0043] In step 230, the first road image and the second road image are input into the semantic segmentation model, and the semantic segmentation processing is performed on the second road image relative to the first road image through the semantic segmentation model to obtain the difference result image.
[0044] Among them, the semantic segmentation model can be a fully convolutional network model, an image-based convolutional model, an encoder-decoder model, or a model based on multi-scale and pyramid networks. Taking the first road image as the reference image and the second road image as the segmented image, the segmented image is semantically segmented during road network differencing to determine the roads missing from the reference image relative to the segmented image; that is, the difference result image is the image of roads missing from the first road image relative to the second road image. Of course, the second road image can also be used as the reference image and the first road image as the segmented image. In that case, after the first road image and the second road image are input into the semantic segmentation model, the first road image is semantically segmented relative to the second road image to obtain the difference result image, which is then an image of the roads that are redundant in the first road image relative to the second road image (equivalently, the roads missing from the second road image relative to the first road image).
[0045] In the embodiment of the application, the road network difference problem is regarded as a semantic segmentation problem, the first road image and the second road image are input into the semantic segmentation model, and the semantic segmentation processing is carried out on the segmented image with the reference image as the reference through the semantic segmentation model to obtain the difference result image. Among them, semantic segmentation is a common task in computer vision, which refers to the classification of each pixel in the image. Among them, the semantic segmentation model has been trained according to the samples.
[0046] In one embodiment of the present application, inputting the first road image and the second road image into a semantic segmentation model includes: fusing the first road image and the second road image into one image to obtain a fused image, and distinguishing the road data representing the first road image and the second road image in the fused image; Inputting the fused image into the semantic segmentation model.
[0047] Before the first road image and the second road image are input into the semantic segmentation model, they are fused into one image to obtain the fused image. During fusion, the road data from the first road image and from the second road image are distinguished, so that they remain distinguishable in the fused image; for example, the road data of the two images can be represented by different colors or different line types. After fusion, the fused image is input into the semantic segmentation model, which semantically segments the road data originating from the segmented image, taking the road data originating from the reference image as the reference. Because the road data from the two images are distinguished in the fused image, the semantic segmentation model can tell apart the road data serving as the reference and the road data to be segmented.
[0048] In one embodiment of the present application, inputting the first road image and the second road image into the semantic segmentation model includes: stacking the first road image and the second road image according to the channel to obtain a dual channel image; Inputting the dual channel image into the semantic segmentation model.
[0049] The first road image and the second road image are stacked as independent channels to form a dual-channel image, which is input into the semantic segmentation model. Through the model, the segmented image in the dual-channel image is semantically segmented with the reference image in the dual-channel image as the reference, yielding the semantic segmentation result, that is, the difference result image. Road network differencing is a binary classification task and the last layer is activated by a sigmoid function, so the final output feature map is a single channel: the network input is a two-channel image stacked from the two images, and the output difference result image is a single-channel image. Compared with fusing the first road image and the second road image into one fused image, stacking them into a dual-channel image reduces the masking interference between road network data from different sources.
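The channel stacking can be sketched with NumPy; the function name and the channel-first (2, H, W) layout are assumptions about the model's expected input format:

```python
import numpy as np

def stack_channels(ref_img, seg_img):
    """Stack the reference image and the image to be segmented as two
    independent channels. The model's output difference image is then a
    single-channel map of the same spatial size."""
    assert ref_img.shape == seg_img.shape
    return np.stack([ref_img, seg_img], axis=0)   # shape (2, H, W)
```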
[0050] In one embodiment of the present application, the semantic segmentation model is a u-net model;
[0051] In the convolution layers of the u-net model, the input feature map is padded before convolution, and the padded input feature map is then convolved by the convolution layer to obtain the output feature map, where the output feature map has the same size as the input feature map.
[0052] Among them, the u-net model is a very popular semantic segmentation model. Originally applied to medical image segmentation, it adopts an encoder (down-sampling)-decoder (up-sampling) structure with skip connections and can achieve a very good segmentation effect with only a small amount of labeled data; it has since been applied successfully in other fields. The network structure of u-net resembles the letter "U": it comprises a contraction path on the left and an expansion path on the right. The contraction path uses convolution and down-sampling to reduce the size of the feature map for feature extraction; the expansion path uses convolution and up-sampling to enlarge the feature map; and skip connections between the two paths fuse features of different levels to improve the segmentation effect. Finally, the output feature map is classified pixel by pixel.
[0053] The original u-net model does not use padding during convolution, so the feature map becomes smaller after each convolution calculation. In the embodiment of the application, padding is added to the convolution process so that the output feature map matches the size of the input image: before each convolution layer convolves its input feature map, the input feature map is padded, and the padded feature map is then convolved to obtain an output feature map of the same size. As a result, the whole u-net network changes the image size only during down-sampling and up-sampling, while convolution does not change it, which ensures that the difference result image has the same size as the first road image and the second road image.
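The effect of padding before convolution can be illustrated with a naive "same" convolution; this is a sketch of the principle, not the u-net's actual layers:

```python
import numpy as np

def conv2d_same(x, kernel):
    """Convolution with zero padding ('same'): the border is zero-filled
    before convolving, so the output feature map keeps the input's size.
    Assumes a square, odd-sized kernel."""
    k = kernel.shape[0]
    p = k // 2
    xp = np.pad(x, p)                      # zero-fill the border
    h, w = x.shape
    out = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = (xp[i:i + k, j:j + k] * kernel).sum()
    return out
```

With padding, only the down-sampling and up-sampling stages of the network change the spatial size.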
[0054] The first road image and the second road image can be stacked into a dual-channel image and input into the u-net model, through which the segmented image in the dual-channel image is semantically segmented against the reference image. In this process, the down-sampling path acts as the encoder in the u-net model and extracts features; the up-sampling path acts as the decoder and makes a binary prediction for each pixel of the segmented image based on the corresponding pixel of the reference image. A positive example indicates that the road represented by the pixel is missing from the road network corresponding to the reference image; a negative example indicates that it has a corresponding road in that road network. As shown in Figures 4a-4c, Figure 4a is a schematic diagram of the reference image (road network A) in the embodiment of the present application, Figure 4b is a schematic diagram of the segmented image (road network B), and Figure 4c is a schematic diagram of the annotated segmentation result (ground truth). The ground truth contains the roads missing from road network A relative to road network B (the black foreground represents positive examples and the white background represents negative examples). When road network A and road network B are swapped, that is, road network A serves as the segmented image and road network B as the reference image, segmentation through the semantic segmentation model yields the roads redundant in road network A relative to road network B.
[0055] Wherein, the loss function of the u-net model is expressed as follows:
[0056] $L = -\sum_{i=1}^{n \times n} w_i \left[ y_i \log \hat{y}_i + (1 - y_i) \log\left(1 - \hat{y}_i\right) \right]$

[0057] where $n$ is the side length (in pixels) of the first road image and the second road image, $y$ is the difference annotation image corresponding to the first road image and the second road image used as samples, $y_i$ is the value of the $i$-th pixel in the difference annotation image $y$, $\hat{y}$ is the difference prediction result obtained by inputting the first road image and the second road image as samples into the u-net model, $\hat{y}_i$ is the value of the $i$-th pixel in the difference prediction result $\hat{y}$, and $w_i$ is the value of the $i$-th pixel in the second road image used as a sample.
[0058] Wherein, the first road image and the second road image are images of the same size, and $n = 512$ in the u-net model, so the size of the first road image and the second road image is 512 × 512. With the first road image as the reference image and the second road image as the segmented image, $w_i$ is the value of the $i$-th pixel in the segmented image used as a sample.
[0059] The loss function commonly used in the original u-net model is the cross-entropy or weighted cross-entropy loss. The task of the embodiment of the application is: given the two images, the first road image and the second road image, semantically segment the segmented image and judge whether the road represented by each pixel is missing from the road network of the reference image, outputting 1 if missing and 0 otherwise. Consequently, all background pixels of the segmented image yield 0 in the output feature map, and only the pixels in the foreground (the area with road network) of the segmented image need to be binary-classified. This has the following advantages: it makes the segmented image correspond to roads in the reference image, ensuring that no new roads are generated out of thin air; it greatly reduces the amount of computation and improves training speed, going from classifying every pixel of the original image to classifying only foreground pixels; and it improves accuracy, because the first road image and the second road image take only the pixels with road network data as foreground and all others as background, which is equivalent to adding a mask layer, so the model's learning does not drift toward the background but focuses on classifying foreground pixels, equivalent to adding an attention mechanism to the model.
[0060] Since the value of a foreground pixel in the segmented image is 1 and the value of other pixels is 0, $w_i$ is expressed as follows:

[0061] $w_i = \begin{cases} 1, & \text{pixel } i \text{ of the segmented image is foreground (road)} \\ 0, & \text{otherwise} \end{cases}$
[0062] Wherein, assume the first road image is $x_A$, from road network A, and the second road image is $x_B$, from road network B, i.e., the input is $(x_A, x_B)$. Here the first road image $x_A$ serves as the reference image and the second road image $x_B$ as the segmented image; therefore, $w_i$ is the value of pixel $i$ in the second road image, the segmented image.
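Under the definitions above, the masked weighted cross-entropy can be sketched in NumPy. The function name and the small `eps` clipping (added here for numerical stability) are assumptions of this sketch:

```python
import numpy as np

def masked_bce(y, y_hat, w, eps=1e-7):
    """Weighted cross-entropy restricted to foreground pixels of the
    segmented image: w is 1 where the segmented image has road, else 0,
    so background pixels contribute nothing to the loss."""
    y_hat = np.clip(y_hat, eps, 1 - eps)   # avoid log(0)
    return -np.sum(w * (y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))
```

With `w` all zero on background, only road pixels of the segmented image are penalized, matching the masking argument in [0059].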
[0063] The u-net model needs only a small amount of labeled data to obtain a very good segmentation effect. For the road network differencing of the embodiment of the application, road network data of different regions can be selected for manual labeling. For example, in a practical application, 70 data items were manually labeled, each containing (A, B, ground truth); the labeling results are as shown in Figures 4a-4c. The data were then augmented by random rotation, cropping, offset, and other methods. 50 of the 70 actual data items were randomly selected as the training set and 20 as the test set to train the u-net model. After 20 rounds of iteration, the road network difference effect on the test set is as shown in Figures 5a-5d, where Figure 5a is a schematic diagram of road network A in the embodiment of the application, Figure 5b is a schematic diagram of road network B, Figure 5c is a schematic diagram of A-B, and Figure 5d is a schematic diagram of B-A. A-B represents the roads unique to road network A (i.e., redundant in road network A), and B-A represents the roads unique to road network B (i.e., missing from road network A).
[0064] In step 240, raster data vectorization processing is performed on the difference result image to convert the difference result image into a difference result corresponding to the first road data and the second road data.
[0065] The difference result image obtained from the semantic segmentation model is raster image data, which must be converted into vector data and mapped back to the original road network. The image output by the network after the final sigmoid layer gives, for each pixel, a classification confidence between 0 and 1. The difference result image is binarized according to a preset threshold to obtain a binary map of 0s and 1s; the binary map is then vectorized as raster data and thereby converted into the difference result corresponding to the first road data and the second road data, that is, the road data missing from the first road data relative to the second road data.
[0066] In one embodiment of the present application, performing raster-data vectorization on the difference result image to convert it into a difference result corresponding to the first road data and the second road data includes: binarizing the difference result image to obtain a binary map; thinning the binary map to obtain a skeleton map; performing raster-data vectorization on the skeleton map to obtain the road vector data corresponding to the skeleton map; expanding each data point representing a road in the road vector data by a preset range; and obtaining the intersection of the expanded road vector data with the first road data or the second road data to obtain the difference result corresponding to the first road data and the second road data.
[0067] The difference result image is binarized according to the preset threshold: pixels whose value is less than the preset threshold are set to 0, and pixels whose value is greater than or equal to the preset threshold are set to 1, yielding the binary map. The binary map is thinned to extract the skeleton and obtain the skeleton map. The skeleton map is then vectorized: each pixel in the skeleton map is tracked and traversed, the pixel values are converted into vector data, and the data points are connected. The pixel coordinates of the vector data are converted into longitude and latitude coordinates through the correspondence between pixel coordinates and the longitude and latitude coordinates of the original road network, giving the road vector data corresponding to the skeleton map. Each data point representing a road in the road vector data is then expanded by a preset range, that is, taking the data point as the center and the preset radius as the radius, each road is expanded by a certain range (a buffer is added to the road). Finally, the intersection between the expanded road vector data and the first road data or the second road data is calculated to obtain the difference result corresponding to the first road data and the second road data. If the road network difference is to obtain the roads missing from the first road data relative to the second road data, the intersection of the expanded road vector data and the second road data is taken; if the road network difference is to obtain the roads redundant in the first road data relative to the second road data, the intersection of the expanded road vector data and the first road data is taken. Which of these two results is produced is determined by the sample data and annotation data given when training the semantic segmentation model.
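The buffer-and-intersect step can be sketched in pure Python; representing roads as individual lon/lat points and the radius value are simplifying assumptions (production code would buffer whole polylines, e.g. with a GIS library):

```python
import math

def intersect_with_buffer(diff_points, road_points, radius):
    """Keep the road points from road_points that fall inside a buffer
    of the given radius around any vectorized difference point."""
    def near(p, q):
        return math.dist(p, q) <= radius
    return [r for r in road_points if any(near(r, d) for d in diff_points)]

# vectorized difference result points (lon/lat, assumed values)
diff_points = [(121.40, 31.20), (121.41, 31.20)]
# candidate road points from the second road data (assumed values)
road_points = [(121.4001, 31.2001), (121.50, 31.30)]

matched = intersect_with_buffer(diff_points, road_points, radius=0.001)
```

Because the buffer enlarges the vectorized roads beyond the exact skeleton, small localization errors from rasterization and thinning no longer prevent a genuine road from being matched.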
[0068] By extracting the skeleton map from the binary map and expanding each data point representing a road in the vectorized road vector data by a preset range, the expanded road vector data covers a larger range than the actual road network data, so that an accurate road intersection, and hence an accurate difference result, can be obtained in the subsequent intersection calculation; that is, the accuracy of the difference result is improved.
[0069] In the embodiment of the application, after rasterizing the road data, a semantic segmentation model based on a deep neural network realizes end-to-end learning: there is no need to go through a road matching stage or to know the exact matching position of each road, and the difference result is obtained directly, which simplifies the road network difference process. Existing road network difference methods take a single road arc segment as the unit: for each road arc segment in the first road network, the matching road arc segment is found in the second road network, and the matching position is then determined for the difference. A single road arc segment reflects only local characteristics, while the embodiment of the application comprehensively considers the road network information in the region and thus has global characteristics, which can improve the accuracy of the road network difference. Moreover, the existing matching-difference method considers only local features and, to improve accuracy, must also add road attribute features (such as road grade, name and direction), so it relies on the integrity of the source road network data. The embodiment of the application does not rely on any road attribute features, so there is no integrity requirement on the source road network data, which improves generalization.
[0070] The road network difference method provided by the embodiment of the application intercepts the road data of the area to be differentiated from the first road network data as the first road data and intercepts the road data of the area to be differentiated from the second road network data as the second road data; rasterizes the two to obtain the first road image and the second road image; inputs the first road image and the second road image into the semantic segmentation model, which performs semantic segmentation on the second road image relative to the first road image to obtain the difference result image; and performs raster data vectorization on the difference result image to convert it into the difference result corresponding to the first road data and the second road data. The embodiment of the application uses the semantic segmentation model to segment the second road image relative to the first road image, comprehensively considering the global characteristics of the road network in the area to be differentiated, which can improve the accuracy of the road network difference; it does not rely on any road attribute characteristics, imposes no integrity requirements on the source road network data, and improves generalization.

Example Embodiment

[0071] Example 2
[0072] This embodiment provides a road network difference device. As shown in Figure 6, the road network difference device 600 includes:
[0073] The road data interception module 610 is used to intercept the road data of the area to be differentiated from the first road network data as the first road data, and intercept the road data of the area to be differentiated from the second road network data as the second road data;
[0074] A rasterization processing module 620 for rasterizing the vector data of the first road data to obtain the first road image, and rasterizing the vector data of the second road data to obtain the second road image;
[0075] A semantic segmentation module 630 for inputting the first road image and the second road image into a semantic segmentation model, performing semantic segmentation processing on the second road image relative to the first road image through the semantic segmentation model to obtain a difference result image;
[0076] The vectorization processing module 640 is used to vectorize the grid data of the difference result image to convert the difference result image into a difference result corresponding to the first road data and the second road data.
[0077] Optionally, the semantic segmentation module comprises:
[0078] An image stacking unit for stacking the first road image and the second road image according to channels to obtain a dual channel image;
[0079] A first image input unit for inputting the dual channel image into the semantic segmentation model.
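The channel-stacking variant performed by the image stacking unit can be sketched as follows; the 4×4 image size is an assumed value:

```python
import numpy as np

# assumed 4x4 rasterized road images (1 = road pixel, 0 = background)
first_road_image = np.zeros((4, 4), dtype=np.uint8)
second_road_image = np.ones((4, 4), dtype=np.uint8)

# stack along a new channel axis -> one dual-channel input of shape (2, H, W)
dual_channel = np.stack([first_road_image, second_road_image], axis=0)
```

Stacking (rather than fusing into one image) keeps the two road networks in separate channels, so the model can learn which pixels belong to which network directly from the channel index.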
[0080] Optionally, the semantic segmentation module comprises:
[0081] An image fusion unit for fusing the first road image and the second road image into one image to obtain a fused image, and distinguishing the road data representing the first road image and the second road image in the fused image;
[0082] A second image input unit for inputting the fused image into the semantic segmentation model.
[0083] Optionally, the semantic segmentation model is a u-net model;
[0084] Through the convolution layers in the u-net model, the input feature map is padded before convolution, and the padded input feature map is then convolved by the convolution layer to obtain the output feature map, wherein the size of the output feature map is the same as that of the input feature map.
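The size-preserving ("same") padding described above can be sketched with a minimal 2-D convolution; the averaging kernel and the 4×4 feature map are illustrative assumptions:

```python
import numpy as np

def same_conv2d(feature_map, kernel):
    """Zero-pad the input before convolving so the output feature map
    keeps the same size as the input, as the u-net convolution layers
    described above do."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(feature_map, ((ph, ph), (pw, pw)))  # zero padding
    h, w = feature_map.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

fmap = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0          # illustrative 3x3 averaging kernel
out = same_conv2d(fmap, kernel)
```

Without the padding, each 3×3 convolution would shrink the map by two pixels per side, so the segmentation output could not be aligned pixel-for-pixel with the input road images.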
[0085] Optionally, the loss function of the u-net model is a pixel-weighted binary cross-entropy, expressed as follows:

[0086] L = -(1/n^2) * \sum_{i=1}^{n^2} (1 + w_i) [ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) ]

[0087] where n is the length and width (in pixels) of the first road image and the second road image, y is the differential annotation image corresponding to the first road image and the second road image used as samples, y_i is the value of the i-th pixel in the differential annotation image y, ŷ is the difference prediction result obtained by inputting the sample first road image and second road image into the u-net model, ŷ_i is the value of the i-th pixel in the difference prediction result ŷ, and w_i is the value of the i-th pixel in the second road image used as a sample.
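A loss matching these variable definitions, i.e. a binary cross-entropy in which road pixels of the second sample image (w_i = 1) receive extra weight, might be implemented as follows; the exact (1 + w) weighting is an assumption, since the printed formula is not reproduced in this text:

```python
import numpy as np

def weighted_bce(y, y_hat, w, eps=1e-7):
    """Pixel-weighted binary cross-entropy. Pixels where the second road
    image has a road (w == 1) are weighted more heavily; the (1 + w)
    form is an assumed weighting consistent with the definitions above."""
    y_hat = np.clip(y_hat, eps, 1 - eps)       # guard against log(0)
    per_pixel = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    return float(np.mean((1 + w) * per_pixel))

y = np.array([[1.0, 0.0], [0.0, 1.0]])        # annotation image (assumed)
y_hat = np.array([[0.9, 0.1], [0.2, 0.8]])    # model prediction (assumed)
w = np.array([[1.0, 0.0], [0.0, 1.0]])        # second road image as weights
loss = weighted_bce(y, y_hat, w)
```

Up-weighting road pixels counters the heavy class imbalance of road rasters, where background pixels vastly outnumber road pixels.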
[0088] Optionally, the road data interception module comprises:
[0089] A region expansion unit for expanding the four edges of the region to be differentiated outward by a preset distance to obtain an intercepted region;
[0090] A road data interception unit for intercepting the road data of the interception area from the first road network data as the first road data; Intercepting the road data of the intercepted area from the second road network data as the second road data.
[0091] Optionally, the vectorization processing module comprises:
[0092] A binarization processing unit for binarization processing the difference result image to obtain a binary image;
[0093] A thinning processing unit for thinning the binary map to obtain a skeleton map;
[0094] A vectorization processing unit for vectorizing the grid data of the skeleton map to obtain the road vector data corresponding to the skeleton map;
[0095] A data point range expansion unit for expanding the preset range of each data point representing the road in the road vector data;
[0096] The difference result determination unit is used to obtain the intersection between the expanded road vector data and the first road data or the second road data, and obtain the difference result corresponding to the first road data and the second road data.
[0097] Optionally, the device further comprises:
[0098] An area size determination module, used to determine the size of the area to be differentiated according to the input image size of the semantic segmentation model and the length and width of the grid represented by each pixel in the input image;
[0099] A sub region determination module for determining the start longitude and latitude coordinates and the end longitude and latitude coordinates of the region to be differentiated according to the size of the region to be differentiated.
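The size and coordinate determination described by these two modules reduces to simple arithmetic; the 512-pixel input size, the 2 m grid length, the start coordinates, and the flat-earth degree conversion (~111,320 m per degree of latitude, cosine-corrected for longitude) are all illustrative assumptions:

```python
import math

def region_size_m(input_px, grid_len_m):
    """Side length in meters of the area to be differentiated:
    each pixel of the model input covers one grid cell of grid_len_m meters."""
    return input_px * grid_len_m

def end_coords(start_lon, start_lat, size_m):
    """Approximate end longitude/latitude of the region from its start
    coordinates and side length, using a simple flat-earth conversion."""
    dlat = size_m / 111320.0
    dlon = size_m / (111320.0 * math.cos(math.radians(start_lat)))
    return start_lon + dlon, start_lat + dlat

size = region_size_m(512, 2.0)               # 512-pixel input, 2 m per cell
end_lon, end_lat = end_coords(121.4, 31.2, size)
```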
[0100] Optionally, the rasterization processing module is specifically used for:
[0101] Setting the pixel value of the pixel point corresponding to the grid representing the road in the first road data to 1, and setting the pixel value of the pixel point corresponding to other grids to 0 to obtain the first road image;
[0102] Set the pixel value of the pixel point corresponding to the grid representing the road in the second road data to 1, and set the pixel value of the pixel point corresponding to other grids to 0 to obtain the second road image.
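The 0/1 rasterization performed by this module can be sketched as follows; the grid indices of the road cells and the 4×4 grid size are assumed values:

```python
import numpy as np

def rasterize(road_cells, height, width):
    """Set the pixel of every grid cell that contains a road to 1
    and the pixel of every other grid cell to 0."""
    img = np.zeros((height, width), dtype=np.uint8)
    for row, col in road_cells:
        img[row, col] = 1
    return img

# assumed grid cells crossed by roads in the first road data
first_road_image = rasterize([(0, 0), (0, 1), (1, 1)], 4, 4)
```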
[0103] The road network difference device provided in the embodiment of the application is used to realize the steps of the road network difference method described in embodiment 1 of the application. See the corresponding steps for the specific implementation mode of each module of the device, which will not be repeated here.
[0104] In the road network difference device provided by the embodiment of the application, the road data interception module intercepts the road data of the area to be differentiated from the first road network data as the first road data and intercepts the road data of the area to be differentiated from the second road network data as the second road data; the rasterization processing module rasterizes the vector data of the first road data and the second road data respectively to obtain the first road image and the second road image; the semantic segmentation module inputs the first road image and the second road image into the semantic segmentation model and performs semantic segmentation on the second road image relative to the first road image to obtain the difference result image; and the vectorization processing module performs raster data vectorization on the difference result image to convert it into the difference result corresponding to the first road data and the second road data. The embodiment of the application uses the semantic segmentation model to segment the second road image relative to the first road image, comprehensively considering the global characteristics of the road network in the area to be differentiated, which can improve the accuracy of the road network difference; it does not rely on any road attribute characteristics, imposes no integrity requirements on the source road network data, and improves generalization.

Example Embodiment

[0105] Example 3
[0106] The embodiment of the application also provides an electronic device. As shown in Figure 7, the electronic device 700 may include one or more processors 710 and one or more memories 720 connected to the processor 710. The electronic device 700 may also include an input interface 730 and an output interface 740 for communicating with another device or system. The program code executed by the processor 710 may be stored in the memory 720.
[0107] The processor 710 in the electronic device 700 calls the program code stored in the memory 720 to execute the road network difference method in the above embodiment.
[0108] The above elements in the electronic device can be connected to each other through a bus, such as one of a data bus, address bus, control bus, expansion bus and local bus, or any combination thereof.
[0109] The embodiment of the application also provides a computer-readable storage medium on which a computer program is stored. When the program is executed by the processor, the steps of the road network difference method described in embodiment 1 of the application are realized.
