Viaduct identification method and related product

A viaduct identification method and related product, applied in the electronics field, which can solve the problems that satellite data is susceptible to environmental interference, that the accuracy of viaduct scene recognition is low, and that user experience suffers as a result.

Pending Publication Date: 2020-04-24
GUANGDONG OPPO MOBILE TELECOMM CORP LTD

AI-Extracted Technical Summary

Problems solved by technology

[0003] While driving a car, mobile terminals are commonly used for map navigation. When identifying viaduct scenes, which are common on urban roads, the judgment is usually made from satellite data....

Abstract

The embodiment of the invention discloses a viaduct recognition method and a related product. The method is applied to an electronic device that comprises a sensor module, and comprises the steps of: obtaining a lane video of the current driving lane; determining a first recognition result of the current driving lane according to the lane video; receiving sensing data acquired by the sensor module and determining a second recognition result of the current driving lane according to the sensing data; and, if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is a viaduct lane. The embodiment of the invention has the advantage of improving the recognition accuracy of the viaduct scene.

Application Domain

Measurement devices; character and pattern recognition

Technology Topic

Sensing data; systems engineering +4

Examples

  • Experimental program(1)

Example Embodiment

[0026] The technical solutions in the embodiments of the present invention will be clearly and completely described below in conjunction with the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
[0027] The terms "first", "second", "third" and "fourth" in the specification, claims and drawings of the present invention are used to distinguish different objects rather than to describe a specific order. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally includes unlisted steps or units, or optionally also includes other steps or units inherent to these processes, methods, products, or devices.
[0028] Reference to "embodiments" herein means that specific features, structures, or characteristics described in conjunction with an embodiment may be included in at least one embodiment of the present invention. The appearance of the phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment that is mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
[0029] Hereinafter, some terms in this application will be explained to facilitate the understanding of those skilled in the art.
[0030] Electronic devices can include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices (such as smart watches, smart bracelets, pedometers, etc.), computing devices or other processing devices communicatively connected to wireless modems, as well as various forms of user equipment (User Equipment, UE), mobile stations (Mobile Station, MS), terminal devices, and so on. For ease of description, the devices mentioned above are collectively referred to as electronic devices.
[0031] See Figure 1, which is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application. The electronic device 100 includes a storage and processing circuit 110 and a sensor 170 connected to the storage and processing circuit 110, wherein:
[0032] The electronic device 100 may include a control circuit, and the control circuit may include the storage and processing circuit 110. The storage and processing circuit 110 may be a memory, such as hard disk drive memory, non-volatile memory (such as flash memory or other electronic programmable read-only memory used to form a solid-state drive, etc.), or volatile memory (such as static or dynamic random-access memory, etc.), which is not limited in the embodiment of the present application. The processing circuit in the storage and processing circuit 110 may be used to control the operation of the electronic device 100. The processing circuit can be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application-specific integrated circuits, display driver integrated circuits, and so on.
[0033] The storage and processing circuit 110 can be used to run software in the electronic device 100, such as Internet browsing applications, Voice over Internet Protocol (VOIP) telephone calling applications, email applications, media playback applications, operating system functions, and so on. This software can be used to perform control operations such as camera-based image capture, ambient light measurement based on ambient light sensors, proximity measurement based on proximity sensors, information display functions based on status indicators such as LED status indicators, touch event detection based on touch sensors, functions associated with displaying information on multiple (for example, layered) display screens, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the electronic device 100, which are not limited in the embodiment of the present application.
[0034] The electronic device 100 may include an input-output circuit 150. The input-output circuit 150 can be used to enable the electronic device 100 to input and output data, that is, to allow the electronic device 100 to receive data from an external device and also to output data from the electronic device 100 to an external device. The input-output circuit 150 may further include the sensor 170. The sensor 170 may include an ambient light sensor, a proximity sensor based on light or capacitance, a fingerprint recognition module, a touch sensor (for example, a light-based touch sensor and/or a capacitive touch sensor, where the touch sensor may be part of a touch screen or may be used independently as a touch sensor structure), an acceleration sensor, and other sensors.
[0035] The electronic device 100 may also include a camera 140. The camera 140 includes an infrared camera, a color image camera, etc., and may be a front camera or a rear camera. The fingerprint recognition module may be integrated under the display screen to collect fingerprint images, and may be at least one of the following: an optical fingerprint identification module, an ultrasonic fingerprint identification module, etc., which is not limited herein. The aforementioned front camera can be arranged below the front display screen, and the aforementioned rear camera can be arranged below the rear display screen. Of course, the front camera or rear camera may not be integrated with the display screen, and in practical applications may also have a lifting structure. The specific embodiments of this application do not limit the specific structure of the front camera or the rear camera.
[0036] The input-output circuit 150 may also include one or more display screens, such as the display screen 130. In the case of multiple display screens, for example two, one display screen may be arranged on the front of the electronic device and the other on the back. The display screen 130 may include one or a combination of a liquid crystal display screen, an organic light-emitting diode display screen, an electronic ink display screen, a plasma display screen, and display screens using other display technologies. The electronic device may further include an air pressure sensor and a global navigation satellite system (GNSS) positioning sensor. The display screen 130 may also include a touch sensor array (that is, the display screen 130 may be a touch screen). The touch sensor can be a capacitive touch sensor formed by an array of transparent touch sensor electrodes (such as indium tin oxide (ITO) electrodes), or a touch sensor formed using other touch technologies, such as sonic touch, pressure-sensitive touch, resistive touch, or optical touch, which is not limited in the embodiment of the present application.
[0037] The communication circuit 120 may be used to provide the electronic device 100 with the ability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio-frequency signals and/or optical signals. The wireless communication circuit in the communication circuit 120 may include a radio-frequency transceiver circuit, a power amplifier circuit, a low-noise amplifier, a switch, a filter, and an antenna. The communication circuit 120 may include a first Wi-Fi channel and a second Wi-Fi channel, and the two channels may operate at the same time to realize dual Wi-Fi functions. The wireless communication circuit in the communication circuit 120 may also include a circuit supporting Near Field Communication (NFC) by transmitting and receiving near-field coupled electromagnetic signals; for example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communication circuit 120 may also include a cellular phone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and so on.
[0038] The electronic device 100 may further include a battery, a power management circuit, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes, and other status indicators.
[0039] The user can input commands through the input-output circuit 150 to control the operation of the electronic device 100, and can receive status information and other output from the electronic device 100 through the output data of the input-output circuit 150.
[0040] The electronic device described above in Figure 1 can be used to realize the following functions:
[0041] Obtain the lane video of the current driving lane;
[0042] Determining the first recognition result of the current driving lane according to the lane video;
[0043] Receiving sensor data collected by the sensor module, and determining the second recognition result of the current driving lane according to the sensor data;
[0044] If the comparison between the first recognition result and the second recognition result is successful, it is determined that the current driving lane is a viaduct lane.
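The four steps above can be sketched as follows. This is a minimal illustration: the function name and the "viaduct"/"non-viaduct" labels are assumptions made for the example, not terms from the claims.

```python
# Minimal sketch of steps [0041]-[0044]: two independent recognition
# results are compared, and only agreement yields a viaduct decision.
# The labels "viaduct"/"non-viaduct" are illustrative assumptions.

def identify_viaduct(first_result: str, second_result: str) -> bool:
    """Return True only when the video-based first recognition result
    and the sensor-based second recognition result both say viaduct."""
    return first_result == "viaduct" and second_result == "viaduct"

print(identify_viaduct("viaduct", "viaduct"))      # True: results agree
print(identify_viaduct("viaduct", "non-viaduct"))  # False: comparison fails
```

Requiring agreement between the two modalities is what guards against the satellite-only misjudgments described in [0003].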
[0045] See Figure 2, which is a schematic flow diagram of a viaduct identification method provided by an embodiment of this application and applied to the electronic device described in Figure 1. The electronic device includes a sensor module, and the viaduct identification method includes:
[0046] Step 201: Obtain a lane video of the current driving lane;
[0047] Optionally, when the startup of the target vehicle is detected, a camera module corresponding to the electronic device is activated, and the camera module is used to collect a lane video corresponding to the current driving lane of the target vehicle.
[0048] Wherein, the lane video is video data in the driving direction of the target vehicle.
[0049] Step 202: Determine a first recognition result of the current driving lane according to the lane video;
[0050] Optionally, a preset image recognition algorithm is acquired, and the image recognition algorithm is executed on the lane video to obtain the first recognition result corresponding to the current driving lane, where the first recognition result may include: the first preset result or the second preset result.
[0051] Optionally, a pre-trained image recognition model is obtained, and the lane video is used as the input of the image recognition model to obtain the first recognition result corresponding to the current driving lane.
[0052] Step 203: Receive sensor data collected by the sensor module, and determine a second recognition result of the current driving lane according to the sensor data;
[0053] Optionally, acquiring the first recognition result and receiving the sensor data collected by the sensor module according to the first recognition result includes: determining whether the first recognition result includes the first preset result; if the first recognition result includes the first preset result, generating an air pressure change value acquisition instruction and a global navigation satellite system (GNSS) data acquisition instruction, generating a first sensor data acquisition instruction based on the air pressure change value acquisition instruction and the GNSS data acquisition instruction, and sending the first sensor data acquisition instruction to the sensor module; if the first recognition result does not include the first preset result, determining whether the first recognition result includes the second preset result, and if so, generating an air pressure change value acquisition instruction, generating a second sensor data acquisition instruction according to the air pressure change value acquisition instruction, and sending the second sensor data acquisition instruction to the sensor module.
[0054] Further, the sensor data acquisition response returned by the sensor module is received, and the sensor data is extracted from the sensor data acquisition response, where the sensor data may include the first sensor data or the second sensor data. A preset judgment model is then obtained, the sensor data is used as the input of the judgment model, and the second recognition result corresponding to the current driving lane is obtained.
[0055] Step 204: If the comparison between the first recognition result and the second recognition result is successful, determine that the current driving lane is a viaduct lane.
[0056] Optionally, if the first recognition result includes the first preset result, the second recognition result may include an air pressure recognition result and a GNSS recognition result, where the GNSS recognition result may include a satellite number recognition result and a satellite signal-to-noise ratio recognition result; if the first recognition result includes the second preset result, the second recognition result may include an air pressure recognition result. Optionally, if the first recognition result includes the first preset result, the air pressure recognition result and the GNSS recognition result are extracted from the second recognition result, and it is determined whether the air pressure recognition result includes the first preset result. If the air pressure recognition result does not include the first preset result, the comparison between the first recognition result and the second recognition result is unsuccessful. If the air pressure recognition result includes the first preset result, the satellite number recognition result is obtained from the GNSS recognition result, and it is judged whether the satellite number recognition result includes the first preset result; if not, the comparison between the first recognition result and the second recognition result is unsuccessful. If the satellite number recognition result includes the first preset result, the satellite signal-to-noise ratio recognition result is obtained from the GNSS recognition result, and it is judged whether the satellite signal-to-noise ratio recognition result includes the first preset result; if not, the comparison between the first recognition result and the second recognition result is determined to be unsuccessful. If the satellite signal-to-noise ratio recognition result includes the first preset result, it is determined that the first recognition result and the second recognition result are successfully compared, and the current driving lane is determined to be a viaduct lane.
[0057] Optionally, after determining that the current driving lane is a viaduct lane, the air pressure change value is obtained and it is determined whether the air pressure change value is greater than 0. If the air pressure change value is greater than 0, it is determined that the target vehicle is driving off the viaduct; if the air pressure change value is less than 0, it is determined that the target vehicle is driving onto the viaduct.
[0058] Optionally, after determining that the current driving lane is a viaduct lane, the change value of the number of satellites is obtained and it is determined whether it is greater than 0. If the change value of the number of satellites is greater than 0, it is determined that the target vehicle is driving onto the viaduct; if the change value of the number of satellites is less than 0, it is determined that the target vehicle is driving off the viaduct.
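The two direction checks in [0057] and [0058] can be sketched as follows: air pressure rises as the vehicle descends off the viaduct, while the number of visible satellites rises in the open sky on top of it. The function names and the returned strings are illustrative assumptions.

```python
def direction_from_pressure(air_pressure_change: float) -> str:
    """[0057]: pressure rising (change > 0) means descending, i.e.
    driving off the viaduct; pressure falling means driving onto it."""
    if air_pressure_change > 0:
        return "exiting viaduct"
    if air_pressure_change < 0:
        return "entering viaduct"
    return "level"

def direction_from_satellites(satellite_count_change: int) -> str:
    """[0058]: more visible satellites (change > 0) means entering the
    open-sky viaduct; fewer means driving off it."""
    if satellite_count_change > 0:
        return "entering viaduct"
    if satellite_count_change < 0:
        return "exiting viaduct"
    return "level"
```

The two signals are complementary: either can resolve the driving direction once the lane has already been classified as a viaduct lane.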
[0059] In a possible example, determining the first recognition result of the current driving lane according to the lane video includes: acquiring m frames of images contained in the lane video, where m is an integer greater than 0; executing an image recognition algorithm on the m frames of images to obtain the n frames of viaduct images in the m frames that contain viaduct lanes, where n is an integer greater than or equal to 0 and less than or equal to m; calculating the viaduct scene ratio from the m frames of images and the n frames of viaduct images according to a preset scene ratio calculation formula; determining whether the viaduct scene ratio is greater than a preset scene ratio threshold and, if so, calculating the n viaduct occlusion rates corresponding to the n frames of viaduct images according to a preset occlusion rate algorithm; determining the average viaduct occlusion rate from the n viaduct occlusion rates and determining whether the average viaduct occlusion rate is greater than a preset occlusion rate threshold; and, if the average viaduct occlusion rate is not greater than the occlusion rate threshold, determining the first recognition result to be the first preset result; otherwise, determining the first recognition result to be the second preset result.
[0060] Wherein, the scene ratio threshold may include: 50%, 60%, 70%, etc., which are not limited here.
[0061] Wherein, the occlusion rate threshold may include: 60%, 65%, 70%, etc., which are not limited here.
[0062] Optionally, the m frames of images contained in the lane video are obtained, where any one of the m frames includes the current lane, the current lane line, and the adjacent lane environment. An image recognition algorithm is executed on the m frames of images, where the image recognition algorithm is used to identify the viaduct lanes in the images, to obtain n frames of viaduct images. The preset viaduct scene ratio calculation formula is obtained, and the viaduct scene ratio is calculated from the n frames of viaduct images and the m frames of images, where the formula may be: a = n/m × 100%, where a is the viaduct scene ratio. It is judged whether the viaduct scene ratio a is greater than the preset scene ratio threshold. If it is greater, the n viaduct occlusion rates corresponding to the n frames of viaduct images are obtained, where a viaduct occlusion rate represents the ratio of occluded viaduct pixels to all viaduct pixels; the average of the n viaduct occlusion rates is calculated, and it is determined whether the average viaduct occlusion rate is greater than the preset occlusion rate threshold. If it is not greater, the first recognition result is determined to be the first preset result; if it is greater, the first recognition result is determined to be the second preset result, and it is further determined whether the average viaduct occlusion rate is greater than the first condition threshold: if it is not greater, the second preset result satisfies the first condition, and if it is greater, the second preset result satisfies the second condition. If the viaduct scene ratio is less than the scene ratio threshold, it is determined whether the viaduct scene ratio is less than the second condition threshold: if it is not less, the first recognition result is the second preset result and satisfies the first condition; if it is less, the first recognition result is the second preset result and satisfies the second condition.
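A minimal sketch of the frame-based classification in [0059], using example thresholds from the candidate values listed above (scene ratio 60%, occlusion rate 65%); the function name and the string labels are illustrative assumptions.

```python
def first_recognition_result(m, viaduct_flags, occlusion_rates,
                             scene_ratio_threshold=0.60,
                             occlusion_threshold=0.65):
    """viaduct_flags[i] is True when frame i contains a viaduct lane;
    occlusion_rates holds one occlusion rate per viaduct frame."""
    n = sum(viaduct_flags)                   # n viaduct frames out of m
    scene_ratio = n / m                      # a = n / m * 100%
    if scene_ratio <= scene_ratio_threshold:
        return "second preset result"        # too few viaduct frames
    avg_occlusion = sum(occlusion_rates) / len(occlusion_rates)
    if avg_occlusion <= occlusion_threshold:
        return "first preset result"         # unobstructed viaduct lane
    return "second preset result"            # blocked viaduct lane
```

For example, a 10-frame video with 8 lightly occluded viaduct frames gives a scene ratio of 0.8, which exceeds the threshold, so the first preset result is returned.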
[0063] In a possible example, receiving the sensor data collected by the sensor module includes: if the first recognition result includes the first preset result, receiving the first sensor data collected by the sensor module, where the first sensor data includes the air pressure change value and the global navigation satellite system (GNSS) positioning data; if the first recognition result includes the second preset result, receiving the second sensor data collected by the sensor module, where the second sensor data includes the air pressure change value.
[0064] The first preset result indicates that the current lane is an unobstructed viaduct lane, and the second preset result indicates that the current lane is a non-viaduct lane or the current lane is a blocked viaduct lane.
[0065] Wherein, the air pressure change value is the difference between the first air pressure collected at the first time corresponding to the lane video and the second air pressure collected at the second time corresponding to the lane video;
[0066] Wherein, the GNSS data includes a satellite number change value and a satellite signal-to-noise ratio change value. The satellite number change value is the difference between the first number of satellites collected at the first time corresponding to the lane video and the second number of satellites collected at the second time corresponding to the lane video; the satellite signal-to-noise ratio change value is the difference between the first satellite signal-to-noise ratio collected at the first time corresponding to the lane video and the second satellite signal-to-noise ratio collected at the second time corresponding to the lane video.
[0067] In a possible example, the determining the second recognition result of the current driving lane according to the sensor data includes: if the sensor data includes the first sensor data, according to the air pressure change value Determine the air pressure recognition result, determine the GNSS recognition result according to the GNSS data, and determine the second recognition result according to the air pressure recognition result and the GNSS recognition result; if the sensor data includes the second sensor data, The air pressure recognition result is determined according to the air pressure change value, and the second recognition result is determined according to the air pressure recognition result.
[0068] The air pressure recognition result may include: the first preset result or the second preset result. The GNSS recognition result may include: a satellite number recognition result and a satellite signal-to-noise ratio recognition result, where the satellite number recognition result may include the first preset result or the second preset result, and the satellite signal-to-noise ratio recognition result may include the first preset result or the second preset result. In a possible example, determining the air pressure recognition result according to the air pressure change value includes: obtaining the air pressure change value and a preset altitude change value calculation formula; using the air pressure change value as the input of the altitude change value calculation formula to determine the altitude change value corresponding to the air pressure change value; and determining whether the altitude change value is greater than a preset altitude change threshold. If the altitude change value is greater than the altitude change threshold, the air pressure recognition result is determined to be the first preset result; otherwise, the air pressure recognition result is determined to be the second preset result.
[0069] Optionally, the air pressure change value x is obtained, the absolute value |x| of the air pressure change value x is calculated, a preset air pressure change threshold is obtained, and it is determined whether the absolute value |x| is greater than the air pressure change threshold. If |x| is greater than the air pressure change threshold, the air pressure recognition result is determined to be the first preset result; if |x| is not greater than the air pressure change threshold, the air pressure recognition result is determined to be the second preset result.
[0070] Optionally, the air pressure change value x is obtained, the absolute value |x| of the air pressure change value x is calculated, the altitude change value calculation formula is obtained, and |x| is substituted into the altitude change value calculation formula to obtain the altitude change value h, where the altitude change value calculation formula may be: h = 44300 × (1 − (|x|/p0)^(1/5.256)), where p0 is the standard atmospheric pressure value.
[0071] Wherein, the height change threshold may include: 5 meters, 10 meters, 15 meters, etc., which are not limited here.
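The altitude conversion in [0070] can be sketched as below. The code follows the formula literally; the pressure unit (hPa) and the numeric value of p0 are assumptions, since the text does not state them.

```python
def altitude_change(air_pressure_change: float, p0: float = 1013.25) -> float:
    """h = 44300 * (1 - (|x| / p0) ** (1 / 5.256)), as given in [0070].
    p0 = 1013.25 hPa (standard atmosphere) is an assumed unit choice."""
    x = abs(air_pressure_change)
    return 44300.0 * (1.0 - (x / p0) ** (1.0 / 5.256))
```

The resulting h is then compared against the altitude change threshold of [0068] (candidate values in [0071]) to select the first or second preset result.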
[0072] In a possible example, determining the GNSS recognition result based on the GNSS data includes: the GNSS data includes the satellite number change value and the satellite signal-to-noise ratio change value, so the GNSS recognition result includes the satellite number recognition result and the satellite signal-to-noise ratio recognition result. It is determined whether the satellite number change value is greater than a preset number change threshold; if so, the satellite number recognition result is determined to be the first preset result, otherwise the second preset result. It is determined whether the satellite signal-to-noise ratio change value is greater than a preset signal-to-noise ratio change threshold; if so, the satellite signal-to-noise ratio recognition result is determined to be the first preset result, otherwise the second preset result.
[0073] Wherein, the number change threshold may include: 10, 15, 20, etc., which are not limited here.
[0074] Wherein, the signal-to-noise ratio change threshold may include: 5, 10, 15, etc., which are not limited here.
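A sketch of the two GNSS checks in [0072], using example thresholds from the candidate values above (a change of 10 satellites, a signal-to-noise ratio change of 5); the function name and string labels are illustrative assumptions.

```python
def gnss_recognition(satellite_count_change: int, snr_change: float,
                     count_threshold: int = 10, snr_threshold: float = 5.0):
    """Return (satellite number result, signal-to-noise ratio result),
    each the first or second preset result per [0072]."""
    count_result = ("first preset result"
                    if satellite_count_change > count_threshold
                    else "second preset result")
    snr_result = ("first preset result"
                  if snr_change > snr_threshold
                  else "second preset result")
    return count_result, snr_result
```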
[0075] Wherein, the first preset result is used to indicate that the current lane is a viaduct lane, and the second preset result is used to indicate that the current lane is a non-viaduct lane.
[0076] In a possible example, determining whether the comparison between the first recognition result and the second recognition result is successful includes: judging whether the first recognition result is consistent with the second recognition result; if they are consistent, it is determined that the comparison is successful; if they are inconsistent, it is determined that the comparison is unsuccessful.
[0077] Optionally, if the first recognition result includes the first preset result, a preset first comparison rule is acquired, and the first recognition result is compared with the second recognition result according to the first comparison rule. The first comparison rule may include: obtaining the air pressure recognition result, the satellite number recognition result, and the satellite signal-to-noise ratio recognition result from the second recognition result, and judging whether the first recognition result, the air pressure recognition result, the satellite number recognition result, and the satellite signal-to-noise ratio recognition result all include the first preset result. If they all include the first preset result, the first recognition result is consistent with the second recognition result, and it is determined that the comparison is successful. If at least one of the air pressure recognition result, the satellite number recognition result, and the satellite signal-to-noise ratio recognition result does not include the first preset result, the first recognition result is inconsistent with the second recognition result, and it is determined that the comparison is unsuccessful.
[0078] Optionally, if the first recognition result includes the second preset result, the second preset result is analyzed. If the second preset result satisfies the first condition, the air pressure recognition result is extracted from the second recognition result and it is determined whether the air pressure recognition result includes the first preset result. If it does, it is determined that the first recognition result and the second recognition result are successfully compared and that the current driving lane is a viaduct lane; if it does not, it is determined that the comparison between the first recognition result and the second recognition result is unsuccessful.
[0079] Further, if the second preset result meets the second condition, it is determined that the comparison between the first recognition result and the second recognition result is unsuccessful.
[0080] In combination with the above, an example is described below. Assume that the first preset result may include: an unobstructed viaduct scene, and the second preset result may include: an obstructed viaduct scene or a non-viaduct scene, wherein if the second preset result is the obstructed viaduct scene, it is determined that the second preset result satisfies the first condition, and if the second preset result is the non-viaduct scene, it is determined that the second preset result satisfies the second condition. When the first recognition result is the first preset result, the second recognition result is obtained, and it is judged whether the air pressure recognition result, the satellite number recognition result, and the satellite signal-to-noise ratio recognition result are all the first preset result; if so, the first recognition result and the second recognition result are successfully compared; if at least one of the air pressure recognition result, the satellite number recognition result, and the satellite signal-to-noise ratio recognition result is the second preset result, the comparison between the first recognition result and the second recognition result is unsuccessful. When the first recognition result is the second preset result and meets the first condition, that is, when the first recognition result is the obstructed viaduct scene, it is judged whether the air pressure recognition result includes the first preset result; if so, the first recognition result and the second recognition result are successfully compared; if not, the comparison is unsuccessful. When the first recognition result is the second preset result and meets the second condition, the comparison between the first recognition result and the second recognition result is unsuccessful.
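The comparison rules in the example above can be summarized in a short sketch. This is illustrative only: the label strings and the dictionary layout of the second recognition result are assumptions, not part of the claimed method.

```python
# Labels are assumptions: "clear_viaduct" stands for the first preset result
# (unobstructed viaduct scene); "blocked_viaduct" and "non_viaduct" are the
# two second-preset-result cases (first and second condition, respectively).
CLEAR = "clear_viaduct"
BLOCKED = "blocked_viaduct"
NON_VIADUCT = "non_viaduct"

def compare_results(first, second):
    """Return True if the first and second recognition results match.

    `second` is assumed to be a dict holding the air pressure,
    satellite-count, and satellite-SNR recognition results.
    """
    if first == CLEAR:
        # First comparison rule: every sensor-based result must also
        # indicate an unobstructed viaduct (the first preset result).
        return all(second[k] == CLEAR
                   for k in ("pressure", "sat_count", "sat_snr"))
    if first == BLOCKED:
        # Obstructed viaduct (first condition): the air pressure
        # recognition result alone decides.
        return second["pressure"] == CLEAR
    # Non-viaduct scene (second condition): comparison is unsuccessful.
    return False

second = {"pressure": CLEAR, "sat_count": CLEAR, "sat_snr": CLEAR}
print(compare_results(CLEAR, second))    # all sensor results agree
print(compare_results(BLOCKED, second))  # pressure result alone decides
```

A real implementation would derive the three sensor-based results from the barometer and GNSS processing described later in this document.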
[0081] It can be seen that in this embodiment of the application, the electronic device obtains the lane video of the current driving lane; determines the first recognition result of the current driving lane according to the lane video; receives the sensor data collected by the sensor module and determines the second recognition result of the current driving lane according to the sensor data; and, if the comparison between the first recognition result and the second recognition result is successful, determines that the current driving lane is a viaduct lane. In this way, the viaduct scene can be identified based on both the lane video and the sensor data, which is beneficial to improving the accuracy of viaduct scene recognition and the user experience.
[0082] Referring to FIG. 3, FIG. 3 is a schematic flowchart of another viaduct identification method provided by an embodiment of this application, applied to the electronic device described in FIG. 1. The electronic device includes a sensor module, and the viaduct identification method includes:
[0083] Step 301: Obtain a lane video of the current driving lane;
[0084] Step 302: Determine the first recognition result of the current driving lane according to the lane video;
[0085] Step 303: If the first recognition result includes the first preset result, receive first sensor data collected by the sensor module, where the first sensor data includes: an air pressure change value and global navigation satellite system (GNSS) data;
[0086] Step 304: If the first recognition result includes the second preset result, receive second sensor data collected by the sensor module, where the second sensor data includes: the air pressure change value;
[0087] Step 305: Determine the second recognition result of the current driving lane according to the sensor data;
[0088] Step 306: If the comparison between the first recognition result and the second recognition result is successful, determine that the current driving lane is a viaduct lane.
[0089] Among them, for a detailed description of the above steps 301-306, reference may be made to the corresponding steps of the viaduct identification method described in FIG. 2 above.
[0090] It can be seen that in this embodiment of the application, the electronic device obtains the lane video of the current driving lane; determines the first recognition result of the current driving lane according to the lane video; if the first recognition result includes the first preset result, receives the first sensor data collected by the sensor module, where the first sensor data includes: an air pressure change value and global navigation satellite system (GNSS) data; if the first recognition result includes the second preset result, receives the second sensor data collected by the sensor module, where the second sensor data includes: the air pressure change value; determines the second recognition result of the current driving lane according to the sensor data; and, if the comparison between the first recognition result and the second recognition result is successful, determines that the current driving lane is a viaduct lane. In this way, viaduct lanes can be identified through the lane video, the air pressure change value, and the GNSS data, which is beneficial to improving the accuracy of viaduct scene recognition and the user experience.
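Steps 303-304 above can be sketched as follows: the video-based first recognition result decides which sensor data the device requests. `FakeSensorModule` and its readings are hypothetical stand-ins for the real sensor module.

```python
class FakeSensorModule:
    """Hypothetical stand-in for the real sensor module (illustrative values)."""

    def pressure_delta(self):
        return -0.6  # hPa change while climbing a ramp (made-up value)

    def gnss_data(self):
        return {"sat_count_delta": 4, "snr_delta": 6.0}  # made-up values


def select_sensor_data(first_result, sensor_module):
    """Steps 303-304: request only the sensor data the second recognition needs."""
    if first_result == "clear_viaduct":  # first preset result
        # Unobstructed viaduct: both the barometer and GNSS data are used.
        return {"pressure_delta": sensor_module.pressure_delta(),
                "gnss": sensor_module.gnss_data()}
    # Obstructed viaduct / non-viaduct: only the air pressure change is requested.
    return {"pressure_delta": sensor_module.pressure_delta()}
```

This reflects the rationale implied by the example in paragraph [0080]: when the viaduct is obstructed in the video, GNSS reception is likely degraded, so only the barometer is consulted.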
[0091] Referring to FIG. 4, FIG. 4 is a schematic flowchart of another viaduct identification method provided by an embodiment of this application, applied to the electronic device described in FIG. 1. The electronic device includes a sensor module, and the viaduct identification method includes:
[0092] Step 401: Obtain a lane video of the current driving lane;
[0093] Step 402: Determine the first recognition result of the current driving lane according to the lane video;
[0094] Step 403: Receive sensor data collected by the sensor module;
[0095] Step 404: If the sensor data includes the first sensor data, determine the air pressure recognition result according to the air pressure change value, determine the GNSS recognition result according to the GNSS data, and determine the second recognition result according to the air pressure recognition result and the GNSS recognition result;
[0096] Step 405: If the sensor data includes the second sensor data, determine the air pressure recognition result according to the air pressure change value, and determine the second recognition result according to the air pressure recognition result;
[0097] Step 406: If the comparison between the first recognition result and the second recognition result is successful, determine that the current driving lane is a viaduct lane.
[0098] Among them, for a detailed description of the above steps 401-406, reference may be made to the corresponding steps of the viaduct identification method described in FIG. 2 above.
[0099] It can be seen that in this embodiment of the application, the electronic device obtains the lane video of the current driving lane; determines the first recognition result of the current driving lane according to the lane video; receives the sensor data collected by the sensor module; if the sensor data includes the first sensor data, determines the air pressure recognition result according to the air pressure change value, determines the GNSS recognition result according to the GNSS data, and determines the second recognition result according to the air pressure recognition result and the GNSS recognition result; if the sensor data includes the second sensor data, determines the air pressure recognition result according to the air pressure change value and determines the second recognition result according to the air pressure recognition result; and, if the comparison between the first recognition result and the second recognition result is successful, determines that the current driving lane is a viaduct lane. In this way, the first recognition result can be determined from the lane video, the second recognition result can be determined from the air pressure change value and the GNSS data, and the viaduct lane can be identified by comparing the two results, which is beneficial to improving the accuracy of viaduct scene recognition and the user experience.
[0100] Consistent with the embodiments shown in FIG. 2, FIG. 3, and FIG. 4, please refer to FIG. 5. FIG. 5 is a schematic structural diagram of an electronic device 500 provided by an embodiment of the present application. As shown in the figure, the electronic device 500 includes an application processor 510, a memory 520, a communication interface 530, a sensor module 540, and one or more programs 521, wherein the one or more programs 521 are stored in the foregoing memory 520 and configured to be executed by the foregoing application processor 510, and the one or more programs 521 include instructions for performing the following steps:
[0101] Obtain the lane video of the current driving lane;
[0102] Determining the first recognition result of the current driving lane according to the lane video;
[0103] Receiving sensor data collected by the sensor module, and determining the second recognition result of the current driving lane according to the sensor data;
[0104] If the comparison between the first recognition result and the second recognition result is successful, it is determined that the current driving lane is a viaduct lane.
[0105] It can be seen that in this embodiment of the application, the electronic device obtains the lane video of the current driving lane; determines the first recognition result of the current driving lane according to the lane video; receives the sensor data collected by the sensor module and determines the second recognition result of the current driving lane according to the sensor data; and, if the comparison between the first recognition result and the second recognition result is successful, determines that the current driving lane is a viaduct lane. In this way, the viaduct scene can be identified based on both the lane video and the sensor data, which is beneficial to improving the accuracy of viaduct scene recognition and the user experience.
[0106] In a possible example, in terms of determining the first recognition result of the current driving lane according to the lane video, the instructions in the program are specifically used to perform the following operations: acquiring m frames of images contained in the lane video, where m is an integer greater than 0; performing an image recognition algorithm on the m frames of images to obtain n frames of viaduct images containing viaduct lanes in the m frames of images, where n is an integer greater than or equal to 0 and less than or equal to m; calculating the viaduct scene ratio from the m frames of images and the n frames of viaduct images according to a preset scene ratio calculation formula; judging whether the viaduct scene ratio is greater than a preset scene ratio threshold; if the viaduct scene ratio is greater than the scene ratio threshold, calculating the n frames of viaduct images according to a preset occlusion rate algorithm to determine the n viaduct occlusion rates corresponding to the n frames of viaduct images; determining the average viaduct occlusion rate according to the n viaduct occlusion rates and judging whether the average viaduct occlusion rate is greater than a preset occlusion rate threshold; if the average viaduct occlusion rate is not greater than the occlusion rate threshold, determining that the first recognition result is the first preset result; otherwise, determining that the first recognition result is the second preset result.
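As a rough illustration of the video-based decision above: the patent only names "a preset scene ratio calculation formula", "a preset occlusion rate algorithm", and "preset" thresholds, so the n/m ratio, the per-frame occlusion scores, and the threshold values below are all assumptions.

```python
# Assumed threshold values; the patent leaves these as "preset".
SCENE_RATIO_THRESHOLD = 0.5
OCCLUSION_RATE_THRESHOLD = 0.3

def first_recognition(frame_is_viaduct, occlusion_rates):
    """Sketch of the first recognition result from m video frames.

    frame_is_viaduct: one bool per frame (m frames); True means the image
    recognition algorithm found a viaduct lane in that frame.
    occlusion_rates: one occlusion score in [0, 1] per viaduct frame
    (n values), from the assumed occlusion-rate algorithm.
    """
    m = len(frame_is_viaduct)
    n = sum(frame_is_viaduct)
    scene_ratio = n / m if m else 0.0  # assumed scene-ratio formula: n / m
    if scene_ratio <= SCENE_RATIO_THRESHOLD:
        return "non_viaduct"           # second preset result
    avg_occlusion = sum(occlusion_rates) / len(occlusion_rates)
    if avg_occlusion <= OCCLUSION_RATE_THRESHOLD:
        return "clear_viaduct"         # first preset result
    return "blocked_viaduct"           # second preset result
```

For example, 8 viaduct frames out of 10 with low occlusion yields the first preset result, while the same frames heavily occluded yield the obstructed-viaduct case.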
[0107] In a possible example, in terms of receiving the sensor data collected by the sensor module, the instructions in the program are specifically used to perform the following operations: if the first recognition result includes the first preset result, receiving first sensor data collected by the sensor module, where the first sensor data includes: an air pressure change value and global navigation satellite system (GNSS) data; if the first recognition result includes the second preset result, receiving second sensor data collected by the sensor module, where the second sensor data includes: the air pressure change value.
[0108] In a possible example, in terms of determining the second recognition result of the current driving lane based on the sensor data, the instructions in the program are specifically used to perform the following operations: if the sensor data includes the first sensor data, determining the air pressure recognition result according to the air pressure change value, determining the GNSS recognition result according to the GNSS data, and determining the second recognition result according to the air pressure recognition result and the GNSS recognition result; if the sensor data includes the second sensor data, determining the air pressure recognition result according to the air pressure change value and determining the second recognition result according to the air pressure recognition result.
[0109] In a possible example, in terms of determining the air pressure recognition result according to the air pressure change value, the instructions in the program are specifically used to perform the following operations: obtaining the air pressure change value and a preset altitude change value calculation formula; using the air pressure change value as the input of the altitude change value calculation formula to determine the altitude change value corresponding to the air pressure change value; judging whether the altitude change value is greater than a preset altitude change threshold; if the altitude change value is greater than the altitude change threshold, determining that the air pressure recognition result is the first preset result; otherwise, determining that the air pressure recognition result is the second preset result.
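A minimal sketch of this step follows. The patent does not disclose the altitude change value calculation formula, so the sketch uses the common near-sea-level approximation of roughly 8.3 m of altitude gain per hPa of pressure drop; that factor and the threshold are assumptions.

```python
# Assumptions: linear pressure-to-altitude conversion valid near sea level,
# and a made-up altitude change threshold (a viaduct ramp climbs a few metres).
M_PER_HPA = 8.3
ALTITUDE_CHANGE_THRESHOLD = 4.0  # metres

def pressure_recognition(pressure_delta_hpa):
    """Map a pressure change (hPa) to the air pressure recognition result."""
    # Pressure falls as the vehicle climbs, so negate the pressure change.
    altitude_delta = -pressure_delta_hpa * M_PER_HPA
    if altitude_delta > ALTITUDE_CHANGE_THRESHOLD:
        return "clear_viaduct"   # first preset result: the vehicle climbed
    return "non_viaduct"         # second preset result
```

A production formula would typically use the barometric (hypsometric) equation rather than a fixed linear factor, but the decision structure is the same.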
[0110] In a possible example, in terms of determining the GNSS recognition result based on the GNSS data, the instructions in the program are specifically used to perform the following operations: the GNSS data includes: a satellite number change value and a satellite signal-to-noise ratio change value, and the GNSS recognition result includes: a satellite number recognition result and a satellite signal-to-noise ratio recognition result; judging whether the satellite number change value is greater than a preset number change threshold; if the satellite number change value is greater than the number change threshold, determining that the satellite number recognition result is the first preset result; otherwise, determining that the satellite number recognition result is the second preset result; judging whether the satellite signal-to-noise ratio change value is greater than a preset signal-to-noise ratio change threshold; if the satellite signal-to-noise ratio change value is greater than the signal-to-noise ratio change threshold, determining that the satellite signal-to-noise ratio recognition result is the first preset result; otherwise, determining that the satellite signal-to-noise ratio recognition result is the second preset result.
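The GNSS decision above can be sketched as two independent threshold tests. The threshold values are assumptions; the intuition is that on an elevated, open viaduct the sky view improves, so both the visible satellite count and the signal-to-noise ratio tend to rise.

```python
# Assumed threshold values; the patent leaves both as "preset".
SAT_COUNT_CHANGE_THRESHOLD = 3   # additional visible satellites
SNR_CHANGE_THRESHOLD = 5.0       # dB-Hz improvement

def gnss_recognition(sat_count_delta, snr_delta):
    """Map GNSS change values to the two GNSS recognition results."""
    sat_result = ("clear_viaduct"            # first preset result
                  if sat_count_delta > SAT_COUNT_CHANGE_THRESHOLD
                  else "non_viaduct")        # second preset result
    snr_result = ("clear_viaduct"
                  if snr_delta > SNR_CHANGE_THRESHOLD
                  else "non_viaduct")
    return {"sat_count": sat_result, "sat_snr": snr_result}
```

These two results, together with the air pressure recognition result, form the sensor-side inputs to the comparison rules described earlier.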
[0111] In a possible example, in terms of determining whether the comparison between the first recognition result and the second recognition result is successful, the instructions in the program are specifically used to perform the following operations: judging whether the first recognition result is consistent with the second recognition result; if the first recognition result is consistent with the second recognition result, determining that the comparison between the first recognition result and the second recognition result is successful; if the first recognition result is inconsistent with the second recognition result, determining that the comparison between the first recognition result and the second recognition result is unsuccessful.
[0112] The foregoing mainly introduces the solution of the embodiments of the present application from the perspective of the method-side execution process. It can be understood that, in order to implement the above-mentioned functions, the electronic device includes hardware structures and/or software modules corresponding to each function. Those skilled in the art should easily realize that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer-software-driven hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
[0113] The embodiments of the present application may divide the electronic device into functional units according to the foregoing method examples. For example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one control unit. The above-mentioned integrated unit can be realized in the form of hardware or software functional unit. It should be noted that the division of units in the embodiments of the present application is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
[0114] FIG. 6 is a block diagram of the functional units of the viaduct identification device 600 involved in the embodiments of the present application. The viaduct identification device 600 is applied to electronic equipment and includes an acquisition unit 601, a determination unit 602, a receiving unit 603, and a comparison unit 604, wherein:
[0115] The acquiring unit 601 is configured to acquire a lane video of the current driving lane;
[0116] The determining unit 602 is configured to determine the first recognition result of the current driving lane according to the lane video;
[0117] The receiving unit 603 is configured to receive the sensor data collected by the sensor module, and determine the second recognition result of the current driving lane according to the sensor data;
[0118] The comparison unit 604 is configured to determine that the current driving lane is the viaduct lane if the comparison between the first recognition result and the second recognition result is successful.
[0119] It can be seen that in this embodiment of the application, the electronic device obtains the lane video of the current driving lane; determines the first recognition result of the current driving lane according to the lane video; receives the sensor data collected by the sensor module and determines the second recognition result of the current driving lane according to the sensor data; and, if the comparison between the first recognition result and the second recognition result is successful, determines that the current driving lane is a viaduct lane. In this way, the viaduct scene can be identified based on both the lane video and the sensor data, which is beneficial to improving the accuracy of viaduct scene recognition and the user experience.
[0120] In a possible example, in terms of determining the first recognition result of the current driving lane according to the lane video, the determining unit 602 is specifically configured to: obtain m frames of images contained in the lane video, where m is an integer greater than 0; perform an image recognition algorithm on the m frames of images to obtain n frames of viaduct images containing viaduct lanes in the m frames of images, where n is an integer greater than or equal to 0 and less than or equal to m; calculate the viaduct scene ratio from the m frames of images and the n frames of viaduct images according to a preset scene ratio calculation formula; judge whether the viaduct scene ratio is greater than a preset scene ratio threshold; if the viaduct scene ratio is greater than the scene ratio threshold, calculate the n frames of viaduct images according to a preset occlusion rate algorithm to determine the n viaduct occlusion rates corresponding to the n frames of viaduct images; determine the average viaduct occlusion rate according to the n viaduct occlusion rates and judge whether the average viaduct occlusion rate is greater than a preset occlusion rate threshold; if the average viaduct occlusion rate is not greater than the occlusion rate threshold, determine that the first recognition result is the first preset result; otherwise, determine that the first recognition result is the second preset result.
[0121] In a possible example, in terms of receiving the sensor data collected by the sensor module, the receiving unit 603 is specifically configured to: if the first recognition result includes the first preset result, receive first sensor data collected by the sensor module, where the first sensor data includes: an air pressure change value and global navigation satellite system (GNSS) data; if the first recognition result includes the second preset result, receive second sensor data collected by the sensor module, where the second sensor data includes: the air pressure change value.
[0122] In a possible example, in terms of determining the second recognition result of the current driving lane based on the sensor data, the receiving unit 603 is specifically configured to: if the sensor data includes the first sensor data, determine the air pressure recognition result according to the air pressure change value, determine the GNSS recognition result according to the GNSS data, and determine the second recognition result according to the air pressure recognition result and the GNSS recognition result; if the sensor data includes the second sensor data, determine the air pressure recognition result according to the air pressure change value and determine the second recognition result according to the air pressure recognition result.
[0123] In a possible example, in terms of determining the air pressure recognition result according to the air pressure change value, the receiving unit 603 is specifically configured to: obtain the air pressure change value and a preset altitude change value calculation formula; use the air pressure change value as the input of the altitude change value calculation formula to determine the altitude change value corresponding to the air pressure change value; judge whether the altitude change value is greater than a preset altitude change threshold; if the altitude change value is greater than the altitude change threshold, determine that the air pressure recognition result is the first preset result; otherwise, determine that the air pressure recognition result is the second preset result.
[0124] In a possible example, in terms of determining the GNSS recognition result according to the GNSS data, the receiving unit 603 is specifically configured to: the GNSS data includes: a satellite number change value and a satellite signal-to-noise ratio change value, and the GNSS recognition result includes: a satellite number recognition result and a satellite signal-to-noise ratio recognition result; judge whether the satellite number change value is greater than a preset number change threshold; if the satellite number change value is greater than the number change threshold, determine that the satellite number recognition result is the first preset result; otherwise, determine that the satellite number recognition result is the second preset result; judge whether the satellite signal-to-noise ratio change value is greater than a preset signal-to-noise ratio change threshold; if the satellite signal-to-noise ratio change value is greater than the signal-to-noise ratio change threshold, determine that the satellite signal-to-noise ratio recognition result is the first preset result; otherwise, determine that the satellite signal-to-noise ratio recognition result is the second preset result.
[0125] In a possible example, in terms of determining whether the comparison between the first recognition result and the second recognition result is successful, the comparison unit 604 is specifically configured to: judge whether the first recognition result is consistent with the second recognition result; if the first recognition result is consistent with the second recognition result, determine that the comparison between the first recognition result and the second recognition result is successful; if the first recognition result is inconsistent with the second recognition result, determine that the comparison between the first recognition result and the second recognition result is unsuccessful.
[0126] An embodiment of the present application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any method recorded in the foregoing method embodiments. The aforementioned computer includes electronic equipment.
[0127] The embodiments of the present application also provide a computer program product. The above-mentioned computer program product includes a non-transitory computer-readable storage medium storing a computer program. The above-mentioned computer program is operable to cause a computer to execute any of the methods described in the above-mentioned method embodiments. Part or all of the steps of the method. The computer program product may be a software installation package, and the above-mentioned computer includes electronic equipment.
[0128] It should be noted that, for the sake of brevity, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that this application is not limited by the described sequence of actions, because according to this application, some steps can be performed in another order or simultaneously. Secondly, those skilled in the art should also be aware that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
[0129] In the above-mentioned embodiments, the description of each embodiment has its own focus. For parts that are not described in detail in an embodiment, reference may be made to related descriptions of other embodiments.
[0130] In the several embodiments provided in this application, it should be understood that the disclosed device may be implemented in other ways. For example, the device embodiments described above are merely illustrative. For example, the division of the above-mentioned units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented. In addition, the displayed or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or a communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
[0131] The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
[0132] In addition, the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above-mentioned integrated unit can be realized in the form of hardware or software functional unit.
[0133] If the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a memory and includes a number of instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the foregoing methods of the various embodiments of the present application. The aforementioned memory includes: a USB flash drive, read-only memory (ROM), random-access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or other media that can store program codes.
[0134] Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above-mentioned embodiments can be completed by a program instructing relevant hardware. The program can be stored in a computer-readable memory, and the memory can include: a flash disk, read-only memory (ROM), random-access memory (RAM), a magnetic disk, an optical disc, etc.
[0135] The embodiments of the application are described in detail above, and specific examples are used herein to illustrate the principles and implementation of the application. The descriptions of the above embodiments are only used to help understand the methods and core ideas of the application; meanwhile, persons of ordinary skill in the art, based on the idea of the application, may make changes to the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the application.
