Full-audio-frequency sensing system for intelligent driving vehicle and intelligent control method thereof

A technology in the field of intelligent driving and perception systems, applied to full-audio perception systems and the intelligent control of intelligent driving vehicles. It addresses problems such as restricted activity safety, the inability to predict changing trends in the surrounding environment, and the inability to obtain environmental information, thereby eliminating hidden safety dangers and improving the ability to obtain information.

Active Publication Date: 2018-12-21
HUNAN UNIV OF TECH
Cites: 11 · Cited by: 2

AI-Extracted Technical Summary

Problems solved by technology

The existing technology provides only spatial position information and no sound information; such an intelligent (unmanned) driving system is like a human being who has vision but no hearing. …

Method used

In the overall control method: 1. the full-audio perception system exchanges data with the other perception systems, so that the accuracy of the full-audio perception can be judged; 2. Kalman filtering and stochastic gradient descent are used to fuse the data of each perception system, giving a more concise method and more accurate results.
At the system level: 1. the vehicle's full-audio perception system is adopted, with redundant acquisition devices providing control at the data source; 2. optimal decision-making is realized through optimization of the control logic.
The intelligent vehicle driving system of this embodiment can perceive the surrounding environment through hearing and exchange information with it across the full audio range; even in visual blind zones it can perceive information around the vehicle and learn the driving route of another vehicle or pedestrian in advance, giving a high safety factor. Capturing the acoustic information of surrounding pedestrians, vehicles and other environmental sources helps improve the logic and rationality of safe unmanned driving; at the same time, sending out driving information through the vehicle's own acoustic devices aids recognition by other vehicles and pedestrians, thereby improving the safety of the entire transit system. …

Abstract

The invention relates to the field of intelligent driving technology and discloses a full-audio-frequency sensing system for an intelligent driving vehicle and an intelligent control method thereof. The system is composed of a sound sensor, an ultrasonic echo sensor, an acoustic-electrical converter, a loudspeaker, an ultrasonic generator, an electroacoustic generator, a full-audio-frequency feature recognition and classifier, and a full-audio-frequency data fusion and decision making device. The acoustic-electrical converter is connected with the sound sensor and the ultrasonic echo sensor; the full-audio-frequency feature recognition and classifier is connected with the acoustic-electrical converter; the full-audio-frequency data fusion and decision making device is connected with the full-audio-frequency feature recognition and classifier; the loudspeaker and the ultrasonic generator are connected with the acoustic-electrical converter. Therefore, a defect that the conventional vehicle intelligent driving sensing system cannot hear and understand traveling intentions from other vehicles or pedestrians is overcome; and the system is able to sense and identify a target in a visual blind zone with obstacle obscuration.

Application Domain

Acoustic wave reradiation

Technology Topic

Loudspeaker · Ultrasonic generator (+9 more)

Image

  • Full-audio-frequency sensing system for intelligent driving vehicle and intelligent control method thereof

Examples

  • Experimental program (4)

Example Embodiment

[0056] Example 1
[0057] As shown in Figure 1, this embodiment provides a full-audio perception system for intelligent driving vehicles, hereinafter referred to as the full-audio perception system, comprising sound sensors, ultrasonic echo sensors, an acoustic-electric converter, speakers, ultrasonic generators, an electro-acoustic generator, a full-audio feature recognition and classifier, and a full-audio data fusion and decision maker.
[0058] The sound sensor collects all the sound-wave signals in the space that the human ear can perceive; the ultrasonic echo sensor collects the specific ultrasonic signal reflected by an object and analyzes the vehicle identity information it carries. The sound sensor and the ultrasonic echo sensor are each connected to the acoustic-electric converter.
[0059] The acoustic-electric converter converts the sound signal from the sound sensor and the ultrasonic signal from the ultrasonic echo sensor into electrical signals. The converter has at least two channels: through channel one it converts all the audible sound signals collected by the sound sensor into electrical signal one, and through channel two it converts the specific ultrasonic signal collected by the ultrasonic echo sensor into electrical signal two. The acoustic-electric converter is connected to the full-audio feature recognition and classifier, to which electrical signals one and two are transmitted.
[0060] The full-audio feature recognition and classifier classifies the electrical signals input by the acoustic-electric converter, extracts the ultrasonic echo positioning information and sound features they contain, and performs sound information recognition to obtain road-condition information and the driving information of other vehicles. Specifically, the input is classified into electrical signal one and electrical signal two. Electrical signal one is a multi-source heterogeneous broadband signal; the classifier extracts its sound features and performs sound information recognition to obtain road-condition information, the driving information of other vehicles, information about the surroundings of other vehicles, and so on. Electrical signal two, the specific reflected ultrasonic echo signal, is analyzed by the classifier, which computes the geospatial information of the vehicle and the distance between the detected vehicle and obstacles.
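The two-channel routing described above can be sketched as follows. This is an illustrative sketch only, not taken from the patent: the function name and the placeholder "features" (mean amplitude, peak echo strength) are invented for demonstration.

```python
# Hypothetical sketch: routing the two electrical-signal channels of the
# acousto-electric converter. Channel 1 carries audible sound; channel 2
# carries the ultrasonic echo.

def classify_signal(channel: int, samples: list) -> dict:
    """Route a converted electrical signal to the matching analysis path."""
    if channel == 1:
        # Audible band: extract sound features for road-condition and
        # other-vehicle information (placeholder: mean absolute amplitude).
        feature = sum(abs(s) for s in samples) / len(samples)
        return {"path": "sound_features", "feature": feature}
    elif channel == 2:
        # Ultrasonic echo: hand off for ranging / geospatial analysis
        # (placeholder: peak amplitude as echo strength).
        return {"path": "echo_ranging", "echo_strength": max(samples)}
    raise ValueError("unknown channel")

print(classify_signal(1, [0.1, -0.2, 0.3])["path"])  # sound_features
print(classify_signal(2, [0.05, 0.4, 0.1])["path"])  # echo_ranging
```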
[0061] The full-audio feature recognition and classifier is connected to the full-audio data fusion and decision maker; the obtained road-condition information, other-vehicle driving information, geospatial information, obstacle distance information, etc. are transmitted to the decision maker, and the control information it returns is acquired in turn. The classifier classifies the returned full-audio control information, generating sound control signals and ultrasonic control signals that are transmitted to the electro-acoustic generator.
[0062] The full-audio data fusion and decision maker performs data fusion, logical analysis and intelligent decision-making on the road-condition, other-vehicle driving, geospatial and obstacle-distance data received from the classifier. The resulting vehicle control information is transmitted to the vehicle's central data processing and control unit, whose return vehicle control information is received in exchange; from that return information the decision maker generates the return full-audio control information and outputs it to the full-audio feature recognition and classifier.
[0063] After the electro-acoustic generator receives the sound control signal and the ultrasonic control signal from the classifier, it converts them into a small sound signal and a small ultrasonic signal that are output to the speaker and the ultrasonic generator respectively; the vehicle identity information is modulated into the small ultrasonic signal.
[0064] The output of the electro-acoustic generator is connected to the speaker and the ultrasonic generator. The speaker receives the small sound signal generated by the electro-acoustic generator and amplifies it into a sound-wave signal the human ear can perceive; this sound can take the form of a whistle or a verbal warning such as "steering", "forward", "reverse" or "braking". The ultrasonic generator receives the small ultrasonic signal and, after amplification, sends out an ultrasonic signal of a specific frequency carrying the vehicle identity information.
[0065] As a specific implementation of the control method of the full-audio perception system of the present invention, the information preprocessing of the full-audio feature recognition and classifier is shown in Figure 4. The data obtained by the sound sensors and the ultrasonic echo sensors are marked as Type A and Type B respectively; the data obtained by the sound sensors at different times are marked A1, A2, …, An, and the data obtained by the ultrasonic echo sensors at different times are marked B1, B2, …, Bn. It should be understood that data of the same type can form a multi-dimensional data space equivalent to the superposition of multiple two-dimensional data sets. As shown in Figure 4, the classifier performs classification preprocessing on the ultrasonic echo sensor and sound sensor data. Through the multiple sound data sets and ultrasonic data sets generated over time, the sound sensors and ultrasonic echo sensors record the time-space information of the vehicle and the road conditions, forming a space of vehicle motion data and attributes.
[0066] The data fusion of the full-audio data fusion and decision maker is shown in Figure 5. The overlap between the An and Bn regions, between the An and An+1 regions, and between the Bn and Bn+1 regions is recorded as redundant data (the shaded areas in Figure 5); the non-overlapping parts are recorded as complementary data. Redundant data has high reliability, while complementary data has lower reliability; conflicting data is a special kind of complementary data.
[0067] The redundant and complementary data are the raw inputs from which the full-audio data fusion and decision maker derives its decisions. Obviously, the more raw data obtained, the easier it is to reach a reliable, optimal decision.
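The redundant/complementary split described above can be sketched with plain set operations, treating each sensor data set as a set of observed keys. The function name and the example keys are invented for illustration.

```python
# Hypothetical sketch of the redundant / complementary split: the overlap
# of two data regions is redundant (high-reliability) data, the
# non-overlapping remainder is complementary (lower-reliability) data.

def split_overlap(region_a: set, region_b: set):
    redundant = region_a & region_b      # overlapping part: redundant data
    complementary = region_a ^ region_b  # non-overlapping part: complementary
    return redundant, complementary

An = {"p1", "p2", "p3"}  # e.g. sound-sensor data set at time n
Bn = {"p2", "p3", "p4"}  # e.g. ultrasonic data set at time n
red, comp = split_overlap(An, Bn)
print(sorted(red))   # ['p2', 'p3']
print(sorted(comp))  # ['p1', 'p4']
```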
[0068] As shown in Figure 7, the sound sensors and ultrasonic echo sensors of this embodiment collect full-audio information around the vehicle; the collected information is converted by the acoustic-electric converter into signals the full-audio feature recognition and classifier can process; feature extraction, recognition and classification are then performed, and the classified data undergoes data fusion analysis to produce a driving decision. The decision information is sent to the actuators for execution, and a full-audio control signal is issued; after signal classification and conversion, full-audio information is emitted through the speaker and the ultrasonic generator.
[0069] "Full audio" here means that the target signal frequency band covers the entire audible range plus the specific ultrasonic range.
[0070] As shown in Figure 8, the control method of the full-audio perception system of this embodiment includes the following steps:
[0071] Start the loop and configure the initial information;
[0072] S11. Transmit ultrasonic signals and receive ultrasonic echo signals;
[0073] S12. From the received ultrasonic echo signals, calculate the time interval and spatial position data to obtain the primary decision data for vehicle driving;
[0074] S13. Transmit and receive sound signals;
[0075] S14. From the received sound signals, recognize the distance information converted from sound intensity within the time interval, together with the voice information data, to obtain the secondary decision data for vehicle driving. It should be understood that once the loop has started, the order of steps S11 and S13 is interchangeable;
[0076] Judge whether this is the first time secondary decision data has been obtained. If so, go directly to step S15; otherwise, judge whether the previous secondary decision is consistent with the current one. If they are consistent, execute step S15; otherwise, calculate and compare the confidence probabilities of the previous and current secondary decision data, and keep the secondary decision with the higher confidence probability;
[0077] Calculate the confidence probability of the primary decision data and judge whether it is the optimal decision;
[0078] Judge whether the confidence probability of the secondary decision is higher than that of the primary decision; if so, execute the secondary decision, otherwise execute the primary decision, thereby achieving the optimal decision.
[0079] Among them, the main process of calculating the confidence probability and the optimal decision is as follows.
[0080] Let the confidence probability be $P_Z = 1-\alpha = P(\theta_1 \le \theta \le \theta_2)$; then $1-\alpha$ denotes the probability that the estimate is correct within the confidence interval $(\theta_1, \theta_2)$;

[0081] Let the mean square deviation of the detected data be $\sigma^2$ and the probability density of $\theta$ be $f(\theta)$; the confidence probability over the interval $(\theta_1, \theta_2)$ is then $P_Z = \int_{\theta_1}^{\theta_2} f(\theta)\,d\theta$;

[0082] Further, $P_a(s, s') = P(s' \mid s, a)$ denotes the probability of reaching the next state $s'$ when action $a$ is performed in state $s$;

[0083] The Markov optimal decision process can be expressed as $\pi: S \to A$, where $\pi$ is the optimal policy, $S$ the finite state space and $A$ the decision (action) space; the action taken in any state $s$ is $a = \pi(s)$. The optimal decision is obtained by iterating over the possible states $s$ and $s'$ until the accumulated (expected) reward $V_{i+1}(s)$ converges:

[0084] $$V_{i+1}(s) = \max_{a \in A} \sum_{s'} P_a(s, s')\big[R_a(s, s') + \gamma V_i(s')\big]$$ where $R_a(s, s')$ is the immediate reward and $\gamma \in (0,1)$ the discount factor.
[0085] S15. Judge whether the primary decision and the secondary decision are consistent; if they are consistent, execute the decision, otherwise return to step S13.
[0086] After executing the decision, return to the initial configuration position and write the new information.
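The decision logic above (keep the more confident of the two secondary decisions, then execute whichever of the primary/secondary decisions has the higher confidence probability) can be sketched as follows. The function name and the decision labels are assumptions for illustration, not from the patent.

```python
# Minimal sketch of the optimal-decision selection: each decision is a
# (label, confidence) pair; the secondary decision is executed only if
# its confidence probability exceeds that of the primary decision.

def choose_decision(primary, secondary_prev, secondary_now):
    # Before S15: if the previous and current secondary decisions differ,
    # keep the one with the higher confidence probability.
    if secondary_prev is None or secondary_prev[0] == secondary_now[0]:
        secondary = secondary_now
    else:
        secondary = max(secondary_prev, secondary_now, key=lambda d: d[1])
    # Optimal decision: secondary wins only if strictly more confident.
    return secondary if secondary[1] > primary[1] else primary

primary = ("brake", 0.80)        # primary decision from ultrasonic ranging (S12)
sec_prev = ("slow_down", 0.60)   # previous secondary decision (S14)
sec_now = ("keep_speed", 0.70)   # current secondary decision (S14)
print(choose_decision(primary, sec_prev, sec_now))  # ('brake', 0.8)
```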
[0087] The full-audio perception system of this embodiment adopts a structure of dual sound/ultrasonic positioning together with sound signal acquisition and recognition. Through its data fusion and decision-making method, voice feature recognition technology is applied creatively to the transportation field: the semantic information emitted around the vehicle further improves the intelligence of unmanned driving.
[0088] The full-audio perception system of this embodiment is intended mainly for intelligent driving vehicles, but it can of course also be applied to other vehicles.
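The Markov optimal decision process described in this example can be sketched as standard value iteration. The states, actions, transition probabilities $P_a(s,s')$ and rewards below are invented for illustration, and the reward is simplified to depend only on $(s, a)$.

```python
# Value-iteration sketch for the optimal decision pi: S -> A. Iterates
# V_{i+1}(s) = max_a sum_{s'} P_a(s,s') [R + gamma * V_i(s')] until the
# accumulated expected reward converges.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    V = {s: 0.0 for s in states}
    while True:
        V_new = {
            s: max(
                sum(P[s][a][s2] * (R[s][a] + gamma * V[s2]) for s2 in states)
                for a in actions
            )
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < tol:  # convergence
            return V_new
        V = V_new

states = ["approach", "clear"]         # hypothetical driving states
actions = ["decelerate", "proceed"]    # hypothetical decisions
P = {  # P[s][a][s'] = P(s' | s, a), illustrative values
    "approach": {"decelerate": {"approach": 0.2, "clear": 0.8},
                 "proceed":    {"approach": 0.7, "clear": 0.3}},
    "clear":    {"decelerate": {"approach": 0.0, "clear": 1.0},
                 "proceed":    {"approach": 0.0, "clear": 1.0}},
}
R = {"approach": {"decelerate": 0.0, "proceed": -1.0},
     "clear":    {"decelerate": 0.5, "proceed": 1.0}}
V = value_iteration(states, actions, P, R)
print(V["clear"] > V["approach"])  # True
```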

Example Embodiment

[0089] Example 2
[0090] As shown in Figure 2, this embodiment differs from Embodiment 1 in that there are m sound sensors, m ultrasonic echo sensors, m speakers and m ultrasonic generators; accordingly, the acoustic-electric converter is provided with multi-channel input and output, as is the electro-acoustic generator, where m is an integer greater than 2.
[0091] It should be noted that in this embodiment the sound sensors, ultrasonic echo sensors, speakers and ultrasonic generators are multiple and equal in number; in actual applications their numbers can of course be increased or decreased according to specific requirements, and need not be equal.
[0092] This embodiment increases the number of sensors, which improves the confidence of the data. As shown in Figure 6, for example, the data obtained by sound sensor 1, sound sensor 2, …, sound sensor m at different times are A1_1, A1_2, …, A1_m; A2_1, A2_2, …, A2_m; …; An_1, An_2, …, An_m, and the data obtained by ultrasonic echo sensor 1, ultrasonic echo sensor 2, …, ultrasonic echo sensor m are B1_1, B1_2, …, B1_m; B2_1, B2_2, …, B2_m; B3_1, B3_2, …, B3_m. If further sensor categories are added, the marking scheme is analogous.
[0093] Using multiple types and numbers of sensors adds redundant and complementary data and provides the basis for more reliable driving decisions. In this embodiment there are multi-channel inputs and outputs between the acoustic-electric converter and the sound sensors and ultrasonic echo sensors, i.e. the converter can be connected to multiple sound sensors and multiple ultrasonic echo sensors, improving the accuracy and completeness of full-audio signal collection and thus the confidence of the collected data; there are likewise multi-channel inputs and outputs between the electro-acoustic generator and the speakers and ultrasonic generators, so that other vehicles can better receive the sound information from this vehicle.

Example Embodiment

[0094] Example 3
[0095] As shown in Figure 3, this embodiment provides an intelligent driving vehicle control system that includes the full-audio perception system of Embodiment 2 together with other perception systems connected to the central data processing and control unit of the vehicle, specifically a GIS-T geographic transportation system, a machine vision system, a lidar system and a millimeter-wave radar system, plus a vehicle operation feedback unit, which is connected to each execution controller, detects the actual working status of the execution controllers, and feeds that status information back to the central data processing and control unit.
[0096] The full-audio perception system is connected to the vehicle's central data processing and control unit for two-way information interaction: it sends vehicle control information to the unit and at the same time receives the return vehicle control information from it.
[0097] In the same way, the other perception systems (the GIS-T geographic transportation system, machine vision system, lidar system and millimeter-wave radar system) are also connected to the central data processing and control unit for two-way information interaction: each sends vehicle control information to the unit and simultaneously receives the return vehicle control information the unit sends back. The vehicle control information from the full-audio perception system is redundant with, and complementary to, the vehicle control information from the other perception systems.
[0098] It should be understood that the other perception systems may also be of types other than those listed above; they are not enumerated exhaustively here.
[0099] The control method of the intelligent vehicle driving system of this embodiment includes the following steps:
[0100] S1. The full-audio perception system performs full-audio perception and information processing;
[0101] S2. Judge whether the output information of the full-audio perception system and of the other perception systems is consistent; if consistent, perform step S4, otherwise perform step S3;
[0102] S3. The central data processing and control unit performs Kalman filtering and gradient-descent calculations on the information of all perception systems to determine which is true; if the information of the full-audio perception system is true, continue to step S4, otherwise return to step S1;
[0103] S4. The central data processing and control unit sends the vehicle control information from the full-audio perception system to each vehicle controller, and each controller executes the corresponding action.
[0104] Specifically, the vehicle control information sent by the full-audio perception system and by the other perception systems is all submitted to the central data processing and control unit for driving-logic judgment. Within one logic cycle, if the information provided by the full-audio perception system and the other perception systems is consistent, it is deemed true and execution continues; if it is inconsistent, the central data processing and control unit performs Kalman filtering and stochastic gradient descent calculations to determine which is true. If the information provided by the full-audio perception system is true, execution continues; otherwise it is deemed false.
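The arbitration in steps S2 to S4 can be sketched as follows. This is an invented illustration, not the patent's implementation: the full Kalman filtering step is simplified here to an inverse-variance weighted fusion of two scalar estimates, and the decision is deemed "true" when the full-audio estimate lies closer to the fused value.

```python
# Hypothetical sketch of the driving-logic arbitration: if the two
# perception outputs agree, execute; otherwise fuse them (simplified
# inverse-variance weighting) and keep the estimate nearer the fusion.

def arbitrate(full_audio, other, var_fa, var_other):
    if full_audio == other:
        return "execute_full_audio"
    # Simplified fusion: inverse-variance weighted mean of the two values.
    fused = (full_audio / var_fa + other / var_other) / (1 / var_fa + 1 / var_other)
    if abs(full_audio - fused) <= abs(other - fused):
        return "execute_full_audio"  # full-audio information deemed true
    return "relisten"                # deemed false: keep listening (back to S1)

print(arbitrate(5.0, 5.0, 0.1, 0.2))  # execute_full_audio (consistent)
print(arbitrate(5.0, 9.0, 0.1, 0.9))  # execute_full_audio (fused value nearer 5.0)
```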
[0105] Kalman filtering is used to fuse the data obtained by the different perception systems: for example, the data obtained by perception system 1 serves as the reference and the data obtained by perception system 2 as the measurement, and the probability distribution of the next state value is estimated from the previous state value.
[0106] Suppose the best estimate at time $k$ is $\hat{x}_k$ with covariance matrix $P_k$; the system state is estimated from two independent dimensions:

[0107] Predicted value: $(\mu_0, \Sigma_0) = (H_k\hat{x}_k,\; H_kP_kH_k^T)$

[0108] Measured value: $(\mu_1, \Sigma_1) = (z'_k,\; R_k)$

[0109] The overlapping region of the two distributions (the product of the two Gaussians) gives the fused estimate.

[0111] The Kalman gain is expressed as $K' = P_kH_k^T\,(H_kP_kH_k^T + R_k)^{-1}$.

[0112] This yields: $\hat{x}'_k = \hat{x}_k + K'\,(z'_k - H_k\hat{x}_k)$

[0113] $P'_k = P_k - K'H_kP_k$

[0115] Through iteration, the update step of each state yields the best estimate $\hat{x}'_k$, realizing the data fusion and control-state judgment of the different perception systems. Because generalization in machine learning requires a huge training set and the computation is very time-consuming, stochastic gradient descent (SGD) is adopted as the optimization method in artificial intelligence and machine learning:
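The Kalman update above can be written out in one dimension (scalar $H_k$, $R_k$, $z'_k$) for illustration; the numeric values below are invented examples.

```python
# One-dimensional sketch of the Kalman update: gain K', fused best
# estimate, and updated covariance P'_k = P_k - K' H_k P_k.

def kalman_update(x_hat, P, z, H, R):
    K = P * H / (H * P * H + R)          # Kalman gain K'
    x_new = x_hat + K * (z - H * x_hat)  # fused best estimate x'_k
    P_new = P - K * H * P                # updated covariance P'_k
    return x_new, P_new, K

# e.g. a predicted range fused with a measurement from another sensor
x_hat, P = 10.0, 4.0       # prior estimate and its covariance
z, H, R = 12.0, 1.0, 1.0   # measurement, observation model, measurement noise
x_new, P_new, K = kalman_update(x_hat, P, z, H, R)
print(round(x_new, 2), round(P_new, 2), round(K, 2))  # 11.6 0.8 0.8
```

Note that the fused covariance (0.8) is smaller than either input variance, which is the point of the fusion: combining the two perception systems yields a more confident estimate than either alone.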
[0116] The training set is cut into independent and identically distributed mini-batches of $m$ samples $\{x^{(1)}, \ldots, x^{(m)}\}$, where sample $x^{(i)}$ has the corresponding target $y^{(i)}$;

[0117] The gradient is estimated as $\hat{g} = \frac{1}{m}\nabla_\theta \sum_{i=1}^{m} L\big(f(x^{(i)};\theta),\, y^{(i)}\big)$;

[0118] The parameters are updated as $\theta \leftarrow \theta - \varepsilon_k\,\hat{g}$;

[0119] where $\theta$ is the initial parameter, $\varepsilon$ the learning rate, and $\varepsilon_k$ the learning rate at the $k$-th iteration. In this way the convergence of the computation during machine-learning training is accelerated.
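The minibatch SGD update above can be sketched as follows. The one-dimensional least-squares loss $L = (\theta x - y)^2$ and the numbers are chosen purely for illustration.

```python
# Minibatch SGD sketch: theta <- theta - eps_k * g_hat, where g_hat is
# the minibatch-averaged gradient of an illustrative loss (theta*x - y)^2.

def sgd_step(theta, batch, eps_k):
    m = len(batch)
    # g_hat = (1/m) * sum of per-sample gradients dL/dtheta = 2*x*(theta*x - y)
    g_hat = sum(2 * x * (theta * x - y) for x, y in batch) / m
    return theta - eps_k * g_hat

theta, eps = 0.0, 0.05
batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x, so theta -> 2
for k in range(200):
    theta = sgd_step(theta, batch, eps)
print(round(theta, 3))  # 2.0
```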
[0120] The central data processing and control unit processes the various data and control information, makes driving-logic judgments, and is connected to the full-audio perception system, the other perception systems, the various vehicle controllers and the vehicle operation feedback unit. It judges the logic of the vehicle control information sent by the full-audio perception system: if the logic is true, the vehicle control information is forwarded to the vehicle controllers, which send commands to the corresponding actuators responsible for vehicle operations such as acceleration, braking, steering, gearbox control and body stabilization; if it is false, execution does not continue, and return information is sent to the full-audio perception system requesting it to continue listening.
[0121] The vehicle operation feedback unit is connected to the corresponding execution controllers, detects the actual working status of the actuators, and feeds the collected status information back to the central data processing and control unit; after receiving the feedback, the unit performs data processing and logical judgment and outputs the vehicle return information to the full-audio perception system and the other perception systems.
[0122] The following example involves two unmanned vehicles each equipped with a full-audio perception system.
[0123] As shown in Figure 9, vehicle 1 and vehicle 2 approach an intersection from two directions. Because of obstructions, each vehicle is in the other's visual blind zone. When the sound detection areas b1 and b2 of the two vehicles intersect, and/or their ultrasonic detection areas a1 and a2 intersect, the ultrasonic signal sent by vehicle 1's full-audio perception system obtains the spatial positioning information of vehicle 2, and the sound signal of vehicle 2 is detected. The sound signal includes alarm information from the whistle system and spoken movement information from the speaker, such as going straight, turning left, turning right, braking, accelerating or decelerating. Vehicle 1 dynamically adjusts its movement state and feeds its own movement information back to vehicle 2 in the form of sound. The central control unit of each vehicle pre-stores spoken phrases for the speaker, such as "going straight" or "about to turn left".
[0124] For example, vehicle 1 plans to go straight and, at the same time, detects the spatial information of vehicle 2 and learns that vehicle 2 also plans to go straight. After data analysis and decision-making, vehicle 1 adopts a deceleration strategy until vehicle 2 has passed through the intersection first, and informs vehicle 2 of this driving strategy; vehicle 2 obtains the spatial information of vehicle 1 and the announced strategy, adopts an acceleration strategy after comprehensive analysis and decision-making to pass through the intersection, and in turn informs vehicle 1 of its driving strategy.
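The blind-intersection exchange above can be summarized as a toy negotiation rule. Everything here (function name, intent labels, the yield rule) is an invented illustration of the vehicle 1 / vehicle 2 example, not a mechanism specified by the patent.

```python
# Toy sketch of the intersection negotiation: each vehicle broadcasts its
# intent; on a conflict, vehicle 1 yields (decelerates) and vehicle 2
# accelerates through first, mirroring the example above.

def negotiate(intent_1: str, intent_2: str) -> dict:
    conflicting = {"straight", "left", "right"}
    if intent_1 in conflicting and intent_2 in conflicting:
        return {"vehicle_1": "decelerate_and_yield",
                "vehicle_2": "accelerate_through"}
    # No conflict: both proceed with their announced intents.
    return {"vehicle_1": intent_1, "vehicle_2": intent_2}

plan = negotiate("straight", "straight")
print(plan["vehicle_1"])  # decelerate_and_yield
print(plan["vehicle_2"])  # accelerate_through
```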
[0125] This embodiment improves the reliability of the unmanned vehicle control system in several respects:
[0126] At the system level: 1. the vehicle's full-audio perception system is adopted, with redundant acquisition devices providing control at the data source; 2. optimal decision-making is realized through optimization of the control logic.
[0127] In the overall control method: 1. the full-audio perception system and the other perception systems exchange data with one another, so that the accuracy of the full-audio perception can be judged; 2. Kalman filtering and stochastic gradient descent are used to fuse the data of each perception system, giving a more concise method and more accurate results.
[0128] The intelligent vehicle driving system of this embodiment can perceive the surrounding environment through hearing and exchange information with it across the full audio range; even in visual blind zones it can perceive information around the vehicle and learn the driving route of another vehicle or pedestrian in advance, giving a high safety factor. Capturing the acoustic information of surrounding pedestrians, vehicles and other environmental sources helps improve the logic and rationality of safe unmanned driving; at the same time, sending out driving information through the vehicle's own acoustic devices aids recognition by other vehicles and pedestrians, thereby improving the safety of the entire transportation system. This embodiment raises the degree of intelligence of "unmanned driving": in addition to "vision" it now has "hearing", approaching the effect of manned driving.


