Example 1
 As shown in Figure 1, this embodiment provides a full-audio perception system for intelligent driving vehicles, hereinafter referred to as the full-audio perception system, including sound sensors, ultrasonic echo sensors, acoustic-electric converters, speakers, ultrasonic generators, electro-acoustic generators, a full-audio feature recognition and classifier, and a full-audio data fusion and decision maker.
 The sound sensor is used to collect all sound-wave signals in the surrounding space that the human ear can perceive; the ultrasonic echo sensor is used to collect the specific ultrasonic signals reflected by objects and to resolve the vehicle identity information carried in those signals. The sound sensor and the ultrasonic echo sensor are each connected to the acoustic-electric converter.
 The acoustic-electric converter converts the sound signal produced by the sound sensor and the ultrasonic signal produced by the ultrasonic echo sensor into electrical signals. The acoustic-electric converter has at least two channels: through channel one it converts all human-audible sound signals collected by the sound sensor into electrical signal one, and through channel two it converts the specific ultrasonic signals collected by the ultrasonic echo sensor into electrical signal two. The acoustic-electric converter is connected to the full-audio feature recognition and classifier, and electrical signals one and two are transmitted to the full-audio feature recognition and classifier.
 The full-audio feature recognition and classifier classifies the electrical signals supplied by the acoustic-electric converter, extracts the ultrasonic echo positioning information and sound features they contain, and performs sound information recognition to obtain road condition information and the driving information of other vehicles. Specifically, the input is classified into electrical signal one and electrical signal two. Electrical signal one is a multi-source heterogeneous broadband signal; the full-audio feature recognition and classifier extracts its sound features and performs sound information recognition to obtain road condition information, driving information of other vehicles, information about the vehicle's surroundings, and so on. Electrical signal two carries the specific reflected ultrasonic echo signal; the full-audio feature recognition and classifier analyzes it and computes the geospatial information of the vehicle and the distance between the vehicle and detected obstacles.
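 To make the two-channel processing concrete, the following minimal Python sketch shows the kind of computation the classifier could perform: channel two yields an obstacle distance from the ultrasonic echo time of flight, and channel one yields simple broadband sound features. The sampling rate, feature set, and function names are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degC (assumed constant)

def echo_distance(t_emit: float, t_receive: float) -> float:
    """Channel two: obstacle distance from ultrasonic time of flight.
    The echo travels out and back, so the one-way distance is half."""
    return SPEED_OF_SOUND * (t_receive - t_emit) / 2.0

def broadband_features(signal: np.ndarray, fs: float) -> dict:
    """Channel one: simple sound features (RMS level and dominant frequency)
    extracted from the multi-source broadband electrical signal."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    dominant = float(freqs[np.argmax(spectrum)])
    return {"rms": rms, "dominant_hz": dominant}

# A 0.01 s echo delay corresponds to about 1.7 m to the obstacle.
print(echo_distance(0.0, 0.01))

fs = 48_000.0
t = np.arange(0, 0.1, 1.0 / fs)
horn = 0.5 * np.sin(2 * np.pi * 440.0 * t)  # synthetic 440 Hz "horn" tone
print(broadband_features(horn, fs))          # dominant frequency is ~440 Hz
```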
 The full-audio feature recognition and classifier is connected to the full-audio data fusion and decision maker. The road condition information, driving information of other vehicles, geospatial information, obstacle distance information, and so on obtained above are transmitted to the full-audio data fusion and decision maker, and the control information returned by the full-audio data fusion and decision maker is acquired. The full-audio feature recognition and classifier classifies the returned full-audio control information, generates a sound control signal and an ultrasonic control signal respectively, and transmits them to the electro-acoustic generator.
 The full-audio data fusion and decision maker performs data fusion, logical analysis, and intelligent decision-making on the road condition data, other-vehicle driving data, geospatial data, and obstacle distance data received from the full-audio feature recognition and classifier. The resulting vehicle control information is transmitted to the vehicle's central data processing and control unit, and the vehicle control information returned by that unit is received. The full-audio data fusion and decision maker then generates the returned full-audio control information according to the returned vehicle control information and outputs it to the full-audio feature recognition and classifier.
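 The following sketch shows one possible shape of this information exchange in Python. The data fields, the toy decision rule, and the stubbed central-unit call are assumptions made for illustration; the embodiment does not fix a concrete interface.

```python
from dataclasses import dataclass

@dataclass
class FusedObservation:
    road_condition: str         # e.g. "wet", "icy", "clear"
    other_vehicle_speed: float  # m/s, recognised from sound features
    obstacle_distance: float    # m, from ultrasonic echo positioning

@dataclass
class VehicleControl:
    action: str                 # e.g. "brake", "steer_left", "keep"
    intensity: float            # normalised 0..1

def decide(obs: FusedObservation) -> VehicleControl:
    """Toy decision rule standing in for the fusion / decision logic."""
    if obs.obstacle_distance < 5.0:
        return VehicleControl("brake", 1.0)
    return VehicleControl("keep", 0.0)

def central_unit(cmd: VehicleControl) -> VehicleControl:
    """Stub for the vehicle central data processing and control unit,
    which may confirm or modify the decision before returning it."""
    return cmd

obs = FusedObservation("clear", 12.0, 3.2)
returned = central_unit(decide(obs))
print(returned)  # the returned control info drives the full-audio control signals
```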
 After the electro-acoustic generator receives the sound control signal and the ultrasonic control signal transmitted by the full-audio feature recognition and classifier, it converts them into a small sound signal and a small ultrasonic signal and outputs them to the speaker and the ultrasonic generator respectively. The vehicle identity information is modulated onto the small ultrasonic signal.
 The output of the electro-acoustic generator is connected to the speaker and the ultrasonic generator. The speaker receives the small sound signal generated by the electro-acoustic generator and, after amplification, emits a sound-wave signal perceivable by the human ear. It can be understood that this sound may take the form of a horn or a verbal warning such as "turning", "moving forward", "reversing", or "braking". The ultrasonic generator receives the small ultrasonic signal generated by the electro-acoustic generator and, after amplification, emits an ultrasonic signal of a specific frequency carrying the vehicle identity information.
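 The embodiment does not specify how the identity information is modulated onto the ultrasonic signal. As one hedged illustration, the sketch below encodes a vehicle ID as on-off keying of a 40 kHz ultrasonic carrier; the carrier frequency, bit rate, and encoding scheme are all assumptions.

```python
import numpy as np

def ook_ultrasound(vehicle_id: int, fs: float = 200_000.0,
                   carrier_hz: float = 40_000.0, bit_s: float = 0.001) -> np.ndarray:
    """Encode an 8-bit vehicle ID as on-off keying of an ultrasonic carrier.
    Illustrative assumption only; the embodiment fixes no modulation scheme."""
    bits = [(vehicle_id >> i) & 1 for i in range(8)]
    samples_per_bit = int(fs * bit_s)
    t = np.arange(samples_per_bit) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return np.concatenate([bit * carrier for bit in bits])

signal = ook_ultrasound(0b10110010)
print(signal.shape)  # (1600,) samples ready for amplification and emission
```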
 As a specific implementation of the control method of the full-audio perception system of the present invention, the information preprocessing of the full-audio feature recognition and classifier is as follows. As shown in Figure 4, the data obtained by the two types of sensors are marked as Type A (sound sensor) and Type B (ultrasonic echo sensor); the data obtained by the sound sensor at different times are denoted A1, A2, ..., An, and the data obtained by the ultrasonic echo sensor at different times are denoted B1, B2, ..., Bn. It can be understood that data of the same type form a multi-dimensional data space equivalent to the superposition of multiple two-dimensional data sets. As shown in Figure 4, the full-audio feature recognition and classifier performs classification preprocessing on the ultrasonic echo sensor and sound sensor data. Through the multiple sound data sets and multiple ultrasonic data sets accumulated over time, the sound sensors and ultrasonic echo sensors record the time-space information of the vehicle and the road conditions, which together form a space of vehicle motion data and attributes.
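 A minimal sketch of this Figure 4 preprocessing is given below, assuming each sensor frame arrives as a (sensor_type, timestamp, payload) tuple; the frame layout and names are illustrative assumptions.

```python
from collections import defaultdict

def preprocess(frames):
    """Group incoming frames into the labelled series A1..An (sound)
    and B1..Bn (ultrasonic), ordered by time."""
    data_space = defaultdict(list)   # {"A": [A1, A2, ...], "B": [B1, B2, ...]}
    for sensor_type, timestamp, payload in sorted(frames, key=lambda f: f[1]):
        data_space[sensor_type].append({"t": timestamp, "data": payload})
    return data_space

frames = [("A", 0.00, [0.1, 0.2]), ("B", 0.00, [3.2]),
          ("A", 0.05, [0.3, 0.1]), ("B", 0.05, [3.0])]
space = preprocess(frames)
print(len(space["A"]), len(space["B"]))  # 2 2: the A-series and B-series over time
```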
 The data fusion performed by the full-audio data fusion and decision maker is shown in Figure 5. The overlapping part of the data in region An and region Bn, the overlapping part of the data in region An and region An+1, and the overlapping part of the data in region Bn and region Bn+1 are recorded as redundant data (the shaded areas in Figure 5); the non-overlapping parts are recorded as complementary data. Redundant data is data of high reliability, complementary data is data of lower reliability, and conflicting data is a special kind of complementary data.
 The redundant data and complementary data are the raw data for the full-audio data fusion and for the decisions of the decision maker. Obviously, the more raw data obtained, the easier it is to reach a reliable and optimal decision.
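 The redundant/complementary split of Figure 5 can be sketched as follows, assuming each data set is represented by the set of space-time cells it covers; this cell representation is an assumption made only for illustration.

```python
def split_redundant_complementary(region_a: set, region_b: set):
    """Label cells covered by both data sets as redundant data and cells
    covered by only one set as complementary data, as in Figure 5."""
    redundant = region_a & region_b      # high reliability: confirmed twice
    complementary = region_a ^ region_b  # lower reliability: seen only once
    return redundant, complementary

# Cells covered by sound data set An and ultrasonic data set Bn (toy values).
an = {(0, 1), (0, 2), (1, 1)}
bn = {(0, 2), (1, 1), (2, 2)}
red, comp = split_redundant_complementary(an, bn)
print(red)   # redundant cells: (0, 2) and (1, 1)
print(comp)  # complementary cells: (0, 1) and (2, 2)
```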
 As shown in Figure 7, the sound sensor and ultrasonic echo sensor of this embodiment collect the full-audio information around the vehicle. The acoustic-electric converter converts the collected information into signals that the full-audio feature recognition and classifier can recognize; the classifier then performs feature extraction, recognition, and classification on these signals, and the classified data undergoes data fusion analysis to obtain a driving decision. The decision information is sent to the actuator for execution, and a full-audio control signal is issued; after signal classification and conversion, the full-audio information is emitted through the speaker and the ultrasonic generator.
 Full audio here means that the target signal frequency band covers the entire audio frequency range and the specific ultrasonic frequency range.
 As shown in Figure 8, the control method of the full-audio perception system of this embodiment includes the following steps:
 Start the loop, configure initial information;
 S11. Transmit ultrasonic signals and receive the returned ultrasonic echo signals;
 S12. Calculate the time interval and spatial position data according to the received ultrasonic echo signals to obtain the primary decision-making data of vehicle driving;
 S13. Transmit and receive sound signals;
 S14. From the received sound signals, recognize the distance information converted from the sound intensity within the time interval together with the voice information data, to obtain the secondary decision data for vehicle driving; it can be understood that, once the loop has started, the order of step S11 and step S13 is interchangeable;
 Judge whether this is the first time the secondary decision data has been obtained; if so, go directly to step S15. Otherwise, judge whether the previous secondary decision is consistent with the current secondary decision: if they are consistent, execute step S15; if not, calculate and compare the confidence probability of the previous secondary decision data and that of the current secondary decision data, and take the secondary decision with the higher confidence probability;
 Calculate the confidence probability of the primary decision data and judge whether it is the optimal decision;
 Judge whether the confidence probability of the secondary decision is higher than that of the primary decision; if so, execute the secondary decision, otherwise execute the primary decision, so as to achieve the optimal decision.
 Among them, the main process of calculating the confidence probability and the optimal decision is as follows (a code sketch of the overall decision loop is given after step S15 below).
 Let the confidence probability be $P_Z = 1 - \alpha = P(\theta_1 \le \theta \le \theta_2)$; then $1 - \alpha$ denotes the probability that the estimate is correct within the confidence interval $(\theta_1, \theta_2)$.
 Let $\sigma^2$ be the mean square error of the detected data and let $f(\theta)$ be the probability density of $\theta$; then the confidence probability within the confidence interval $(\theta_1, \theta_2)$ is $P_Z = \int_{\theta_1}^{\theta_2} f(\theta)\, d\theta$.
 Further, let $P_a(s, s') = P(s' \mid s, a)$ denote the probability of reaching the next state $s'$ when action $a$ is performed in state $s$.
 The Markov optimal decision process can be expressed as $\pi: S \to A$, where $\pi$ is the optimal decision (policy), $S$ is the finite state space, and $A$ is the decision space. The action generated in any state $s$ is expressed as $a = \pi(s)$; the optimal decision is then obtained by iterating continuously over the possible states $s$ and $s'$ until the accumulated (expected) reward $V_{i+1}(s)$ converges under the standard value-iteration update:
 $$V_{i+1}(s) = \max_a \sum_{s'} P_a(s, s')\bigl(R_a(s, s') + \gamma\, V_i(s')\bigr),$$
 where $R_a(s, s')$ is the reward for the transition and $\gamma$ is the discount factor.
 S15. Judge whether the primary decision and the secondary decision are consistent; if they are consistent, execute the decision; if not, return to step S13.
 After executing the decision, return to the initial configuration position and write new information.
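 The following Python sketch illustrates, under stated assumptions, the Figure 8 loop described above: the primary decision comes from ultrasonic echo data, the secondary decision from sound data, their confidence probabilities are compared, and the decision with the higher confidence is executed. The confidence model (a normal density integrated over the confidence interval) and all names are illustrative, not fixed by the embodiment.

```python
import math

def confidence_probability(theta: float, sigma: float,
                           theta1: float, theta2: float, n: int = 1000) -> float:
    """P_Z = integral of the density f(theta) over (theta1, theta2).
    A normal density around the measured theta is assumed for illustration."""
    def f(x):
        return math.exp(-0.5 * ((x - theta) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    h = (theta2 - theta1) / n
    return sum(f(theta1 + (k + 0.5) * h) for k in range(n)) * h  # midpoint rule

def conf(decision):
    label, theta, sigma, (t1, t2) = decision
    return confidence_probability(theta, sigma, t1, t2)

def control_loop_step(primary, secondary, prev_secondary=None):
    """One pass of the loop. Each decision is a (label, theta, sigma, interval)
    tuple; the decision with the higher confidence probability is executed."""
    # Keep the more confident of the previous and current secondary decisions.
    if prev_secondary is not None and prev_secondary[0] != secondary[0]:
        if conf(prev_secondary) > conf(secondary):
            secondary = prev_secondary
    # Compare the secondary decision against the primary decision (S15 path).
    return secondary if conf(secondary) > conf(primary) else primary

primary = ("brake", 4.8, 0.4, (4.0, 6.0))      # from ultrasonic echo positioning
secondary = ("keep", 12.0, 3.0, (10.0, 14.0))  # from recognised sound information
print(control_loop_step(primary, secondary))   # the primary decision wins here
```

 Similarly, the Markov optimal-decision iteration described above can be sketched as standard value iteration; the states, transition probabilities, and rewards below are toy values chosen only to make the sketch runnable.

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Iterate V_{i+1}(s) = max_a sum_{s'} P_a(s,s') (R_a(s,s') + gamma V_i(s'))."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2])
                            for s2 in states) for a in actions) for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

states, actions = ["near", "clear"], ["brake", "keep"]
P = {"brake": {"near": {"near": 0.2, "clear": 0.8}, "clear": {"near": 0.1, "clear": 0.9}},
     "keep":  {"near": {"near": 0.9, "clear": 0.1}, "clear": {"near": 0.3, "clear": 0.7}}}
R = {"brake": {"near": {"near": -1.0, "clear": 1.0}, "clear": {"near": -1.0, "clear": 0.5}},
     "keep":  {"near": {"near": -5.0, "clear": 1.0}, "clear": {"near": -1.0, "clear": 1.0}}}
print(value_iteration(states, actions, P, R))
```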
 The full-audio perception system of this embodiment adopts a structure of dual sound and ultrasonic positioning together with sound signal acquisition and recognition. Through the data fusion and decision-making method of the full-audio perception system, voice feature recognition technology is creatively applied to the transportation field, so that the semantic information emitted around the vehicle further improves the intelligence of unmanned driving.
 The full-audio perception system of this embodiment is mainly intended for intelligent driving vehicles; of course, it can also be applied to other vehicles.