[0021] In order to make the technical means, creative features, objects, and effects of the present invention easy to understand, the present invention is further described below in conjunction with specific embodiments.
[0022] As shown in Figure 1, the hardware of this embodiment comprises multiple robots together with odometers, cameras, wireless communication systems, and storage devices, all mounted on the robots. In the multi-robot system, each robot is regarded as an intelligent agent and carries an odometer, two cameras, wireless communication equipment, and a storage device. Each robot tracks its own position with its odometer, collects real-time images of the environment with its two cameras, and uses the wireless communication system to send its stored map information to companion robots while simultaneously receiving map information from them. Each robot fuses the map information obtained from the other robots until every individual robot holds a complete environmental map, which it then uses to search for the target.
[0023] As shown in Figure 2, the multi-robot joint target search method of the present invention, based on technology mimicking animal spatial cognition, comprises the following steps:
[0024] (1) First, judge whether the information is successfully matched. When performing a new target search task, the robot first matches the acquired information against the cognitive map information in its storage device. If the match succeeds, it calls the known map to carry out the target search and then proceeds to step (7); if the match fails, it proceeds to step (2).
[0025] (2) Establish the kinematics model and sensor model of each robot. Take the center of gravity of each mobile robot as the origin of its coordinate system to establish its kinematics model; the robot's state variables are $(x, y, \theta)^T$, where $(x, y)$ are the robot's coordinates in the plane coordinate system and $\theta$ is its heading angle. The two cameras are mounted transversely, and the midpoint between their optical centers is taken as the origin to establish the visual sensor model, whose state variables are $(x, y, z, x_l, y_l, x_r, y_r)^T$, where $(x, y, z)$ is a target point in space and $(x_l, y_l)$ and $(x_r, y_r)$ are the coordinates of the target point's projections on the left and right image planes, respectively.
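For illustration only, a minimal Python sketch of these two models follows; the differential-drive (unicycle) odometry update, the rectified-stereo assumption, and all function and parameter names are assumptions added here, not specified by the embodiment:

```python
import numpy as np

def odometry_update(state, v, omega, dt):
    """Advance the robot state (x, y, theta) one odometry step,
    assuming a unicycle (differential-drive) kinematics model."""
    x, y, theta = state
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta = (theta + omega * dt) % (2 * np.pi)
    return np.array([x, y, theta])

def stereo_triangulate(xl, yl, xr, f, baseline):
    """Recover a 3-D point (x, y, z) from projections (xl, yl) and (xr, yr)
    on a rectified stereo pair; the coordinate origin is the midpoint of the
    two optical centers, as in the sensor model above."""
    z = f * baseline / (xl - xr)        # depth from disparity
    x = z * (xl + xr) / (2.0 * f)       # lateral offset from the midpoint
    y = yl * z / f
    return np.array([x, y, z])
```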
[0026] (3) Complete the calibration and matching of the camera's internal parameters, then read the depth of the target object using the established visual sensor model to identify objects in the environment.
[0027] In this embodiment, the steps for calibrating the camera's internal parameters and matching images are as follows:
[0028] (3a) Parameter calibration. The calibration method is as follows: given n images of the template plane with m calibration points on the plane, an evaluation function can be established:
[0029] $$C = \sum_{i=1}^{n} \sum_{j=1}^{m} \left\| m_{ij} - m(A, R_i, t_i, M_j) \right\|^2$$
[0030] where $m_{ij}$ is the j-th image point in the i-th image, $R_i$ is the rotation matrix of the i-th image coordinate system, $t_i$ is the translation vector of the i-th image coordinate system, $M_j$ is the spatial coordinate of the j-th point, and $m(A, R_i, t_i, M_j)$ is the image point coordinate computed from these known quantities. The $A$, $R_i$, $t_i$, $M_j$ that minimize the evaluation function C are the optimal solution to this problem.
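As a hedged illustration, OpenCV's calibrateCamera performs exactly this kind of reprojection-error minimization over $A$, $R_i$, $t_i$; the checkerboard size and file names below are placeholders, not details from the embodiment:

```python
import cv2
import numpy as np

pattern = (9, 6)  # assumed m = 54 calibration points per template view
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # points M_j

obj_points, img_points = [], []
for path in ["view0.png", "view1.png"]:  # hypothetical template images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)      # template points M_j
        img_points.append(corners)   # observed image points m_ij

# Returns the RMS reprojection error (the minimized quantity, related to C),
# the intrinsic matrix A, distortion coefficients, and per-view R_i, t_i.
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```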
[0031] (3b) Image matching. The matching method is as follows: let the image to be matched be M×M and the template image N×N, and define the normalized correlation coefficient as:
[0032] $$r(u,v) = \frac{\sum_{x,y}\left[f(u+x, v+y) - \bar{f}_{u,v}\right]\left[t(x,y) - \bar{t}\right]}{\sqrt{\sum_{x,y}\left[f(u+x, v+y) - \bar{f}_{u,v}\right]^2 \sum_{x,y}\left[t(x,y) - \bar{t}\right]^2}}$$
[0033] where $u, v = 0, 1, \ldots, M-1$; $f(u+x, v+y)$ is the pixel gray value of the image to be matched at position $(u+x, v+y)$; $t(x, y)$ is the pixel gray value of the template image at position $(x, y)$; $\bar{f}_{u,v}$ is the average gray value of $f$ over the template-sized region at position $(u, v)$; and $\bar{t}$ is the average gray value of the template image. The values $r(u, v)$ constitute a correlation plane, which exhibits a peak, generally close to 1, at the matching position.
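OpenCV's TM_CCOEFF_NORMED method computes this normalized correlation coefficient directly; a minimal sketch (file names are placeholders):

```python
import cv2

image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # image to be matched
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # template image

# corr_plane holds r(u, v) for every placement; a peak near 1 marks the match.
corr_plane = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, peak, _, peak_loc = cv2.minMaxLoc(corr_plane)
print(f"peak correlation {peak:.3f} at {peak_loc}")
```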
[0034] (4) Localize each robot with the self-organizing attractor network algorithm model. Each robot localizes itself using the self-organizing attractor network algorithm model that mimics animal spatial cognition, constructs its own cognitive map from the environmental information it detects, and uses the map for positioning and path planning.
[0035] The self-organizing attractor network algorithm model mimicking animal spatial cognition in this embodiment is as follows:
[0036] (4a) The method for building a real-time map based on the self-organizing attractor network model is as follows. First, the images obtained by the cameras are processed to extract real-time environmental information, and a neural network is constructed according to the detection range. The plane space is discretized according to the recognition distance of the detector; each discrete point (neuron) is described by the 3-dimensional vector $(x, y, h_i^P)$, where $(x, y)$ are the geographic coordinates of the discrete point and $h_i^P$ is the activity value of the neuron in the self-organizing attractor network, calculated by the following formula:
[0037] $$\tau \frac{dh_i^P(t)}{dt} = -h_i^P(t) + \frac{\phi_0}{C^P} \sum_j \left(\omega_{ij}^{RC} - \omega^{INH}\right) r_j^P(t) + I_i^V + \frac{\phi_1}{C^{P \times HD \times FV}} \sum_{j,k,l} \omega_{ijkl}^{FV}\, r_j^P\, r_k^{HD}\, r_l^{FV}$$
[0038] where $r_j^P$ is the firing rate of place cell j, $r_k^{HD}$ is the firing rate of head-direction cell k, $r_l^{FV}$ is the firing rate of forward-velocity cell l, $\omega_{ij}^{RC}$ and $\omega_{ijkl}^{FV}$ are the corresponding connection weights, $\phi_0$, $\phi_1$, and $\omega^{INH}$ are constants, $C^P$ is the number of other nodes associated with the current node, $C^{P \times HD \times FV}$ is the number of all other neurons associated with the current node, $I_i^V$ is the visual input (i.e., the current position), and $\tau$ is the decay rate.
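A minimal numerical sketch of these dynamics using one forward-Euler step; the array shapes, the Euler discretization, and the einsum contraction are implementation assumptions, not prescribed by the embodiment:

```python
import numpy as np

def attractor_step(h, w_rc, w_inh, r_p, I_v, w_fv, r_hd, r_fv,
                   phi0, phi1, C_p, C_phf, tau, dt):
    """One Euler step of tau * dh_i/dt = -h_i + recurrent + visual + FV input."""
    recurrent = (phi0 / C_p) * (w_rc - w_inh) @ r_p
    # sum over j, k, l of w_ijkl^FV * r_j^P * r_k^HD * r_l^FV, for each neuron i
    fv_input = (phi1 / C_phf) * np.einsum('ijkl,j,k,l->i', w_fv, r_p, r_hd, r_fv)
    dh = (-h + recurrent + I_v + fv_input) / tau
    return h + dt * dh
```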
[0039] (4b) The influence of surrounding nodes on a given node in the self-organizing attractor network algorithm model, and the influence of other place cells, head-direction cells, and velocity cells on that node, are expressed by the weights $\omega_{ij}^{RC}$ and $\omega_{ijkl}^{FV}$, respectively. The increment $\delta\omega_{ij}^{RC}$ is calculated as follows:
[0040] $$\delta\omega_{ij}^{RC} = k\, \bar{r}_i^P\, \bar{r}_j^P$$
[0041] where $\bar{r}^P$ is the trace value of the place-cell firing rate, calculated by the following formula:
[0042] $$\bar{r}^P(t + \delta t) = (1 - \eta)\, r^P(t + \delta t) + \eta\, \bar{r}^P(t)$$
[0043] where $\eta$ is a value in [0, 1] that determines the proportions of the current firing rate and the trace at the previous moment. The increment $\delta\omega_{ijkl}^{FV}$ is calculated as follows:
[0044] $$\delta\omega_{ijkl}^{FV} = \tilde{k}\, r_i^P\, \bar{r}_j^P\, r_k^{HD}\, r_l^{FV}$$
[0045] where $\delta\omega_{ijkl}^{FV}$ is the change in synaptic weight, $r_i^P$ is the instantaneous firing rate of place cell i, $\bar{r}_j^P$ is the trace value of the firing rate of place cell j, $r_k^{HD}$ is the firing rate of head-direction cell k, $r_l^{FV}$ is the firing rate of forward-velocity cell l, and $\tilde{k}$ is the learning rate.
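A sketch of the trace update and both Hebbian weight increments; the numeric defaults for $\eta$, $k$, and $\tilde{k}$ are illustrative only:

```python
import numpy as np

def update_trace(r_bar, r, eta=0.5):
    """Firing-rate trace: r_bar(t + dt) = (1 - eta) * r(t + dt) + eta * r_bar(t)."""
    return (1.0 - eta) * r + eta * r_bar

def hebbian_updates(r_p, r_bar_p, r_hd, r_fv, k=0.01, k_tilde=0.01):
    """delta_w_rc[i, j] = k * rbar_i^P * rbar_j^P and
    delta_w_fv[i, j, k, l] = k_tilde * r_i^P * rbar_j^P * r_k^HD * r_l^FV."""
    d_w_rc = k * np.outer(r_bar_p, r_bar_p)
    d_w_fv = k_tilde * np.einsum('i,j,k,l->ijkl', r_p, r_bar_p, r_hd, r_fv)
    return d_w_rc, d_w_fv
```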
[0046] (4c) Calculate the dynamic activity value of each neuron according to the self-organizing attractor network algorithm model. The model ensures that a neuron's dynamic activity value is smallest where there are obstacles or other robots and largest at the target position, so the robot can compute an optimal formation path in real time from the dynamic activity values and navigate accordingly. The specific process is as follows:
[0047] $$(\theta_r)_{t+1} = \mathrm{angle}(p_r, p_n),$$
[0048] $$p_n \Leftarrow s_{p_n} = \max\{s_j,\ j = 1, 2, \ldots, k\}$$
[0049] where $(\theta_r)_{t+1}$ is the heading angle of the robot's next move, $\mathrm{angle}(p_r, p_n)$ is the formula for the angle between the robot's current position $p_r$ and neuron $p_n$, and $p_n$ is the neuron with the largest dynamic activity value among all neurons within the robot's detection range.
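A sketch of this steering rule: pick the in-range neuron with the largest dynamic activity and head toward it (the range test and array layout are assumed implementation details):

```python
import numpy as np

def next_heading(robot_pos, neuron_pos, activity, detect_range):
    """Return (theta_r)_{t+1} = angle(p_r, p_n), where p_n is the neuron with
    the largest activity among those within the robot's detection range."""
    dists = np.linalg.norm(neuron_pos - robot_pos, axis=1)
    candidates = np.flatnonzero(dists <= detect_range)
    best = candidates[np.argmax(activity[candidates])]
    dx, dy = neuron_pos[best] - robot_pos
    return np.arctan2(dy, dx)
```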
[0050] (4d) As the robot moves, the environmental information it detects changes continuously. The activity value of each neuron in the self-organizing neural network is updated with this real-time information, so the environmental map is continuously updated and in turn guides the robot's movement. In this way, the robot's trajectory is an optimal path that automatically avoids obstacles, does not collide with other robots, and quickly reaches the search target.
[0051] As shown in Figure 4, the activity-value formula of the self-organizing attractor network used in the present invention ensures that the neuron activity value is largest at the target position and smallest at obstacle positions. Under this model, the robot's trajectory is therefore an optimal path that automatically avoids obstacles, does not collide with other robots, and quickly reaches the search target.
[0052] (5) Build the environment map and fuse the multi-robot maps. Each robot performs its search task independently while broadcasting its own map information, namely the activity value of each place cell in its self-organizing attractor network. Each robot fuses the map information obtained from the other robots until every single robot holds a complete environmental map, which is saved in the robot's storage device.
[0053] As shown in Figure 3, the method of the present invention for fusing the maps of multiple robots comprises the following steps:
[0054] (5a) First perform map segmentation: each robot divides its detected map into blocks, and each block becomes an independent individual, which removes the difficulty of map matching during clustering;
[0055] (5b) Map clustering. Map clustering, and hence map matching, is realized through a self-organizing map (SOM). The SOM is a powerful neural network model that can detect the internal correlations of its input vectors and classify them according to their similarity. The specific algorithm is as follows:
[0056] Consider an input sample $x(k) \in \mathbb{R}^2$, where k is the sample index, and let the weight of the i-th neuron be $\omega_i(k) \in \mathbb{R}^2$. Whenever a new input arrives, the neuron weights are updated; the specific weight update iteration formula is:
[0057] $$\omega_i(k+1) = \omega_i(k) + h_i(k)\left(x(k) - \omega_i(k)\right)$$
[0058] where $h_i(k)$ is the neighborhood function, and the neuron closest to the input vector is called the winner. Through continuous iteration, the SOM neural network realizes automatic clustering of similar maps, i.e., map matching.
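A minimal SOM iteration over map-block feature vectors in $\mathbb{R}^2$; the Gaussian neighborhood and the learning-rate value are common choices assumed here, not taken from the patent:

```python
import numpy as np

def som_step(weights, x, sigma=1.0, lr=0.1):
    """One SOM update: find the winner (closest neuron), then pull every
    neuron toward the sample x in proportion to the neighborhood h_i(k)."""
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    grid = np.arange(len(weights))
    h = lr * np.exp(-((grid - winner) ** 2) / (2 * sigma ** 2))  # h_i(k)
    return weights + h[:, None] * (x - weights)
```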
[0059] (5c) Then adjust the clustered maps and apply the Radon transform to realize map fusion. For a given map m(x, y), the Radon transform along the radial line at angle θ is defined as
[0060] $$r_\theta(x') = \int_{-\infty}^{\infty} m(x'\cos\theta - y'\sin\theta,\ x'\sin\theta + y'\cos\theta)\, dy'$$
[0061] where
[0062] $$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$$
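One hedged way to apply this transform for fusion: compare the angular energy profiles of the two maps' Radon transforms to recover their relative rotation. The sketch assumes scikit-image's radon and a correlation-based angle search, neither of which is specified by the embodiment:

```python
import numpy as np
from skimage.transform import radon

def relative_rotation(map1, map2, angles=np.arange(180.0)):
    """Estimate the rotation between two grid maps from the per-angle
    energy of their Radon transforms r_theta(x')."""
    e1 = (radon(map1, theta=angles) ** 2).sum(axis=0)
    e2 = (radon(map2, theta=angles) ** 2).sum(axis=0)
    # Circular cross-correlation of the profiles; the peak is the offset.
    corr = np.real(np.fft.ifft(np.fft.fft(e1) * np.conj(np.fft.fft(e2))))
    return angles[np.argmax(corr)]
```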
[0063] (5d) After the maps have been relatively translated, the result is verified. This step eliminates false matches and selects the best result among several candidate maps. The verification is based on the convergence performance index J: the smaller J is, the better the map matching. J is calculated as follows:
[0064] $$J = \sum_{i=1}^{n} \left\| p_1\{i\} - p_2\{i\} \right\|^2$$
[0065] where n is the number of clusters, and $p_1\{i\}$ and $p_2\{i\}$ are the corresponding cluster points of map 1 and map 2, respectively. J is the sum of squared Euclidean distances between corresponding matching points.
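A short sketch of the verification index, assuming the SOM step has already paired the cluster points (the sample centroids are hypothetical):

```python
import numpy as np

def convergence_index(p1, p2):
    """J = sum of squared Euclidean distances between corresponding cluster
    points of map 1 and map 2; smaller J means a better match."""
    return float(np.sum(np.linalg.norm(p1 - p2, axis=1) ** 2))

# Hypothetical matched cluster centroids from two partial maps:
p1 = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 3.0]])
p2 = np.array([[0.1, 0.0], [2.0, 1.2], [3.9, 3.0]])
print(convergence_index(p1, p2))  # small J -> accept the fusion
```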
[0066] (6) Navigation and target search. The robot uses the complete environment map to search for the target, plans the best path to reach it, and cooperates with the other robots to complete the target search task in the unknown environment;
[0067] (7) Determine whether the task is completed. If the task is completed, the search ends; otherwise, return to step (2) and repeat the map construction and target search process.
[0068] When performing a new target search task, the robot first matches the acquired information against the cognitive map information in its storage device. If the match succeeds, it directly calls the corresponding path for target search and navigation; otherwise, the robot repeats the map construction and target search process. This greatly improves search efficiency.
[0069] The present invention uses binocular vision to perceive unknown environmental information automatically, without any artificial landmarks. Its self-organizing attractor network algorithm model broadens the application range of bionic technology, enabling simultaneous localization and map building by robots in unknown environments. Multiple robots build maps separately and then fuse all of the maps obtained, improving the accuracy and efficiency of map construction. During navigation and target search, a robot that re-enters the same environment can directly call its existing cognitive map for navigation, greatly improving the working efficiency of the system. The invention is especially suitable for areas that are dangerous or inaccessible to humans, and has high practical value.
[0070] The basic principles, main features, and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited by the above embodiments, which, together with the description, merely illustrate its principles. Various changes and improvements may be made to the present invention without departing from its spirit and scope, and such variations and improvements fall within the scope of the claimed invention. The protection scope of the present invention is defined by the appended claims and their equivalents.