[0042] In order to more clearly illustrate the technical solutions of the present invention, specific implementations will be described below with reference to the accompanying drawings. Obviously, the following description covers only some embodiments of the present invention; those of ordinary skill in the art can, without creative work, obtain other drawings and other implementations based on these.
[0043] The invention provides a high-precision, low-cost, safe and reliable autonomous landing system for drones based on a two-dimensional code containing authorization information, using a camera, an inertial sensor and a GNSS receiver carried by the drone. As shown in Figure 1, the system includes drone 1, airborne camera 2, inertial measurement unit (IMU) 3, GNSS positioning device 4, onboard computer 5, and two-dimensional code 6.
[0044] The drone 1 can be a rotary-wing drone. The on-board camera 2 is installed under the drone 1 with the lens facing downwards, so that the Z axis of the camera coordinate system is parallel to the central axis of the rotary-wing drone 1.
[0045] The IMU 3, the GNSS positioning device 4, and the onboard computer 5 are all installed on top of the drone. The IMU 3 measures the three-axis angular velocity and acceleration of the drone, and the GNSS receiver 4 performs satellite positioning to determine the geodetic coordinates of the drone. The IMU 3 and the GNSS positioning device 4 are collectively called the INS/GNSS combined measurement system.
[0046] The INS/GNSS combined measurement system of the present invention includes, but is not limited to, an inertial measurement unit, a GNSS receiver, and a GNSS antenna. It also covers any integrated navigation system with INS/GNSS at its core fused with one or more of an odometer, a magnetometer, and a barometric altitude sensor.
[0047] The INS/GNSS combined measurement system of the present invention is not limited to the "split" integrated navigation system used in this embodiment, in which the components are not integrated in the same housing; it may also be an "integrated" INS/GNSS system in which the GNSS receiver and the INS are integrated together. The onboard computer 5 processes sensor data and sends instructions to the drone controller. The onboard camera 2, the inertial measurement unit 3 and the GNSS positioning device 4 are each connected to the onboard computer 5.
[0048] The two-dimensional code 6 is a two-dimensional code carrying authorization information, set on the landing point. It can be posted on a landing point with known geodetic coordinates in some physical medium, or displayed on the landing point by a display device (such as a liquid crystal screen).
[0049] Preferably:
[0050] Rotor drones include, but are not limited to, drones whose motor speed can be controlled directly at the lowest level, as well as well-packaged integrated drones controlled through an open SDK in the form of commands; they include, but are not limited to, quadrotor drones and, for example, six-rotor drones.
[0051] The camera is installed under the drone body with the lens facing downwards. It may be any of various forms of image acquisition device, including fisheye cameras, global shutter cameras, and rolling shutter cameras.
[0052] The GNSS positioning module, inertial measurement unit and onboard computer are installed on the drone.
[0053] Airborne computers include but are not limited to ARM development boards equipped with operating systems (such as FPGA+ARM architecture chips).
[0054] A black and white two-dimensional code containing authorization information is posted on the ground or platform with known geodetic coordinates.
[0055] The shape of the QR code is not fixed: it can be rectangular, square, circular, etc., with a known size. It can be regarded as composed of several black and white units. A circular QR code is divided by nested rings into sectors of equal central angle.
[0056] To avoid central symmetry of the image and the resulting ambiguity in the direction definition, the black squares must be chosen so that the final two-dimensional code is not a centrally symmetric figure; an axisymmetric figure is acceptable. The outer ring is designed in black and the inner body in white. A fixed number m of squares can be selected and filled in black, corresponding to different coded information.
[0057] From the size of the QR code, the number M of possible coding schemes can be calculated. The M types of QR codes are assigned different numbers and other authorization information in a database, so that when the drone recognizes one of these codes during landing and successfully compares it against the database, it confirms that the area can be landed on. Conversely, if the identified two-dimensional code does not contain authorization information or does not match the database, landing is refused. In a specific implementation, the authorization information may preferably include, but is not limited to: the actual physical size of the QR code; the geometric specification of the QR code (such as the number of squares); the number of the QR code in the database; the action of the drone after landing (for example, return, or fly to the next target point); and an authorized landing password. For example, the authorized landing password means that when the drone delivers goods to the customer's home, the customer needs to enter the authorization password corresponding to the QR code to open the cargo box and take out the goods. The medium used to display the QR code includes but is not limited to ordinary paper (also glass, oily paper, etc.).
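For illustration only, the sketch below shows one possible way to organize an authorization record of the kind described above; every field name and value is hypothetical, since the text does not prescribe a storage format.

```python
# Hypothetical layout of one authorization record in the QR code database;
# field names and values are illustrative, not prescribed by the text.
qr_database = {
    17: {                           # number of this QR code in the database
        "size_m": 0.40,             # actual physical side length, in metres
        "grid_n": 8,                # geometric specification: N x N squares
        "action": "return",         # drone action after landing: "return" or "next_point"
        "landing_password": "4729", # authorization password for opening the cargo box
    },
}

def is_authorized(code_id):
    # Landing is permitted only for codes that match an entry in the database.
    return code_id in qr_database
```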
[0058] The QR code information database is stored in the drone's onboard computer or on a back-end server. If it is stored on the back-end server, the drone and the server need to communicate remotely to determine whether the QR code is successfully matched.
[0059] In the embodiment, the two-dimensional code 6 is posted on a landing point with known geodetic coordinates. Taking a square two-dimensional code as an example, the code is composed of N×N black and white squares, with a black surrounding ring. Following the asymmetry principle, black squares are filled in appropriately, and the QR codes formed by different filling schemes are assigned different authorization information, with the coding information stored correspondingly in the QR code information database, as shown in Figure 2.
[0060] The embodiment of the present invention provides an autonomous drone landing method assisted by a two-dimensional code containing authorization information and by inertial navigation. After the drone moves to the vicinity of the landing point based on the GNSS/INS combined navigation result, it starts using the camera to collect images and search for the QR code: the UAV takes the current hovering point as the center of a circle, flies in a circle of suitable radius at a fixed angular velocity, and gradually increases the flying radius until the QR code is found. In this way, the position of the drone is first determined by GNSS/INS integrated navigation; after reaching the target point, positioning and attitude are determined based on the pre-deployed two-dimensional code and inertial navigation to assist the drone to land accurately.
[0061] Including the following steps:
[0062] 1. Calibrate the internal parameters of the airborne camera to obtain the camera's intrinsic parameter matrix and distortion coefficients;
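Step 1 can be carried out with any standard calibration routine. A minimal sketch using OpenCV's chessboard functions follows; the 9×6 board, 25 mm square size and "calib/" image folder are assumptions for illustration, not requirements of the method.

```python
import glob
import cv2
import numpy as np

pattern, square = (9, 6), 0.025   # assumed chessboard inner corners and square size (m)
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):             # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]                       # (width, height)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(obj)
        img_pts.append(corners)

# K is the intrinsic parameter matrix and dist the distortion coefficients of step 1.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
```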
[0063] 2. Use GNSS static positioning technology to determine the geodetic coordinates of the landing point of the drone;
[0064] 3. Establish a QR code database according to the preset QR code size. Taking a square QR code as an example, the database contains the physical size of the QR code, the coordinates of its four corners in the QR code's local coordinate system, the total number of squares, and the number of black squares in the middle area; each type of code corresponds to different authorization information. In the database, an (N−2)×(N−2) two-dimensional matrix represents the encoding information of the two-dimensional code (N being the number of squares in an entire row or column, the surrounding ring being black), where a black square corresponds to a matrix element value of 0 and a white square to a value of 1;
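A minimal sketch of such a database entry, assuming N = 8 for illustration: the inner (N−2)×(N−2) matrix uses 0 for black and 1 for white as specified above, and the asymmetry requirement of paragraph [0056] is checked explicitly.

```python
import numpy as np

N = 8        # squares per row/column, including the black border (assumed value)
s = 0.40     # physical side length in metres (assumed value)

# Inner (N-2) x (N-2) encoding matrix: 0 = black square, 1 = white square.
code_matrix = np.ones((N - 2, N - 2), dtype=np.uint8)
code_matrix[0, 1] = 0    # hypothetical black fills, chosen so the
code_matrix[2, 4] = 0    # resulting pattern is not centrally symmetric
code_matrix[5, 3] = 0

# The code must not be centrally symmetric, or its orientation is ambiguous.
assert not np.array_equal(code_matrix, np.rot90(code_matrix, 2))

# Corner coordinates in the code's local frame (origin at the centre, z = 0).
corners_local = np.array([[-s / 2,  s / 2, 0], [ s / 2,  s / 2, 0],
                          [ s / 2, -s / 2, 0], [-s / 2, -s / 2, 0]])
```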
[0065] 4. Post the QR code on the landing point of the drone so that the center of the QR code coincides with the landing point;
[0066] 5. Send the geodetic coordinates of the landing point to the UAV, and the UAV will automatically take off and fly to the landing point. The specific steps are as follows:
[0067] 51. The UAV relies on GNSS/INS integrated navigation to obtain its position;
[0068] 52. The drone's onboard computer calculates the coordinates of the to-be-landed point in the drone's navigation coordinate system according to the difference between the current position of the drone and the geodetic coordinates of the point to be landed;
[0069] 53. The onboard computer of the UAV calculates the motor control quantity according to the coordinates and sends the control signal to the motor to continuously reduce the deviation until the UAV stabilizes near the landing point;
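Step 52 amounts to differencing two geodetic positions in a local navigation frame. The exact on-board conversion is not specified in the text; a common small-area approximation on the WGS-84 ellipsoid is sketched below.

```python
import numpy as np

A, F = 6378137.0, 1 / 298.257223563   # WGS-84 semi-major axis and flattening
E2 = F * (2 - F)                      # first eccentricity squared

def geodetic_to_ned(lat, lon, h, lat0, lon0, h0):
    """Approximate NED offset of (lat, lon, h) from the reference point
    (lat0, lon0, h0); angles in radians. Adequate over the short distances
    of the landing approach."""
    s0 = np.sin(lat0)
    rn = A / np.sqrt(1 - E2 * s0 ** 2)           # prime-vertical radius
    rm = rn * (1 - E2) / (1 - E2 * s0 ** 2)      # meridian radius
    north = (lat - lat0) * (rm + h0)
    east = (lon - lon0) * (rn + h0) * np.cos(lat0)
    down = h0 - h
    return np.array([north, east, down])
```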
[0070] 6. The drone uses the vision module to search for the QR code. Depending on the specific situation, there are two search modes:
[0071] (1) Surround search mode, the specific steps are as follows:
[0072] 61. The UAV takes the current hovering point as the center of the circle, the appropriate length as the radius, and the appropriate angular velocity to fly around;
[0073] 62. During the flight, the drone turns on the camera, performs morphological processing on the images collected at a fixed frequency to eliminate spots, and then performs grayscale and binarization;
[0074] 63. Extract edge contour information from the binarized image to remove unreasonable contours;
[0075] 64. Perform polygon fitting on the remaining contours, eliminate unreasonable polygon contours, and correct the remaining quadrilateral contours according to the corresponding two-dimensional code corner plane coordinates;
[0076] 65. Extract the grayscale information of the corrected contour area; if a match can be found in the two-dimensional code database, the two-dimensional code is deemed found. The matching method is to divide the corrected contour into N×N small squares: if more than half of the grayscale information of a small square is black (or white), the square is regarded as black (or white). Calculate the Hamming distance between the identified two-dimensional code and each code in the database; if matching succeeds, read the authorization information to confirm that the two-dimensional code has been found (see the sketch following step 66).
[0077] 66. If the matching fails, increase the radius and repeat step 61 until the QR code is found.
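A condensed sketch of steps 62-65 using OpenCV is given below. The thresholds, the contour-area cutoff and the cell size are illustrative assumptions; the corner ordering of each quadrilateral must be made consistent before the perspective transform.

```python
import cv2
import numpy as np

def find_qr_candidates(frame_bgr):
    """Steps 62-64: denoise, binarize, extract contours, keep quadrilaterals."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx) \
                and cv2.contourArea(approx) > 400:       # assumed area cutoff
            quads.append(approx.reshape(4, 2).astype(np.float32))
    return binary, quads

def decode_candidate(binary, quad, n=8):
    """Step 65: rectify the quadrilateral, then majority-vote each N x N cell."""
    cell = 64                                             # assumed pixels per cell
    side = cell * n
    dst = np.array([[0, 0], [side, 0], [side, side], [0, side]], np.float32)
    H = cv2.getPerspectiveTransform(quad, dst)
    patch = cv2.warpPerspective(binary, H, (side, side))
    cells = patch.reshape(n, cell, n, cell).mean(axis=(1, 3))
    return (cells > 127).astype(np.uint8)                 # 1 = white, 0 = black

def hamming(a, b):
    """Hamming distance between a decoded grid and a database code."""
    return int(np.sum(a != b))
```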
[0078] (2) Ascending search mode, the specific steps are as follows:
[0079] 61. The UAV takes the current hovering point as the starting point and rises vertically at an appropriate speed;
[0080] Because the camera points vertically downwards with a fixed viewing angle, the ground area covered after the drone rises increases, and, provided the camera resolution is sufficient, the probability of a successful QR code search increases. The method is therefore feasible in principle; steps analogous to steps 62-65 of mode (1) are then performed.
[0081] 66. If the QR code is not found, the drone continues to rise until the QR code is found.
[0082] The choice between the above two search modes should be made according to the camera resolution, pixel size, focal length, and two-dimensional code specifications when implementing the method of the present invention.
[0083] After the QR code has been identified, the drone determines, based on the combined results of vision and inertial navigation, the position of the QR code's center point in the drone body coordinate system and the attitude of the drone in the QR code's local coordinate system. These serve as inputs to the UAV controller; the UAV's pose is adjusted according to the controller output until the deviations from the landing point and landing attitude stabilize within preset thresholds, and the drone then starts to land automatically.
[0084] The combination of vision and inertial navigation requires extracting the corner points of the QR code from the image. There are two ways to process the images collected by the camera to extract the four corner points of the QR code:
[0085] The first is to perform binarization and contour extraction on each collected image separately, and then compare the area within each quadrilateral contour with the QR code database. The second is to describe the four corner points with descriptors after the QR code is matched for the first time; thereafter, binarization and contour extraction are no longer performed. Instead, image feature points are extracted and described with the same descriptor, then matched against the previous frame's two-dimensional code corner points. This yields the corresponding corner coordinates and descriptors in the current image, against which the next frame is matched in turn.
[0086] In a specific implementation, usable descriptor types include DoG (Difference of Gaussians), BRIEF (Binary Robust Independent Elementary Features), SIFT (Scale-Invariant Feature Transform), etc.
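A sketch of the second approach, tracking the four corner points between consecutive frames by descriptor matching. ORB is used here simply as a readily available binary descriptor standing in for the BRIEF/SIFT options named above; it is not mandated by the text.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def track_corners(prev_gray, prev_corners, cur_gray):
    """Describe the previous corner locations, then match them into the
    current frame; returns {previous corner index: current pixel position}."""
    prev_kp = [cv2.KeyPoint(float(x), float(y), 31) for x, y in prev_corners]
    prev_kp, prev_desc = orb.compute(prev_gray, prev_kp)
    cur_kp, cur_desc = orb.detectAndCompute(cur_gray, None)
    if prev_desc is None or cur_desc is None:
        return None
    matches = matcher.match(prev_desc, cur_desc)
    return {m.queryIdx: cur_kp[m.trainIdx].pt for m in matches}
```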
[0087] The vision and inertial navigation integrated navigation method includes two modes: loose combination and tight combination. In the loose combination, the camera determines its own geodetic coordinates and attitude from the geodetic coordinates of the two-dimensional code's corner points and the camera's pose relative to the code's local coordinate system, while the inertial navigation computes the position and attitude of the IMU from accelerometer and gyroscope data and, via the known installation relationship between the IMU and the camera, converts them to the camera's position and attitude. The difference between the two is used as the Kalman filter observation to update the estimated position and attitude errors, the IMU zero-offset errors and other parameters, yielding an updated positioning result, which is then converted to the UAV's pose relative to the QR code's local coordinate system. In the tight combination, after the camera observes the feature points (i.e., the corner points of the two-dimensional code), it does not perform positioning alone; instead, it is combined with the inertial navigation at the observation level to determine the camera's pose relative to the QR code's local coordinate system.
[0088] The specific steps of the embodiment are as follows:
[0089] First, obtain the geodetic coordinates of the landing point through a high-precision map or GNSS static positioning. Lay out the QR code so that its center coincides with the landing point and its local coordinate system coincides with the local navigation coordinate system (NED: X axis pointing north, Y axis pointing east, Z axis perpendicular to the XY plane and pointing toward the center of the earth). This determines the coordinates of the four corner points of the QR code in the local coordinate system.
[0090] The drone then takes off automatically after receiving the geodetic coordinates of the landing point. By combining the airborne GNSS positioning device with the IMU measurement data, the drone computes the position deviation between itself and the target point, converts the deviation into control inputs for the flight control system, and flies to the target point at a steady speed. The INS/GNSS integrated navigation solution may be a loose combination, a tight combination, or a PPK tight combination. In the loose combination mode, the GNSS solution methods include real-time kinematic differential positioning (RTK), precise point positioning (PPP), single point positioning (SPP) and other modes. In this specific implementation, the real-time single-point-positioning loose combination solution is preferred.
[0091] When the drone reaches a preset threshold near the target point and stabilizes for a period of time, it starts to search for the QR code in the field of view using monocular vision. Taking the circling search as an example: if the drone does not detect the QR code at the current position, it flies in circles around the target point, preferably at a fixed angular velocity, starting from a preset minimum search radius and gradually increasing the radius. The fixed angular velocity gives more stable control of the drone and makes the collected images more uniform. To improve search efficiency and avoid overlapping coverage while circling, the minimum search radius and the radius increment can be determined from the drone's current height above ground and the camera's intrinsic parameters. The coordinates of pixels in the image are expressed in the pixel coordinate system, whose origin is at the upper left corner of the image, with the u axis pointing right and the v axis pointing down. Then
[0092] $$Z\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = KP = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \quad (1)$$
[0093] where K is the camera's intrinsic parameter matrix, $f_x$, $f_y$ are the focal lengths, and $c_x$, $c_y$ represent the translation from the origin of the camera's imaging plane to the origin of the pixel coordinate system. $P = [X\ Y\ Z]^{\mathrm T}$ represents the coordinates, in the camera coordinate system, of the actual point corresponding to the pixel. From the drone's current height above the ground, the current height of the camera can be obtained, and the positions of the edge pixels of the camera image in the camera coordinate system can then be calculated from the above formula. Taking the smaller of the X and Y extents as r and the size of the QR code as m, the drone takes its current position as the center of the circle, 2r−m as the minimum search radius, and 2r−m as the radius increment: it first searches along a circle of radius 2r−m; if unsuccessful, the radius is increased to (2r−m)+(2r−m), then to (2r−m)+(2r−m)+(2r−m), and so on. While the drone is circling, the onboard computer performs morphological processing on the camera images to reduce noise, converts them to grayscale and binarizes them, then extracts contour information and performs polygon fitting on the extracted contours. Contours that do not meet the requirements on size and inter-point distance are eliminated, leaving the candidate two-dimensional code contours. Each candidate contour is rectified by a four-point perspective transformation, the two-dimensional code is decoded by extracting the pixel information within the contour area, and the information is compared with the database. If the comparison succeeds, the drone confirms whether to land and the post-landing drone action according to the corresponding authorization information. The image processing flow is shown in Figure 3.
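Under the pinhole model of equation (1), the ground footprint half-width r and the 2r−m radius schedule can be computed as sketched below; the function names and the limit handling are illustrative.

```python
import numpy as np

def footprint_half_width(K, image_size, height):
    """Half-extent of the ground footprint of a nadir-pointing camera at the
    given height; the smaller of the X and Y extents is the r used above."""
    w, h = image_size
    fx, fy = K[0, 0], K[1, 1]
    x_half = height * (w / 2) / fx   # from u = fx * X / Z + cx with Z = height
    y_half = height * (h / 2) / fy
    return min(x_half, y_half)

def search_radii(r, m, limit):
    """Radius schedule: start at 2r - m and grow by 2r - m per lap, up to limit."""
    step = 2 * r - m
    radius = step
    while radius <= limit:
        yield radius
        radius += step
```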
[0094] If the QR code has not been found when the search radius reaches its limit, the initial minimum search radius is halved relative to the previous iteration while the increment remains unchanged, and the search is repeated: with the drone's current position as the center, it searches with radius (2r−m)/2; if unsuccessful, the radius is increased to (2r−m)/2+(2r−m); and so on, iterating until the QR code is found. In a specific implementation, the limit value can be preset as needed.
[0095] After the UAV has found the QR code, it starts the combined visual and inertial navigation (Visual/INS) positioning module. The combined positioning mode includes loose combination and tight combination; either may be chosen in a specific implementation. In this embodiment, the loose combination mode is selected and the corresponding observation equations are derived. The steps are as follows:
[0096] 1) Binarize the noise-reduced camera image, extract the edge contour information from the binary image, store all extracted contours in a vector and perform polygon fitting on them one by one, eliminating unreasonable contours such as non-quadrilaterals, concave polygons, and contours whose perimeter is too short or too long.
[0097] 2) For each remaining contour, use the correspondence between the pixel coordinates of its four corner points and the known local coordinates of the four corner points of the two-dimensional code to obtain the transformation matrix of the perspective projection, and then use this matrix to transform the image area containing the contour into a square image.
[0098] 3) Compare the pixel information in the image with the two-dimensional codes in the database to identify the correct contour, and then refine the pixel coordinates of the corner points to sub-pixel accuracy.
[0099] 4) Finally, using the n-point perspective projection (PnP, Perspective-n-Point) method, obtain the rotation matrix and translation vector of the QR code's local coordinate system relative to the camera coordinate system, and from them the camera's geodetic coordinates and its attitude relative to the code's local coordinate system. Once the camera has estimated position and attitude, these can be combined with the position and attitude estimated by the inertial navigation to correct errors such as the zero offsets of the inertial components, yielding a more robust navigation and positioning result.
[0100] 5) Convert the combined camera positioning result and the position deviation of the QR code's center into the position of the code's center in the drone coordinate system. This position then serves as the input of the drone flight controller, whose output continuously controls the motor speeds until the position deviation stabilizes within a small threshold; the landing conditions are then considered satisfied and automatic landing begins. The overall flow of the system is shown in Figure 4.
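Steps 2)-4) map closely onto OpenCV's sub-pixel refinement and PnP routines; a hedged sketch follows, with the refinement window and termination criteria chosen arbitrarily.

```python
import cv2
import numpy as np

def qr_pose(corners_px, corners_local, K, dist, gray):
    """Refine the four corner pixels to sub-pixel accuracy, then solve PnP for
    the pose of the QR code's local frame w relative to the camera frame c."""
    corners_px = cv2.cornerSubPix(
        gray, corners_px.reshape(-1, 1, 2).astype(np.float32), (5, 5), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    ok, rvec, tvec = cv2.solvePnP(corners_local, corners_px, K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation R_w^c from the QR local frame to c
    return R, tvec               # tvec = t_w^c: the QR centre in the camera frame
```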
[0101] $$P_c = R_w^c P_w + t_w^c \quad (2)$$
[0102] In the above formula, $R_w^c$ and $t_w^c$ respectively represent the rotation matrix and translation vector from the local coordinate system w of the QR code to the camera coordinate system c, and $P_w$, $P_c$ respectively denote the position of a corner point in the QR code's local coordinate system and in the camera coordinate system. Letting $P_w = 0$, the coordinates of the origin of the QR code's local coordinate system (i.e., the center point of the QR code) in the camera coordinate system are obtained as $t_w^c$. Rotating the drone body coordinate system b by 90° clockwise gives the intermediate coordinate system b′; then
[0103] $$P_c = R_{b'}^c P_{b'} + t_{b'}^c, \qquad P_{b'} = R_b^{b'} P_b \quad (3)$$
[0104] where $R_{b'}^c$ and $R_b^{b'}$ respectively represent the rotation matrix from the intermediate coordinate system b′ to the camera coordinate system c and from the drone body coordinate system b to b′ (the former being the identity matrix), and $t_{b'}^c$ represents the translation vector from frame b′ to frame c, which is a constant. And,
[0105] $$R_{b'}^c = I \quad (4)$$
[0106] Among them, I represents the identity matrix;
[0107] Then there is,
[0108] $$P_{b'} = \left(R_{b'}^c\right)^{-1}\left(P_c - t_{b'}^c\right) = t_w^c - t_{b'}^c \quad (5)$$
[0109] where $\left(R_{b'}^c\right)^{-1}$ denotes the inverse of $R_{b'}^c$.
[0110] Therefore, the position of the center point of the QR code in the drone coordinate system is:
[0111] $$P_b = \left(R_b^{b'}\right)^{-1} P_{b'} = \left(R_b^{b'}\right)^{-1}\left(t_w^c - t_{b'}^c\right) \quad (6)$$
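Expressed in code, equations (2)-(6) reduce to two constant transforms applied to t_w^c. The specific 90° rotation matrix and camera offset below are assumptions standing in for the actual installation values.

```python
import numpy as np

# b -> b': assumed 90 degree clockwise rotation about the body z-axis.
R_b_bp = np.array([[0.0, 1.0, 0.0],
                   [-1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])

# Constant translation t_{b'}^c from frame b' to c (mounting offset, metres); assumed.
t_bp_c = np.array([0.0, 0.0, -0.10])

def qr_center_in_body(t_w_c):
    """Position of the QR code centre in the body frame b, given its position
    t_w^c in the camera frame; R_{b'}^c = I per equation (4)."""
    p_bp = t_w_c - t_bp_c        # equation (5)
    return R_b_bp.T @ p_bp       # equation (6): inverse rotation back to b
```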
[0112] The UAV flight controller adopts classic PID control. For ease of implementation and reference, the output expression of the PID controller is
[0113] $$mv(t) = K_p\left[ev(t) + \frac{1}{T_I}\int_0^t ev(\tau)\,d\tau + T_D\frac{d\,ev(t)}{dt}\right] \quad (7)$$
[0114] $$ev(t) = sp(t) - pv(t) \quad (8)$$
[0115] where sp(t) is the set value, pv(t) is the process variable (i.e., the feedback value), ev(t) is the corresponding difference, mv(t) is the output signal of the controller, $K_p$ is the proportional gain, and $T_I$ and $T_D$ are the integration time and the derivative time, respectively. By continuously tuning the PID parameters to reduce overshoot, shorten the response time and reduce the steady-state error, the controller can reach a better state.
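A textbook discrete implementation of equations (7)-(8) might look as follows; the gains are placeholders to be tuned as described above.

```python
class PID:
    """Discrete form of mv(t) = Kp * [ev + (1/Ti) * integral(ev) + Td * d(ev)/dt]."""

    def __init__(self, kp, ti, td, dt):
        self.kp, self.ti, self.td, self.dt = kp, ti, td, dt
        self.integral = 0.0
        self.prev_ev = 0.0

    def update(self, sp, pv):
        ev = sp - pv                                   # equation (8)
        self.integral += ev * self.dt
        derivative = (ev - self.prev_ev) / self.dt
        self.prev_ev = ev
        return self.kp * (ev + self.integral / self.ti + self.td * derivative)

# Placeholder gains; tuned in practice to limit overshoot and steady-state error.
pid_north = PID(kp=0.8, ti=5.0, td=0.05, dt=0.02)
```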
[0116] Derivation of the Visual/INS loose combination observation equations:
[0117] The Kalman filter state equation can be obtained from the error differential equations of inertial navigation (position, velocity and attitude):
[0118] $$\dot{x}(t) = F(t)\,x(t) + G(t)\,w(t) \quad (9)$$
[0119] $$x(t) = \begin{bmatrix} \delta r^n & \delta v^n & \varphi & b_g & b_a \end{bmatrix}^{\mathrm T} \quad (10)$$
[0120] x(t) is the state vector, fifteen dimensions in total, comprising position error, velocity error, attitude error, and gyroscope and accelerometer zero offsets. F(t) is the state transition matrix, w(t) is the system noise, and G(t) is the noise driving matrix.
[0121] For the observation equation: the visual measurement yields the camera's pose relative to the QR code's local coordinate system, while the INS computes the position of the IMU, from which the camera position can be obtained by lever-arm compensation. The difference between the camera positions obtained from the camera and from the INS can then be used as the position-error observation $z_{rc}$:
[0122] $$z_{rc} = \hat{r}^n_{c,\mathrm{INS}} - \hat{r}^n_{c,\mathrm{vis}} = \delta r^n + \left(\left(C_b^n l^b\right)\times\right)\varphi + \varepsilon_{rc} \quad (11)$$
[0123] In the above formula, $\hat{r}^n_{c,\mathrm{INS}}$ is the camera position calculated by inertial navigation, $\hat{r}^n_{c,\mathrm{vis}}$ is the camera position measured by vision, $\delta r^n$ is the error of the camera position calculated by inertial navigation, $C_b^n$ is the direction cosine matrix from frame b to frame n, $l^b$ is the installation offset between the camera and the inertial unit, i.e., the lever arm, $\left(\left(C_b^n l^b\right)\times\right)$ is the antisymmetric matrix of the vector $C_b^n l^b$, $\varphi$ represents the attitude error, and $\varepsilon_{rc}$ denotes measurement noise. Throughout the text, $(\alpha\times)$ represents the antisymmetric matrix corresponding to the vector $\alpha$.
[0124] Next, consider the observation equation of the attitude error. The attitude obtained by the IMU is from frame b to frame n. By aligning the local coordinate system of the feature points with the local navigation coordinate system n, the attitude of the camera relative to frame n can be obtained, and the observation equation can be established.
[0125] Camera attitude calculated by inertial navigation:
[0126] $$\hat{C}^n_{c,\mathrm{INS}} = \hat{C}_b^n C_c^b = \left[I - (\varphi\times)\right] C_b^n C_c^b \quad (12)$$
[0127] where $\hat{C}^n_{c,\mathrm{INS}}$ is the camera attitude calculated by inertial navigation, $C_c^b$ is the attitude of the camera coordinate system relative to the carrier coordinate system, $C_b^n$ is the true value of the IMU attitude, and $\hat{C}_b^n$ is the IMU attitude calculated by the inertial navigation; the error between the two is represented by $\varphi$, and $(\varphi\times)$ is the antisymmetric matrix corresponding to this error.
[0128] The attitude calculated by the camera:
[0129] $$\hat{C}^n_{c,\mathrm{cam}} = \left[I + (v\times)\right] C_c^n \quad (13)$$
[0130] where $\hat{C}^n_{c,\mathrm{cam}}$ is the attitude calculated by the camera, $C_c^n$ is the true camera attitude, $v$ represents the measurement noise, and $(v\times)$ is the antisymmetric matrix corresponding to the measurement noise.
[0131] Using $\rho$ to represent the deviation between the camera attitude calculated by the camera and that calculated by the inertial navigation, with $(\rho\times)$ the corresponding antisymmetric matrix, then
[0132] $$\hat{C}^n_{c,\mathrm{INS}} = \left[I - (\rho\times)\right]\hat{C}^n_{c,\mathrm{cam}} \quad (14)$$
[0133] $$\left[I - (\varphi\times)\right] = \left[I - (\rho\times)\right]\left[I + (v\times)\right] \quad (15)$$
[0134] Simplify to get
[0135] $$\rho = \varphi + v \quad (16)$$
[0136] From equation (14),
[0137] $$(\rho\times) = I - \hat{C}^n_{c,\mathrm{INS}}\left(\hat{C}^n_{c,\mathrm{cam}}\right)^{\mathrm T} \quad (17)$$
[0138] In summary:
[0139] $$z = \begin{bmatrix} z_{rc} \\ z_{\rho} \end{bmatrix} = \begin{bmatrix} \delta r^n + \left(\left(C_b^n l^b\right)\times\right)\varphi + \varepsilon_{rc} \\ \varphi + v \end{bmatrix} \quad (18)$$
[0140] where z represents the observation vector.
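As a sketch only, the observation construction of equations (11)-(18) for a 15-state filter (states ordered as in equation (10)) could be assembled as follows; the recovery of ρ from its antisymmetric matrix assumes small angles.

```python
import numpy as np

def skew(a):
    """Antisymmetric matrix (a x) of a 3-vector a."""
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0]])

def loose_observation(r_cam_ins, r_cam_vis, C_ins, C_vis, C_b_n, lever_b):
    """Build z and H for the state [dr, dv, phi, bg, ba].
    Position residual: equation (11); attitude residual: equations (14)-(17)."""
    z_rc = r_cam_ins - r_cam_vis                 # position observation
    rho_x = np.eye(3) - C_ins @ C_vis.T          # (rho x), equation (17)
    rho = np.array([rho_x[2, 1], rho_x[0, 2], rho_x[1, 0]])  # small-angle recovery
    z = np.concatenate([z_rc, rho])

    H = np.zeros((6, 15))
    H[0:3, 0:3] = np.eye(3)                      # z_rc depends on dr ...
    H[0:3, 6:9] = skew(C_b_n @ lever_b)          # ... and on phi via the lever arm
    H[3:6, 6:9] = np.eye(3)                      # rho = phi + v, equation (16)
    return z, H
```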
[0141] In a specific implementation, the process can be run automatically by software.
[0142] The foregoing embodiments are used to specifically illustrate the present invention. Although specific terms are used in the text, the protection scope of the present invention is not limited thereby. Those familiar with this technical field may, with an understanding of the spirit and principle of the present invention, make changes or modifications for equivalent purposes, and such equivalent changes and modifications shall be covered within the scope defined by the claims.