Sensor fusion using inertial and image sensors
An image sensor and inertial sensor technology, applied in image enhancement, image analysis, image data processing, etc., addressing problems such as inaccurate estimates that are detrimental to the functions of unmanned aerial vehicles
Active Publication Date: 2018-03-27
SZ DJI TECH CO LTD +1
AI-Extracted Technical Summary
Problems solved by technology
for example, some techniques for estimating UAV state information m...
Abstract
Systems, methods, and devices are provided for controlling a movable object using multiple sensors. In one aspect, a method for calibrating one or more extrinsic parameters of a movable object is provided. The method can comprise: receiving initial values for the one or more extrinsic parameters, wherein the one or more extrinsic parameters comprise spatial relationships between at least two image sensors carried by the movable object; receiving inertial data from at least one inertial sensor carried by the movable object during the operation of the movable object; receiving image data from the at least two image sensors carried by the movable object during the operation of the movable object; and determining estimated values for the one or more extrinsic parameters based on the initial values, the inertial data, and the image data using an iterative optimization algorithm during the operation of the movable object.
Application Domain
Image enhancement, Image analysis +3
Technology Topic
Mobile object, Image sensor +6
Examples
- Experimental program (1)
Example Embodiment
[0083] The systems, methods, and devices of the present disclosure support determining information related to the operation of a movable object such as an unmanned aerial vehicle (UAV). In some embodiments, the present disclosure utilizes sensor fusion techniques to combine sensor data from different sensor types in order to determine various types of information useful for UAV operation, such as state estimation, initialization, error recovery, and/or parameter calibration information. For example, the UAV may include at least one inertial sensor and at least two image sensors. Data from the different sensor types can be combined using various methods, such as iterative optimization algorithms. In some embodiments, an iterative optimization algorithm involves iteratively linearizing and solving a nonlinear function (e.g., a nonlinear objective function). The sensor fusion techniques described herein can be used to improve the accuracy and robustness of UAV state estimation, initialization, error recovery, and/or parameter calibration, thus extending the capabilities of UAVs for "smart" autonomous or semi-autonomous operations.
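The iterative linearize-and-solve procedure mentioned above can be illustrated with a generic Gauss-Newton sketch; the residual and Jacobian here are stand-ins for whatever nonlinear objective is being minimized, not the disclosure's specific formulation.

```python
import numpy as np

def gauss_newton(residual_fn, jacobian_fn, x0, max_iters=20, tol=1e-8):
    """Iteratively linearize a nonlinear least-squares objective and solve
    the linearized problem, as in the iterative optimization described
    above (generic sketch, not the patent's exact objective)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        r = residual_fn(x)                           # residual vector at current estimate
        J = jacobian_fn(x)                           # Jacobian of residuals w.r.t. state
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]   # solve the linearized step
        x = x + dx
        if np.linalg.norm(dx) < tol:                 # converged
            break
    return x

# Example: fit the 1-D model y = exp(a*t) to noisy samples
t = np.linspace(0.0, 1.0, 50)
y = np.exp(0.7 * t) + 0.01 * np.random.randn(50)
res = lambda a: np.exp(a[0] * t) - y
jac = lambda a: (t * np.exp(a[0] * t)).reshape(-1, 1)
a_hat = gauss_newton(res, jac, x0=[0.0])
```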
[0084] In some embodiments, the present disclosure provides systems, methods, and devices for performing initialization, error recovery, parameter calibration, and/or state estimation using multiple image sensors (e.g., multiple cameras) in combination with an inertial sensor. Compared to approaches using a single image sensor, the use of multiple image sensors can provide various advantages, such as improved accuracy (e.g., because more image data is available for estimation) and enhanced stability and robustness (e.g., because multiple image sensors provide redundancy if one of them fails). The sensor fusion algorithm used to combine data from multiple image sensors may differ from the algorithm used in a single-image-sensor approach. For example, an algorithm for multiple image sensors may take into account the spatial relationships (e.g., relative positions and/or orientations) between the different image sensors, and such spatial relationships are not applicable to a single-image-sensor approach.
[0085] Although some embodiments herein are described in the context of an unmanned aerial vehicle, it should be understood that the present disclosure can be applied to other types of movable objects, such as ground vehicles. Examples of movable objects suitable for use with the systems, methods, and devices provided herein are described in further detail below.
[0086] The unmanned aerial vehicle described herein may be operated fully autonomously (e.g., by a suitable computing system such as an onboard controller), semi-autonomously, or manually (e.g., by a human user). The UAV may receive commands from a suitable entity (e.g., a human user or an autonomous control system) and respond to such commands by performing one or more actions. For example, the UAV can be controlled to take off from the ground, move in the air (e.g., with up to three translational degrees of freedom and up to three rotational degrees of freedom), move to a target position or a series of target positions, hover in the air, land on the ground, and so on. As another example, the UAV can be controlled to move at a specified velocity and/or acceleration (e.g., with up to three translational degrees of freedom and up to three rotational degrees of freedom) or along a specified movement path. Additionally, the commands may be used to control one or more UAV components, such as the components described herein (e.g., sensors, actuators, power units, payloads, etc.). For example, some commands may be used to control the position, orientation, and/or operation of a UAV payload such as a camera.
[0087] In some embodiments, the UAV may be configured to perform more complex functions that may involve a higher degree of autonomy, or that may be performed completely autonomously without any user input. Examples of such functions include, but are not limited to, maintaining a designated position (e.g., hovering in place), navigation (e.g., planning and/or following a route to a target destination), obstacle avoidance, and environmental mapping. The realization of these functions can be based on information related to the UAV and/or the surrounding environment.
[0088] For example, in some embodiments, it is beneficial to determine state information that indicates the UAV's past state, current state, and/or predicted future state. The state information may include information about the spatial disposition of the UAV (e.g., positioning or position information such as longitude, latitude, and/or altitude; orientation or attitude information such as roll, pitch, and/or yaw). The state information may also include information about the motion of the UAV (e.g., translational velocity, translational acceleration, angular velocity, angular acceleration, etc.). The state information may include information related to the spatial disposition and/or motion of the UAV with respect to up to six degrees of freedom (e.g., three degrees of freedom in position and/or translation, three degrees of freedom in orientation and/or rotation). The state information may be provided relative to a global coordinate system or relative to a local coordinate system (e.g., relative to the UAV or another entity). In some embodiments, the state information is provided relative to a previous state of the UAV, such as the spatial disposition of the UAV when it started operation (e.g., power-on, takeoff, flight). The determination of state information may be referred to herein as "state estimation." Optionally, state estimation may be performed throughout the operation of the UAV (e.g., continuously or at predetermined time intervals) in order to provide updated state information.
[0089] Figure 1 illustrates state estimation for an unmanned aerial vehicle 100 according to an embodiment. The UAV 100 in Figure 1 is depicted as being in a first state 102 at a first time k, and in a second state 104 at a subsequent second time k+1. A local coordinate system or reference frame 106 may be defined relative to the UAV 100, indicating the position and orientation of the UAV 100. The UAV 100 may have a first position and orientation when in the first state 102, and a second position and orientation when in the second state 104. The change of state can be expressed as a translation T from the first position to the second position and/or a rotation θ from the first orientation to the second orientation. If the position and orientation of the UAV 100 in the previous state 102 are known, T and θ can be used to determine the position and orientation of the UAV 100 in the subsequent state 104. Moreover, if the length of the time interval [k, k+1] is known, the translational velocity and angular velocity of the UAV 100 in the second state 104 can also be estimated from T and θ, respectively. Therefore, in some embodiments, state estimation involves determining the state changes T and θ, and then using the state change information together with information related to the first state 102 in order to determine the second state 104.
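As an illustration of how a known previous state together with the state change (T, θ) yields the next state and rough velocity estimates, a minimal numpy sketch follows; the frame conventions used here are an assumption for illustration, not taken from the disclosure.

```python
import numpy as np

def propagate_state(p_k, R_k, T, R_delta, dt):
    """Given the pose at time k (position p_k, orientation R_k) and the
    estimated state change (translation T, incremental rotation R_delta)
    over an interval dt, compute the pose at k+1 and rough velocity
    estimates. Illustrative sketch only."""
    p_k1 = p_k + R_k @ T                  # apply the translation in the frame of state k
    R_k1 = R_k @ R_delta                  # compose the incremental rotation
    v_translational = (p_k1 - p_k) / dt   # average translational velocity over [k, k+1]
    # The rotation angle of R_delta divided by dt approximates the angular rate
    angle = np.arccos(np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0))
    omega = angle / dt
    return p_k1, R_k1, v_translational, omega
```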
[0090] In some embodiments, state estimation is performed based on sensor data obtained by one or more sensors. Exemplary sensors suitable for use with the embodiments disclosed herein include position sensors (e.g., global positioning system (GPS) sensors, mobile device transmitters enabling location triangulation), image sensors or vision sensors (e.g., imaging devices capable of detecting visible, infrared, or ultraviolet light, such as cameras), proximity or range sensors (e.g., ultrasonic sensors, lidar, time-of-flight or depth cameras), inertial sensors (e.g., accelerometers, gyroscopes, inertial measurement units (IMUs)), altitude sensors, attitude sensors (e.g., compasses), pressure sensors (e.g., barometers), audio sensors (e.g., microphones), or field sensors (e.g., magnetometers, electromagnetic sensors). Any suitable number and combination of sensors can be used, such as one, two, three, four, five, or more sensors. Optionally, data can be received from sensors of different types (e.g., two, three, four, five, or more types). Different types of sensors can measure different types of signals or information (e.g., position, orientation, velocity, acceleration, distance, pressure, etc.) and/or use different types of measurement techniques to obtain data. For example, the sensors may include any suitable combination of active sensors (e.g., sensors that generate and measure energy from their own energy source) and passive sensors (e.g., sensors that detect available energy). As another example, some sensors can generate absolute measurement data provided with respect to a global coordinate system (e.g., position data provided by a GPS sensor, attitude data provided by a compass or magnetometer), while other sensors can generate relative measurement data provided with respect to a local coordinate system (e.g., relative angular velocity provided by a gyroscope; relative translational acceleration provided by an accelerometer; a projected view of the surrounding environment provided by an image sensor; relative distance information provided by an ultrasonic sensor, lidar, or time-of-flight camera). In some cases, the local coordinate system may be a body coordinate system defined relative to the UAV.
[0091] The sensors described herein can be carried by the UAV. A sensor may be located on any suitable part of the UAV, such as above, below, on one or more sides of, or within the UAV fuselage. In some embodiments, one or more sensors may be enclosed within the housing of the UAV, located outside the housing, coupled to a surface of the housing (e.g., an inner or outer surface), or may form part of the housing. Some sensors may be mechanically coupled to the UAV so that the spatial disposition and/or motion of the UAV correspond to the spatial disposition and/or motion of the sensor. A sensor may be coupled to the UAV via a rigid coupling so that the sensor does not move relative to the part of the UAV to which it is attached. Alternatively, the coupling between the sensor and the UAV may permit movement of the sensor relative to the UAV. The coupling may be permanent or non-permanent (e.g., detachable). Suitable coupling methods may include adhesives, bonding, welding, and/or fasteners (e.g., screws, nails, pins, etc.). In some embodiments, the coupling between the sensor and the UAV includes shock absorbers or dampers that reduce vibration or other undesired mechanical motion transmitted from the UAV fuselage to the sensor. Alternatively, the sensor may be formed integrally with a part of the UAV. In addition, the sensor can be electrically coupled with a part of the UAV (e.g., a processing unit, control system, or data storage), so that the data collected by the sensor can be used for various functions of the UAV (e.g., navigation, control, propulsion, communication with users or other devices, etc.), such as in the embodiments discussed herein.
[0092] In some embodiments, sensing results are generated by combining sensor data obtained by multiple sensors, also referred to as "sensor fusion." For example, sensor fusion can be used to combine sensing data obtained by different sensor types, including, for example, GPS sensors, inertial sensors, image sensors, lidar, ultrasonic sensors, and so on. As another example, sensor fusion can be used to combine different types of sensing data, such as absolute measurement data (e.g., data provided with respect to a global coordinate system, such as GPS data) and relative measurement data (e.g., data provided with respect to a local coordinate system, such as visual sensing data, lidar data, or ultrasonic sensing data). Sensor fusion can be used to compensate for limitations or inaccuracies associated with individual sensor types, thereby improving the accuracy and reliability of the final sensing result.
[0093] In embodiments using multiple sensors, each sensor can be situated on the UAV at its own position and orientation. Information about the spatial relationships (e.g., relative positions and orientations) of the multiple sensors with respect to each other can be used as a basis for fusing data from the multiple sensors. These spatial relationships may be referred to herein as the "extrinsic parameters" of the sensors, and the process of determining the spatial relationships may be referred to herein as "extrinsic parameter calibration." In some embodiments, the sensors are installed at predetermined locations on the UAV so that the initial values of the extrinsic parameters are known. In other embodiments, however, the initial values may be unknown, or may not be known with sufficient accuracy. Moreover, the extrinsic parameters may change during the operation of the UAV, for example, due to vibrations, collisions, or other events that change the relative positions and/or orientations of the sensors. Therefore, in order to ensure that the sensor fusion results are accurate and robust, it is useful to perform extrinsic parameter calibration throughout the operation of the UAV (e.g., continuously or at predetermined time intervals) to provide updated parameter information.
[0094] Figure 2 illustrates extrinsic parameter calibration for an unmanned aerial vehicle according to an embodiment. The UAV includes a reference sensor 200 and a plurality of additional sensors 202. Each of the sensors 200, 202 is associated with a corresponding local coordinate system or reference frame indicating the position and orientation of that sensor. In some embodiments, each sensor 202 is at a different position and orientation with respect to the coordinate system of the reference sensor 200. The extrinsic parameters of each sensor 202 can be expressed as a corresponding translation T and/or a corresponding rotation θ from the coordinate system of the sensor 202 to the coordinate system of the reference sensor 200. Therefore, extrinsic parameter calibration may involve determining the set of extrinsic parameters (T_1, θ_1), (T_2, θ_2), ..., (T_m, θ_m) that relates each of the sensors 202 to the reference sensor 200. Alternatively or additionally, extrinsic parameter calibration may involve determining a set of extrinsic parameters that relates each of the sensors 202 to each other, rather than to the coordinate system of a single reference sensor 200. Those skilled in the art will appreciate that determining the spatial relationships between the sensors 202 and the reference sensor 200 is equivalent to determining the spatial relationships among the sensors 202.
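The equivalence noted at the end of this paragraph can be made concrete with homogeneous transforms: if each sensor's extrinsics are known relative to the reference sensor, any sensor-to-sensor relationship follows by composition. A small numpy sketch under that assumption:

```python
import numpy as np

def make_transform(R, T):
    """Build a 4x4 homogeneous transform from a rotation matrix R and translation T."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = T
    return M

def sensor_to_sensor(extrinsic_i, extrinsic_j):
    """Given the extrinsics of sensors i and j, each expressed as a 4x4
    transform from that sensor's frame to the reference sensor's frame,
    return the transform from sensor i to sensor j. Calibrating every
    sensor against one reference therefore fixes all pairwise relations."""
    return np.linalg.inv(extrinsic_j) @ extrinsic_i

# Example: two cameras mounted 5 cm on either side of the reference sensor
cam1 = make_transform(np.eye(3), np.array([0.05, 0.0, 0.0]))
cam2 = make_transform(np.eye(3), np.array([-0.05, 0.0, 0.0]))
T_1_to_2 = sensor_to_sensor(cam1, cam2)   # translation of +0.10 m along x
```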
[0095] The UAVs described herein can implement various algorithms to perform state estimation and extrinsic parameter calibration. In some embodiments, the same algorithm is used to estimate the state information and the extrinsic parameter values simultaneously. In other embodiments, different algorithms are used to estimate the state information and the parameter values separately. Examples of such algorithms are also described herein. In some embodiments, it may be necessary or beneficial to initialize these algorithms with certain information before performing state estimation and/or parameter calibration for the first time (e.g., when the UAV has been powered on, started operating, or started flying). This information may be referred to herein as "initialization information," and may include the initial state and/or extrinsic parameters of the UAV at an initial moment (e.g., when the UAV has been powered on, started operating, or started flying). These initial values are used to initialize the state estimation and/or parameter calibration algorithms. The accuracy of the initialization information may affect the accuracy of the subsequent state estimation and/or parameter calibration processes. The process of determining the initialization information of the UAV may be referred to herein as "initialization."
[0096] For example, in some embodiments, the initialization information includes information indicating the orientation of the UAV relative to the direction of gravity. This information may be particularly useful for adjusting data from sensors that are affected by gravity, such as inertial sensors. In some embodiments, it may be beneficial or necessary to subtract the contribution of gravity from the acceleration measurements obtained by an inertial sensor so that the resulting inertial sensor data indicates only the acceleration of the UAV. Therefore, the initialization process may involve determining the initial orientation of the UAV relative to the direction of gravity in order to support correction of the inertial sensor data. Alternatively or in combination, the initialization information may include the position, orientation, velocity, acceleration, and/or extrinsic parameters of the UAV at an initial time before or during the operation of the UAV. The initialization information may be provided relative to the local coordinate system of the UAV, a global coordinate system, and/or the coordinate system of another entity (e.g., a remote controller for the UAV).
[0097] Figures 3 and 4 illustrate initialization of an unmanned aerial vehicle according to embodiments. Figure 3 illustrates initialization of a UAV 300 in a horizontal orientation. "Horizontal orientation" may mean that the UAV 300 has a horizontal axis 302 that is substantially orthogonal to the direction of the gravity vector g (e.g., an axis passing through opposite lateral sides of the UAV). The UAV 300 may be in a horizontal orientation when taking off from a horizontal surface, for example. Figure 4 illustrates initialization of a UAV 400 in an inclined orientation. "Inclined orientation" may mean that the UAV 400 has a horizontal axis 402 that is non-orthogonal to the direction of the gravity vector g (e.g., an axis passing through opposite lateral sides of the UAV). The UAV 400 may be in an inclined orientation, for example, when taking off from an inclined surface or when launched from a non-stationary state (e.g., from mid-air or after being thrown into the air by a user). In some embodiments, the initialization methods described herein can be used to determine whether the UAV is initially in a horizontal orientation or an inclined orientation, and/or the inclination of the UAV with respect to the gravity vector.
[0098] In some embodiments, if an error occurs in the state estimation and/or extrinsic parameter calibration algorithm, it may be necessary to re-initialize and restart the algorithm. Errors can arise when one or more sensors provide sensor data that causes the algorithm to fail (e.g., provide insufficient data) or when the algorithm fails to produce a result (e.g., fails to converge to a solution within a specified time period). Re-initialization can be substantially similar to the initialization process described herein, except that the re-initialization information is obtained at or around the time when the error occurs, rather than at the initial time of UAV operation. In some embodiments, re-initialization is performed after an error is detected, and the process of re-initializing after an error may be referred to herein as "error recovery." Alternatively or in combination, re-initialization can be performed at any time after initialization as needed, for example, at predetermined time intervals.
[0099] Figure 5 illustrates a method 500 for operating an unmanned aerial vehicle according to an embodiment. In step 502, the UAV starts operating (e.g., powers on, takes off, etc.). In step 504, initialization is performed to determine the initialization information of the UAV (e.g., the orientation relative to the direction of gravity, as discussed above). In step 506, extrinsic parameter calibration and state estimation are performed for the current moment, for example, to determine the position, orientation, and velocity of the UAV and the relative positions and orientations of the UAV sensors. As discussed above, parameter calibration and state estimation can be performed simultaneously or separately. In step 508, it is determined whether an error has occurred during the operation of the UAV, for example, a sensor failure or a malfunction in the parameter calibration and/or state estimation process. If an error has occurred, the method 500 returns to step 504 to re-initialize the UAV for error recovery. If there is no error, the results of the parameter calibration and state estimation are output in step 510, for example, to a flight control module, remote terminal, or remote controller, for subsequent storage and/or use. For example, the flight control module can use the determined parameter values and/or state information to help the UAV navigate, survey and map, avoid obstacles, and so on. The method 500 then returns to step 506 to repeat the parameter calibration and state estimation process for the next moment. The method 500 can be repeated at any rate, such as at least once every 0.1 seconds, in order to provide updated state information and parameter values during the operation of the UAV.
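A control-flow sketch of the loop in method 500 follows; the helper functions are illustrative stubs and assumed interfaces, not routines defined by the disclosure.

```python
import time

def initialize(sensors):
    """Step 504: determine initialization info (placeholder stub)."""
    return {"gravity_direction": None}

def calibrate_and_estimate(sensors, init_info):
    """Step 506: joint extrinsic calibration and state estimation (stub)."""
    return {"extrinsics": None}, {"position": None, "velocity": None}

def error_detected(params, state):
    """Step 508: check for sensor failure or estimator divergence (stub)."""
    return False

def run_uav(sensors, flight_controller, cycle_s=0.1):
    """Control-flow sketch of method 500 using the stubs above."""
    init_info = initialize(sensors)                                  # step 504
    while sensors.is_operating():                                    # assumed interface
        params, state = calibrate_and_estimate(sensors, init_info)   # step 506
        if error_detected(params, state):                            # step 508
            init_info = initialize(sensors)                          # error recovery -> 504
            continue
        flight_controller.update(params, state)                      # step 510: output
        time.sleep(cycle_s)                                          # e.g. at least every 0.1 s
```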
[0100] In some embodiments, it is advantageous to perform UAV initialization and/or error recovery as described herein because these methods do not require any assumptions about the initial state of the UAV (e.g., position, orientation, velocity, acceleration, etc.). For example, the methods herein may allow UAV initialization and/or error recovery without assuming that the UAV is initially stationary (e.g., with velocity and acceleration equal to zero). This assumption may be suitable for certain situations (e.g., when the UAV is taking off from the ground or another surface), and in such implementations the direction of the gravity vector may be obtained directly from the inertial sensor data. However, the assumption may not be suitable for other situations (e.g., if the UAV is initially sliding down an inclined surface, if the UAV is thrown into the air by a user, or if the UAV is in mid-air when an error occurs). In such embodiments, it may not be possible to determine the direction of the gravity vector from the inertial sensor data alone, since other accelerations may contribute to the sensor measurements. Therefore, the methods herein allow the gravity vector to be determined regardless of the initial state of the UAV (e.g., whether the UAV is stationary or moving), thus improving the flexibility and accuracy of initialization and error recovery.
[0101] The UAVs described herein can use data from multiple sensors to perform the initialization, state estimation, and extrinsic parameter calibration methods provided herein. Various types and combinations of sensors can be used. In some embodiments, the UAV utilizes at least one inertial sensor and at least one image sensor. Alternatively, the UAV may utilize at least one inertial sensor and multiple image sensors, such as two or more, three or more, four or more, five or more, six or more, seven or more, eight or more, nine or more, or ten or more image sensors.
[0102] "Inertial sensors" may be used herein to refer to motion sensors (e.g., speed sensors, acceleration sensors, such as accelerometers), orientation sensors (e.g., gyroscopes, inclinometers) or having one or more integrated motion sensors and/ Or one or more IMUs with integrated orientation sensors. Inertial sensors can provide sensing data relative to a single axis of motion. The axis of motion may correspond to the axis of the inertial sensor (e.g., the longitudinal axis). Multiple inertial sensors can be used, each inertial sensor providing measurements along a different axis of motion. For example, three accelerometers can be used to provide acceleration data along three different axes of motion. The three movement directions can be orthogonal axes. One or more of the accelerometers may be linear accelerometers configured to measure acceleration along the translation axis. Conversely, one or more of the accelerometers may be angular accelerometers, which are configured to measure angular acceleration around the axis of rotation. For another example, three gyroscopes can be used to provide orientation data on three different rotation axes. The three rotation axes may be orthogonal axes (eg, roll axis, pitch axis, yaw axis). Alternatively, at least some or all of the inertial sensors may provide measurements relative to the same axis of motion. Such redundancy can be achieved, for example to improve measurement accuracy. Alternatively, a single inertial sensor may be able to provide sensed data with respect to multiple axes. For example, an IMU including multiple integrated accelerometers and gyroscopes can be used to generate acceleration data and orientation data on up to six axes of motion. Alternatively, a single accelerometer can be used to detect acceleration along multiple axes, and a single gyroscope can be used to detect rotation around multiple axes.
[0103] An image sensor may be any device configured to detect electromagnetic radiation (e.g., visible, infrared, and/or ultraviolet light) and generate image data based on the detected electromagnetic radiation. The image data generated by an image sensor may include one or more images, which may be static images (e.g., photographs), dynamic images (e.g., video), or a suitable combination thereof. The image data may be polychromatic (e.g., RGB, CMYK, HSV) or monochromatic (e.g., grayscale, black and white, sepia). In some embodiments, the image sensor may be a camera. Although certain embodiments provided herein are described in the context of cameras, it should be understood that the present disclosure can be applied to any suitable image sensor, and any description herein relating to cameras can also be applied to other types of image sensors. A camera can be used to generate 2D images of a 3D scene (e.g., an environment, one or more objects, etc.). The images generated by the camera can represent the projection of the 3D scene onto a 2D image plane. Therefore, each point in a 2D image corresponds to a 3D spatial coordinate in the scene. Multiple cameras may be used to capture 2D images of a 3D scene so as to allow reconstruction of 3D spatial information of the scene (e.g., depth information indicating the distances between objects in the scene and the UAV). Alternatively, a single image sensor can be used to obtain 3D spatial information, for example, using structure-from-motion techniques. The 3D spatial information can be processed to determine the UAV state (e.g., position, orientation, velocity, etc.).
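The statement that each 2D image point corresponds to a 3D scene coordinate can be illustrated with a standard pinhole projection; the intrinsic matrix K and the numeric values below are assumptions for illustration only.

```python
import numpy as np

def project_point(K, R, t, X_world):
    """Project a 3-D scene point onto a camera's 2-D image plane using a
    standard pinhole model (illustrative sketch; K, R, t are assumed known)."""
    X_cam = R @ X_world + t              # world -> camera coordinates
    u, v, w = K @ X_cam                  # perspective projection
    return np.array([u / w, v / w])      # 2-D pixel coordinates

K = np.array([[500.0,   0.0, 320.0],     # focal lengths and principal point (example values)
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pixel = project_point(K, np.eye(3), np.zeros(3), np.array([0.2, -0.1, 5.0]))
```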
[0104] The combination of inertial sensors and image sensors can provide various benefits for UAV operation. For example, the accuracy of inertial data from inertial sensors such as IMUs may degrade over time due to noise and drift, or may be affected by the acceleration due to gravity. This problem can be alleviated or overcome by using image data from one or more image sensors to correct and/or be combined with the inertial data. As another example, the use of multiple image sensors can allow the UAV to continue operating even if some image sensors are obscured and/or malfunctioning, because the remaining image sensors can still be used to collect data. Therefore, sensor fusion algorithms can be used to process and combine the inertial data from one or more inertial sensors and the image data from one or more image sensors in order to provide more robust and accurate sensing results for UAV operation.
[0105] Figure 6 illustrates a system 600 for processing inertial data and image data according to an embodiment. In some embodiments, the system 600 includes a data collection module 610, an image processing module 620, a sensor fusion module 630, and a flight control module 640. As described further herein, any suitable combination of hardware and software components may be used to implement the various modules of the system 600. For example, each module may include suitable hardware components, such as one or more processors and memory storing instructions executable by the one or more processors to perform the functions described herein. Alternatively or in combination, the same set of hardware components (e.g., the same processor) may be used to implement two or more modules.
[0106] The data collection module 610 may be used to obtain inertial data and image data from one or more inertial sensors and one or more image sensors, respectively. In some embodiments, the inertial data and image data are collected at substantially the same frequency. In other embodiments, the inertial data and the image data are collected at different frequencies, for example, the inertial data may be collected at a higher frequency than the image data, or vice versa. For example, an inertial sensor may output inertial data at a frequency greater than or equal to about 50 Hz, 100 Hz, 150 Hz, 200 Hz, 250 Hz, 300 Hz, 350 Hz, 400 Hz, 450 Hz, 500 Hz, or more. An image sensor may output image data at a frequency greater than or equal to about 1 Hz, 5 Hz, 10 Hz, 15 Hz, 20 Hz, 25 Hz, 30 Hz, 40 Hz, 50 Hz, or 100 Hz. In embodiments using multiple image sensors, the data collection module 610 may collect image data from each of the image sensors synchronously at the same time.
[0107] The image processing module 620 may be used to process image data received from the data collection module 610. In some embodiments, the image processing module 620 implements a feature point algorithm that detects and/or extracts one or more feature points from one or more images. A feature point (also referred to herein as a "feature") can be a portion of an image (e.g., an edge, corner, interest point, blob, ridge, etc.) that can be uniquely distinguished from the remainder of the image and/or from other feature points in the image. Optionally, a feature point may be relatively invariant to transformations of the imaged object (e.g., translation, rotation, scaling) and/or to changes in image characteristics (e.g., brightness, exposure). Feature points may be detected in portions of an image that are rich in information content (e.g., significant 2D texture, texture exceeding a threshold). Feature points may be detected in portions of an image that remain stable under perturbations (e.g., when the illumination and brightness of the image are varied). Feature detection as described herein can be accomplished using various algorithms that can extract one or more feature points from the image data. The algorithm may be an edge detection algorithm, a corner detection algorithm, a blob detection algorithm, or a ridge detection algorithm. In some embodiments, the corner detection algorithm may be "features from accelerated segment test" (FAST). In some embodiments, the feature detector may extract feature points and compute feature point information using FAST. In some embodiments, the feature detector may be a Canny edge detector, Sobel operator, Harris & Stephens/Plessey/Shi-Tomasi corner detection algorithm, SUSAN corner detector, level curve curvature approach, Laplacian of Gaussian, difference of Gaussians, determinant of Hessian, MSER, PCBR, grey-level blob detector, ORB, FREAK, or a suitable combination thereof.
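As one concrete possibility for the FAST-based detection described above, a minimal OpenCV sketch follows; the disclosure does not require OpenCV, and the threshold and feature-count values are arbitrary choices.

```python
import cv2

def detect_and_describe(gray):
    """Detect FAST corners in a grayscale image and compute ORB descriptors
    for them. A minimal sketch under the assumption that OpenCV is used."""
    fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
    keypoints = fast.detect(gray, None)                    # FAST corner detection
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.compute(gray, keypoints)  # descriptors for later matching
    return keypoints, descriptors

# Example usage (placeholder file name):
# gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# keypoints, descriptors = detect_and_describe(gray)
```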
[0108] Optionally, the image processing module 620 may match the same feature points across different images, thereby generating a set of matched feature points. Feature point matching may involve matching between images captured by the same image sensor (e.g., images captured at different times, such as consecutive moments), between images captured by different image sensors (e.g., images captured at the same time or at different times), or a combination thereof. Feature point matching may be performed using a corner detection algorithm, an optical flow algorithm, and/or a feature matching algorithm. For example, optical flow can be used to determine the motion between consecutive images and thereby predict the locations of feature points in a subsequent image. Alternatively or in combination, feature point descriptors (e.g., characteristics of a descriptor that can be used to uniquely identify a feature point) can be used to determine the position of the same feature point in other images. Exemplary algorithms suitable for use with the embodiments herein are described in further detail below.
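A minimal sketch of tracking with pyramidal Lucas-Kanade optical flow, one common way to implement the optical-flow matching described above; it again assumes OpenCV, and the window size and pyramid depth are illustrative parameters.

```python
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, prev_pts):
    """Track feature points from the previous image into the current image
    with pyramidal Lucas-Kanade optical flow, dropping points whose
    tracking failed (status != 1)."""
    prev_pts = np.asarray(prev_pts, dtype=np.float32).reshape(-1, 1, 2)
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.reshape(-1) == 1                 # 1 = successfully tracked
    return prev_pts[ok].reshape(-1, 2), curr_pts[ok].reshape(-1, 2)
```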
[0109] In some embodiments, the image processing module 620 is configured to meet certain performance criteria in order to ensure that image processing occurs at a sufficiently fast rate and with sufficient accuracy. For example, the image processing module 620 may be configured to process three or more channels of image data at a real-time processing frequency of approximately 20 Hz. As another example, the image processing module 620 may be configured to perform feature tracking and matching such that, for example, when the reprojection error is 2, the proportion of inliers found by the RANSAC algorithm is greater than or equal to 70%.
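One way to evaluate a criterion like the 70% RANSAC inlier figure is to fit an essential matrix to the matched points and count the inliers reported by RANSAC; the routine below is an assumption for illustration, not the module's specified implementation.

```python
import cv2
import numpy as np

def inlier_ratio(prev_pts, curr_pts, K, threshold_px=2.0):
    """Estimate the fraction of RANSAC inliers among matched feature points
    by fitting an essential matrix (sketch only; thresholds are assumed)."""
    _E, mask = cv2.findEssentialMat(
        np.float32(prev_pts), np.float32(curr_pts), K,
        method=cv2.RANSAC, prob=0.999, threshold=threshold_px)
    return float(mask.sum()) / len(mask)

# Example check against a 70% criterion:
# if inlier_ratio(prev_pts, curr_pts, K) < 0.7:
#     the tracking result might be rejected or recomputed
```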
[0110] The sensor fusion module 630 may obtain inertial data from the data collection module 610 and processed image data (e.g., matched feature points) from the image processing module 620. In some embodiments, the sensor fusion module 630 implements one or more sensor fusion algorithms that process the inertial data and the processed image data in order to determine information related to UAV operation, such as initialization information, state information, or extrinsic parameters. For example, the sensor fusion algorithms can be used to calculate UAV position and/or motion information, such as the UAV position, attitude, and/or velocity. Exemplary sensor fusion algorithms suitable for use with the embodiments herein are described in further detail below.
[0111] The flight control module 640 may use the sensor fusion results generated by the sensor fusion module 630 in order to determine control signals for the UAV. The control signals can be used to control the operation of one or more UAV components, such as one or more power units, one or more sensors, one or more payloads, one or more communication modules, and so on. In some embodiments, the control signals are used to implement autonomous or semi-autonomous UAV operation, for example, for navigation, obstacle avoidance, path planning, environmental mapping, and so on.
[0112] Figure 7 illustrates an algorithm 700 for performing feature point matching (also referred to as "single-channel processing") between images captured by a single image sensor at different times, according to an embodiment. Method 700, like the other methods described herein, can be performed using any embodiment of the systems and devices described herein. In some embodiments, one or more steps of the method 700 are performed by one or more processors associated with the image processing module 620 of the system 600.
[0113] The input 702 of the algorithm 700 may include a sequence of images obtained by a single image sensor at multiple times. In step 704, one of the images in the sequence (the "current image") is received. In step 706, a set of one or more feature points ("temporary points") can be computed from the current image, for example, using a corner detection algorithm or any other suitable feature point detection algorithm. Step 706 may involve computing the feature points using only the current image, without using any other images.
[0114] In step 708, it is determined whether the current image is the first image in the sequence (e.g., the image captured at the earliest moment). If it is not the first image and there is at least one previous image captured at an earlier time, the algorithm proceeds to step 710. In step 710, an optical flow algorithm is used to compare the current image with one or more previous images. In some embodiments, the optical flow algorithm tracks one or more feature points ("previous points") detected in the one or more previous images so that the previous points can be matched to feature points in the current image. Alternatively or in combination, other types of tracking algorithms can be used to match feature points, for example, a tracking algorithm that performs matching based on feature point descriptors. If a tracking error occurs while tracking a previous point, for example, if the tracking is inaccurate or cannot be performed, that point may be excluded from the analysis in step 712. The remaining temporally matched feature points are treated as the "current points" for analysis.
[0115] In step 714, the temporary points are added to the set of current points, and the resulting feature point set is corrected (e.g., using the extrinsic parameters), so as to generate the final feature point set as the output 718 of the algorithm. Alternatively, if it is determined in step 708 that the current image is the first image, steps 710 and 712 can be omitted, so that the final feature point set is obtained only from the temporary points determined in step 706. In step 720, the current image is set as the previous image, and the current points are set as the previous points. The algorithm 700 then returns to step 704 for the next iteration, in which the next image in the sequence is received for processing.
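A compact sketch of the single-channel loop (steps 704-720), with the detection, tracking, and correction routines passed in as placeholders rather than routines defined by the disclosure:

```python
def single_channel_tracking(image_stream, detect_features, track, correct):
    """Sketch of algorithm 700: detect temporary points in each new image,
    track previous points into it, drop failed tracks, and correct the
    merged set with the sensor's extrinsic parameters."""
    prev_image, prev_pts = None, []
    for current_image in image_stream:                                 # step 704
        temp_pts = detect_features(current_image)                      # step 706
        current_pts = []
        if prev_image is not None:                                     # step 708
            tracked, ok = track(prev_image, current_image, prev_pts)   # step 710
            current_pts = [p for p, good in zip(tracked, ok) if good]  # step 712
        final_pts = correct(current_pts + temp_pts)                    # step 714
        yield final_pts                                                # output 718
        prev_image, prev_pts = current_image, final_pts                # step 720
```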
[0116] Figure 8 illustrates an algorithm 800 for performing feature point matching (also referred to as "multi-channel processing") between images captured by different image sensors, according to an embodiment. In some embodiments, one or more steps of the method 800 are performed by one or more processors associated with the image processing module 620 of the system 600.
[0117] The input 802 to the algorithm 800 may include multiple images obtained by multiple different image sensors. In some embodiments, each image sensor provides a sequence of images obtained at multiple times. Image data collection can be synchronized so that the image sensors acquire images at substantially the same time. Alternatively, image data collection may be asynchronous, so that different image sensors obtain images at different times. Additional examples of asynchronous image data collection are described further herein.
[0118] The algorithm 800 may use the single-channel processing techniques described herein (e.g., the algorithm 700) to obtain temporally matched feature points from each image sequence of each image sensor. For example, multiple parallel processors 804a...804n may be used to obtain temporally matched feature points from the images of a corresponding number of image sensors. Similar to the steps of algorithm 700 described herein, single-channel processing may involve receiving a current image from the sequence of images generated by a single image sensor (step 806), using an optical flow algorithm to track feature points in the current image and match them with feature points in one or more previous images (step 808), obtaining a set of matched feature points by eliminating feature points that cannot be tracked (step 810), and correcting the feature points using the extrinsic calibration parameters of the image sensor (step 812). As described herein, the resulting set of feature points can be tracked and matched with feature points in subsequent images. The single-channel matching process can be repeated until all images in the image sequence have been processed.
[0119] In step 814, once a temporally matched feature point set has been obtained for each image sequence from each image sensor, a spatial matching algorithm can be used to match feature points from different image sequences with each other spatially. Since different image sensors can be configured to capture images with different fields of view, the same feature point in the scene can appear at different spatial positions in the images from different sensors. At least some image sensors may have overlapping fields of view to ensure that at least some feature points will be present in the image sequences from more than one image sensor. In some embodiments, the spatial matching algorithm analyzes the images obtained by different image sensors (e.g., images obtained at the same time or at different times) in order to identify and match feature points across the different image sequences. The final output 816 of the algorithm 800 may be a set of feature points that includes the temporally matched feature points within each image sequence generated by each image sensor and the spatially matched feature points across the different image sequences.
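One possible descriptor-based spatial matching step between two cameras with overlapping fields of view, assuming binary descriptors such as ORB and a brute-force Hamming matcher (the distance threshold is an arbitrary choice, not specified by the disclosure):

```python
import cv2

def spatial_match(desc_cam_a, desc_cam_b, max_hamming=40):
    """Match binary feature descriptors between two cameras with overlapping
    fields of view, as one possible spatial-matching step for algorithm 800."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_cam_a, desc_cam_b)
    return [m for m in matches if m.distance < max_hamming]  # keep close matches only
```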
[0120] In embodiments using multiple image sensors, collecting and/or processing image data from each of the image sensors can occur synchronously or asynchronously. For example, a synchronous image data collection scheme may involve each image sensor obtaining image data at substantially the same time and transmitting the image data to one or more processors at the same time. In contrast, an asynchronous image data collection scheme may involve different image sensors obtaining image data at different times and transmitting the image data to one or more processors at different times (e.g., sequentially). In an asynchronous scheme, some image sensors may obtain images at the same time while other image sensors obtain images at different times.
[0121] Figure 20 illustrates a synchronous image data collection scheme 2000 according to an embodiment. The scheme 2000 can be used to obtain and process data from any number of image sensors, such as two, three, four, five, six, seven, eight, nine, 10, 15, 20, 30, 40, 50, or more image sensors. In the scheme of Figure 20, image data is received synchronously from n image sensors. For example, at a first time k, each of the image sensors generates its own image data. The image data from each of the sensors can be transmitted simultaneously to one or more processors (e.g., one or more processors of an image processing module). The image data may be transmitted together with a timestamp indicating the time at which the data was generated. Similarly, at a second time k+1, each image sensor obtains image data and transmits it simultaneously to the one or more processors. This process can be repeated during the operation of the UAV. Synchronous image data collection may be beneficial for improving the ease and accuracy of feature point matching across images from different sensors. For example, images captured at the same time may exhibit fewer variations in exposure time, brightness, or other image characteristics that can affect the ease of feature point matching.
[0122] Figure 21 illustrates an asynchronous image data collection scheme 2100 according to an embodiment. The scheme 2100 can be used to obtain and process data from any number of image sensors, such as two, three, four, five, six, seven, eight, nine, 10, 15, 20, 30, 40, 50, or more image sensors. In the scheme of Figure 21, image data is received asynchronously from n image sensors. For example, at a first time k, a first image sensor obtains image data and transmits the image data to one or more processors (e.g., one or more processors of an image processing module). At a second time k+1, a second image sensor obtains image data and transmits it to the one or more processors. At a third time k+2, an nth image sensor obtains image data and transmits it to the one or more processors. Each item of image data can be transmitted along with a timestamp indicating the time at which it was obtained, for example, to facilitate downstream image processing. The time interval between different moments can be constant or variable. In some embodiments, the time interval between different moments is about 0.02 s, or in the range from about 0.05 s to about 0.2 s. This process can be repeated during the operation of the UAV until image data has been received from each of the image sensors. The order in which image data is acquired and received can be varied as needed. In addition, although Figure 21 illustrates image data being obtained from a single image sensor at each time, it should be understood that image data may be received from multiple sensors at some or all of the times. In some embodiments, as described further herein, asynchronous image data collection is achieved by selectively coupling different subsets of the image sensors to the one or more processors via a switching mechanism. Asynchronous image data collection can provide various advantages, such as relative ease of implementation, compatibility with a wider range of hardware platforms, and reduced computational load.
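A simple round-robin polling loop that mirrors the asynchronous scheme 2100; the camera interface and timing values below are assumptions for illustration, not part of the disclosure.

```python
import itertools
import time

def asynchronous_collection(cameras, process_image, period_s=0.05):
    """Poll image sensors one at a time in round-robin order, attaching a
    timestamp to each frame as in the asynchronous scheme 2100 (sketch)."""
    for cam in itertools.cycle(cameras):
        frame = cam.capture()                              # one sensor per time step (assumed API)
        process_image(frame, timestamp=time.time(), sensor_id=cam.id)
        time.sleep(period_s)                               # e.g. 0.02-0.2 s between sensors
```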
[0123] As described herein, the systems and devices of the present disclosure may include a sensor fusion module (e.g., the sensor fusion module 630 of the system 600) configured to implement a sensor fusion algorithm that processes image data and inertial data to obtain state information, initialization information, and/or extrinsic parameters. In some embodiments, the sensor fusion algorithm may be executed throughout the operation of the UAV (e.g., continuously or at predetermined time intervals) to provide updates in real time. Various types of sensor fusion algorithms are suitable for use with the embodiments described herein, such as Kalman filter-based algorithms or optimization algorithms. In some embodiments, the optimization methods described herein can be considered bundle-based algorithms. A Kalman filter-based algorithm can also be considered a special case of a bundle-based algorithm. In some embodiments, the main difference between the Kalman filter-based algorithms and the bundle-based algorithms described herein is the number of states to be optimized. Kalman filter-based algorithms may use one or two states, while the bundle-based algorithms herein may use more than three states (e.g., at least 5 states, at least 10 states, at least 20 states, at least 30 states, at least 40 states, or at least 50 states). In some embodiments, the bundle-based algorithms described herein may provide improved accuracy because the optimization process utilizes more information. In some embodiments, Kalman filter-based algorithms can provide improved speed and stability.
[0124] In some embodiments, the sensor fusion algorithm involves an optimization algorithm. An optimization algorithm can be used to determine a set of solution parameters that minimizes or maximizes the value of an objective function. In some embodiments, the optimization algorithm is an iterative optimization algorithm that generates estimates iteratively until the algorithm converges to a solution or the algorithm is stopped (e.g., because a time threshold is exceeded). In some embodiments, the optimization algorithm is linear, while in other embodiments the optimization algorithm is nonlinear. In some embodiments, the optimization algorithm involves iteratively linearizing and solving a nonlinear function. The embodiments herein may use a single optimization algorithm or multiple different types of optimization algorithms to estimate different types of UAV information. For example, the methods described herein may involve using a linear optimization algorithm to estimate certain values and a nonlinear optimization algorithm to estimate other values.
[0125] In some embodiments, the present disclosure provides an iterative optimization algorithm for using inertial data from at least one inertial sensor and image data from at least two image sensors to estimate UAV state information (e.g., position, orientation, velocity) and/or extrinsic parameters (e.g., the spatial relationships between the one or more inertial sensors and/or image sensors) at one or more moments during the operation of the UAV. Optionally, the iterative optimization algorithm may also determine the estimated values of the state information and/or the extrinsic parameters based on initial values of the state information and/or the extrinsic parameters. The iterative optimization algorithm may, for example, involve calculating a maximum a posteriori (MAP) estimate of the state information and/or extrinsic parameters based on the inertial data, the image data, and/or the initial values. In some embodiments, the objective function of the iterative optimization algorithm relates the actual values of the state information, initialization information, and/or extrinsic parameters to the estimated values of the state information, initialization information, and/or extrinsic parameters calculated from the inertial data, image data, and/or initial values. The objective function can be a linear function or a nonlinear function. Iterative solution techniques can be used to minimize or maximize the objective function in order to obtain a solution for the actual values of the state information, initialization information, and/or extrinsic parameters. For example, as discussed further herein, a nonlinear objective function can be linearized and solved iteratively.
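A schematic form of such a MAP objective, written as a nonlinear least-squares cost over a set of states and extrinsic parameters X; the residual terms shown here are standard visual-inertial components given for illustration, not the disclosure's exact formulation.

```latex
\min_{\mathcal{X}} \;
\underbrace{\lVert r_{0}(\mathcal{X}) \rVert^{2}}_{\text{prior / initial values}}
\;+\; \sum_{k}
\underbrace{\lVert r_{\mathrm{IMU}}\big(z^{\mathrm{IMU}}_{k,k+1},\,\mathcal{X}\big) \rVert^{2}_{\Sigma_{\mathrm{IMU}}}}_{\text{inertial residuals}}
\;+\; \sum_{i}\sum_{j}
\underbrace{\lVert r_{\mathrm{cam}}\big(z^{(i)}_{j},\,\mathcal{X}\big) \rVert^{2}_{\Sigma_{\mathrm{cam}}}}_{\text{reprojection residuals, camera } i}
```

Under this schematic form, the inertial residuals constrain consecutive states, the reprojection residuals couple the states to the matched feature observations of each camera, and the cost can be minimized by the iterative linearization described above.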
[0126] Figure 9 illustrates a method 900 for determining initialization information of an unmanned aerial vehicle (or any other movable object) using multiple sensors, according to an embodiment. The steps of the method 900 may be performed using any embodiment of the systems and devices described herein. For example, one or more processors carried by the UAV may be used to perform some or all of the steps of the method 900. The method 900 can be performed in conjunction with any of the various methods described herein.
[0127] As discussed herein, the initialization information to be determined using the method 900 may include one or more of the following: the orientation of the UAV (e.g., the orientation relative to the direction of gravity), the position of the UAV, or the velocity of the UAV. For example, the initialization information may include the orientation of the UAV (e.g., relative to the direction of gravity), the position of the UAV, and the velocity of the UAV. In some embodiments, the initialization information is determined at approximately the time when the UAV starts operating. For example, the time may be no more than about 50 ms, 100 ms, 200 ms, 300 ms, 400 ms, 500 ms, 600 ms, 700 ms, 800 ms, 900 ms, 1 s, 2 s, 3 s, 4 s, 5 s, 6 s, 7 s, 8 s, 9 s, 10 s, 30 s, or 60 s after the UAV starts operating. As another example, the time may be no more than about 50 ms, 100 ms, 200 ms, 300 ms, 400 ms, 500 ms, 600 ms, 700 ms, 800 ms, 900 ms, 1 s, 2 s, 3 s, 4 s, 5 s, 6 s, 7 s, 8 s, 9 s, 10 s, 30 s, or 60 s before the UAV starts operating.
[0128] In step 910, it is detected that the unmanned aerial vehicle has started operation. Step 910 may involve detecting one or more of the following: the unmanned aerial vehicle has been powered on, the unmanned aerial vehicle has taken off from a surface, or the unmanned aerial vehicle has started flying. The unmanned aerial vehicle can take off from an inclined surface or a non-inclined surface. The unmanned aerial vehicle can start flying from a surface (for example, the ground), start flying from a free-fall state (for example, in mid-air), start flying from a launching device, or start flying after being thrown into the air by a user. Optionally, step 910 may involve detecting one or more of the following: the power unit of the unmanned aerial vehicle has been activated, the output of the power unit is greater than or equal to a threshold, the altitude of the unmanned aerial vehicle is greater than or equal to a threshold, the speed of the unmanned aerial vehicle is greater than or equal to a threshold, or the acceleration of the unmanned aerial vehicle is greater than or equal to a threshold.
[0129] In step 920, inertial data is received from at least one inertial sensor carried by the unmanned aerial vehicle. The inertial data may include one or more measurements indicating the three-dimensional acceleration and three-dimensional angular velocity of the unmanned aerial vehicle. In some embodiments, the inertial data includes one or more measurement values obtained by at least one inertial sensor within a time interval from when the unmanned aerial vehicle starts operation and/or when the unmanned aerial vehicle is in flight.
[0130] In step 930, image data is received from at least two image sensors carried by the unmanned aerial vehicle. The image data may include one or more images of the surrounding environment of the unmanned aerial vehicle. In some embodiments, the image data may include one or more images obtained by each of the at least two image sensors in a time interval from when the unmanned aerial vehicle starts operation and/or when the unmanned aerial vehicle is in flight.
[0131] In step 940, the initialization information of the UAV is determined based on the inertial data and the image data. The initialization information may include the position, speed, and/or orientation of the UAV when the UAV starts to operate. In some embodiments, step 940 includes processing the image data according to the image processing algorithms described herein. For example, the image data may include one or more images obtained by each of the at least two image sensors within a time interval from the start of the operation of the movable object, and feature point detection algorithms, optical flow algorithms, and/or descriptor-based feature matching algorithms may be used to process the one or more images. As discussed herein, images obtained from different image sensors can be compared with each other, for example, in order to perform feature point matching. For example, one or more images obtained by the first image sensor may be compared with one or more images obtained by the second image sensor. Alternatively, only inertial data and image data may be used to determine the initialization information, without any other data, such as sensor data from other types of sensors. In some embodiments, the initialization information is directly determined from the inertial data and image data, without any initial estimate or initial value for the initialization information. Only data obtained while the UAV is in operation (for example, after the UAV has been powered on, or during flight of the UAV) may be used to determine the initialization information, without relying on any data obtained before the operation of the UAV (for example, before the UAV is powered on and/or in flight). For example, the initialization information can be determined during the operation of the unmanned aerial vehicle without any pre-initialization performed before the unmanned aerial vehicle starts operation.
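For illustration only, the following OpenCV-based sketch shows the kind of image processing referred to above: feature point detection, optical-flow tracking between consecutive frames of one camera, and descriptor-based matching between images from two different cameras. The function name and parameter values are assumptions made for the example, not part of the disclosed method.

```python
import cv2
import numpy as np

def track_and_match(prev_img, cur_img, other_cam_img):
    """Sketch of feature point detection, optical-flow tracking within one camera's
    image stream, and descriptor-based matching against another camera's image.
    All inputs are assumed to be 8-bit grayscale images."""
    # Feature point detection (corner features) in the earlier frame
    pts = cv2.goodFeaturesToTrack(prev_img, 200, 0.01, 10)
    if pts is None:
        return np.empty((0, 1, 2), np.float32), []

    # Optical flow: track the detected points into the next frame of the same camera
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, cur_img, pts, None)
    tracked = nxt[status.ravel() == 1]

    # Descriptor-based matching between images from two different cameras
    orb = cv2.ORB_create()
    _, des1 = orb.detectAndCompute(cur_img, None)
    _, des2 = orb.detectAndCompute(other_cam_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2) if des1 is not None and des2 is not None else []
    return tracked, matches
```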
[0132] In some embodiments, step 940 involves using inertial data and image data to generate an estimate of initialization information. For example, step 940 may be performed by using inertial data to generate a first estimate of initialization information and using image data to generate a second estimate of initialization information. The first estimation and the second estimation can be combined to obtain the initialization information of the unmanned aerial vehicle.
[0133] In some embodiments, optimization algorithms such as linear optimization algorithms, nonlinear optimization algorithms, or iterative nonlinear optimization algorithms are used to determine the initialization information. Exemplary algorithms suitable for use with the embodiments herein are described below.
[0134] The unmanned aerial vehicle may have a sensing system that includes m cameras (or other image sensor types) and one IMU (or other inertial sensor types). The IMU can be configured to output angular velocities around three axes and linear acceleration values along the three axes. The output frequency of the IMU can be higher than the output frequency of the cameras. For example, the sampling rate of the cameras can be assumed to be f_cam Hz, with N = f_cam × T. The system may receive (N+1)×m images in a time interval T after starting operation (for example, after power-on). The time interval T can correspond to multiple times t_0, t_1, t_2, t_3, t_4, t_5, ..., t_N and multiple UAV states, where each state includes the current position of the UAV (relative to the position at time t_0, where t_0 is the moment when operation started), the current velocity of the UAV (relative to the fuselage coordinate system of the UAV), and g_k, the acceleration of gravity (relative to the fuselage coordinate system of the UAV). The initial conditions can be specified at time t_0 (for example, the initial position can be taken as zero, since positions are expressed relative to t_0).
[0135] The number of feature points observed in the received (N+1)×m images may be M+1. It can be assumed that the i-th feature point is first observed by the j-th camera at time t_k (0≤k≤N), and that λ_i is the depth of the feature point at time t_k in the direction perpendicular to the image plane of the j-th camera. All of the UAV states and the feature depths λ_0, λ_1, λ_2, λ_3, ..., λ_M can form an overall state X.
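For illustration, the overall state X could be assembled into a single vector as in the following sketch; the layout (per-moment position, velocity, and gravity, followed by the feature depths) is an assumption consistent with the description above, not necessarily the exact representation used in the embodiments.

```python
import numpy as np

def build_overall_state(positions, velocities, gravities, feature_depths):
    """Hypothetical layout of the overall state X: the per-moment UAV states
    (position, velocity, and gravity expressed in the body frame) for times
    t_0..t_N, followed by the depths lambda_0..lambda_M of the feature points."""
    per_time = [np.concatenate([np.asarray(p, float), np.asarray(v, float), np.asarray(g, float)])
                for p, v, g in zip(positions, velocities, gravities)]
    return np.concatenate(per_time + [np.asarray(feature_depths, dtype=float)])
```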
[0136] Multiple IMU data may be received in the time interval (t_k, t_{k+1}), each corresponding to a small time interval, and the following equations can be defined:
[0137]
[0138]
[0139] where the rotation term represents the rotation from time t to time k and is obtained by integrating the angular velocity from the IMU, the acceleration term represents the acceleration relative to the fuselage coordinate system of the UAV at time t, and α and β represent integrals of the raw IMU data. An estimate can be determined from the IMU data as follows, where the covariance matrix represents the error caused by noise in the raw IMU data:
[0140]
[0141] where the first factor is the matrix on the left-hand side of the above equation, X is the overall state, and the final term is additive noise. These quantities can be calculated using pre-integration techniques known to those of ordinary skill in the art.
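As a rough illustration of such pre-integration, the sketch below uses one common textbook form (small-angle rotation updates, with α and β as the double and single integrals of body-frame acceleration over the interval); it is an assumed formulation for the example, not necessarily the exact equations of the embodiments.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector, used for small-angle rotation updates."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def preintegrate_imu(gyro, accel, dt):
    """Integrate raw IMU samples over (t_k, t_{k+1}): the gyro gives the rotation over
    the interval, while alpha and beta accumulate the double and single integrals of
    the acceleration expressed in the body frame at the start of the interval."""
    R = np.eye(3)          # rotation from the start of the interval to the current sample
    alpha = np.zeros(3)    # position-like pre-integrated term
    beta = np.zeros(3)     # velocity-like pre-integrated term
    for w, a in zip(gyro, accel):
        w = np.asarray(w, dtype=float)
        a = np.asarray(a, dtype=float)
        a_start_frame = R @ a                      # acceleration rotated into the start frame
        alpha += beta * dt + 0.5 * a_start_frame * dt ** 2
        beta += a_start_frame * dt
        R = R @ (np.eye(3) + skew(w * dt))         # first-order rotation update
    return R, alpha, beta
```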
[0142] For the cameras, if it is assumed that feature point l is first observed by camera c_n at time t_i and is then observed by camera c_m at time t_j, the following estimate can be obtained:
[0143]
[0144] where X is the overall state, the observation terms are the image coordinates of feature point l in the image obtained by the i-th camera at time t_j, one parameter represents the rotation of the i-th camera relative to the IMU, and another represents the translation of the i-th camera relative to the IMU. Similar to the IMU data, the following can be derived from the above equation:
[0145]
[0146] The goal may be to optimize the estimation of the UAV state by using (1) the estimates from the IMU and (2) the geometric constraints among the image sequences. The objective equation is to minimize the error from the IMU and the geometric constraint error among the image sequences:
[0147]
[0148] It can be further defined as
[0149]
[0150]
[0151]
[0152]
[0153] where the weighting matrix is a diagonal matrix that measures the uncertainty in the camera observation data.
[0154] In some embodiments, the solution process includes storing the estimates from the IMU in Λ_D, storing the geometric constraints among the image sequences in Λ_C, and then solving the following linear equation:
[0155] (Λ_D + Λ_C)X = (b_D + b_C)
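A minimal sketch of this solution step, assuming Λ_D, Λ_C, b_D, and b_C have already been assembled as NumPy arrays:

```python
import numpy as np

def solve_initialization(Lambda_D, b_D, Lambda_C, b_C):
    """Solve (Lambda_D + Lambda_C) X = (b_D + b_C) for the overall state X, combining
    the IMU-derived terms with the camera geometric-constraint terms. A least-squares
    solve is used so the sketch also tolerates a rank-deficient system."""
    A = Lambda_D + Lambda_C
    b = b_D + b_C
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X
```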
[0156] Method 900 can provide various advantages for unmanned aerial vehicle operation. For example, the method 900 can be used to determine the orientation of the unmanned aerial vehicle relative to the direction of gravity for various takeoff types (such as takeoff from an inclined surface, takeoff from mid-air or free fall, or takeoff by being manually launched or thrown by the user). Accordingly, the method 900 may also include using the orientation information to correct subsequent inertial data obtained from the inertial sensor. Moreover, the method 900 can be used to provide automatic initialization after the UAV has begun operation, even when the initial UAV state (e.g., position, orientation, speed, etc.) is completely unknown.
[0157] In addition to allowing automatic initialization of the UAV, the method of the present disclosure can also be used to perform re-initialization or error recovery after an error occurs in the UAV system. The method for reinitialization may be substantially similar to the method for initialization described herein, with the difference that the reinitialization is performed after the error has been detected, rather than after the unmanned aerial vehicle has started operation.
[0158] Figure 10 illustrates a method 1000 for error recovery of an unmanned aerial vehicle (or any other movable object) using multiple sensors according to an embodiment. The steps of method 1000 can be performed using any embodiment of the systems and devices described herein. For example, some or all of the steps of method 1000 may be performed using one or more processors carried by an unmanned aerial vehicle. The method 1000 can be performed in conjunction with any of the various methods described herein.
[0159] The method 1000 can be used to determine reinitialization information, which is used to reinitialize the unmanned aerial vehicle after an error has occurred so that normal operations can be resumed. Similar to the initialization information described herein, the reinitialization information can include one or more of the following: the orientation of the unmanned aerial vehicle (for example, the orientation relative to the direction of gravity), the position of the unmanned aerial vehicle, or the speed of the unmanned aerial vehicle. For example, the reinitialization information may include the orientation of the unmanned aerial vehicle (for example, the orientation relative to the direction of gravity), the position of the unmanned aerial vehicle, and the speed of the unmanned aerial vehicle. In some embodiments, the reinitialization information is determined approximately at the moment when the error occurs. For example, the moment may be no more than about 50ms, 100ms, 200ms, 300ms, 400ms, 500ms, 600ms, 700ms, 800ms, 900ms, 1s, 2s, 3s, 4s, 5s, 6s, 7s, 8s, 9s, 10s, 30s, or 60s after the occurrence of the error. As another example, the moment may be no more than about 50ms, 100ms, 200ms, 300ms, 400ms, 500ms, 600ms, 700ms, 800ms, 900ms, 1s, 2s, 3s, 4s, 5s, 6s, 7s, 8s, 9s, 10s, 30s, or 60s before the error occurs.
[0160] In step 1010, it is detected that an error has occurred during the operation of the UAV. The operation of the unmanned aerial vehicle may involve the unmanned aerial vehicle being powered on, the unmanned aerial vehicle having taken off from a surface, or the unmanned aerial vehicle having started flying. The error may involve a failure in one or more sensors, such as a failure in one or more of the at least one inertial sensor or the at least two image sensors. The error may involve a fault in an unmanned aerial vehicle component, such as a data collection module, an image processing module, a sensor fusion module, or a flight control module. For example, an unmanned aerial vehicle may include a sensor fusion module that uses an iterative optimization algorithm, and the failure may involve the iterative optimization algorithm failing to converge to a solution. Optionally, the UAV may include a state estimation module, as described further herein, and the state estimation module may use an iterative state estimation algorithm. The failure may involve the iterative state estimation algorithm failing to converge to a solution.
[0161] In step 1020, inertial data is received from at least one inertial sensor carried by the unmanned aerial vehicle. The inertial data may include one or more measurement values that indicate the three-dimensional acceleration and three-dimensional angular velocity of the UAV. In some embodiments, the inertial data includes one or more measured values obtained by the at least one inertial sensor in a time interval from when the error occurred.
[0162] In step 1030, image data is received from at least two image sensors carried by the unmanned aerial vehicle. The image data may include one or more images of the surrounding environment of the unmanned aerial vehicle. In some embodiments, the image data may include one or more images obtained by each of the at least two image sensors in a time interval from when the error occurred.
[0163] In step 1040, the reinitialization information of the UAV is determined based on the inertial data and the image data. The reinitialization information may include the position, speed, and/or orientation of the unmanned aerial vehicle when the error occurred. In some embodiments, step 1040 includes processing the image data according to the image processing algorithms described herein. For example, the image data may include one or more images obtained by each of the at least two image sensors within a time interval from when the error occurred, and a feature point detection algorithm, an optical flow algorithm, and/or a feature matching algorithm may be used to process the one or more images. As discussed herein, images obtained from different image sensors can be compared with each other, for example, in order to perform feature point matching. For example, one or more images obtained by the first image sensor may be compared with one or more images obtained by the second image sensor. Alternatively, only inertial data and image data may be used to determine the reinitialization information, without any other data, such as sensor data from other types of sensors. In some embodiments, the reinitialization information is directly determined from the inertial data and image data, without any initial estimation or initial value for the reinitialization information.
[0164] In some embodiments, step 1040 involves using inertial data and image data to generate an estimate of reinitialization information. For example, step 1040 may be performed by using inertial data to generate a first estimate of reinitialization information and using image data to generate a second estimate of reinitialization information. The first estimation and the second estimation can be combined to obtain reinitialization information of the UAV.
[0165] In some embodiments, optimization algorithms such as linear optimization algorithms, nonlinear optimization algorithms, iterative optimization algorithms, iterative linear optimization algorithms, or iterative nonlinear optimization algorithms are used to determine the reinitialization information. The optimization algorithm used to evaluate the reinitialization information can be substantially similar to the algorithm used to evaluate the initialization information described herein, with the difference that the relevant moments fall within the time interval after the error occurs rather than within the time interval after the unmanned aerial vehicle starts operating.
[0166] Method 1000 can provide various advantages for unmanned aerial vehicle operation. For example, the method 1000 may be used to determine the orientation of the unmanned aerial vehicle relative to the direction of gravity under various possible error conditions (such as when the unmanned aerial vehicle is in mid-air during flight). Accordingly, the method 1000 may involve using the orientation information to correct subsequent inertial data obtained from an inertial sensor. Moreover, the method 1000 can be used to provide automatic reinitialization after the UAV has begun operation, even when the UAV state (e.g., position, orientation, speed, etc.) after the occurrence of the error is completely unknown. For example, the method 1000 may involve using the determined reinitialization information to reinitialize the iterative state estimation algorithm implemented by the state estimation module. Advantageously, the reinitialization techniques described herein can be used to detect and respond to errors during the operation of the UAV in real time, thereby improving the reliability of UAV operation.
[0167] Figure 11 illustrates a method 1100 for calibrating one or more external parameters of an unmanned aerial vehicle (or any other movable object) using multiple sensors during operation of the unmanned aerial vehicle according to an embodiment. The steps of method 1100 can be performed using any embodiment of the systems and devices described herein. For example, some or all of the steps of method 1100 may be executed using one or more processors carried by an unmanned aerial vehicle. The method 1100 can be performed in combination with any of the various methods described herein.
[0168] The method 1100 may be used to determine the external parameters of the UAV during operation (for example, when the UAV is powered on, in flight, etc.), which may be referred to herein as "online" calibration. In some embodiments, the online calibration is performed continuously or at predetermined time intervals during the operation of the UAV, so as to allow real-time updating of external parameters. For example, the method 1100 may be executed every 0.1s during the operation of the unmanned aerial vehicle.
[0169] In step 1110, the initial value of one or more external parameters is received. In some embodiments, the external parameter includes the spatial relationship between at least two image sensors carried by the UAV. For example, the spatial relationship may include the relative position and relative orientation of the image sensor. The relative position and relative orientation of the at least two image sensors may be determined with respect to each other and/or with respect to the position and orientation of the at least one inertial sensor carried by the unmanned aerial vehicle.
[0170] Various methods can be used to obtain the initial value. In some embodiments, the initial value is received from a memory device associated with the UAV (eg, a memory device carried on the UAV). The initial value can be determined before the operation of the unmanned aerial vehicle. For example, the iterative optimization algorithm described herein can be used to determine the initial value before the operation of the unmanned aerial vehicle. As another example, the initial value can be measured by the user before operation. Alternatively, the initial value may be a factory calibration value determined when the UAV is manufactured. In some embodiments, the initial value may be determined based on knowledge of the configuration of the UAV. For example, image sensors and/or inertial sensors may be coupled to the unmanned aerial vehicle at certain fixed positions (for example, a selected set of positions available for installing sensors), and the initial value may be determined based on information related to the fixed positions.
[0171] In some embodiments, the initial value is intended to provide a rough estimate of the actual value of the external parameter and is not intended to be very accurate. In contrast to other calibration methods, the methods provided herein do not require accurate initial values of the external parameters in order to perform online calibration. For example, the initial value of an external parameter (for example, a relative position) may not differ from the actual value of the parameter by more than about 0.1cm, 0.25cm, 0.5cm, 0.75cm, 1cm, 1.25cm, 1.5cm, 1.75cm, 2cm, 2.25cm, 2.5cm, 2.75cm, 3cm, or 5cm. Alternatively, the initial value of the external parameter (for example, the relative position) may differ from the actual value by at least about 0.1cm, 0.25cm, 0.5cm, 0.75cm, 1cm, 1.25cm, 1.5cm, 1.75cm, 2cm, 2.25cm, 2.5cm, 2.75cm, 3cm, or 5cm. As another example, the initial value of an external parameter (for example, a relative orientation) may not differ from the actual value by more than about 0.1 degrees, 0.25 degrees, 0.5 degrees, 0.75 degrees, 1 degree, 1.25 degrees, 1.5 degrees, 1.75 degrees, 2 degrees, 2.25 degrees, 2.5 degrees, 2.75 degrees, 3 degrees, or 5 degrees. Alternatively, the initial value of the external parameter (for example, the relative orientation) may differ from the actual value by at least about 0.1 degrees, 0.25 degrees, 0.5 degrees, 0.75 degrees, 1 degree, 1.25 degrees, 1.5 degrees, 1.75 degrees, 2 degrees, 2.25 degrees, 2.5 degrees, 2.75 degrees, 3 degrees, or 5 degrees.
[0172] In step 1120, during operation of the unmanned aerial vehicle, inertial data is received from at least one inertial sensor carried by the unmanned aerial vehicle. The inertial data may include one or more measurement values that indicate the three-dimensional acceleration and three-dimensional angular velocity of the UAV. In some embodiments, the inertial data includes one or more measurements obtained by the at least one inertial sensor at at least two, three, four, five, six, seven, eight, nine, 10, 20, 30, 40, or 50 moments.
[0173] In step 1130, during operation of the unmanned aerial vehicle, image data is received from at least two image sensors carried by the unmanned aerial vehicle. The image data may include one or more images of the environment surrounding the movable object. In some embodiments, the image data may include one or more images obtained by each of the at least two image sensors at at least two, three, four, five, six, seven, eight, nine, 10, 20, 30, 40, or 50 moments.
[0174] In step 1140, an estimated value of one or more external parameters is determined based on the initial value, inertial data, and image data. The estimated value can be determined during the operation of the unmanned aerial vehicle. Optionally, the inertial data and the image data may be the only sensor data used to determine the estimated value of the external parameter. Step 1140 may involve using feature point detection algorithms, optical flow algorithms, and/or feature matching algorithms to process one or more images obtained by the image sensor. Optionally, step 1140 may involve comparing one or more images obtained by the first sensor with one or more images obtained by the second sensor.
[0175] In some embodiments, an optimization algorithm such as a non-linear optimization algorithm, a linear optimization algorithm, an iterative optimization algorithm, or an iterative non-linear optimization algorithm is used to determine the estimated value. The iterative optimization algorithm may include calculating a maximum a posteriori probability (MAP) estimate of the one or more external parameters based on the initial values, inertial data, and image data. Exemplary algorithms suitable for use with the embodiments herein are described below.
[0176] The unmanned aerial vehicle may have a sensing system including m cameras (or other image sensor types) and one IMU (or other inertial sensor types). The external parameters of the cameras can be the rotation and translation of the i-th camera relative to the IMU, where 1≤i≤m, using notation similar to that previously described herein. Similar to the other optimization algorithms provided herein, an optimization objective function can be established that includes the integral estimates from the IMU and the observations from the cameras.
[0177] It can be assumed that over the time period from t_0 to t_N (t_0, t_1, t_2, t_3, t_4, t_5, ..., t_N), the UAV states include, for each time k, the current position of the UAV (relative to the position at t_0, where t_0 is the moment when the operation started), the current velocity of the UAV at time k (relative to the fuselage coordinate system of the UAV), and the current orientation of the UAV at time k (relative to the orientation at t_0). The initial conditions can be specified at t_0. It can be assumed that the i-th feature point is first observed by the j-th camera at time t_k (0≤k≤N), and that λ_i is the depth of the feature point at time t_k in the direction perpendicular to the image plane of the j-th camera. The camera parameters to be estimated are the rotation and translation of the i-th camera relative to the IMU, where 1≤i≤m. The unknown quantities to be estimated are the UAV states, the external calibration parameters, and the feature depths λ_0, λ_1, λ_2, λ_3, ..., λ_M. All of these unknown quantities form a vector X, which is referred to herein as the "overall state".
[0178] In some embodiments, the target equation is defined as:
[0179]
[0180] where one term relates the overall state X to the integral estimates from the IMU, and another term relates the overall state X to the estimates from the image sensors. ||b_p - Λ_p X|| encodes the prior information about X. Because these terms are nonlinear in X, and thus may be difficult to solve directly, the algorithm can operate on the error-state representation δX, and the functions can be linearized by a first-order Taylor expansion:
[0181]
[0182]
[0183] Here, the estimated value of the overall state X differs from the actual value by some error term δX. Note that, for the orientation component, the error-state representation may not be the full parameterization but rather a minimal representation.
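The sketch below shows one common way (assumed here for illustration, using SciPy's rotation utilities) of applying such an error state: the position and velocity errors are applied additively, while the minimal three-parameter orientation error is applied as a small rotation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def apply_error_state(position, velocity, rotation, d_pos, d_vel, d_theta):
    """Apply an error state dX = (d_pos, d_vel, d_theta): position and velocity are
    updated additively, and the minimal rotation error d_theta (a 3-vector) is
    composed multiplicatively with the current rotation matrix."""
    new_position = np.asarray(position, float) + np.asarray(d_pos, float)
    new_velocity = np.asarray(velocity, float) + np.asarray(d_vel, float)
    new_rotation = Rotation.from_rotvec(d_theta).as_matrix() @ np.asarray(rotation, float)
    return new_position, new_velocity, new_rotation
```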
[0184] Considering the actual physical model of the UAV dynamics and the principles of camera geometry, the residuals can be defined as:
[0185]
[0186]
[0187] where one quantity denotes the three elements of the corresponding estimate, and the other is the coordinate of feature l in frame j as estimated from the pose information. The other symbols in the equation are defined above.
[0188] As discussed below, there are two cases for the derivatives of these residuals with respect to δX.
[0189] In one case, suppose that feature l is first observed by camera ci at time i, and then by camera cj at the same time:
[0190]
[0191] where the quantities are normalized coordinates. The following equations can be obtained:
[0192]
[0193]
[0194] In another case, suppose that feature l is first observed by camera ci at time i, and then observed by camera cj at subsequent time j:
[0195]
[0196] If ci=cj, the following equation can be obtained:
[0197]
[0198]
[0199]
[0200]
[0201]
[0202]
[0203]
[0204] If ci≠cj, the following equation can be obtained:
[0205]
[0206]
[0207]
[0208]
[0209]
[0210]
[0211]
[0212]
[0213]
[0214] By substituting the above equation into the target equation, the following equation can be obtained:
[0215]
[0216] In some embodiments, an initial value can be provided, and the target equation can be solved iteratively. The initial value can be obtained by using the initialization algorithm or reinitialization algorithm described herein, by adding the integral estimate from the IMU to the optimal estimate at the previous moment, or by another direct initialization of the estimate. The target equation can be solved iteratively to obtain δX, and the estimate is then updated according to the following formula
[0217]
[0218] until δX≈0. Finally, after convergence, the resulting estimate is the UAV state output by the system.
[0219] In some embodiments, if δX does not approach zero over multiple iterations, such that the parameter calibration algorithm fails to converge to a solution, then this can be considered an error in the system, and reinitialization can be performed as described herein to recover from the error.
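A schematic sketch of this solve-and-update loop is given below; the linearized solve and the error-state update are placeholder callables (assumptions for the example), and a convergence flag is returned so that a failure can trigger the reinitialization described above.

```python
import numpy as np

def refine_state(x_init, solve_delta, apply_delta, tol=1e-8, max_iters=30):
    """Starting from an initial value, repeatedly solve the linearized target equation
    for dX and apply it to the current estimate until dX is approximately zero.
    Returns the refined estimate and a flag indicating whether convergence was reached;
    a False flag can be treated as an error that triggers reinitialization."""
    x = x_init
    for _ in range(max_iters):
        dx = solve_delta(x)               # solve the linearized target equation at the current estimate
        x = apply_delta(x, dx)            # update the estimate with the error state
        if np.linalg.norm(dx) < tol:      # dX ~ 0: converged, x is the state output by the system
            return x, True
    return x, False                       # failed to converge: treat as an error
```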
[0220] The factory-calibrated external parameters and/or the optimal estimates of the parameters from the previous moment can be input into the target equation as part of the UAV state, and the equation can then be solved and updated iteratively in order to modify the estimated external parameters and minimize any errors in the target equation. The system can thereby detect errors in the external parameters in real time (by solving for δX) and correct those errors (by updating the estimates).
[0221] Optionally, after step 1140, the state of the unmanned aerial vehicle (for example, position, orientation, and/or speed) may be determined based on the estimated value of the external parameter. For example, external parameters can be used to fuse sensor data from image sensors and inertial sensors to calculate status information. In some embodiments, the state information is determined relative to the previous UAV state at a previous time during UAV operation. The previous moment may be the first moment when the status information is available, for example, when the UAV starts to operate, when initialization occurs, or when reinitialization occurs. Alternatively, the state information may be determined with respect to the global coordinate system rather than with respect to the previous UAV state.
[0222] Figure 12 illustrates a method 1200 for calibrating one or more external parameters of an unmanned aerial vehicle (or any other movable object) having multiple sensors in an initial configuration according to an embodiment. The steps of method 1200 can be performed using any embodiment of the systems and devices described herein. For example, some or all of the steps of method 1200 may be performed using one or more processors carried by an unmanned aerial vehicle. The method 1200 can be performed in combination with any of the various methods described herein. Similar to method 1100, method 1200 can be used to perform online calibration during the operation of the unmanned aerial vehicle.
[0223] In step 1210, it is detected that the initial configuration of the plurality of sensors carried by the UAV has been modified. The plurality of sensors may include at least one inertial sensor and at least two image sensors. The initial configuration may have been modified by removing at least one sensor from the plurality of sensors, adding at least one sensor to the plurality of sensors, changing the position and/or orientation of a sensor of the plurality of sensors, or a combination thereof. In some embodiments, the initial configuration is modified before the operation of the UAV (eg, before the UAV is in flight and/or has been powered on), and the modification is detected during the operation of the UAV. In other embodiments, the configuration is modified during operation of the unmanned aerial vehicle. Various methods can be used to detect the modification. For example, if, in the iterative process discussed herein, the values of δθ and δT within δX are not close to zero, then the modification can be detected, which means that additional updates should be performed.
[0224] In step 1220, during the operation of the UAV, inertial data is received from at least one inertial sensor carried by the UAV. The inertial data may include one or more measurements indicating the three-dimensional acceleration and three-dimensional angular velocity of the unmanned aerial vehicle. In some embodiments, the inertial data includes one or more measurement values obtained by the at least one inertial sensor at at least two, three, four, five, six, seven, eight, nine, 10, 20, 30, 40, or 50 moments.
[0225] In step 1230, during the operation of the UAV, image data is received from at least two image sensors carried by the UAV. The image data may include one or more images of the environment surrounding the movable object. In some embodiments, the image data may include one or more images obtained by each of the at least two image sensors at at least two, three, four, five, six, seven, eight, nine, 10, 20, 30, 40, or 50 moments.
[0226] In step 1240, one or more external parameters are estimated in response to detecting that the initial configuration has been modified. The one or more external parameters may include the spatial relationship between the multiple sensors in the modified configuration. In some embodiments, the spatial relationship includes the relative position and relative orientation of the image sensor, which may be determined with respect to each other and/or with respect to the inertial sensor. The external parameters can be estimated in various ways. In some embodiments, one or more external parameters are estimated based on the inertial data and image data received in step 1220 and step 1230. Optionally, the inertial data and the image data may be the only sensor data used to determine the estimated value of the external parameter. Step 1240 may involve using feature point detection algorithms, optical flow algorithms, and/or feature matching algorithms to process one or more images obtained by the image sensor. Optionally, step 1240 may involve comparing one or more images obtained by the first sensor with one or more images obtained by the second sensor.
[0227] In some embodiments, the external parameters are estimated based on the initial values of the one or more external parameters. Various methods can be used to obtain the initial values. In some embodiments, the initial value is received from a memory device associated with the UAV (eg, a memory device carried on the UAV). The initial value can be determined before the operation of the unmanned aerial vehicle. For example, the iterative optimization algorithm described herein can be used to determine the initial value before the operation of the UAV. As another example, the initial value can be measured by the user before operation. Alternatively, the initial value may be a factory calibration value determined when the UAV is manufactured. In some embodiments, the initial value may be determined based on knowledge of the configuration of the UAV. For example, image sensors and/or inertial sensors may be coupled to the unmanned aerial vehicle at certain fixed positions (for example, a selected set of positions available for installing sensors), and the initial value may be determined based on information related to the fixed positions.
[0228] In some embodiments, optimization algorithms such as nonlinear optimization algorithms, linear optimization algorithms, iterative optimization algorithms, iterative nonlinear optimization algorithms, or iterative linear optimization algorithms are used to estimate the external parameters. The iterative optimization algorithm may include calculating a maximum a posteriori probability (MAP) estimate of the one or more external parameters based on the initial values, inertial data, and image data. The iterative optimization algorithm may be similar to the algorithm previously described herein with respect to method 1100.
[0229] Optionally, after step 1240, the state of the unmanned aerial vehicle (for example, position, orientation, and/or speed) may be determined based on the estimated external parameters. For example, external parameters can be used to fuse sensor data from image sensors and inertial sensors to calculate status information. In some embodiments, the state information is determined relative to a previous UAV state at a previous time during UAV operation. The previous moment may be the first moment when the status information is available, for example, when the UAV starts to operate, when initialization occurs, or when reinitialization occurs. Alternatively, the state information may be determined with respect to the global coordinate system rather than with respect to the previous UAV state.
[0230] The automatic parameter calibration methods described herein (for example, methods 1100, 1200) can be beneficial in ensuring the accuracy and reliability of UAV sensor data processing, because external parameters may change during UAV operation, for example due to shocks, collisions, or other events that change the spatial relationships of the sensors with respect to each other. For example, the methods described herein can be used to continuously estimate the spatial relationship (eg, relative position and orientation) between the image sensors and the inertial sensor during the operation of the UAV, and to detect and correct errors in the estimated values of the spatial relationship during operation. In addition, the implementations of online calibration described herein can also advantageously avoid the need for accurate calibration of the UAV before operation (which may be referred to herein as "offline" calibration). This can allow a "plug and play" approach, in which the sensor configuration of the UAV can be modified (for example, by adding one or more sensors, removing one or more sensors, or moving one or more sensors), and the UAV can be operated immediately after the modification, without the need for a lengthy offline calibration process to determine the new external parameters of the sensor configuration. In some embodiments, the parameter calibration methods provided herein enable external parameters to be determined during the operation of the UAV without performing any parameter calibration before the operation of the UAV.
[0231] Figure 13 illustrates a method 1300 for estimating state information of an unmanned aerial vehicle (or any other movable object) using multiple sensors during operation of the unmanned aerial vehicle according to an embodiment. The steps of method 1300 can be performed using any embodiment of the systems and devices described herein. For example, some or all of the steps of the method 1300 may be performed using one or more processors carried by the unmanned aerial vehicle. The method 1300 can be performed in conjunction with any of the various methods described herein.
[0232] The method 1300 may be used to estimate the current state of the UAV during operation (eg, when the UAV is powered on, in flight, etc.). State estimation may involve determining various types of state information, such as the position, orientation, speed, and/or acceleration of the UAV. The state information may be determined relative to a previous state of the UAV at an earlier time, or may be determined in an absolute manner (for example, relative to the global coordinate system). In some embodiments, the state estimation is performed continuously or at predetermined time intervals during the operation of the UAV in order to allow real-time updating of the state information. For example, the method 1300 may be executed every 0.1s during the operation of the unmanned aerial vehicle.
[0233] In step 1310, the previous status information of the UAV is received. The previous state information may include the position, orientation, speed, and/or acceleration of the unmanned aerial vehicle at a previous time during the operation of the unmanned aerial vehicle. In some embodiments, an iterative optimization algorithm (eg, the same algorithm used in step 1340 of method 1300) is used to obtain the previous state information. State information from one or more previous moments can be stored on a memory device associated with the UAV to aid in the estimation of updated state information at subsequent moments.
[0234] In step 1320, inertial data from at least one inertial sensor carried by the unmanned aerial vehicle is received. The inertial data may include inertial measurement data obtained by the inertial sensor at at least two, three, four, five, six, seven, eight, nine, 10, 20, 30, 40, or 50 moments during the operation of the unmanned aerial vehicle. The inertial data can indicate the three-dimensional acceleration and three-dimensional angular velocity of the UAV.
[0235] In step 1330, image data from at least two image sensors carried by the unmanned aerial vehicle is received. The image data may include images obtained by each image sensor at at least two, three, four, five, six, seven, eight, nine, 10, 20, 30, 40, or 50 moments during the operation of the UAV. The images may be images of the environment surrounding the unmanned aerial vehicle.
[0236] In step 1340, during the operation of the unmanned aerial vehicle, the updated state information of the unmanned aerial vehicle is determined based on the previous state information, inertial data and/or image data. The updated status information may include the position, orientation, speed, and/or acceleration of the UAV. The updated status information may be the current status information of the UAV at the current moment. Optionally, the inertial data and image data may be the only sensor data used to determine updated status information. Step 1340 may involve using feature point detection algorithms, optical flow algorithms, and/or feature matching algorithms to process the images obtained by each image sensor. Optionally, step 1340 may involve comparing one or more images obtained by the first sensor with one or more images obtained by the second sensor.
[0237] In some embodiments, optimization algorithms such as nonlinear optimization algorithms, linear optimization algorithms, iterative optimization algorithms, iterative nonlinear optimization algorithms, or iterative linear optimization algorithms are used to determine the estimated value. The iterative optimization algorithm may include calculating a maximum a posteriori probability (MAP) estimate based on the initial values, inertial data, and image data. Exemplary algorithms suitable for use with the embodiments herein are described below.
[0238] The unmanned aerial vehicle may have a sensing system including m cameras (or other image sensor types) and one IMU (or other inertial sensor types). The output frequency of the IMU can be higher than the output frequency of the cameras. The external parameters of the cameras can be the rotation of the i-th camera relative to the IMU and the translation of the i-th camera relative to the IMU, where 1≤i≤m. Similar to the other optimization algorithms provided herein, an optimization objective function can be established that includes the integral estimates from the IMU and the observations from the cameras.
[0239] It can be assumed that over the time period from t_0 to t_N (t_0, t_1, t_2, t_3, t_4, t_5, ..., t_N), the UAV states include, for each time k, the current position of the UAV (relative to the position at t_0, when the operation started), the current velocity of the UAV at time k (relative to the fuselage coordinate system of the UAV), and the current orientation of the UAV at time k (relative to the orientation at t_0). The initial conditions can be specified at t_0. The number of feature points observed in the (N+1)×m images may be M. It can be assumed that the i-th feature point is first observed by the j-th camera at time t_k (0≤k≤N), and that λ_i is the depth of the feature point at time t_k in the direction perpendicular to the image plane of the j-th camera. The unknown quantities to be estimated are the UAV states, the external calibration parameters, and the feature depths λ_0, λ_1, λ_2, λ_3, ..., λ_M. All of these unknown quantities form a vector X, which is referred to herein as the "overall state".
[0240] In some embodiments, the target equation is defined as:
[0241]
[0242] where ||b_p - Λ_p X|| stores the prior information (representing an estimate of X), one term encodes the relationship between the overall state X and the integral estimates from the IMU, and another term encodes the relationship between the overall state X and the estimates from the image sensors. These terms can be derived as discussed herein. Because they are nonlinear, they can be expanded with a first-order Taylor expansion to obtain:
[0243]
[0244]
[0245] The following equation can be obtained by substituting the above two equations into the target equation:
[0246]
[0247] In some embodiments, an initial value can be provided, and a Gauss-Newton algorithm can be used to solve the target equation iteratively. The initial value can be obtained by using the initialization algorithm or reinitialization algorithm described herein, by adding the integral estimate from the IMU to the optimal estimate from the previous moment, or by another direct initialization of the estimate. The target equation can then be solved iteratively to obtain δX, and the estimate is then updated according to the following formula
[0248]
[0249] until δX≈0, thereby minimizing the target equation. The resulting estimate is then the UAV state output by the system.
[0250] In some embodiments, if δX does not approach zero over multiple iterations, such that the state estimation algorithm fails to converge to a solution, then this can be considered an error in the system, and reinitialization can be performed as described herein to recover from the error.
[0251] In some embodiments, sensor data from one or more previous moments can be used to estimate the current state information. For example, previous inertial data and/or image data from at least one, two, three, four, five, six, seven, eight, nine, 10, 20, 30, 40, or 50 previous moments can be used to estimate the current state information. Various methods, such as sliding window filters, can be used to determine the amount of previous state information to be used to estimate the current state of the UAV.
[0252] Figure 14 illustrates a sliding window filter for selecting previous state information according to an embodiment. The size of the sliding window may be K, so that sensor data (for example, inertial data and/or image data) from K moments is used to estimate the current state. Although K=4 in Figure 14, it should be understood that K can be any suitable value, such as 20. The K moments may include K-1 previous moments and the current moment. When new sensor data is obtained at a later moment, the new data can be added to the sliding window, and data from a previous moment can be removed from the sliding window in order to maintain a constant window size K. For example, in Figure 14, the first sliding window 1400a contains data from moment 1 to moment 4, the second sliding window 1400b contains data from moment 2 to moment 5, the third sliding window 1400c contains data from moment 3 to moment 6, and the fourth sliding window 1400d contains data from moment 4 to moment 7. In some embodiments, the moment that is removed is the earliest moment in the sliding window (e.g., first in, first out (FIFO)). In other embodiments, the removed moment may not be the earliest moment, and marginalization schemes other than FIFO may be used to determine which moments to keep and which to exclude, such as first in, last out (FILO) or a mix of FIFO and FILO. In some embodiments, a moment is removed based on checking the parallax between that moment and its neighboring moments. If the parallax is not large enough for a stable arithmetic solution, the Schur complement method can be used, for example, to marginalize out and discard that moment.
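A minimal FIFO version of the sliding window can be sketched as follows; the alternative marginalization schemes mentioned above are not shown.

```python
from collections import deque

# FIFO sliding window of size K: when data for a new moment arrives, the oldest
# moment is dropped so that only the most recent K moments are used for estimation.
K = 4
window = deque(maxlen=K)
for moment in range(1, 8):          # moments 1..7, as in the Figure 14 example
    window.append(moment)
    if len(window) == K:
        print(list(window))         # [1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6], [4, 5, 6, 7]
```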
[0253] In some embodiments, b_p and Λ_p can store information from previous moments that is used to estimate the current state. For example, b_p and Λ_p may store the sensor data (for example, inertial data and/or image data) from previous moments, from the time when the UAV started operation (or was reinitialized after an error) up to the current time. Optionally, b_p and Λ_p may exclude data from the K moments already included in the sliding window described herein. In such an implementation, when data is removed from the sliding window, it can be used to update b_p and Λ_p.
[0254] In some embodiments, algorithms can be used to detect errors in the input sensor data in real time. For example, an error may occur if one of the image sensors malfunctions or is blocked. As another example, an error may occur if there is excessive noise in the image data obtained by an image sensor. When solving for δX and updating the estimate, error detection can be performed by detecting whether the δX corresponding to the image data and/or inertial data converges (for example, whether δX becomes smaller and smaller and/or approaches zero over multiple iterations). If δX converges, the sensor data can be considered error-free. However, if δX does not converge, the sensor data can be considered erroneous, and the sensor data that causes the convergence problem can be removed from the optimization process.
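A simple convergence check of this kind might look like the following sketch, which takes the norms of successive δX updates and reports whether they are shrinking toward zero; the tolerance value is an illustrative assumption.

```python
import numpy as np

def updates_converging(delta_updates, tol=1e-6):
    """Return True if the norms of successive dX updates are non-increasing and the
    final update is close to zero. Sensor data whose updates fail this check can be
    treated as erroneous and removed from the optimization process."""
    norms = [float(np.linalg.norm(d)) for d in delta_updates]
    if not norms:
        return False
    shrinking = all(later <= earlier for earlier, later in zip(norms, norms[1:]))
    return shrinking and norms[-1] < tol
```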
[0255] Optionally, after step 1340, the updated status information may be output, for example, to a control module for controlling the movement of the UAV. For example, the control module may use the updated status information to control one or more power units of the unmanned aerial vehicle, so as to realize autonomous or semi-autonomous navigation, obstacle avoidance, path planning, environmental mapping, etc., for example.
[0256] Method 1300 may provide several advantages for unmanned aerial vehicle operation. For example, the state estimation method described herein can be more accurate than other types of state estimation methods, while saving computing resources. The estimated state information can be more accurate than state information estimated using only inertial data or image data. In some embodiments, the estimated value of the state information (eg, position information) does not differ from the actual value of the state information by more than about 0.1 cm, 0.25 cm, 0.5 cm, 0.75 cm, 1 cm, 1.25 cm, 1.5 cm, 1.75 cm, 2cm, 2.25cm, 2.5cm, 2.75cm, 3cm or 5cm. The accuracy can be further improved by using more image sensors. In addition, the embodiments herein are suitable for use with multiple image sensors, thereby allowing the system to continue to operate even if one or more of the image sensors fails or is blocked. In some embodiments, the state estimation method reduces noise and therefore improves the robustness and stability of the UAV system.
[0257] Figure 15 illustrates a method 1500 for performing state estimation and/or parameter calibration for an unmanned aerial vehicle (or any other movable object) according to an embodiment. As described herein, in method 1500, inertial data 1502 and image data 1504 are received from at least one inertial sensor and at least two image sensors carried by an unmanned aerial vehicle. The inertial data 1502 and the image data 1504 may be input into the linear sliding window filter 1506 and the nonlinear sliding window filter 1508. The linear sliding window filter 1506 can use the FIFO method to remove data from a previous moment when new data is received. The nonlinear sliding window filter 1508 can remove data from a previous moment by using a marginalization (exclusion) scheme when new data is received.
[0258] In some embodiments, state estimation and/or parameter calibration are performed by inputting the inertial data and image data selected by the nonlinear sliding window filter 1508 into the nonlinear solver 1510, thereby generating a solution that provides estimated values of the state information and/or the external parameters. The nonlinear solver 1510 may be a nonlinear optimization algorithm, such as the embodiments described herein. The method 1500 may include detecting whether the nonlinear solver 1510 has converged in order to produce a solution 1512. If the nonlinear solver 1510 has converged and produced a solution, the solution may be output 1514, for example, to another UAV component, such as a flight control module. The method can then continue to receive new sensor data and remove the previous sensor data 1516 via the nonlinear sliding window filter 1508, and the process can be repeated to continue generating updated state estimates.
[0259] If the nonlinear solver 1510 fails to converge, the method 1500 may continue to use the linear solver 1518 to generate a solution that provides an estimate of state information and output the solution 1514. The linear solver 1518 may be a linear optimization algorithm, such as the embodiments described herein. In some embodiments, failure of the nonlinear solver 1510 to converge is considered an error, while the linear solver 1518 may implement a reinitialization algorithm to recover from the error. The solution (eg, reinitialization information) provided by the linear solver 1518 can be used as an initial value for subsequent state estimation performed by the nonlinear solver 1510.
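The overall flow of method 1500 can be sketched as follows, with hypothetical solver interfaces standing in for the nonlinear solver 1510 and the linear solver 1518.

```python
def estimate_with_fallback(inertial_data, image_data, nonlinear_solver, linear_solver, prior=None):
    """Run the nonlinear solver first; if it fails to converge (treated as an error),
    fall back to the linear solver, whose solution (the reinitialization information)
    is also used as the initial value for the next nonlinear estimate. The solver
    callables and their signatures are assumptions made for this sketch."""
    solution, converged = nonlinear_solver(inertial_data, image_data, prior)
    if converged:
        return solution, solution        # output the solution and reuse it as the next prior
    reinit = linear_solver(inertial_data, image_data)
    return reinit, reinit                # output the linear solution and seed the next nonlinear solve
```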
[0260] The various methods described herein can be implemented by any suitable system or device. In some embodiments, the methods herein are executed by a hardware platform that includes a computing platform, a flight control module, a sensor module, and a data acquisition subsystem. The computing platform may include a sensor fusion module that receives and processes sensor data according to the various sensor fusion algorithms described herein. Any suitable combination of hardware components and software components can be used to implement the computing platform. For example, in some embodiments, the computing platform uses an NVIDIA Tegra K1 as the basis for an ARM-architecture CPU and GPU computing platform. The computing platform can be designed to meet certain performance requirements, such as stability and scalability. Table 1 provides some exemplary requirements and test methods for the computing platform:
[0261] Table 1: Exemplary requirements and test methods
[0262]
[0263] The flight control module may include any suitable combination of hardware components and software components for controlling the operation of the unmanned aerial vehicle, such as components for controlling the position, orientation, speed, and/or acceleration of the unmanned aerial vehicle. In some embodiments, the flight control module includes one or more components for controlling the actuation of the power unit of the unmanned aerial vehicle, so as to achieve a desired position, orientation, speed, etc., for example. The flight control module may be communicatively coupled to a computing platform to receive data (eg, status information) from the platform and use the data as input to one or more flight control algorithms. For example, the flight control module may include a low-level API to receive data from the computing platform, for example, to receive data at a frequency of at least 50 Hz.
[0264] The sensor module may include one or more sensors, such as the at least one inertial sensor and at least two image sensors described herein. In some embodiments, the sensor module includes three global shutter cameras, two of which are set to face forward in a binocular configuration (for example, along the direction of movement of the UAV), with one camera set to face downward in a monocular configuration. The cameras may have a sampling frequency of about 20 Hz. In some embodiments, the sensor module includes an IMU configured to obtain the angular velocity and linear acceleration of the UAV at a frequency of at least 100 Hz. The IMU can also be configured to provide an angle value, for example using uncorrected integrals from a gyroscope. Optionally, the sensor module may also include other types of sensors, such as compasses, barometers, GPS, and/or ultrasonic sensors (for example, a forward-facing ultrasonic sensor and a downward-facing ultrasonic sensor). The sensors of the sensor module may be coupled to the unmanned aerial vehicle fuselage using a rigid connection that restricts the movement of the sensors relative to the fuselage. Optionally, a shock absorption system such as a damper or shock absorber may be installed between the sensors and the UAV fuselage in order to reduce undesired movement of the sensors.
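For illustration, a hypothetical configuration record mirroring this sensor module might look like the following; the field names are assumptions made for the example, not an interface of the described system.

```python
# Hypothetical sensor-module configuration: two forward-facing global-shutter cameras
# in a binocular pair, one downward-facing monocular camera at ~20 Hz, and an IMU
# sampled at 100 Hz or more, plus optional auxiliary sensors.
SENSOR_MODULE = {
    "cameras": [
        {"id": "front_left",  "shutter": "global", "facing": "forward",  "rate_hz": 20},
        {"id": "front_right", "shutter": "global", "facing": "forward",  "rate_hz": 20},
        {"id": "down",        "shutter": "global", "facing": "downward", "rate_hz": 20},
    ],
    "imu": {"rate_hz": 100, "outputs": ["angular_velocity", "linear_acceleration"]},
    "optional": ["compass", "barometer", "GPS", "ultrasonic_forward", "ultrasonic_downward"],
}
```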
[0265] The data acquisition subsystem may be operably coupled to the sensor module to transmit sensor data from each sensor to the computing platform and/or flight control module. The data acquisition system can utilize any suitable type of communication interface, such as a USB interface, a serial port, or a combination thereof. In some embodiments, the image sensor and the ultrasound sensor are coupled via a USB interface, while the IMU and other sensor types are coupled via a serial port.
[0266] Figure 16 illustrates a system 1600 for controlling an unmanned aerial vehicle using multiple sensors according to an embodiment. The system 1600 can be used to implement any embodiment of the methods described herein. Some or all of the components of the system 1600 may be carried by an unmanned aerial vehicle. The system 1600 can be considered as being divided into two different functional units: the sensor fusion unit 1601a (the components above the dotted line) and the flight control unit 1601b (the components below the dotted line). The sensor fusion unit 1601a and the flight control unit 1601b may be communicatively coupled to each other for exchanging data, control signals, and the like. In some embodiments, the sensor fusion unit 1601a and the flight control unit 1601b are configured to operate independently of each other so that if one unit fails, the other unit can continue to operate. For example, if the sensor fusion unit 1601a encounters a failure, the flight control unit 1601b may be configured to continue working independently, for example to perform an emergency landing operation.
[0267] In some embodiments, the system 1600 includes a sensor fusion module 1602 that is operatively coupled to a flight control module 1604. The sensor fusion module 1602 may also be coupled to multiple sensors, such as one or more inertial sensors 1606 and one or more image sensors 1608. The sensor fusion module 1602 may include one or more processors that use the inertial data and image data from the one or more inertial sensors 1606 and the one or more image sensors 1608 in order to perform the initialization, error recovery, state estimation, and external parameter calibration methods described herein. Optionally, as described above and herein, the sensor fusion module 1602 may be coupled to or include an image processing module for processing image data. The results generated by the sensor fusion module 1602 may be transmitted to the flight control module 1604 to facilitate various flight operations. Flight operations may be performed based on the results from the sensor fusion module, user commands received from a remote terminal 1610, sensor data received from other sensors 1612 (e.g., GPS sensors, magnetometers, ultrasonic sensors), or combinations thereof. For example, the flight control module 1604 may determine control signals to be transmitted to one or more power units 1614 (e.g., rotors) in order to control the position, orientation, speed, and/or acceleration of the unmanned aerial vehicle, for example for operations such as navigation and obstacle avoidance.
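The independence between the two units can be pictured as a watchdog on the flight-control side: if fused results stop arriving from the sensor fusion module, the flight controller falls back to an emergency behaviour such as landing. The sketch below is a hypothetical illustration of that idea; the 0.5 s timeout and all names are assumptions, not values from the disclosure.

```python
import time


class FlightControlUnitWatchdog:
    """Sketch of failure isolation between the sensor fusion unit and the
    flight control unit: if fused results stop arriving, the flight controller
    falls back to an emergency behaviour. The 0.5 s timeout and all names are
    illustrative assumptions."""

    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_fusion_time = None

    def on_fused_state(self, state, timestamp):
        # Normal path: record when the latest fused state arrived.
        self.last_fusion_time = timestamp

    def check(self, now):
        # Decide whether to keep flying nominally or fall back.
        if self.last_fusion_time is None or now - self.last_fusion_time > self.timeout_s:
            return "EMERGENCY_LANDING"   # flight control unit continues on its own
        return "NOMINAL"


if __name__ == "__main__":
    watchdog = FlightControlUnitWatchdog()
    watchdog.on_fused_state({"position": [0.0, 0.0, 2.0]}, timestamp=time.time())
    print(watchdog.check(now=time.time() + 1.0))   # -> EMERGENCY_LANDING
```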
[0268] The components of the system 1600 can be implemented using various types and combinations of hardware elements. For example, the sensor fusion module 1602 may be implemented using any suitable hardware platform that includes one or more processors and a memory storing instructions executable by the one or more processors. The connections between different components of the system 1600 can be implemented using various types of communication interfaces, for example analog or digital interfaces such as a USB interface, a serial port, a pulse width modulation channel, or a combination thereof. Such interfaces may use wired or wireless communication methods. For example, the one or more image sensors 1608 may be coupled to the sensor fusion module 1602 via a USB interface, and the one or more inertial sensors 1606 may be coupled to the sensor fusion module 1602 via a serial port. The sensor fusion module 1602 may be coupled to the flight control module 1604 via a serial port. The flight control module 1604 may be coupled to the remote terminal 1610 via a wireless communication interface, such as a radio control (RC) interface. The flight control module 1604 may transmit commands to the one or more power units 1614 via pulse width modulation.
[0269] In some embodiments, as described herein, the sensor fusion module 1602 is used to combine inertial data from the one or more inertial sensors 1606 and image data from the one or more image sensors 1608. Optionally, the sensor fusion module 1602 may combine internal data from the flight control module 1604 with sensor data from the sensors 1606, 1608 to estimate state information (for example, movement information such as speed or acceleration) of the UAV. The sensor fusion module 1602 can use any suitable computing platform, such as the Jetson TK1 (TK1) platform. The one or more image sensors 1608 may be communicatively coupled to the TK1 platform via a USB interface. The TK1 platform can run any suitable operating system, such as Ubuntu 14.04, with appropriate drivers. In some embodiments, the Robot Operating System (ROS) can be used for data transmission. Table 2 lists exemplary hardware and software components of the sensor fusion module 1602:
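Since ROS may be used for data transmission, the inputs to the sensor fusion module might be wired up as in the rospy sketch below, which subscribes to one IMU topic and one image topic and simply buffers the messages. The topic names /imu and /camera/image are illustrative assumptions, and running the node requires a ROS installation; the sketch is not part of the disclosure.

```python
import rospy
from sensor_msgs.msg import Imu, Image


class FusionInputNode:
    """Buffers inertial and image messages for a downstream fusion step."""

    def __init__(self):
        self.inertial_buffer = []
        self.image_buffer = []
        rospy.Subscriber("/imu", Imu, self.imu_cb)               # ~100 Hz inertial data
        rospy.Subscriber("/camera/image", Image, self.image_cb)  # ~20 Hz image data

    def imu_cb(self, msg):
        self.inertial_buffer.append(msg)

    def image_cb(self, msg):
        self.image_buffer.append(msg)


if __name__ == "__main__":
    rospy.init_node("sensor_fusion_input")
    node = FusionInputNode()
    rospy.spin()   # process callbacks until shutdown
```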
[0270] Table 2: Exemplary hardware components and software components
[0271]
[0272] The systems, devices, and methods herein can utilize any suitable number of image sensors, such as one, two, three, four, five, six, seven, eight, nine, 10, 15, 20, 30, 40, 50, or more image sensors. The multiple image sensors may be simultaneously coupled to one or more processors of the image processing module, for example via a corresponding number of communication interfaces, for receiving and analyzing image data (for example, to perform feature point detection and matching). However, in some embodiments, certain hardware platforms may not be able to support simultaneous coupling to a large number of image sensors (for example, more than six image sensors, or more than 10 image sensors). For example, some image processing modules may not include enough communication interfaces for all of the image sensors to be coupled at the same time. In such implementations, the multiple image sensors may not all be coupled to the module simultaneously. For example, a switching mechanism can be used to selectively couple different subsets of the image sensors to the module at different times.
[0273] Figure 22 illustrates a system 2200 with image sensors having switchable coupling, according to embodiments. The various components of the system 2200 can be combined with and/or replace components of any embodiment of the other systems and devices described herein. The system 2200 can be used in combination with any of the methods described herein (e.g., the asynchronous data collection scheme 2100). The system 2200 may include multiple image sensor subsets 2202 (e.g., n different subsets, as depicted herein). Any number of image sensor subsets can be used, such as two, three, four, five, six, seven, eight, nine, 10, 15, 20, 30, 40, 50, or more subsets. Each image sensor subset can include any number of individual image sensors, such as one, two, three, four, five, six, seven, eight, nine, 10, 15, 20, 30, 40, 50, or more image sensors. The image sensor subsets may each include the same number of image sensors. Alternatively, some or all of the image sensor subsets may include different numbers of image sensors. Each image sensor subset may be coupled to a switching mechanism 2204, which in turn is coupled to an image processing module 2206. The switching mechanism 2204 may include any suitable combination of hardware and software components (e.g., switches, relays, etc.) for selectively coupling a subset of the sensors to the module 2206. The image processing module 2206 may be used to process the image data in order to perform feature point detection and/or matching as discussed herein. In some embodiments, the image processing module 2206 is coupled to and/or is a component of the sensor fusion module, which is used to perform initialization, error recovery, parameter calibration, and/or state estimation.
[0274] The switching mechanism 2204 may be configured to couple only a single subset of the image sensors to the image processing module 2206 at a time, so that the image processing module 2206 receives and processes image data from a single subset at a time. In order to obtain data from all of the image sensors of the system 2200, the switching mechanism 2204 can be controlled to alternate which subsets are coupled to the image processing module 2206. For example, image data can be received by coupling a first subset of image sensors to the module 2206 and receiving image data from the first subset, then coupling a second subset of image sensors to the module 2206 and receiving image data from the second subset, and so on, until image data from all subsets has been received. The order and frequency with which the switching mechanism 2204 switches between the different subsets can be varied as needed. This approach allows image data to be received from a relatively large number of image sensors without requiring the image processing module 2206 to maintain simultaneous connections with every sensor. This can be advantageous for improving the flexibility of the system to accommodate any number of image sensors (e.g., for plug-and-play) while reducing the computational load associated with processing image data from a large number of sensors at once.
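The alternating-subset behaviour can be sketched as a simple round-robin loop over sensor subsets, as below. The subset contents, the read function, and the round-based structure are illustrative assumptions rather than details of the switching mechanism 2204 itself.

```python
from itertools import cycle


class SwitchedImageAcquisition:
    """Round-robin sketch of the alternating-subset idea: only one subset of
    image sensors is 'coupled' (read from) at a time, and the subsets are
    cycled until every sensor has contributed data."""

    def __init__(self, subsets):
        self.subsets = subsets
        self._next_subset = cycle(range(len(subsets)))

    def acquire_round(self, read_fn):
        frames = {}
        for _ in range(len(self.subsets)):
            idx = next(self._next_subset)     # "switch" to the next subset
            for sensor in self.subsets[idx]:  # only this subset is coupled now
                frames[sensor] = read_fn(sensor)
        return frames


if __name__ == "__main__":
    subsets = [["cam0", "cam1"], ["cam2", "cam3"], ["cam4", "cam5"]]
    acquisition = SwitchedImageAcquisition(subsets)
    print(acquisition.acquire_round(read_fn=lambda s: f"frame_from_{s}"))
```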
[0275] The systems, devices, and methods described herein can be applied to a wide variety of movable objects. As mentioned above, any description of an aircraft herein can be applied to and used for any movable object. The movable object of the present invention may be configured to move within any suitable environment, such as in the air (for example, a fixed-wing aircraft, a rotary-wing aircraft, or an aircraft having neither fixed wings nor rotors), in water (for example, a ship or a submarine), on the ground (for example, a motor vehicle such as a car, truck, bus, van, or motorcycle; a movable structure or frame such as a rod or fishing rod; or a train), underground (for example, a subway), in space (for example, a space shuttle, satellite, or probe), or any combination of these environments. The movable object may be a vehicle, such as the vehicles described elsewhere herein. In some embodiments, the movable object may be mounted on a living body such as a human or an animal. Suitable animals may include avians, dogs, cats, horses, cattle, sheep, pigs, dolphins, rodents, or insects.
[0276] The movable object may be able to move freely within the environment with respect to six degrees of freedom (for example, three translational degrees of freedom and three rotational degrees of freedom). Alternatively, the movement of the movable object may be constrained with respect to one or more degrees of freedom, such as by a predetermined path, trajectory, or orientation. The movement can be actuated by any suitable actuation mechanism, such as an engine or a motor. The actuation mechanism of the movable object can be powered by any suitable energy source, such as electrical energy, magnetic energy, solar energy, wind energy, gravitational energy, chemical energy, nuclear energy, or any suitable combination thereof. As described elsewhere herein, the movable object can be self-propelled via a power system. The power system may optionally run on an energy source, such as electrical energy, magnetic energy, solar energy, wind energy, gravitational energy, chemical energy, nuclear energy, or any suitable combination thereof. Alternatively, the movable object may be carried by a living being.
[0277] In some cases, the movable object may be a vehicle. Suitable vehicles may include water vehicles, aircraft, space vehicles, or ground vehicles. For example, the aircraft may be a fixed-wing aircraft (e.g., airplane, glider), a rotary-wing aircraft (e.g., helicopter, rotorcraft), an aircraft having both fixed wings and rotors, or an aircraft having neither fixed wings nor rotors (e.g., airship, hot air balloon). The vehicle may be self-propelled, such as through the air, on or in water, in space, or on or under the ground. A self-propelled vehicle may utilize a power system, such as a power system including one or more engines, motors, wheels, axles, magnets, rotors, propellers, blades, nozzles, or any suitable combination thereof. In some cases, the power system can be used to enable the movable object to take off from a surface, land on a surface, maintain its current position and/or orientation (e.g., hover), change orientation, and/or change position.
[0278] The movable object can be controlled remotely by a user or locally by an occupant within or on the movable object. In some embodiments, the movable object is an unmanned movable object, such as an unmanned aerial vehicle. An unmanned movable object, such as an unmanned aerial vehicle, may have no occupant on board. The movable object can be controlled by a human or an autonomous control system (e.g., a computer control system), or any suitable combination thereof. The movable object may be an autonomous or semi-autonomous robot, such as a robot equipped with artificial intelligence.
[0279] The movable object can have any suitable size and/or dimensions. In some embodiments, the movable object may be of a size and/or dimensions sufficient to accommodate a human occupant within or on the vehicle. Alternatively, the movable object may be of a size and/or dimensions smaller than that capable of accommodating a human occupant within or on the vehicle. The movable object may be of a size and/or dimensions suitable for being lifted or carried by a human. Alternatively, the movable object may be larger than a size and/or dimensions suitable for being lifted or carried by a human. In some cases, the maximum dimension (e.g., length, width, height, diameter, diagonal) of the movable object may be less than or equal to about: 2 cm, 5 cm, 10 cm, 50 cm, 1 m, 2 m, 5 m, or 10 m. The maximum dimension may be greater than or equal to about: 2 cm, 5 cm, 10 cm, 50 cm, 1 m, 2 m, 5 m, or 10 m. For example, the distance between the shafts of opposing rotors of the movable object may be less than or equal to about: 2 cm, 5 cm, 10 cm, 50 cm, 1 m, 2 m, 5 m, or 10 m. Alternatively, the distance between the shafts of opposing rotors may be greater than or equal to about: 2 cm, 5 cm, 10 cm, 50 cm, 1 m, 2 m, 5 m, or 10 m.
[0280] In some embodiments, the movable object may have a volume of less than 100 cm x 100 cm x 100 cm, less than 50 cm x 50 cm x 30 cm, or less than 5 cm x 5 cm x 3 cm. The total volume of the movable object may be less than or equal to about: 1 cm³, 2 cm³, 5 cm³, 10 cm³, 20 cm³, 30 cm³, 40 cm³, 50 cm³, 60 cm³, 70 cm³, 80 cm³, 90 cm³, 100 cm³, 150 cm³, 200 cm³, 300 cm³, 500 cm³, 750 cm³, 1000 cm³, 5000 cm³, 10,000 cm³, 100,000 cm³, 1 m³, or 10 m³. Conversely, the total volume of the movable object may be greater than or equal to about: 1 cm³, 2 cm³, 5 cm³, 10 cm³, 20 cm³, 30 cm³, 40 cm³, 50 cm³, 60 cm³, 70 cm³, 80 cm³, 90 cm³, 100 cm³, 150 cm³, 200 cm³, 300 cm³, 500 cm³, 750 cm³, 1000 cm³, 5000 cm³, 10,000 cm³, 100,000 cm³, 1 m³, or 10 m³.
[0281] In some embodiments, the movable object may have a footprint (which may refer to the cross-sectional area enclosed by the movable object) of less than or equal to about: 32,000 cm², 20,000 cm², 10,000 cm², 1,000 cm², 500 cm², 100 cm², 50 cm², 10 cm², or 5 cm². Conversely, the footprint may be greater than or equal to about: 32,000 cm², 20,000 cm², 10,000 cm², 1,000 cm², 500 cm², 100 cm², 50 cm², 10 cm², or 5 cm².
[0282] In some cases, the movable object may weigh no more than 1000 kg. The weight of the movable object may be less than or equal to about: 1000 kg, 750 kg, 500 kg, 200 kg, 150 kg, 100 kg, 80 kg, 70 kg, 60 kg, 50 kg, 45 kg, 40 kg, 35 kg, 30 kg, 25 kg, 20 kg, 15 kg, 12 kg, 10 kg, 9 kg, 8 kg, 7 kg, 6 kg, 5 kg, 4 kg, 3 kg, 2 kg, 1 kg, 0.5 kg, 0.1 kg, 0.05 kg, or 0.01 kg. Conversely, the weight may be greater than or equal to about: 1000 kg, 750 kg, 500 kg, 200 kg, 150 kg, 100 kg, 80 kg, 70 kg, 60 kg, 50 kg, 45 kg, 40 kg, 35 kg, 30 kg, 25 kg, 20 kg, 15 kg, 12 kg, 10 kg, 9 kg, 8 kg, 7 kg, 6 kg, 5 kg, 4 kg, 3 kg, 2 kg, 1 kg, 0.5 kg, 0.1 kg, 0.05 kg, or 0.01 kg.
[0283] In some embodiments, the movable object may be small relative to the load carried by the movable object. As described in further detail below, the load may include a payload and/or a carrier. In some examples, the ratio of the weight of the movable object to the weight of the load may be greater than, less than, or equal to about 1:1. Optionally, the ratio of the carrier weight to the load weight may be greater than, less than, or equal to about 1:1. When desired, the ratio of the weight of the movable object to the weight of the load may be less than or equal to: 1:2, 1:3, 1:4, 1:5, 1:10, or even less. Conversely, the ratio of the weight of the movable object to the weight of the load may also be greater than or equal to: 2:1, 3:1, 4:1, 5:1, 10:1, or even greater.
[0284] In some embodiments, the movable object may have low energy consumption. For example, the movable object may use less than about: 5 W/h, 4 W/h, 3 W/h, 2 W/h, 1 W/h, or less. In some cases, the carrier of the movable object may have low energy consumption. For example, the carrier may use less than about: 5 W/h, 4 W/h, 3 W/h, 2 W/h, 1 W/h, or less. Optionally, the payload of the movable object may have low energy consumption, such as less than about: 5 W/h, 4 W/h, 3 W/h, 2 W/h, 1 W/h, or less.
[0285] Figure 17 illustrates an unmanned aerial vehicle (UAV) 1700, according to embodiments of the present invention. The unmanned aerial vehicle may be an example of a movable object as described herein. The unmanned aerial vehicle 1700 may include a power system having four rotors 1702, 1704, 1706, and 1708. Any number of rotors (e.g., one, two, three, four, five, six, or more) may be provided. The rotors, rotor assemblies, or other power systems of the unmanned aerial vehicle may enable it to hover/maintain position, change orientation, and/or change position. The distance between the shafts of opposing rotors can be any suitable length 1710. For example, the length 1710 may be less than or equal to 2 m, or less than or equal to 5 m. In some embodiments, the length 1710 may range from 40 cm to 1 m, from 10 cm to 2 m, or from 5 cm to 5 m. Any description of an unmanned aerial vehicle herein can be applied to a movable object, such as a movable object of a different type, and vice versa.
[0286] In some embodiments, the movable object can be used to carry a load. The load may include one or more of passengers, cargo, equipment, instruments, and the like. The load may be provided within a housing. The housing may be separate from a housing of the movable object or may be part of the housing of the movable object. Alternatively, the load may be provided with a housing while the movable object does not have a housing. Alternatively, part of the load or the entire load may be provided without a housing. The load can be rigidly fixed relative to the movable object. Optionally, the load may be movable relative to the movable object (for example, it may translate or rotate relative to the movable object).
[0287] In some embodiments, the load includes a payload. The payload may be configured not to perform any operation or function. Alternatively, the payload may be a payload configured to perform an operation or function, also referred to as a functional payload. For example, the payload may include one or more sensors for surveying one or more targets. Any suitable sensor can be incorporated into the payload, such as an image capture device (for example, a camera), an audio capture device (for example, a parabolic microphone), an infrared imaging device, or an ultraviolet imaging device. The sensor may provide static sensing data (e.g., photographs) or dynamic sensing data (e.g., video). In some embodiments, the sensor provides sensing data for a target of the payload. Alternatively or in combination, the payload may include one or more emitters for providing signals to one or more targets. Any suitable emitter can be used, such as an illumination source or a sound source. In some embodiments, the payload includes one or more transceivers, such as transceivers for communicating with a module remote from the movable object. Optionally, the payload can be configured to interact with the environment or a target. For example, the payload may include a tool, instrument, or mechanism capable of manipulating objects, such as a robotic arm.
[0288] Optionally, the load may include a carrier. A carrier may be provided for the payload, and the payload may be coupled to the movable object via the carrier either directly (for example, in direct contact with the movable object) or indirectly (for example, not contacting the movable object). Conversely, the payload can be mounted on the movable object without a carrier. The payload can be integrally formed with the carrier. Alternatively, the payload may be detachably coupled to the carrier. In some embodiments, the payload may include one or more payload elements, and one or more of the payload elements may be movable relative to the movable object and/or the carrier, as described above.
[0289] The carrier may be integrally formed with the movable object. Alternatively, the carrier may be detachably coupled to the movable object. The carrier may be coupled to the movable object directly or indirectly. The carrier may provide support to the payload (e.g., carry at least a portion of the weight of the payload). The carrier may include a suitable mounting structure (for example, a pan-tilt platform) capable of stabilizing and/or guiding the movement of the payload. In some embodiments, the carrier may be adapted to control the state (e.g., position and/or orientation) of the payload relative to the movable object. For example, the carrier may be configured to move relative to the movable object (for example, with respect to one, two, or three degrees of translational freedom and/or one, two, or three degrees of rotational freedom) so that the payload maintains its position and/or orientation relative to a suitable reference frame regardless of the movement of the movable object. The reference frame may be a fixed reference frame (for example, the surrounding environment). Alternatively, the reference frame may be a moving reference frame (for example, the movable object or a payload target).
[0290] In some embodiments, the carrier may be configured to allow movement of the payload relative to the carrier and/or the movable object. The movement can be a translation with up to three degrees of freedom (for example, along one, two, or three axes), a rotation with up to three degrees of freedom (for example, about one, two, or three axes), or any suitable combination thereof.
[0291] In some cases, the carrier may include a carrier frame assembly and a carrier actuation assembly. The carrier frame assembly can provide structural support to the payload. The carrier frame assembly may include individual carrier frame components, some of which are movable relative to one another. The carrier actuation assembly may include one or more actuators (e.g., motors) that actuate movement of the individual carrier frame components. The actuators may allow multiple carrier frame components to move simultaneously, or may be configured to allow a single carrier frame component to move at a time. The movement of the carrier frame components can produce a corresponding movement of the payload. For example, the carrier actuation assembly can actuate one or more carrier frame components to rotate about one or more rotation axes (e.g., a roll axis, pitch axis, or yaw axis). The rotation of the one or more carrier frame components can cause the payload to rotate about one or more rotation axes relative to the movable object. Alternatively or in combination, the carrier actuation assembly may actuate one or more carrier frame components to translate along one or more translation axes, thereby causing the payload to translate along one or more corresponding axes relative to the movable object.
[0292] In some embodiments, the movement of the movable object, the carrier, and the payload relative to a fixed reference frame (for example, the surrounding environment) and/or relative to each other can be controlled by a terminal. The terminal may be a remote control device located away from the movable object, carrier, and/or payload. The terminal can be placed on or affixed to a support platform. Alternatively, the terminal may be a handheld or wearable device. For example, the terminal may include a smartphone, a tablet computer, a laptop computer, a computer, glasses, gloves, a helmet, a microphone, or a suitable combination thereof. The terminal may include a user interface such as a keyboard, mouse, joystick, touch screen, or display. Any suitable user input can be used to interact with the terminal, such as manually entered commands, voice control, gesture control, or position control (for example, via movement, position, or tilt of the terminal).
[0293] The terminal can be used to control any suitable state of the movable object, the carrier, and/or the payload. For example, the terminal may be used to control the position and/or orientation of the movable object, the carrier, and/or the payload relative to a fixed reference and/or relative to each other. In some embodiments, the terminal may be used to control individual elements of the movable object, the carrier, and/or the payload, such as an actuation assembly of the carrier, a sensor of the payload, or an emitter of the payload. The terminal may include a wireless communication device adapted to communicate with one or more of the movable object, the carrier, or the payload.
[0294] The terminal may include a suitable display unit for viewing information about the movable object, the carrier, and/or the payload. For example, the terminal may be configured to display information about the position, translational speed, translational acceleration, orientation, angular speed, angular acceleration, or any suitable combination thereof, of the movable object, the carrier, and/or the payload. In some embodiments, the terminal may display information provided by the payload, such as data provided by a functional payload (for example, an image recorded by a camera or other image capture device).
[0295] Optionally, the same terminal can simultaneously control the movable object, carrier, and/or payload or a state thereof, and receive and/or display information from the movable object, carrier, and/or payload. For example, the terminal can control the positioning of the payload relative to the environment while displaying image data captured by the payload or information about the location of the payload. Alternatively, different terminals can be used for different functions. For example, a first terminal can control the movement or state of the movable object, carrier, and/or payload, while a second terminal receives and/or displays information from the movable object, carrier, and/or payload. For example, the first terminal may be used to control the positioning of the payload relative to the environment, while the second terminal displays the image data captured by the payload. Various communication modes can be utilized between a movable object and an integrated terminal that both controls the movable object and receives data, or between a movable object and multiple terminals that both control the movable object and receive data. For example, at least two different communication modes may be formed between the movable object and a terminal that both controls the movable object and receives data from it.
[0296] Figure 18 illustrates a movable object 1800 including a carrier 1802 and a payload 1804, according to embodiments. Although the movable object 1800 is depicted as an aircraft, this depiction is not intended to be limiting, and any suitable type of movable object may be used, as described above. Those skilled in the art will understand that any of the embodiments described herein in the context of an aircraft system can be applied to any suitable movable object (for example, an unmanned aerial vehicle). In some cases, the payload 1804 may be provided on the movable object 1800 without the carrier 1802. The movable object 1800 may include a propulsion mechanism 1806, a sensing system 1808, and a communication system 1810.
[0297] As described above, the propulsion mechanism 1806 may include one or more of rotors, propellers, blades, engines, motors, wheels, axles, magnets, or nozzles. For example, as disclosed elsewhere herein, the propulsion mechanism 1806 may be a self-tightening rotor, a rotor assembly, or another rotary power unit. The movable object may have one or more, two or more, three or more, or four or more propulsion mechanisms. The propulsion mechanisms may all be of the same type. Alternatively, the one or more propulsion mechanisms may be of different types. The propulsion mechanism 1806 may be mounted on the movable object 1800 using any suitable means, such as the support elements (e.g., drive shafts) described elsewhere herein. The propulsion mechanism 1806 can be mounted on any suitable part of the movable object 1800, such as the top, bottom, front, back, sides, or a suitable combination thereof.
[0298] In some embodiments, the propulsion mechanism 1806 may enable the movable object 1800 to take off vertically from a surface or land vertically on a surface without any horizontal movement of the movable object 1800 (e.g., without traveling along a runway). Optionally, the propulsion mechanism 1806 may be operable to allow the movable object 1800 to hover in the air at a designated position and/or orientation. One or more of the propulsion mechanisms 1806 may be controlled independently of the other propulsion mechanisms. Alternatively, the propulsion mechanisms 1806 may be configured to be controlled simultaneously. For example, the movable object 1800 may have a plurality of horizontally oriented rotors that provide lift and/or thrust to the movable object. The plurality of horizontally oriented rotors may be actuated to provide the movable object 1800 with vertical take-off, vertical landing, and hovering capabilities. In some embodiments, one or more of the horizontally oriented rotors can rotate in a clockwise direction while one or more of the horizontal rotors rotate in a counterclockwise direction. For example, the number of clockwise rotors may be equal to the number of counterclockwise rotors. The rotation rate of each horizontally oriented rotor can be varied independently in order to control the lift and/or thrust generated by each rotor and thereby adjust the spatial layout, speed, and/or acceleration of the movable object 1800 (for example, with respect to up to three translational degrees of freedom and up to three rotational degrees of freedom).
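To make the last point concrete, the generic mixer below shows how independently varying the commanded speed of four horizontally oriented rotors (two clockwise, two counterclockwise) adjusts collective lift and the roll, pitch, and yaw torques. It is a standard X-configuration example given for illustration only, with arbitrary sign conventions; it is not the control law of the flight control module described herein.

```python
def quadrotor_mixer(thrust, roll, pitch, yaw):
    """Generic X-configuration motor mixer, shown only to illustrate how
    independently varying each rotor's speed adjusts collective lift and the
    roll/pitch/yaw torques. Sign conventions are arbitrary; this is not the
    disclosed control law.

    Inputs are normalized commands; outputs are per-rotor commands in [0, 1].
    Diagonally opposite rotors spin in the same direction so that reaction
    torques cancel when all rotors run at equal speed."""
    m_front_left = thrust + roll + pitch - yaw   # counterclockwise rotor
    m_front_right = thrust - roll + pitch + yaw  # clockwise rotor
    m_rear_right = thrust - roll - pitch - yaw   # counterclockwise rotor
    m_rear_left = thrust + roll - pitch + yaw    # clockwise rotor
    return [max(0.0, min(1.0, m))
            for m in (m_front_left, m_front_right, m_rear_right, m_rear_left)]


if __name__ == "__main__":
    # Slight roll command superimposed on hover thrust.
    print(quadrotor_mixer(thrust=0.5, roll=0.1, pitch=0.0, yaw=-0.05))
```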
[0299] The sensing system 1808 may include one or more sensors that can sense the spatial layout, speed, and/or acceleration of the movable object 1800 (for example, with respect to up to three translational degrees of freedom and up to three rotational degrees of freedom). The one or more sensors may include a global positioning system (GPS) sensor, a motion sensor, an inertial sensor, a distance sensor, or an image sensor. The sensing data provided by the sensing system 1808 can be used to control the spatial layout, speed, and/or orientation of the movable object 1800 (for example, using a suitable processing unit and/or control module, as described below). Alternatively, the sensing system 1808 can be used to provide data about the environment surrounding the movable object, such as weather conditions, distances to potential obstacles, locations of geographical features, locations of man-made structures, and so on.
[0300] The communication system 1810 supports communication, via wireless signals 1816, with a terminal 1812 having a communication system 1814. The communication systems 1810, 1814 may include any number of transmitters, receivers, and/or transceivers suitable for wireless communication. The communication may be one-way communication, so that data can be transmitted in only one direction. For example, one-way communication may involve only the movable object 1800 transmitting data to the terminal 1812, or vice versa. Data may be transmitted from one or more transmitters of the communication system 1810 to one or more receivers of the communication system 1814, or vice versa. Alternatively, the communication may be two-way communication, so that data can be transmitted in both directions between the movable object 1800 and the terminal 1812. Two-way communication may involve the transmission of data from one or more transmitters of the communication system 1810 to one or more receivers of the communication system 1814, and vice versa.
[0301] In some embodiments, the terminal 1812 can provide control data to one or more of the movable object 1800, the carrier 1802, and the payload 1804, and receive information from one or more of the movable object 1800, the carrier 1802, and the payload 1804 (for example, position and/or movement information of the movable object, carrier, or payload; or data sensed by the payload, such as image data captured by a payload camera). In some cases, the control data from the terminal may include instructions regarding the relative position, movement, actuation, or control of the movable object, carrier, and/or payload. For example, the control data may result in a modification of the position and/or orientation of the movable object (e.g., via control of the propulsion mechanism 1806), or in movement of the payload relative to the movable object (e.g., via control of the carrier 1802). The control data from the terminal can lead to control of the payload, such as control of the operation of a camera or other image capture device (for example, taking still or moving pictures, zooming in or out, turning on or off, switching imaging modes, changing the image resolution, changing the focus, changing the depth of field, changing the exposure time, or changing the viewing angle or field of view). In some cases, the communications from the movable object, carrier, and/or payload may include information from one or more sensors (e.g., sensors of the sensing system 1808 or of the payload 1804). The communications may include sensed information from one or more different types of sensors (for example, GPS sensors, motion sensors, inertial sensors, distance sensors, or image sensors). Such information may pertain to the position (e.g., location, orientation), movement, or acceleration of the movable object, carrier, and/or payload. Such information from the payload may include data captured by the payload or a sensed state of the payload. The control data provided and transmitted by the terminal 1812 may be configured to control the state of one or more of the movable object 1800, the carrier 1802, or the payload 1804. Alternatively or in combination, the carrier 1802 and the payload 1804 may also each include a communication module configured to communicate with the terminal 1812, so that the terminal can communicate with and control each of the movable object 1800, the carrier 1802, and the payload 1804 independently.
[0302] In some embodiments, the movable object 1800 may be configured to communicate with another remote device in addition to, or instead of, the terminal 1812. The terminal 1812 may also be configured to communicate with another remote device as well as with the movable object 1800. For example, the movable object 1800 and/or the terminal 1812 may communicate with another movable object, or with the carrier or payload of another movable object. When desired, the remote device may be a second terminal or other computing device (for example, a computer, laptop computer, tablet computer, smartphone, or other mobile device). The remote device may be configured to transmit data to the movable object 1800, receive data from the movable object 1800, transmit data to the terminal 1812, and/or receive data from the terminal 1812. Optionally, the remote device may be connected to the Internet or another telecommunications network so that the data received from the movable object 1800 and/or the terminal 1812 can be uploaded to a website or server.
[0303] Figure 19 is a schematic block diagram of a system 1900 for controlling a movable object, according to embodiments. The system 1900 can be used in conjunction with any suitable embodiment of the systems, devices, and methods disclosed herein. The system 1900 may include a sensing module 1902, a processing unit 1904, a non-transitory computer readable medium 1906, a control module 1908, and a communication module 1910.
[0304] The sensing module 1902 may utilize different types of sensors that collect information relating to the movable object in different ways. Different types of sensors may sense different types of signals or signals from different sources. For example, the sensors may include inertial sensors, GPS sensors, distance sensors (e.g., lidar), or vision/image sensors (e.g., cameras). The sensing module 1902 may be operably coupled to a processing unit 1904 having multiple processors. In some embodiments, the sensing module may be operably coupled to a transmission module 1912 (e.g., a Wi-Fi image transmission module) configured to transmit the sensing data directly to a suitable external device or system. For example, the transmission module 1912 may be used to transmit images captured by a camera of the sensing module 1902 to a remote terminal.
[0305] The processing unit 1904 may have one or more processors, such as a programmable processor (for example, a central processing unit (CPU)). The processing unit 1904 may be operably coupled to a non-transitory computer readable medium 1906. The non-transitory computer readable medium 1906 may store logic, code, and/or program instructions executable by the processing unit 1904 to perform one or more steps. The non-transitory computer readable medium may include one or more memory units (for example, removable media or external storage such as an SD card or random access memory (RAM)). In some embodiments, data from the sensing module 1902 can be transferred directly to and stored within the memory units of the non-transitory computer readable medium 1906. The memory units of the non-transitory computer readable medium 1906 may store logic, code, and/or program instructions executable by the processing unit 1904 to perform any suitable embodiment of the methods described herein. For example, the processing unit 1904 may be configured to execute instructions causing one or more processors of the processing unit 1904 to analyze sensing data generated by the sensing module. The memory units may store sensing data from the sensing module to be processed by the processing unit 1904. In some embodiments, the memory units of the non-transitory computer readable medium 1906 may be used to store processing results generated by the processing unit 1904.
[0306] In some embodiments, the processing unit 1904 may be operatively coupled to a control module 1908 configured to control a state of the movable object. For example, the control module 1908 may be configured to control the propulsion mechanism of the movable object to adjust the spatial layout, speed, and/or acceleration of the movable object with respect to six degrees of freedom. Alternatively or in combination, the control module 1908 may control one or more of the states of the carrier, the payload, or the sensing module.
[0307] The processing unit 1904 may be operatively coupled to a communication module 1910, which is configured to transmit and/or receive data from one or more external devices (for example, a terminal, a display device, or other remote control). Any suitable communication means can be used, such as wired communication or wireless communication. For example, the communication module 1910 may utilize one or more of a local area network (LAN), a wide area network (WAN), infrared, radio, WiFi, a peer-to-peer (P2P) network, a telecommunication network, cloud communication, and the like. Alternatively, a relay station such as a tower, satellite or mobile station can be used. Wireless communication can be distance dependent or independent of distance. In some embodiments, line of sight may or may not be required for communication. The communication module 1910 may transmit and/or receive one or more of sensing data from the sensing module 1902, processing results generated by the processing unit 1904, predetermined control data, user commands from a terminal or remote control, and the like.
[0308] The components of the system 1900 may be arranged in any suitable configuration. For example, one or more components of the system 1900 may be located on the movable object, the carrier, the payload, the terminal, the sensing system, or an additional external device in communication with one or more of the above. In addition, although Figure 19 depicts a single processing unit 1904 and a single non-transitory computer readable medium 1906, those skilled in the art will understand that this is not intended to be limiting, and the system 1900 may include multiple processing units and/or non-transitory computer readable media. In some embodiments, one or more of the multiple processing units and/or non-transitory computer readable media may be located at different locations, such as on the movable object, the carrier, the payload, the terminal, the sensing module, or an additional external device in communication with one or more of the above, or a suitable combination thereof, so that any suitable aspect of the processing and/or memory functions performed by the system 1900 can occur at one or more of these locations.
[0309] As used herein, "A and/or B" encompasses one or more of A or B, and combinations thereof, such as A and B.
[0310] Although preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous modifications, changes, and substitutions will now occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. Many different combinations of the embodiments described herein are possible, and such combinations are considered part of the present disclosure. In addition, all features discussed in connection with any one embodiment herein can be readily adapted for use in the other embodiments herein. The appended claims are intended to define the scope of the invention, and methods and structures within the scope of these claims and their equivalents are thereby covered.