391 results about "Simultaneous localization and mapping" patented technology

In navigation, robotic mapping, and odometry for virtual reality or augmented reality, simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken-and-egg problem, there are several algorithms known for solving it, at least approximately, in tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM.
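Of the methods listed above, covariance intersection is the simplest to illustrate in a few lines: it fuses two estimates of the same quantity without knowing their cross-correlation. The sketch below is a generic textbook version, not tied to any of the patents that follow; all values are illustrative.

```python
# Minimal covariance intersection: fuse estimates (a, A) and (b, B) whose
# cross-correlation is unknown, choosing the weight omega that minimizes the
# trace of the fused covariance over a coarse grid.
import numpy as np

def covariance_intersection(a, A, b, B, omegas=np.linspace(0.01, 0.99, 99)):
    best = None
    A_inv, B_inv = np.linalg.inv(A), np.linalg.inv(B)
    for w in omegas:
        C = np.linalg.inv(w * A_inv + (1 - w) * B_inv)
        c = C @ (w * A_inv @ a + (1 - w) * B_inv @ b)
        if best is None or np.trace(C) < np.trace(best[1]):
            best = (c, C)
    return best  # (fused mean, fused covariance)

# Example: two noisy 2D estimates of the same landmark position.
mean, cov = covariance_intersection(np.array([1.0, 2.0]), np.diag([0.5, 0.1]),
                                    np.array([1.2, 1.9]), np.diag([0.1, 0.4]))
```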

Visual ranging-based simultaneous localization and map construction method

The invention provides a visual ranging-based simultaneous localization and map construction method. The method includes the following steps: a binocular image is acquired and corrected to obtain a distortion-free binocular image; feature extraction is performed on the distortion-free binocular image to generate feature point descriptors; feature point matching relations of the binocular image are established; the horizontal parallax of the matched feature points is obtained from the matching relations, and the real spatial depth is calculated based on the parameters of the binocular image capture system; the feature points of the current frame are matched against the feature points in a world map; wrongly matched feature points are removed to obtain successfully matched feature points; a transformation matrix between the coordinates of the successfully matched feature points in the world coordinate system and their three-dimensional coordinates in the current reference coordinate system is calculated, and a pose change estimate of the binocular image capture system relative to its initial position is obtained from the transformation matrix; and the world map is established and updated. The method has low computational complexity, centimeter-level positioning accuracy, and unbiased position estimation.
Owner:北京超星未来科技有限公司
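The disparity-to-depth step of this method can be sketched with standard stereo block matching; the snippet below is a generic OpenCV illustration under assumed intrinsics (the focal length fx and baseline are placeholders, not values from the patent).

```python
# Compute a dense disparity map from a rectified stereo pair and convert it
# to metric depth with Z = f * B / d. Inputs are 8-bit grayscale images.
import cv2
import numpy as np

def stereo_depth(left_gray, right_gray, fx=700.0, baseline_m=0.12):
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = fx * baseline_m / disparity[valid]
    return depth
```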

Visual sense simultaneous localization and mapping method based on dot and line integrated features

The invention discloses a visual simultaneous localization and mapping method based on integrated dot and line features. The method jointly uses the line features and dot features extracted from binocular camera images and can be used for positioning and attitude estimation of a robot in both outdoor and indoor environments. Because dot features and line features are used together, the system is more robust and more accurate. For the parameterization of line features, Plücker coordinates are used for straight-line calculations, including geometric transformations and 3D reconstruction. In the back-end optimization, the orthogonal representation of the straight line is used to minimize the number of line parameters. An offline-built visual dictionary of the integrated dot and line features is used for closed-loop detection; by adding flag bits, dot features and line features are treated separately in the visual dictionary, both when the image database is created and when image similarity is calculated. The invention can be applied to the construction of scene maps both indoors and outdoors. The constructed map integrates feature dots and feature lines and therefore provides richer information.
Owner:ZHEJIANG UNIV
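The Plücker parameterization mentioned in this abstract can be illustrated with a few lines of linear algebra; the snippet below is a generic sketch (direction plus moment vector), not the patent's own implementation.

```python
# Represent a 3D line through points p and q by its Plücker coordinates
# (d, m), where d is the direction and m = p x d is the moment about the
# origin; the same pair supports simple queries such as point-to-line distance.
import numpy as np

def plucker_from_points(p, q):
    d = q - p
    m = np.cross(p, d)
    return d, m

def point_line_distance(x, d, m):
    return np.linalg.norm(np.cross(x, d) - m) / np.linalg.norm(d)

d, m = plucker_from_points(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]))
print(point_line_distance(np.array([0.5, 2.0, 1.0]), d, m))  # -> 2.0
```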

Image appearance based loop closure detecting method in monocular vision SLAM (simultaneous localization and mapping)

The invention discloses an image appearance based loop closure detecting method in monocular vision SLAM (simultaneous localization and mapping). The method includes acquiring images of the current scene with a monocular camera carried by a mobile robot while it advances, and extracting bag-of-visual-words features from the images of the current scene; preprocessing the images by measuring image similarity with the inner products of the image weight vectors and rejecting a current image that is highly similar to a previous history image; updating the posterior probability of the loop closure hypothesis state with a Bayesian filtering process to carry out loop closure detection and judge whether the current image forms a loop closure; and verifying the loop closure detection results obtained in the previous step with an image reverse-retrieval process. Furthermore, when the visual dictionary is established, the number of clustering categories is adjusted dynamically according to TSC (tightness and separation criterion) values, which serve as the evaluation criterion for the clustering results. Compared with the prior art, the loop closure detecting method has the advantages of high real-time performance and high detection precision.
Owner:NANJING UNIV OF POSTS & TELECOMM
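The image-preprocessing test described here (comparing inner products of image weight vectors) amounts to a cosine similarity between bag-of-visual-words vectors. The sketch below shows that test in isolation; the 0.95 rejection threshold is an assumption for illustration.

```python
# Reject the current image if its visual-word weight vector is nearly
# identical to that of the previous history image, so near-duplicate frames
# do not reach the Bayesian loop-closure filter.
import numpy as np

def bow_similarity(w1, w2):
    denom = np.linalg.norm(w1) * np.linalg.norm(w2)
    return float(w1 @ w2) / denom if denom > 0 else 0.0

def reject_current_image(current_w, previous_w, threshold=0.95):
    return bow_similarity(current_w, previous_w) >= threshold
```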

Simultaneous localization and mapping (SLAM) method for unmanned aerial vehicle based on mixed vision odometers and multi-scale map

The invention discloses a simultaneous localization and mapping (SLAM) method for an unmanned aerial vehicle based on mixed visual odometers and a multi-scale map, and belongs to the technical field of autonomous navigation of unmanned aerial vehicles. According to the SLAM method, a downward-looking monocular camera, a forward-looking binocular camera and an airborne computer are carried on the unmanned aerial vehicle platform; the monocular camera is used for a visual odometer based on the direct method, and the binocular camera is used for a visual odometer based on the feature point method; the mixed visual odometers fuse the outputs of the two visual odometers to construct a local map for positioning, and the real-time pose of the unmanned aerial vehicle is obtained; the pose is then fed back to the flight control system to control the position of the unmanned aerial vehicle; and the airborne computer transmits the real-time pose and the collected images to a ground station, the ground station plans the flight path in real time according to the constructed global map and sends waypoint information to the unmanned aerial vehicle, and thus autonomous flight of the unmanned aerial vehicle is achieved. Real-time pose estimation and environmental perception of the unmanned aerial vehicle in GPS-denied environments are achieved, and the intelligence level of the unmanned aerial vehicle is greatly increased.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
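The abstract does not detail how the two odometer outputs are fused; one simple possibility, shown purely as a hedged sketch, is an information-weighted average of the two position estimates using per-odometer variances (the variances here are made up).

```python
# Fuse position estimates from the direct-method and feature-point-method
# odometers, weighting each by the inverse of its reported variance.
import numpy as np

def fuse_positions(p_direct, var_direct, p_feature, var_feature):
    w_d, w_f = 1.0 / var_direct, 1.0 / var_feature
    fused = (w_d * p_direct + w_f * p_feature) / (w_d + w_f)
    return fused, 1.0 / (w_d + w_f)

p, var = fuse_positions(np.array([1.0, 2.0, 3.0]), 0.04,
                        np.array([1.1, 2.1, 2.9]), 0.01)
```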

Simultaneous localization and mapping method of monocular vision robot based on retracking strategy

The invention discloses a simultaneous localization and map construction method for a monocular vision robot based on a retracking strategy. The method comprises the following steps: 1) extracting ORB features from each picture; 2) tracking the camera pose by feature matching between adjacent pictures; 3) applying relocation and the retracking strategy to pictures that fail to be tracked; 4) if the relocation succeeds, estimating the pose of the current camera, stopping the retracking strategy and deleting the temporary variables generated by the retracking strategy; 5) if the retracking strategy succeeds, generating a new track; 6) judging the number of generated tracks and, if the number exceeds a threshold value, eliminating the initial track; 7) carrying out closed-loop detection on each key frame and carrying out track fusion after the detection succeeds; 8) when the localization system terminates, screening the tracks to obtain the one track with accurate poses. The simultaneous localization and map construction method disclosed by the invention has the advantage that the complete camera track can be recovered under fast motion, occlusion, insufficient texture and illumination changes.
Owner:PEKING UNIV SHENZHEN GRADUATE SCHOOL +1
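Steps 1) and 2) of this method (ORB extraction and matching between adjacent pictures) can be sketched with OpenCV; the ratio-test threshold below is an assumption, not a value from the patent.

```python
# Extract ORB features from two adjacent frames and keep matches that pass
# Lowe's ratio test with a brute-force Hamming matcher.
import cv2

def match_orb(img_prev, img_curr, n_features=1000, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return kp1, kp2, good
```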

Automatic parking system and method based on stereoscopic vision localization and mapping

The invention discloses an automatic parking system and method based on stereoscopic vision localization and mapping. The automatic parking system comprises an image acquisition module, an SLAM (Simultaneous Localization and Mapping) module and a route planning module. The image acquisition module corrects the images acquired by a pre-calibrated binocular camera and computes a disparity image, from which the distance between the vehicle and surrounding objects is obtained; scene-flow motion detection is realized by matching the previous frame with the current frame, so that dynamic points in the image are obtained. The SLAM module integrates the acquired color images and the obtained three-dimensional information to locate the position of the vehicle and build a map. The route planning module plans a route to find a parking position according to the built map; after the position of the vehicle and the target position are obtained, it is judged whether the space of the parking position is sufficient, and if the parking conditions are met, an optimal parking trajectory is planned to control the vehicle, so that automatic parking is realized. The automatic parking system has a simple operation process, is easy to implement, is low in cost and has a wide application range.
Owner:SUN YAT SEN UNIV
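The "is the parking space large enough" judgment mentioned above reduces to a simple geometric check once the slot dimensions are measured from the map; the vehicle footprint and safety margin below are illustrative numbers, not patent values.

```python
# Decide whether a measured parking slot can accommodate the vehicle plus a
# safety margin on every side.
def parking_space_fits(slot_length_m, slot_width_m,
                       car_length_m=4.6, car_width_m=1.9, margin_m=0.4):
    return (slot_length_m >= car_length_m + 2 * margin_m and
            slot_width_m >= car_width_m + 2 * margin_m)

print(parking_space_fits(5.6, 2.8))  # True
print(parking_space_fits(5.0, 2.3))  # False
```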

Manufacturing method of multi-feature fusion map facing autonomous vehicle

The invention discloses a method for producing a multi-feature fusion map for an autonomous vehicle. The method comprises the following steps: collecting data from each sensor of an on-board laser radar device; generating a three-dimensional point cloud map from IMU (Inertial Measurement Unit) data and laser measurement data; generating a visual feature map from the IMU data and camera data; preprocessing GPS data and converting geodetic coordinates into spatial rectangular coordinates; performing globally optimized fusion with a continuous-time SLAM (Simultaneous Localization and Mapping) algorithm; and generating the multi-feature fusion map. The method integrates data from multiple sensors, including a laser radar, an IMU, a camera and a GPS, so that stability and accuracy are greatly improved; the data are processed by fusing a visual SLAM algorithm and a laser SLAM algorithm, thereby obtaining a better mapping result than either a visual SLAM algorithm or a laser SLAM algorithm alone; and the method is simple, practical and low in cost, and the produced map is accurate and rich in attributes, thereby improving the safety of the driving process of the autonomous vehicle.
Owner:深圳市智绘科技有限公司
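The GPS preprocessing step (geodetic to spatial rectangular coordinates) is a standard conversion; the sketch below uses the WGS-84 ellipsoid constants and is a generic illustration rather than the patent's exact procedure.

```python
# Convert geodetic latitude/longitude/height to Earth-centered, Earth-fixed
# (ECEF) rectangular coordinates on the WGS-84 ellipsoid.
import math

def geodetic_to_ecef(lat_deg, lon_deg, h_m):
    a = 6378137.0              # WGS-84 semi-major axis [m]
    e2 = 6.69437999014e-3      # WGS-84 first eccentricity squared
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    N = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)
    x = (N + h_m) * math.cos(lat) * math.cos(lon)
    y = (N + h_m) * math.cos(lat) * math.sin(lon)
    z = (N * (1.0 - e2) + h_m) * math.sin(lat)
    return x, y, z
```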

Device and method used for real-time positioning and map building

A mobile electronic device is used for positioning and map building in an unknown environment. The mobile electronic device comprises a first sensor, a second sensor, a feature point extraction unit, a matching unit, and a positioning and map building unit. The first sensor is used for acquiring a first image of the current scene at the current position and orientation; the second sensor is used for acquiring a second image of the current scene at the current position and orientation; the feature point extraction unit is used for extracting feature points of the first image and feature points of the second image with a feature extraction algorithm; the matching unit is used for matching the feature points of the current scene with the feature points of a previous scene and acquiring a transformation matrix from the previous scene to the current scene; and the positioning and map building unit is used for determining, based on the transformation matrix and the matched feature points, the change in position and orientation of the mobile electronic device relative to the previous position and orientation so as to perform positioning, and for merging the images of the current scene into a known map.
Owner:LENOVO (BEIJING) CO LTD
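The transformation matrix between the previous and current scenes can be estimated from matched feature points in several ways; assuming 3D correspondences are available, the snippet below uses the standard SVD-based rigid alignment (Kabsch), which may differ from what the device actually does.

```python
# Estimate R, t such that curr ≈ R @ prev + t from two Nx3 arrays of matched
# 3D feature points, using the SVD (Kabsch) method.
import numpy as np

def rigid_transform(prev_pts, curr_pts):
    cp, cc = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    H = (prev_pts - cp).T @ (curr_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cc - R @ cp
    return R, t
```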

Multisensor fusion-based unmanned aerial vehicle SLAM (simultaneous localization and mapping) navigation method and system

The invention discloses a multisensor fusion-based unmanned aerial vehicle SLAM (simultaneous localization and mapping) navigation method and system. The method comprises the following steps: acquiring image information of the surrounding environment of an unmanned aerial vehicle in real time, and acquiring pose information of the unmanned aerial vehicle from the image information; acquiring the depth information between the unmanned aerial vehicle and obstacles in real time, fusing the pose information and the depth information to construct an obstacle depth map, and acquiring global pose information of the unmanned aerial vehicle from the obstacle depth map; and, according to the global pose information and the obstacle depth map, generating a flight path for the unmanned aerial vehicle with an online dynamic path planning method and controlling autonomous obstacle-avoiding flight of the unmanned aerial vehicle along the flight path. With the method and system, real-time localization and mapping of the unmanned aerial vehicle in a complex environment can be achieved; compared with conventional unmanned aerial vehicle navigation technology, the navigation technology adopted by the invention achieves real-time localization and mapping and autonomous navigation, and improves the intelligence and navigation accuracy of the unmanned aerial vehicle.
Owner:BEIJING FORESTRY UNIVERSITY
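The obstacle depth map built here can be thought of as an occupancy grid fed by depth measurements; the sketch below marks grid cells hit by obstacle points, with a made-up resolution and frame convention rather than the patent's representation.

```python
# Mark occupied cells in a 2D grid from obstacle points expressed in the map
# frame; cell size and grid origin are illustrative assumptions.
import numpy as np

def update_occupancy(grid, points_xy, origin_xy=(0.0, 0.0), cell_m=0.2):
    for x, y in points_xy:
        i = int((x - origin_xy[0]) / cell_m)
        j = int((y - origin_xy[1]) / cell_m)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1
    return grid

grid = update_occupancy(np.zeros((50, 50), dtype=np.uint8), [(1.0, 2.5), (3.3, 0.4)])
```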

Visual SLAM (simultaneous localization and mapping) method based on SINS (strapdown inertial navigation system)/GPS (global positioning system) and speedometer assistance

The invention discloses a visual SLAM (simultaneous localization and mapping) method assisted by an SINS (strapdown inertial navigation system), a GPS (global positioning system) and a speedometer. The method comprises the following steps: when GPS signals are available, performing data fusion on the output of the GPS and the SINS to obtain attitude, velocity and position information; when the GPS signals are unavailable, performing data fusion on the output of the speedometer and the SINS to obtain attitude, velocity and position information; shooting environmental images with a binocular camera, and performing feature extraction and feature matching on the environmental images; and achieving positioning and map building with the obtained a-priori attitude, velocity and position information and the environmental features, thereby completing the visual SLAM algorithm. According to the visual SLAM method disclosed by the invention, visual SLAM is assisted by the SINS, the GPS and the speedometer, positioning and map building can be achieved in both indoor and outdoor environments, the application range is wide, and the precision and robustness of positioning are improved.
Owner:SOUTHEAST UNIV
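The switching between GPS-aided and speedometer-aided fusion can be reduced to a one-dimensional toy example; the correction gains below are illustrative assumptions, not the patent's filter design.

```python
# Correct the SINS-predicted position with GPS when a fix is available,
# otherwise with the speedometer-derived position; fall back to pure inertial
# propagation when neither aid is present.
def fuse_position(sins_pred, gps_pos=None, odo_pos=None, k_gps=0.8, k_odo=0.3):
    if gps_pos is not None:
        return sins_pred + k_gps * (gps_pos - sins_pred)
    if odo_pos is not None:
        return sins_pred + k_odo * (odo_pos - sins_pred)
    return sins_pred

print(fuse_position(10.0, gps_pos=10.4))  # GPS available
print(fuse_position(10.0, odo_pos=9.8))   # GPS outage, speedometer assist
```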

Method for simultaneous localization and mapping of mobile robot based on improved particle filter

The invention discloses a method for simultaneous localization and mapping of a mobile robot based on an improved particle filter. The method comprises the following steps: initializing the pose of the robot at the initial moment; obtaining the prior probability density function at time t from the pose information at time t-1, and generating a sampling particle set p; initializing the particle weights; selecting an importance probability density function, generating a new sampling particle set q, calculating the particle weights, updating the weights and normalizing them; calculating the weighted sum of the random sample particles at the current time t to express the posterior probability density, and obtaining the robot pose and the environmental map information; judging whether a new observation has arrived, and if so, returning to the sampling step, otherwise ending the cycle; and, before returning, judging whether resampling is needed. A dynamic threshold is set for this judgment according to changes in the system state, and a genetic algorithm is incorporated. The method reduces the influence of particle degeneracy on SLAM and reduces the computational load of the SLAM problem.
Owner:HARBIN ENG UNIV
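The resampling decision in this kind of particle filter is usually based on the effective sample size of the normalized weights; the sketch below uses a fixed fraction of the particle count as the threshold, whereas the patent sets the threshold dynamically.

```python
# Decide whether the particle set has degenerated enough to warrant
# resampling, using the effective sample size N_eff = 1 / sum(w_i^2).
import numpy as np

def needs_resampling(weights, frac=0.5):
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    n_eff = 1.0 / np.sum(w ** 2)
    return n_eff < frac * len(w)

print(needs_resampling([0.7, 0.1, 0.1, 0.1]))      # True: weights degenerated
print(needs_resampling([0.25, 0.25, 0.25, 0.25]))  # False: weights balanced
```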

Unmanned-aerial-vehicle visual-SLAM (Simultaneous Localization and Mapping) method based on binocular camera, unmanned aerial vehicle and storage medium

The invention discloses an unmanned-aerial-vehicle visual-SLAM (Simultaneous Localization and Mapping) method based on a binocular camera, an unmanned aerial vehicle and a computer-readable storage medium. The method includes the steps of: acquiring depth images of at least two different locations through the binocular camera; obtaining camera pose information with a visual odometer from the acquired depth images of the at least two different locations; carrying out nonlinear optimization, appearance-based loop closure detection and loop closure verification on the camera pose information to obtain optimized camera pose information; and carrying out binocular dense mapping according to the optimized camera pose information to obtain a global map. According to the method, the depth images of the different locations are acquired through the binocular camera, and binocular dense mapping is carried out after visual odometry, nonlinear optimization, loop closure detection and loop closure verification to obtain the global map; on the one hand, the interference problems that arise when RGB-D cameras are adopted can be avoided, and on the other hand, more precise localization can be realized and a more precise map can be built.
Owner:EHANG INTELLIGENT EQUIP GUANGZHOU CO LTD
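The nonlinear optimization mentioned above typically minimizes a reprojection residual over camera poses; the snippet below shows that residual for a single landmark under a generic pinhole model with assumed intrinsics, not the patent's exact formulation.

```python
# Reprojection residual of one world point: transform into the camera frame,
# project with a pinhole model, and compare with the observed pixel.
import numpy as np

def reprojection_error(R, t, point_w, pixel_uv, fx, fy, cx, cy):
    p_cam = R @ point_w + t
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return np.array([u, v]) - np.asarray(pixel_uv)
```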