
514 results for "Global Map" patented technology

Global Map is a set of digital maps that accurately cover the whole globe to express the status of the global environment. It is developed through the cooperation of National Geospatial Information Authorities (NGIAs) around the world. The initiative to develop Global Map through international cooperation, the Global Mapping Project, was proposed in 1992 by Japan's then Ministry of Construction (MOC), now the Ministry of Land, Infrastructure, Transport and Tourism (MLIT).

System and method for generating lane-level navigation map of unmanned vehicle

The invention relates to a system and method for generating a lane-level navigation map for an unmanned vehicle from multi-source data. The lane-level navigation map comprises an offline global map part and an online local map part. In the offline module, raw road data for the target region in which the unmanned vehicle operates is acquired from satellite (or aerial) photos, vehicle sensors (laser radar and a camera) and a high-precision integrated positioning system (a global positioning system plus inertial navigation); the raw road data is then processed offline to extract multiple kinds of road information, and the extraction results are fused to generate the offline global map, which is stored in a layered structure. In the online module, while the unmanned vehicle drives autonomously in the target region, road data is extracted from the offline global map according to real-time positioning information and drawn as an online local map centered on the vehicle within a fixed distance range. The system and method can be applied to fusion sensing, high-precision positioning and intelligent decision-making for the unmanned vehicle.
Owner:安徽中科星驰自动驾驶技术有限公司
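A minimal sketch of the layered offline/online map idea described above, not the patented implementation: layer names, element fields and the query radius are assumptions chosen only for illustration.

```python
# Sketch: a layered offline global map and extraction of an online local map
# centred on the vehicle. Layer names and fields are illustrative assumptions.
from dataclasses import dataclass, field
import math

@dataclass
class MapElement:
    layer: str               # e.g. "lane_centerline", "lane_boundary", "stop_line"
    x: float                 # reference position in the global frame (metres)
    y: float
    attributes: dict = field(default_factory=dict)

@dataclass
class OfflineGlobalMap:
    layers: dict = field(default_factory=dict)   # layer name -> list[MapElement]

    def add(self, element: MapElement) -> None:
        self.layers.setdefault(element.layer, []).append(element)

    def query_local(self, ego_x: float, ego_y: float, radius: float) -> dict:
        """Build the online local map: all elements within `radius` of the vehicle."""
        local = {}
        for name, elements in self.layers.items():
            near = [e for e in elements
                    if math.hypot(e.x - ego_x, e.y - ego_y) <= radius]
            if near:
                local[name] = near
        return local

# Usage: extract a vehicle-centred local map from real-time positioning.
global_map = OfflineGlobalMap()
global_map.add(MapElement("lane_centerline", 12.0, 3.5, {"lane_id": 1}))
global_map.add(MapElement("stop_line", 250.0, 0.0))
local_map = global_map.query_local(ego_x=10.0, ego_y=4.0, radius=100.0)
```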

High-precision map generation system and method based on high-definition ortho-photo map

The invention provides a high-precision map generation system and method based on a high-definition ortho-photo map. The system comprises an image shooting module, a horizontal laser radar, a GPS (global positioning system) processing module, an inertial navigation module, a rotary encoder, an image and data preprocessing module and a geographical information processing module. The vehicle-mounted image shooting module acquires road images; the laser radar scans obstacles and acquires geographical information data; the precision of the GPS information is optimized; the ortho-photo map of each road image is acquired and then rotated and sheared to generate a corresponding geographical information file; the full ortho-photo map sequence and the corresponding geographical information files are combined and spliced to generate the base map of the global map; and the various types of geographical information data are marked on the base map. The method and system can generate a high-precision navigation map with centimeter-level data precision, and are of great practical value for advanced driver assistance systems and unmanned vehicles.
Owner:SHANGHAI JIAO TONG UNIV
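A hedged sketch of the splicing step, not the patented pipeline: here each tile's "geographical information file" is reduced to a ground-sample distance and the world coordinate of its top-left pixel, which is an assumption for illustration only.

```python
# Sketch: splice georeferenced ortho-photo tiles into a single base-map mosaic.
import numpy as np

def splice_tiles(tiles, gsd):
    """tiles: list of (grayscale HxW array, (east, north) of the top-left pixel in m).
    gsd: ground sample distance in metres per pixel. Returns the mosaic and its origin."""
    easts  = [e for _, (e, n) in tiles]
    norths = [n for _, (e, n) in tiles]
    min_e = min(easts)
    max_n = max(norths)
    max_e = max(e + img.shape[1] * gsd for img, (e, n) in tiles)
    min_n = min(n - img.shape[0] * gsd for img, (e, n) in tiles)

    height = int(round((max_n - min_n) / gsd))
    width  = int(round((max_e - min_e) / gsd))
    mosaic = np.zeros((height, width), dtype=np.uint8)

    for img, (e, n) in tiles:
        row = int(round((max_n - n) / gsd))      # north decreases down the image
        col = int(round((e - min_e) / gsd))
        mosaic[row:row + img.shape[0], col:col + img.shape[1]] = img
    return mosaic, (min_e, max_n)

# Usage: two 100x100 px tiles at 0.05 m/px, offset 5 m east of each other.
t0 = np.full((100, 100), 120, dtype=np.uint8)
t1 = np.full((100, 100), 200, dtype=np.uint8)
base_map, origin = splice_tiles([(t0, (0.0, 0.0)), (t1, (5.0, 0.0))], gsd=0.05)
```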

Automatic guided vehicle based on map matching and guide method of automatic guided vehicle

The invention provides an automatic guided vehicle based on map matching and a guidance method for the vehicle. The automatic guided vehicle comprises a map creation module, a location module, a human-machine interaction module, a route planning module, an autonomous navigation module and a dispatching system module. The map creation module creates an environmental map; the vehicle uses the on-board location module to match the observed local environmental information against the previously created global map to obtain its pose in the global frame; the human-machine interaction module displays the vehicle's information and working state on the map in real time; the route planning module plans a feasible optimal route in the global map; the vehicle navigates autonomously along the planned route by means of the autonomous navigation module; and the dispatching system module dispatches the vehicle closest to a calling site to the job. The approach requires no auxiliary positioning facilities and no modification of the environment; it relies entirely on the vehicle's own capability and needs no prior information about the environment.
Owner:SHANGHAI JIAO TONG UNIV
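The patent does not disclose its matching algorithm here, so the following is only an assumed, simplified stand-in: scoring candidate poses by how many laser endpoints land on occupied cells of a pre-built global occupancy grid, with the grid origin assumed at (0, 0).

```python
# Sketch: brute-force scan-to-map matching over a set of candidate poses.
import numpy as np

def match_scan(global_grid, scan_xy, candidates, resolution=0.05):
    """global_grid: 2D bool array (True = occupied); scan_xy: Nx2 points in the
    robot frame; candidates: iterable of (x, y, theta) poses in the map frame.
    Returns the candidate pose whose transformed scan best overlaps the map."""
    best_pose, best_score = None, -1
    for x, y, theta in candidates:
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        pts = scan_xy @ R.T + np.array([x, y])           # robot frame -> map frame
        cells = np.floor(pts / resolution).astype(int)    # metres -> grid indices
        inside = ((cells[:, 0] >= 0) & (cells[:, 0] < global_grid.shape[1]) &
                  (cells[:, 1] >= 0) & (cells[:, 1] < global_grid.shape[0]))
        hits = global_grid[cells[inside, 1], cells[inside, 0]].sum()
        if hits > best_score:
            best_pose, best_score = (x, y, theta), hits
    return best_pose, best_score
```

In practice the candidate set would come from the vehicle's odometry prediction rather than an exhaustive grid of poses; the brute-force loop is kept here only to make the scoring idea explicit.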

Temporary robot obstacle avoidance method based on depth camera

The invention discloses a temporary robot obstacle avoidance method based on a depth camera. When a robot detects an obstacle while moving along a set global navigation path, the depth camera installed on the robot is deflected left and right to capture depth maps, which are processed into a local environment map. In the local environment map, the free travelling space on each side of the obstacle is measured, and the wider side is selected as the detour space. A rectangular detour region is drawn, candidate detour points are generated inside it, and feasible points are selected from the candidates to generate a detour path. The detour path is mapped into the global map, and its end point is connected to the next travelling node on the original global navigation path to complete the temporary obstacle avoidance. With this method, temporary obstacle avoidance can be completed efficiently and accurately, and after the detour the robot returns to the original global navigation path, increasing the robot's intelligence.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
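A minimal sketch of the side-selection and rejoin logic described above, with assumed geometry (straight-line path, fixed lateral margin); it is not the patented point-sampling procedure.

```python
# Sketch: pick the wider free side, swing out around the obstacle, rejoin the path.
import numpy as np

def plan_detour(robot_xy, obstacle_xy, next_node_xy, left_width, right_width,
                lateral_margin=0.5, n_points=3):
    """All positions are 2D points in the global map frame; widths in metres."""
    direction = 1.0 if left_width >= right_width else -1.0   # +1 = detour left
    forward = next_node_xy - robot_xy
    forward = forward / np.linalg.norm(forward)
    lateral = np.array([-forward[1], forward[0]]) * direction
    offset = (max(left_width, right_width) / 2.0 + lateral_margin) * lateral

    # Waypoints swing out beside the obstacle, then reconnect to the next node.
    waypoints = [robot_xy]
    for t in np.linspace(0.25, 0.75, n_points):
        along = robot_xy + t * (next_node_xy - robot_xy)
        waypoints.append(along + offset)
    waypoints.append(next_node_xy)
    return np.array(waypoints)

# Usage: obstacle ahead, more free space on the left.
path = plan_detour(np.array([0.0, 0.0]), np.array([2.0, 0.0]),
                   np.array([4.0, 0.0]), left_width=1.2, right_width=0.4)
```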

Simultaneous localization and mapping (SLAM) method for unmanned aerial vehicle based on mixed vision odometers and multi-scale map

The invention discloses a simultaneous localization and mapping (SLAM) method for an unmanned aerial vehicle based on mixed visual odometers and a multi-scale map, and belongs to the technical field of autonomous navigation of unmanned aerial vehicles. A downward-looking monocular camera, a forward-looking binocular camera and an airborne computer are carried on the unmanned aerial vehicle platform; the monocular camera drives a visual odometer based on the direct method, and the binocular camera drives a visual odometer based on the feature-point method. The mixed visual odometry fuses the outputs of the two visual odometers to construct a local map for positioning and obtain the real-time pose of the unmanned aerial vehicle; the pose is fed back to the flight control system to control the vehicle's position. The airborne computer transmits the real-time pose and the collected images to a ground station, which plans the flight path in real time according to the constructed global map and sends waypoint information back to the unmanned aerial vehicle, thereby achieving autonomous flight. Real-time pose estimation and environmental perception are achieved in GPS-denied environments, and the intelligence level of the unmanned aerial vehicle is greatly increased.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
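The abstract does not state how the two odometer outputs are fused, so the following is only an assumed fusion rule for illustration: the standard information-filter (inverse-covariance) combination of the two position estimates.

```python
# Sketch: fuse the monocular (direct-method) and stereo (feature-based) VO positions.
import numpy as np

def fuse_positions(p_mono, cov_mono, p_stereo, cov_stereo):
    """p_*: 3-vectors; cov_*: 3x3 covariance matrices.
    Returns the information-weighted fused position and its covariance."""
    info_mono = np.linalg.inv(cov_mono)
    info_stereo = np.linalg.inv(cov_stereo)
    cov_fused = np.linalg.inv(info_mono + info_stereo)
    p_fused = cov_fused @ (info_mono @ p_mono + info_stereo @ p_stereo)
    return p_fused, cov_fused

# Usage: the stereo VO is trusted more (smaller covariance) in this example.
p, cov = fuse_positions(np.array([1.0, 2.0, 3.0]), np.eye(3) * 0.2,
                        np.array([1.1, 2.1, 2.9]), np.eye(3) * 0.05)
```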

Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot

The invention discloses a three-dimensional laser radar positioning and navigation method for intelligent inspection. The method comprises the steps of: S1, acquiring original point cloud data of the area to be inspected with a laser radar and establishing a global point cloud map; S2, converting the global point cloud map into a grid map and performing global path planning according to preset patrol points to obtain an optimal patrol path; S3, estimating an initial position, obtaining initial point cloud data at that position, and matching it against the global point cloud map to obtain an initial positioning value in the global map coordinate system; and S4, matching the current-frame point cloud data obtained in real time against the global point cloud map to obtain the current inspection pose. Because the method relies on the three-dimensional laser radar, three-dimensional perception of the environment, autonomous path planning and positioning navigation are well realized and the inspection precision of the inspection robot is improved; GPS positioning and navigation are not required, and resistance to electromagnetic interference is high.
Owner:SHANGHAI AEROSPACE SYST ENG INST
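A hedged sketch of step S2's map conversion, with assumed parameters (cell size and the height band considered traversable obstacles); the patent's actual projection rules are not given in the abstract.

```python
# Sketch: project a global 3D point cloud into a 2D occupancy grid for planning.
import numpy as np

def pointcloud_to_grid(points, resolution=0.1, z_min=0.2, z_max=1.5):
    """points: Nx3 array (x, y, z) in the global map frame.
    Returns a bool occupancy grid and the world coordinate of its origin."""
    band = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    origin = band[:, :2].min(axis=0)                      # lower-left corner (m)
    cells = np.floor((band[:, :2] - origin) / resolution).astype(int)
    shape = cells.max(axis=0) + 1                         # (nx, ny)
    grid = np.zeros((shape[1], shape[0]), dtype=bool)     # rows = y, cols = x
    grid[cells[:, 1], cells[:, 0]] = True
    return grid, origin

# Usage: a small synthetic cloud forming one wall at x = 2 m.
cloud = np.array([[2.0, y, 0.8] for y in np.arange(0.0, 3.0, 0.1)])
grid, origin = pointcloud_to_grid(cloud)
```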

Unmanned-aerial-vehicle visual-SLAM (Simultaneous Localization and Mapping) method based on binocular camera, unmanned aerial vehicle and storage medium

The invention discloses an unmanned-aerial-vehicle visual SLAM (Simultaneous Localization and Mapping) method based on a binocular camera, an unmanned aerial vehicle and a computer-readable storage medium. The method includes the steps of: acquiring depth images of at least two different locations through the binocular camera; obtaining camera pose information through a visual odometer from the acquired depth images; carrying out nonlinear optimization, appearance-based loop closure detection and loop closure verification on the camera pose information to obtain optimized camera pose information; and carrying out binocular dense mapping according to the optimized camera pose information to obtain a global map. Because the depth images of different locations are acquired with the binocular camera and dense mapping is carried out after visual odometry, nonlinear optimization, loop closure detection and loop closure verification, the interference problems associated with RGB-D cameras are avoided on the one hand, and on the other hand more precise localization is achieved and a more precise map is built.
Owner:EHANG INTELLIGENT EQUIP GUANGZHOU CO LTD
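The depth images above come from stereo triangulation; as a worked illustration (calibration values are assumed, not taken from the patent), depth follows from disparity as Z = f * b / d.

```python
# Sketch: recover metric depth from a stereo disparity map.
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """disparity: HxW array of pixel disparities (0 where no match).
    Returns depth in metres, with invalid pixels set to infinity."""
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Usage: 64 px disparity with a 700 px focal length and 12 cm baseline
# corresponds to a point about 1.31 m from the camera.
d = np.zeros((4, 4))
d[2, 2] = 64.0
print(disparity_to_depth(d, focal_px=700.0, baseline_m=0.12)[2, 2])
```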

Method and Apparatus for Generating a Map from Landmarks

A method and apparatus for generating a map from landmarks. The method and apparatus may involve the use of a processor circuit. An embodiment may consist of a platform with a processor circuit and various sensors known in the art for landmark generation, such as a plurality of cameras and/or laser rangefinders. Landmarks are stored in a data structure with their past untransformed landmarks. New landmark data is matched using previously stored untransformed landmarks. Generally, the current position is used for matching only when backtracking or closing the loop. Landmarks are grouped with other landmarks that are visible in the same time interval. Within these groups, dynamic landmarks are identified in a computationally efficient way using binning. Using basis landmarks in a group, a map relative to those basis landmarks is generated that is translation and rotation invariant. These relative maps are linked together to form a global map. The current position is non-cumulative and is generated by comparing currently observed landmarks to their global positions. The advantages of this method and apparatus are that it runs in real time, detects dynamic landmarks, and has accuracy comparable to other methods known in the art.
Owner:KRAUT JACOB
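One assumed way to realize a translation- and rotation-invariant relative map, shown only to make the idea concrete (the patent's exact construction may differ): express each landmark in a frame whose origin is the first basis landmark and whose x-axis points toward the second.

```python
# Sketch: a group's relative map is invariant to the platform's rigid motion.
import numpy as np

def relative_map(landmarks, basis_a, basis_b):
    """landmarks: Nx2 observed points; basis_a, basis_b: 2D basis landmarks.
    Frame: origin at basis_a, x-axis pointing toward basis_b."""
    x_axis = basis_b - basis_a
    x_axis = x_axis / np.linalg.norm(x_axis)
    y_axis = np.array([-x_axis[1], x_axis[0]])
    R = np.stack([x_axis, y_axis])            # rows are the new frame's axes
    return (landmarks - basis_a) @ R.T

# The same physical landmarks observed from a rotated and translated pose give
# the same relative map, so relative maps can be linked into a global map.
pts = np.array([[1.0, 1.0], [2.0, 0.5], [3.0, 2.0]])
theta = 0.7
Rw = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts_moved = pts @ Rw.T + np.array([5.0, -2.0])
print(np.allclose(relative_map(pts, pts[0], pts[1]),
                  relative_map(pts_moved, pts_moved[0], pts_moved[1])))  # True
```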

Inspection robot navigation system and method based on RTK Beidou and laser radar

The invention relates to an inspection robot navigation system and method based on RTK Beidou and a laser radar. The system includes a robot mobile station and a background management server. The robot mobile station includes a robot body together with a control module, a positioning and navigation module, a wireless communication module and a power supply management module mounted on the body; the positioning and navigation module includes the laser radar and an RTK/SINS unit. The navigation map combines a global map with a local map: the global map is constructed by recording the robot's track as it is driven, with navigation using a preview PID algorithm, while the local map is constructed by recording discrete obstacle data points with the laser radar, restoring obstacle edge information through clustering and curve fitting, and avoiding obstacles through artificial potential field path planning. Compared with the prior art, the system and method offer high navigation and positioning precision, require no pavement reconstruction, and provide strong environmental adaptability and working stability.
Owner:SHANGHAI UNIVERSITY OF ELECTRIC POWER
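A sketch of the artificial potential field step named above, using the standard formulation with assumed gains and influence distance rather than the patent's tuning.

```python
# Sketch: one gradient step of an artificial-potential-field local planner.
import numpy as np

def potential_field_step(robot, goal, obstacles, k_att=1.0, k_rep=0.5,
                         influence=1.5, step=0.1):
    """robot, goal: 2D points; obstacles: Nx2 points from the clustered lidar scan.
    Returns the next robot position after one step along the combined force."""
    force = k_att * (goal - robot)                      # attractive component
    for obs in obstacles:
        diff = robot - obs
        dist = np.linalg.norm(diff)
        if 1e-6 < dist < influence:                     # repulsion only nearby
            force += k_rep * (1.0 / dist - 1.0 / influence) / dist**2 * (diff / dist)
    direction = force / (np.linalg.norm(force) + 1e-9)
    return robot + step * direction

# Usage: one step toward the goal while being pushed away from an obstacle.
nxt = potential_field_step(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
                           obstacles=np.array([[1.0, 0.2]]))
```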

Robot synchronous positioning and map construction method and system

The invention discloses a robot synchronous positioning and map construction (SLAM) method based on line segment features and line segment vanishing points, applied to the positioning and mapping of a robot in an unknown environment. The method comprises the steps of: extracting line segment features, intersection points and vanishing points from images; establishing line segment matches between images according to the intersection points; estimating camera poses from the line segment matching results and selecting new keyframes; inserting each keyframe into the map, calculating three-dimensional coordinates of the newly added line segment features, performing bundle adjustment on the local map and removing outlier observations; and performing closed-loop detection and global map optimization based on the intersection points. The invention further discloses a corresponding synchronous positioning and map construction system. The method and system can be applied to the positioning and mapping of robots in unknown environments, and are particularly suitable for structured or semi-structured scenes such as indoor environments and outdoor environments with buildings.
Owner:上海阅面网络科技有限公司
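A small worked sketch of the vanishing-point computation the method relies on, using the standard homogeneous-coordinate construction; it illustrates the geometry, not the patent's specific pipeline.

```python
# Sketch: estimate a vanishing point as the common intersection of image lines.
import numpy as np

def vanishing_point(segments):
    """segments: list of ((x1, y1), (x2, y2)) endpoints in pixels.
    Returns the point closest to all supporting lines, via the null space of
    the stacked homogeneous line equations."""
    lines = []
    for (x1, y1), (x2, y2) in segments:
        p1 = np.array([x1, y1, 1.0])
        p2 = np.array([x2, y2, 1.0])
        line = np.cross(p1, p2)                 # homogeneous line through p1, p2
        lines.append(line / np.linalg.norm(line[:2]))
    A = np.stack(lines)
    _, _, vt = np.linalg.svd(A)                 # least-squares common intersection
    vp = vt[-1]
    return vp[:2] / vp[2]

# Usage: segments from y = x and y = -x + 10 intersect at (5, 5).
print(vanishing_point([((0, 0), (2, 2)), ((0, 10), (2, 8))]))
```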

Deep network-based multi-strategy global crowd analysis method

The invention provides a deep-network-based multi-strategy global crowd analysis method. The method comprises the following steps: first, modelling the monitored area by drawing a global map schematic, establishing a layer whose orientation and extent correspond to the global map of the camera-monitored area, and preparing it for the import of crowd density data; second, for the scene of each camera, obtaining a spatial view mapping of the displayed monitor image through perspective transformation, i.e. mapping the camera's side view onto an overhead view of the ground; then extracting image features by VGG16 transfer learning, mapping the pre-divided input image blocks onto the feature layer through strides, and making a SWITCH decision on the features of each block to select either the R1 density estimation network or the R2 pedestrian detection network for that block; and finally integrating the per-block pedestrian detection or density estimation results into a density map and mapping the estimated density map onto the layer through perspective transformation, so that the global crowd situation can be accurately monitored.
Owner:苏州平江历史街区保护整治有限责任公司
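A hedged sketch of the final projection step, assuming the image-to-ground homography is already known from calibration (the grid size, cell size and homography here are placeholders, not values from the patent).

```python
# Sketch: project per-block crowd counts from the camera view onto the overhead layer.
import numpy as np

def project_density(block_centers, block_counts, H, layer_shape, cell_m=1.0):
    """block_centers: Nx2 pixel coordinates of image blocks; block_counts: N
    estimated person counts; H: 3x3 image->ground homography (assumed known).
    Accumulates the counts into an overhead density layer."""
    ones = np.ones((block_centers.shape[0], 1))
    ground = (H @ np.hstack([block_centers, ones]).T).T     # homogeneous points
    ground = ground[:, :2] / ground[:, 2:3]                  # ground coords (m)
    layer = np.zeros(layer_shape)
    for (gx, gy), count in zip(ground, block_counts):
        i, j = int(gy // cell_m), int(gx // cell_m)
        if 0 <= i < layer_shape[0] and 0 <= j < layer_shape[1]:
            layer[i, j] += count
    return layer

# Usage with an identity homography standing in for a calibrated one.
H = np.eye(3)
layer = project_density(np.array([[3.2, 4.8], [7.5, 1.1]]),
                        np.array([12.0, 3.0]), H, layer_shape=(10, 10))
```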

Visual SLAM method based on multi-feature fusion

The invention discloses a visual SLAM method based on multi-feature fusion, and relates to the field of robot visual localization and mapping. The depth-camera-based multi-feature-fusion visual SLAM (simultaneous localization and mapping) method solves the visual localization problem in cases where pure point features fail, by fully using the point and line features extracted from the image and constructing plane features from them. An adaptive threshold method is adopted to extract point features, yielding a more uniform distribution of points; line features are extracted, short line segments are deleted, and broken segments are merged to improve the accuracy of line feature matching. The point and line features are used to estimate the inter-frame pose and construct a local map; plane features are computed with a minimal parameterization to reduce the amount of calculation; and the point, line and plane features are tightly coupled by constructing a back-projection error function over the fused features, with a global map constructed for global pose optimization. The method offers high precision, good real-time performance and high robustness, and addresses the problem that feature-point-based visual SLAM loses precision or even fails in low-texture environments.
Owner:HARBIN UNIV OF SCI & TECH
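One common minimal plane parameterization, assumed here for illustration (the patent's exact choice is not stated in the abstract): store each plane as the 3-vector pi = n * d, and evaluate point-to-plane residuals that can be stacked with point and line terms in the joint cost.

```python
# Sketch: 3-parameter plane representation and point-to-plane residuals.
import numpy as np

def plane_to_minimal(normal, d):
    """Plane n·x = d with unit normal -> 3-parameter vector pi = n * d."""
    return np.asarray(normal) * d

def minimal_to_plane(pi):
    d = np.linalg.norm(pi)
    return pi / d, d

def point_to_plane_residuals(points, pi):
    """Signed distances of Nx3 map points to the plane encoded by pi."""
    n, d = minimal_to_plane(pi)
    return points @ n - d

# Usage: the plane z = 2 encoded minimally, and residuals of two points.
pi = plane_to_minimal([0.0, 0.0, 1.0], 2.0)
print(point_to_plane_residuals(np.array([[1.0, 1.0, 2.0], [0.0, 0.0, 3.5]]), pi))
```

Note that this parameterization is singular for planes through the origin, which is one reason alternative minimal forms exist; it is used here only because it is compact.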

Intelligent vehicle location method based on prior map

The invention discloses an intelligent vehicle localization method based on a prior map, including the following steps: 1) acquiring the pose information and observation information of the intelligent vehicle by means of an odometer and a vehicle-mounted radar, respectively; 2) processing the pose and observation information with a SLAM algorithm to obtain a global map of the whole driving area, a local map of the current driving area, and the pose of the intelligent vehicle in the SLAM coordinate system; 3) acquiring GPS pose information of the intelligent vehicle and converting it into the SLAM coordinate system; 4) fusing the SLAM pose information and the converted GPS pose information through Kalman filtering to obtain an initial pose estimate of the intelligent vehicle; 5) matching the global map and the local map, and optimizing the initial pose estimate on the basis of the matching result to obtain the final pose of the intelligent vehicle. The method greatly improves the precision and accuracy of the localization result.
Owner:HEFEI INSTITUTES OF PHYSICAL SCIENCE - CHINESE ACAD OF SCI
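A minimal sketch of step 4, using a standard Kalman update with an assumed identity measurement model and 2D position-only state; the patent's actual state vector and noise models are not given in the abstract.

```python
# Sketch: fuse the SLAM pose prediction with a GPS measurement in the SLAM frame.
import numpy as np

def kalman_fuse(x_slam, P_slam, z_gps, R_gps):
    """x_slam: 2D position from SLAM (prior); P_slam: its 2x2 covariance;
    z_gps: GPS position converted into the SLAM frame; R_gps: GPS covariance.
    The measurement model is the identity (H = I)."""
    S = P_slam + R_gps                        # innovation covariance
    K = P_slam @ np.linalg.inv(S)             # Kalman gain
    x_fused = x_slam + K @ (z_gps - x_slam)   # corrected initial pose estimate
    P_fused = (np.eye(2) - K) @ P_slam
    return x_fused, P_fused

# Usage: SLAM is locally precise (small covariance), GPS bounds the drift.
x, P = kalman_fuse(np.array([10.0, 5.0]), np.eye(2) * 0.04,
                   np.array([10.6, 4.8]), np.eye(2) * 1.0)
```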

Whole-course pose estimation method based on global map and multi-sensor information fusion

Active patent CN110706279A. Effects: accurate pose estimation throughout the process; high precision. Fields: image analysis; uncrewed vehicle; odometer.
The invention provides a whole-course pose estimation method based on global map and multi-sensor information fusion, and relates to the field of navigation. The method comprises: first, building an unmanned aerial vehicle system comprising multiple sensors; calibrating the sensors to obtain the corresponding parameters of each sensor and initializing the system; acquiring measurement information about the current pose of the carrier unmanned aerial vehicle with each sensor, and constructing and maintaining a local map from the image information of the visual-inertial odometry (VIO) system; and constructing a factor-graph-based multi-sensor information fusion framework, optimizing the factor graph to obtain the optimal state variables of the current VIO frame of the unmanned aerial vehicle system, updating the transformation between the local coordinate system and the global coordinate system at the current frame, and converting the local map into the global map. The measurements of all sensors carried by the unmanned aerial vehicle and the global map information are fused through global optimization, enhancing the accuracy and reliability of the pose estimation of the unmanned aerial vehicle system.
Owner:TSINGHUA UNIV
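A sketch of the final bookkeeping step only, under the assumption that poses are kept as 4x4 homogeneous matrices; it shows how an updated local-to-global transform converts local-map points into the global frame, not the patent's factor-graph optimization itself.

```python
# Sketch: update the local->global transform and convert the local map.
import numpy as np

def update_local_to_global(T_global_body, T_local_body):
    """Both arguments are 4x4 homogeneous poses of the same body frame, expressed
    in the global and the VIO-local coordinate systems. Returns T_global_local."""
    return T_global_body @ np.linalg.inv(T_local_body)

def transform_points(T, points):
    """Apply a 4x4 transform to Nx3 points."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homog.T).T[:, :3]

# Usage: the local frame turns out to be offset 2 m along x from the global frame.
T_local_body = np.eye(4); T_local_body[0, 3] = 1.0
T_global_body = np.eye(4); T_global_body[0, 3] = 3.0
T_gl = update_local_to_global(T_global_body, T_local_body)
global_map_points = transform_points(T_gl, np.array([[0.0, 0.0, 0.0]]))
```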