1081 results for "Odometer" patented technology

An odometer or odograph is an instrument used for measuring the distance traveled by a vehicle, such as a bicycle or car. The device may be electronic, mechanical, or a combination of the two. The noun derives from the Ancient Greek word ὁδόμετρον, hodómetron, from ὁδός, hodós ("path" or "gateway") and μέτρον, métron ("measure"). Early forms of the odometer existed in the ancient Greco-Roman world as well as in ancient China. In countries using Imperial units or US customary units it is sometimes called a mileometer or milometer, the former name especially being prevalent in the United Kingdom and among members of the Commonwealth.
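Whether mechanical or electronic, the core of an odometer is converting wheel rotations into distance. A minimal sketch of the electronic case, converting rotary-encoder ticks to metres; the encoder resolution and wheel diameter below are illustrative assumptions, not values from any patent on this page:

```python
import math

TICKS_PER_REV = 1024      # hypothetical encoder resolution (ticks per wheel revolution)
WHEEL_DIAMETER_M = 0.66   # hypothetical tire diameter in metres

def distance_from_ticks(ticks: int) -> float:
    """Distance traveled for a given encoder tick count:
    (fraction of a revolution) * wheel circumference."""
    circumference = math.pi * WHEEL_DIAMETER_M
    return ticks / TICKS_PER_REV * circumference
```

In a real vehicle the result would be accumulated over time and calibrated against tire wear and pressure, which change the effective circumference.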

Map merging method of unmanned aerial vehicle visual SLAM under city complex environment

The invention discloses a map merging method for unmanned aerial vehicle visual SLAM in a complex urban environment. The method comprises the steps of: 1, collecting images through an RGB-D camera installed on each unmanned aerial vehicle, preprocessing the images on board, and then performing image registration; 2, constructing a visual odometer and achieving loop detection; 3, optimizing the pose of the unmanned aerial vehicle; 4, constructing an octomap and achieving real-time online SLAM; 5, transmitting the octomap to a ground computer, merging each local octomap into a global octomap, and then transmitting the merged all-region octomap back to the unmanned aerial vehicles. According to the method, the computational load is reduced, real-time online SLAM can be achieved, and the risk of information loss caused by unstable wireless transmission is reduced; meanwhile, the task execution time is shortened, the task execution efficiency is improved, the risk posed by the limited endurance of the unmanned aerial vehicles is reduced, more precise positioning can be obtained, and a more precise map can be established.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
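Step 5 above — merging each local octomap into a global map on the ground computer — can, at its simplest, be sketched as shifting a local occupancy grid by the vehicle's pose offset and writing it into the global grid. The dict-based voxel grid and integer offset below are simplifying assumptions; a real octomap merge operates on an octree with a full 6-DoF transform:

```python
def merge_local_map(global_map, local_map, offset):
    """Write a local occupancy grid (dict mapping (x, y, z) voxel
    indices to an occupied flag) into the global grid, shifted by
    the vehicle's integer voxel offset."""
    ox, oy, oz = offset
    for (x, y, z), occupied in local_map.items():
        global_map[(x + ox, y + oy, z + oz)] = occupied
    return global_map
```

A production merge would also resolve conflicts between overlapping observations (e.g. by log-odds accumulation) rather than overwriting.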

MEMS (Micro Electro Mechanical System) inertial measurement unit-based pipeline surveying and mapping and defect positioning device and pipeline surveying and mapping and defect positioning method thereof

Active · CN104235618A · Low cost · Solving mapping problems · Pipeline systems · Odometer · Engineering
The invention belongs to the technical field of pipeline surveying and mapping, and in particular relates to an MEMS (Micro Electro Mechanical System) inertial measurement unit-based pipeline surveying and mapping and defect positioning device and method. The device comprises a measurement unit, a correction unit, a defect detection unit, a power supply unit, and a data processing and memory unit. Compared with existing inventions, the MEMS inertial measurement unit is lower in cost and, besides operating autonomously, supports a wider range of pipe diameters, down to a minimum of 60 mm. The MEMS inertial measurement unit is combined with an odometer, a flux-gate magnetometer and an ultrasonic detection device. The pipeline surveying and mapping problem is thereby solved without laying fixed-point magnetic markers; meanwhile, defect positions are detected and marked, which facilitates the maintenance and reinforcement of defective pipelines. The odometer wheel is also connected to a power generation device, so the problems caused by external power supply are avoided.
Owner:HARBIN ENG UNIV
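The odometer-plus-magnetometer combination described above amounts to dead reckoning: each travelled increment from the odometer wheel is projected along the heading supplied by the magnetometer. A planar sketch (the 2-D simplification is an assumption; the device itself uses the full IMU in 3-D):

```python
import math

def dead_reckon(steps):
    """Integrate (distance_m, heading_rad) increments from an
    odometer wheel and a heading sensor into a planar position,
    starting from the origin."""
    x = y = 0.0
    for d, theta in steps:
        x += d * math.cos(theta)
        y += d * math.sin(theta)
    return x, y
```

Because errors accumulate without bound, dead reckoning is normally corrected by absolute references — here, the role played by the defect markers and ultrasonic detection.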

Simultaneous localization and mapping (SLAM) method for unmanned aerial vehicle based on mixed vision odometers and multi-scale map

The invention discloses a simultaneous localization and mapping (SLAM) method for an unmanned aerial vehicle based on mixed vision odometers and a multi-scale map, and belongs to the technical field of autonomous navigation of unmanned aerial vehicles. In the SLAM method, a downward-looking monocular camera, a forward-looking binocular camera and an airborne computer are carried on the unmanned aerial vehicle platform; the monocular camera feeds a visual odometer based on the direct method, and the binocular camera feeds a visual odometer based on the feature point method. The mixed visual odometers fuse the outputs of the two visual odometers to construct a local map for positioning, and the real-time pose of the unmanned aerial vehicle is obtained; the pose is then fed back to the flight control system to control the position of the unmanned aerial vehicle. The airborne computer transmits the real-time pose and collected images to a ground station; the ground station plans the flight path in real time according to the constructed global map and sends waypoint information to the unmanned aerial vehicle, so that autonomous flight is achieved. Real-time pose estimation and environmental perception of the unmanned aerial vehicle in GPS-denied environments are achieved, and the intelligence level of the unmanned aerial vehicle is greatly increased.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
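The abstract does not specify how the two odometer outputs are fused; a common baseline (an assumption here, not the patented scheme) is inverse-variance weighting, which gives the minimum-variance linear combination of two independent estimates:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Minimum-variance linear fusion of two independent scalar
    estimates: each estimate is weighted in proportion to the
    inverse of its variance."""
    w_a = var_b / (var_a + var_b)
    return w_a * est_a + (1.0 - w_a) * est_b
```

Applied per axis, this lets the direct-method odometer dominate in low-texture scenes and the feature-based odometer dominate when features are plentiful, simply by tracking each odometer's uncertainty.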

Wearable positioning and path guidance method based on binocular camera under outdoor operating environment

Active · CN106840148A · Preserve the characteristics of compact information · Boot support · Navigation by speed/acceleration measurements · Satellite radio beaconing · Simulation · Closed loop
The invention discloses a wearable positioning and path guidance method based on a binocular camera in an outdoor operating environment, comprising the following steps: 1) an operator explores the environment, traversing the whole operation site along the operation path while positioning with a binocular visual odometer together with GPS and IMU data, and meanwhile an operating-environment overview map is created; 2) during real-time positioning and path guidance, global metric positioning is performed with the binocular visual odometer while topological positioning is performed with closed-loop detection; 3) when a loop closure is detected, the position deviation is calculated from scene features, the current global position is corrected, and the sample base is updated; 4) operation-task path planning and path guidance prompts are produced from the topological overview map and the real-time positioning results, and the information is pushed to the user. A reliable, real-time path guidance and positioning function is thus provided for wearable operation-assistance systems used in outdoor tasks such as equipment inspection, operation and maintenance.
Owner:SOUTHEAST UNIV

Robot navigation positioning system and method

The invention discloses a robot navigation and positioning system and method, which are used for map construction, positioning and path planning of a robot. The method comprises the following steps. S100, positioning: the robot detects surrounding environment information through multiple sensors, and then, based on an adaptive particle-filtering SLAM algorithm matched with different odometers, real-time map construction and positioning are completed. S200, path planning: a two-phase hybrid-state A*-based path planning algorithm is adopted; after the path length and the number of expanded nodes are obtained from path planning on a coarse rasterized map, a higher-resolution rasterized map is obtained through parsing and extension; the obtained path length and number of expanded nodes are used as inputs to fuzzy reasoning, the heuristic weight produced by the fuzzy reasoning is used as input to the second-stage search, and path planning is performed on the higher-resolution rasterized map. The system and method can not only adapt to different environments but also perform dynamic path planning.
Owner:BEIJING ORIENT XINGHUA TECH DEV CO LTD
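The heuristic weight produced by the fuzzy reasoning plugs into a weighted A* search, where the cost is f = g + w·h and w > 1 trades optimality for fewer node expansions. A grid-based sketch with a fixed weight standing in for the fuzzy-inference output (the 4-connected grid and Manhattan heuristic are illustrative assumptions):

```python
import heapq

def weighted_astar(grid, start, goal, w=1.0):
    """Grid A* with f = g + w*h on a 4-connected grid of 0 (free)
    and 1 (blocked) cells; returns the path as a list of (row, col)
    cells, or None if the goal is unreachable."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(w * h(start), 0, start)]   # (f, g, node)
    best_g = {start: 0}
    came_from = {}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:                     # reconstruct path
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(open_heap, (ng + w * h(nxt), ng, nxt))
    return None
```

In the two-phase scheme described above, the coarse-map search would supply statistics from which the fuzzy system derives w for the fine-map search.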

Modular unmanned vehicle positioning method and system based on visual inertia laser data fusion

The invention discloses a modular unmanned vehicle positioning method and system based on visual-inertial-laser data fusion. The method comprises the following steps: (1) an acquisition module acquires the current information of the unmanned vehicle through a monocular camera, a laser radar and an IMU; (2) according to the current information collected in step (1), a pose estimation module estimates the pose of the vehicle through a monocular odometer and an IMU pre-integration model to obtain the pose information of the unmanned vehicle; and (3) a pose optimization module establishes a multi-sensor fusion optimization model from the pose estimates of step (2), a weight-coefficient dynamic adjustment module adjusts the optimization proportion of each sensor to enhance environmental adaptability, the optimal pose of the vehicle is obtained after optimization, and the optimal pose is converted into the world coordinate system to obtain the real-time pose of the vehicle. The method meets the requirements for accuracy and robustness of unmanned vehicle positioning in complex environments.
Owner:WUHAN UNIV OF TECH
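The abstract does not give the weight-adjustment rule; one simple stand-in (an assumption, not the patented rule) is to make each sensor's weight inversely proportional to its recent residual, so the optimization leans on whichever sensor currently agrees best with the fused estimate:

```python
def dynamic_weights(residuals):
    """Normalised per-sensor weights inversely proportional to each
    sensor's recent residual; a small epsilon avoids division by zero."""
    inv = [1.0 / (1e-6 + r) for r in residuals]
    total = sum(inv)
    return [v / total for v in inv]
```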

A fast monocular vision odometer navigation and positioning method combining a feature point method and a direct method

Active · CN109544636A · Accurate camera pose · Feature prediction location optimization · Image enhancement · Image analysis · Odometer · Key frame
The invention discloses a fast monocular visual odometer navigation and positioning method fusing the feature point method and the direct method, which comprises the following steps: S1, starting the visual odometer, obtaining a first frame image I1, converting the image I1 into a gray-scale image, extracting ORB feature points, and constructing an initialization key frame; S2, judging whether initialization has been carried out; if it has, go to step S6, otherwise go to step S3; S3, defining a reference frame and a current frame, extracting ORB features and matching features; S4, calculating a homography matrix H and a fundamental matrix F simultaneously in parallel threads and calculating a model-selection score RH; if RH is greater than a threshold value, selecting the homography matrix H, otherwise selecting the fundamental matrix F, and estimating the camera motion according to the selected model; S5, obtaining the pose of the camera and the initial 3D points; S6, judging whether feature points have been extracted; if they have not, the direct method is used for tracking, otherwise the feature point method is used for tracking; S7, completing the initial camera pose estimation. The invention enables more precise navigation and positioning.
Owner:GUANGZHOU UNIVERSITY
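Step S4's score-based choice between H and F follows the same pattern as the ORB-SLAM initialization heuristic, where R_H = S_H / (S_H + S_F) and the homography is preferred for (near-)planar or low-parallax scenes. The 0.45 threshold below is ORB-SLAM's published value, assumed here rather than taken from the patent:

```python
def select_model(score_h, score_f, threshold=0.45):
    """Pick the motion model from two reprojection-based scores:
    the homography H for (near-)planar scenes, the fundamental
    matrix F for general 3-D scenes."""
    r_h = score_h / (score_h + score_f)
    return "homography" if r_h > threshold else "fundamental"
```

The scores themselves would come from symmetric transfer errors of the two fitted models over the matched ORB features.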

Road passable area detection method based on three-dimensional laser radar

Pending · CN110244321A · Motion distortion removal · Eliminate motion distortion issues · Electromagnetic wave reradiation · Point cloud · Radar
The invention provides a road passable-area detection method based on three-dimensional laser radar, comprising the steps of: obtaining the vehicle's surrounding information with a three-dimensional laser radar and collecting point cloud data; eliminating the motion distortion of the point cloud by combining the odometer information of the vehicle; extracting laser point cloud interest points from the distortion-corrected point cloud data, which mainly comprises rejecting data points lying outside a region up to a certain height above the laser radar; performing ground segmentation using the height information of the extracted interest points combined with the RANSAC algorithm, distinguishing the ground point cloud from the obstacle point cloud; and rasterizing the obstacle point cloud, extracting the data point closest to the vehicle in each grid cell, and combining these points to obtain the boundary points of the passable area. The invention eliminates the motion distortion produced while the point cloud is collected during vehicle motion, so the surrounding information expressed by the point cloud is more accurate.
Owner:WUHAN UNIV OF TECH
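The ground-segmentation step — RANSAC plane fitting over the interest points — can be sketched in pure Python; the iteration count, inlier tolerance and fixed seed below are illustrative choices, not the patent's parameters:

```python
import random

def fit_plane(p1, p2, p3):
    """Plane through three 3-D points: normal n via the cross product
    of two edge vectors, offset d so that n.p + d = 0 on the plane."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_ground(points, iters=100, tol=0.05, seed=0):
    """Return the inliers of the dominant plane: repeatedly fit a
    plane to 3 random points and keep the fit with the most points
    within distance `tol` of it."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        n, d = fit_plane(*rng.sample(points, 3))
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) / norm < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

Points in the returned set are labeled ground; the remainder form the obstacle cloud that is then rasterized.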

Binocular vision positioning and three-dimensional mapping method for indoor mobile robot

Inactive · CN103926927A · Strong ability to express the environment · Solve the shortcomings of high complexity and poor real-time performance · Position/course control in two dimensions · 3D modelling · Odometer · Visual positioning
The invention discloses a binocular vision positioning and three-dimensional mapping method for an indoor mobile robot. The corresponding system is mainly composed of a two-wheeled mobile robot platform, an odometer module, an analog camera, an FPGA core module, an image acquisition module, a wireless communication module, a storage module, a USB module and a remote data-processing host computer. The FPGA core module controls the image acquisition module to collect left and right images and sends them to the remote host. On the host, distance information between the images and the robot is obtained from the left and right images using a fast belief propagation algorithm, a three-dimensional environmental map is established, recognition of specific markers is realized using an improved SIFT algorithm, and the position of the mobile robot is then determined through information integration using a particle filter algorithm. The system is compact in structure and can connect to an intelligent space system, so that the indoor mobile robot can sense the environment in real time and provide more numerous and accurate services according to the corresponding position information.
Owner:CHONGQING UNIV
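The final localization stage — information integration with a particle filter — can be illustrated with a one-dimensional predict/update/resample cycle. The range-to-landmark measurement model, noise level and fixed seed are simplifying assumptions for the sketch:

```python
import math
import random

def particle_filter_step(particles, control, measurement, landmarks,
                         noise=0.1, seed=0):
    """One predict-update-resample cycle of a 1-D particle filter.
    particles: candidate positions; control: commanded displacement;
    measurement: sensed distance to the nearest landmark."""
    rng = random.Random(seed)
    # predict: apply the motion command to every particle, with noise
    moved = [p + control + rng.gauss(0.0, noise) for p in particles]

    # update: weight each particle by how well its predicted range
    # to the nearest landmark matches the measurement (Gaussian model)
    def likelihood(p):
        err = min(abs(p - lm) for lm in landmarks) - measurement
        return math.exp(-err * err / (2.0 * noise * noise))

    weights = [likelihood(p) for p in moved]
    # resample: draw a new particle set proportional to the weights
    return rng.choices(moved, weights=weights, k=len(moved))
```

After a measurement that pins the robot to a landmark, the resampled particles collapse around the consistent position, which is the "information integration" effect the abstract describes.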