
Network training method, incremental mapping method, positioning method, device and equipment

A training method for a scene recognition network, applied in the field of visual positioning. It addresses the high hardware requirements, heavy computation, and long running times of existing methods, reducing the degree of dependence on hardware and resolving the blurred-boundary problem.

Inactive Publication Date: 2019-04-19
BEIJING KUANGSHI TECH

AI Technical Summary

Problems solved by technology

However, visual relocalization requires an accurate map to be established in advance. Although accurate maps can be obtained with methods such as structure from motion (SfM), the amount of computation is still unbearable in large-scale scenarios.
On the other hand, directly registering the image against the map is not advisable: when the scene is large, the size of the map is likely to exceed the memory capacity of the computer, and map matching can take a long time.
[0004] No effective solution has yet been proposed for the problems of high hardware requirements, heavy computation, and long running times in the visual positioning methods described above.



Examples


Embodiment 1

[0047] First, with reference to figure 1, an example electronic device 100 for implementing the network training method, the incremental mapping method, the positioning method, the device and the equipment according to the embodiments of the present invention will be described.

[0048] As shown in figure 1, a schematic structural diagram of an electronic device, the electronic device 100 includes one or more processing devices 102 and one or more storage devices 104. Optionally, the electronic device 100 shown in figure 1 may also include an input device 106, an output device 108, and a data acquisition device 110; these components are interconnected by a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structure of the electronic device 100 shown in figure 1 are only exemplary, not limiting; the electronic device may also have other components and structures as required.

[0049] The processing device ...

Embodiment 2

[0057] According to an embodiment of the present invention, an embodiment of a training method for a scene recognition network is provided. It should be noted that the steps shown in the flow charts of the accompanying drawings may be executed in a computer system, such as a set of computer-executable instructions. Also, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that described herein.

[0058] Figure 2 is a flowchart of a training method for a scene recognition network provided by an embodiment of the present invention; the network training method is used for training a scene recognition network. The scene recognition network proposed in this embodiment is a deep-hashing-based solution for loop closure detection, and it can also be applied to robot relocalization. A scene recognition network based on deep hashing has better stability under viewing-angle changes, illumi...
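The three-branch, parameter-sharing structure described here can be sketched as follows. This is a minimal NumPy illustration, not the patent's actual network: the 128-dimensional input features, the single linear layer, the 48-bit code length, and the tanh relaxation of the binary code are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# One set of weights shared by all three branches (anchor, positive, negative).
W = rng.normal(0, 0.1, size=(128, 48))  # 128-dim feature -> 48-bit relaxed code

def branch(x):
    """Shared-parameter branch: tanh is a differentiable relaxation of +/-1 bits."""
    return np.tanh(x @ W)

def to_bits(code):
    """Binarize the relaxed code into a compact hash for storage and matching."""
    return (code > 0).astype(np.uint8)

anchor, positive, negative = rng.normal(size=(3, 128))
ca, cp, cn = branch(anchor), branch(positive), branch(negative)

# Because the three branches share parameters, the same input always maps
# to the same code, regardless of which branch processes it.
assert np.array_equal(to_bits(branch(anchor)), to_bits(ca))
```

Sharing parameters across the branches is what makes the triplet comparison meaningful: all three images are embedded by the same function, so distances between their codes reflect scene similarity rather than differences between networks.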

Embodiment 3

[0077] Figure 4 is a flowchart of an incremental mapping method provided by an embodiment of the present invention. The incremental mapping method is applied to a mobile terminal, and the mobile terminal stores the scene recognition network obtained by the training method of the scene recognition network provided by the above embodiment. As shown in Figure 4, the method includes the following steps:

[0078] Step S402, acquiring a two-dimensional map of the target scene.

[0079] The two-dimensional map may be a map drawn in an existing way, such as a lidar map. In this embodiment, lidar positioning can be used to assist in obtaining the real pose of the mobile terminal; lidar positioning methods include, but are not limited to, Cartographer, GMapping, Karto, etc. It can be understood that any other method capable of obtaining the real pose of the camera can replace lidar positioning here.

[0080] Step S404, when the real pose of the mobile terminal is acq...
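One way to picture the incremental map built from these steps is a keyframe store pairing each binary scene code with the pose obtained from lidar. The sketch below is illustrative only; the class and method names (`KeyframeMap`, `add`, `query`), the distance threshold for adding keyframes, and Hamming-distance retrieval are assumptions, not details taken from the patent.

```python
import numpy as np

class KeyframeMap:
    """Incremental map: each keyframe stores a binary hash code and a 2-D pose (x, y, theta)."""

    def __init__(self, min_dist=0.5):
        self.codes, self.poses = [], []
        self.min_dist = min_dist  # only add a keyframe after moving at least this far

    def add(self, code, pose):
        if self.poses:
            last = self.poses[-1]
            if np.hypot(pose[0] - last[0], pose[1] - last[1]) < self.min_dist:
                return False  # too close to the previous keyframe; skip
        self.codes.append(np.asarray(code, dtype=np.uint8))
        self.poses.append(pose)
        return True

    def query(self, code):
        """Return the pose of the keyframe whose code has the smallest Hamming distance."""
        code = np.asarray(code, dtype=np.uint8)
        dists = [int(np.count_nonzero(c != code)) for c in self.codes]
        return self.poses[int(np.argmin(dists))], min(dists)

m = KeyframeMap(min_dist=0.5)
m.add([1, 0, 1, 1], (0.0, 0.0, 0.0))
m.add([1, 0, 1, 0], (0.1, 0.0, 0.0))   # rejected: moved less than 0.5 m
m.add([0, 0, 1, 0], (1.0, 0.0, 0.0))
pose, dist = m.query([1, 0, 1, 1])     # nearest keyframe: the first one
```

Storing compact binary codes instead of full images or dense point clouds is what keeps the map small enough for a mobile terminal's memory, which is the bottleneck identified in the background section.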



Abstract

The invention provides a network training method, an incremental mapping method, a positioning method, a device and equipment, and relates to the technical field of visual positioning. The network training method comprises: obtaining a training sample in which the images in a first image set and the images in a second image set are similar images, and the images in the first image set and the images in a third image set are non-similar images; inputting the training sample into a scene recognition network, wherein the scene recognition network comprises three deep-hash-based lightweight neural networks that have the same structure and share parameters; and training the scene recognition network with the training sample until the loss function converges, taking the parameters at convergence as the parameters of the scene recognition network. The network training method, incremental mapping method, positioning method, device and equipment provided by the embodiments of the invention can run in real time on a low-end processor, reducing the degree of dependence on hardware.
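The training procedure in the abstract — feed (similar, similar, non-similar) triplets through the shared-parameter network and iterate until the loss function converges — can be sketched as below. The margin-based triplet loss, the toy 8-to-4 projection, the numerical gradient, and the convergence test are all assumptions for illustration; the patent does not specify its loss function here.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, size=(8, 4))  # shared parameters of a toy network

def code(x, W):
    return np.tanh(x @ W)  # relaxed +/-1 hash code

def triplet_loss(W, a, p, n, margin=1.0):
    """Similar pairs should be close; non-similar pairs at least `margin` farther apart."""
    d_ap = np.sum((code(a, W) - code(p, W)) ** 2)
    d_an = np.sum((code(a, W) - code(n, W)) ** 2)
    return max(0.0, d_ap - d_an + margin)

a = rng.normal(size=8)
p = a + 0.05 * rng.normal(size=8)  # feature of a "similar image"
n = rng.normal(size=8)             # feature of a "non-similar image"

init = triplet_loss(W, a, p, n)
for step in range(500):
    loss = triplet_loss(W, a, p, n)
    # Numerical gradient of the loss w.r.t. the shared parameters.
    grad, eps = np.zeros_like(W), 1e-5
    for idx in np.ndindex(W.shape):
        W[idx] += eps
        grad[idx] = (triplet_loss(W, a, p, n) - loss) / eps
        W[idx] -= eps
    W_new = W - 0.1 * grad
    new_loss = triplet_loss(W_new, a, p, n)
    if new_loss >= loss - 1e-8:  # no further improvement: loss has converged
        break
    W = W_new
final = triplet_loss(W, a, p, n)  # the converged parameters W are kept
```

Because all three inputs pass through the same parameters `W`, a single gradient step simultaneously pulls the similar pair together and pushes the non-similar pair apart, which is the point of the shared-parameter triplet design.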

Description

technical field

[0001] The present invention relates to the technical field of visual positioning, and in particular to a network training method, an incremental mapping method, a positioning method, a device and equipment.

Background technique

[0002] Visual positioning is a key technology commonly used in robots, drones, self-driving cars, and augmented reality. A camera is used as the sensor: by analyzing the collected images and comparing them with a pre-established environment map or a map constructed in real time, the accurate position and pose of the camera are determined. The main methods of visual localization include Simultaneous Localization And Mapping (SLAM) and visual relocalization.

[0003] Among them, SLAM estimates the accurate camera pose and the spatial positions of landmark points simultaneously, but because of the high complexity of the algorithm and the large number of variables to be optimized, the computational cost is high, making it difficult to run on mobile devices with ...

Claims


Application Information

IPC(8): G06T7/33, G06T7/50, G06T7/70, G06K9/62, G06K9/00, G06N3/08
CPC: G06N3/08, G06T7/33, G06T7/50, G06T7/70, G06V20/40, G06F18/22
Inventor: 王金戈, 吴琅, 李北辰, 贺一家, 刘骁
Owner BEIJING KUANGSHI TECH