
Model training method, domain-adaptive visual position identification method and device

A model training technology in the field of computer vision, addressing the problems that the accuracy of visual place recognition cannot be guaranteed and that robustness is low, and achieving high robustness and improved training accuracy.

Status: Inactive · Publication Date: 2019-08-27
HUAZHONG UNIV OF SCI & TECH
Cites: 7 · Cited by: 7

AI Technical Summary

Problems solved by technology

However, a deep neural network must be trained before use, and because of viewing angle, illumination, and other factors, the feature distribution of the training images often differs substantially from that of the images actually to be recognized. In this case, the accuracy of visual place recognition cannot be guaranteed.
In general, the robustness of existing visual place recognition methods is low.

Method used



Examples

  • Experimental program
  • Comparison scheme
  • Effect test

Embodiment Construction

[0062] To make the object, technical solution, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments described below may be combined with one another as long as they do not conflict.

[0063] The image feature extraction model training method provided by the present invention includes:

[0064] (1) Establish an image feature extraction model based on a deep neural network to obtain the feature vector of the image;

[0065] As shown in Figure 1, the image feature extraction model includes a feature extraction network and a local feature aggregation network;
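The record does not publish the implementation, but the operations this record attributes to the feature extraction network (zero-mean batch standardization, an activation layer, and maximum pooling for feature selection) can be sketched in plain NumPy. Function names, shapes, and the ReLU choice below are illustrative assumptions, not the patent's code.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Batch standardization layer: zero-mean, unit-variance
    # normalization over the batch axis.
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def relu(x):
    # Activation function layer (ReLU assumed; the record only
    # says "activation processing").
    return np.maximum(x, 0.0)

def max_pool2d(x, k=2):
    # Maximum pooling layer used for feature selection on
    # (batch, channels, height, width) feature maps.
    n, c, h, w = x.shape
    x = x[:, :, : h - h % k, : w - w % k]  # drop ragged edges
    x = x.reshape(n, c, h // k, k, w // k, k)
    return x.max(axis=(3, 5))
```

In the terminology of this record, a "second network" would chain a convolution with `batch_norm` and `relu`, and a "first network" would stack one or more such blocks before `max_pool2d`.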

[0066] The...
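The embodiment text is cut off here. For orientation only: a local feature aggregation network of the kind named in [0065] is commonly realized as a soft-assignment, VLAD-style pooling of local descriptors into one image-level feature vector. The sketch below assumes learned cluster centers and a softmax temperature `alpha`; neither is specified in this record, so treat it as a generic illustration rather than the patent's design.

```python
import numpy as np

def aggregate_local_features(local_feats, centers, alpha=10.0):
    # local_feats: (N, D) local descriptors from the feature
    # extraction network; centers: (K, D) learned cluster centers.
    # Soft-assign each descriptor to the centers, accumulate the
    # residuals per center, then normalize (VLAD-style pooling).
    sim = -alpha * ((local_feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                     # (N, K) assignments
    resid = local_feats[:, None, :] - centers[None, :, :]  # (N, K, D)
    v = (w[:, :, None] * resid).sum(axis=0)                # (K, D) aggregated
    v /= np.linalg.norm(v, axis=1, keepdims=True) + 1e-12  # intra-normalization
    v = v.ravel()
    return v / (np.linalg.norm(v) + 1e-12)                 # final L2 norm
```

The output is a single K·D-dimensional, L2-normalized feature vector for the whole image, which is what the abstract says the aggregation network produces.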



Abstract

The invention discloses a model training method and a domain-adaptive visual position identification method and device, belonging to the technical field of computer vision. The method comprises the steps of: building an image feature extraction model based on a deep neural network; constructing a training set according to a standard data set, wherein each training sample in the training set comprises a target image, a positive sample, and s negative samples; and using the training set to train the image feature extraction model. In the image feature extraction model, the feature extraction network comprises a plurality of cascaded first networks, each first network being formed by sequentially connecting one or more second networks and a maximum pooling layer, where the maximum pooling layer is used for feature selection; each second network comprises sequentially connected convolution layers used for feature extraction, a batch standardization layer used for zero-mean standardization processing, and an activation function layer used for activation processing; and the local feature aggregation network is used for aggregating local features to obtain the feature vector of the image. According to the invention, the robustness of visual position identification can be improved.
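The abstract specifies that each training sample pairs a target image with one positive and s negative samples, which suggests a triplet-style ranking objective on the extracted feature vectors. The margin value and the exact loss form below are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def triplet_loss(f_target, f_pos, f_negs, margin=0.5):
    # f_target, f_pos: (D,) feature vectors of the target image and
    # its positive sample; f_negs: (s, D) vectors of the s negatives.
    # Hinge loss: the target should be closer to the positive than
    # to every negative by at least `margin` (squared L2 distance).
    d_pos = np.sum((f_target - f_pos) ** 2)
    d_negs = np.sum((f_target[None, :] - f_negs) ** 2, axis=1)
    return float(np.maximum(0.0, margin + d_pos - d_negs).sum())
```

Under such an objective the loss is zero exactly when every negative lies at least `margin` farther (in squared distance) from the target than the positive does.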

Description

technical field

[0001] The invention belongs to the technical field of computer vision and, more specifically, relates to a model training method, a domain-adaptive visual position recognition method, and a device.

Background technique

[0002] Visual place recognition refers to extracting features from an image and then identifying the geographic location of the image based on the extracted features. With the development of autonomous driving, the growing demand for autonomously navigating mobile robots, and the increasing popularity of virtual and augmented reality, research on visual place recognition has attracted widespread attention in computer vision, the robotics community, and related fields.

[0003] In the early period of computer vision research, image features were mainly extracted by hand-designed feature point extraction methods, such as the scale-invariant feature transform (SIFT) fe...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC (8): G06K9/46, G06K9/62, G06N3/04
CPC: G06V10/44, G06N3/045, G06F18/24137, G06F18/214
Inventor: 桑农, 刘耀华, 高常鑫
Owner HUAZHONG UNIV OF SCI & TECH