Machine vision indoor positioning method based on improved convolutional neural network structure

A convolutional neural network and network structure technology, applied in the field of machine vision indoor positioning. It addresses problems such as the difficulty of labeling training samples, and achieves strong robustness, high accuracy, and broadened research approaches.

Active Publication Date: 2020-02-04
JIANGXI COLLEGE OF APPLIED TECH
6 Cites · 8 Cited by

AI Technical Summary

Problems solved by technology

Traditional visual positioning methods usually rely on image matching. However, such methods are susceptible to shallow features such as shooting angle, lighting changes, and content changes in non-fixed building outlines in the scene.
With the popularization of deep learning technology, many scholars…


Image

Three drawings, each titled "Machine vision indoor positioning method based on improved convolutional neural network structure".

Examples


Embodiment Construction

[0031] The present invention will be further described below in conjunction with specific examples.

[0032] As shown in Figure 1, the machine vision indoor positioning method based on the improved convolutional neural network structure provided in this embodiment mainly proposes an improved convolutional neural network structure and a training method for neural network models of that structure. Finally, the trained convolutional neural network classifies input video images to obtain the indoor position of a mobile robot equipped with an RGB camera. The functions of the convolutional neural network include extracting positional features from semantically segmented images and from RGB images, and using these two types of positional features to determine the real-time indoor location of the mobile robot.
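The patent does not specify how the two types of positional features are combined. As a minimal, hypothetical sketch (not the patent's implementation), a late-fusion classification step could concatenate the two feature vectors and score candidate indoor positions with a linear layer:

```python
import numpy as np

def fuse_and_classify(rgb_feat, seg_feat, w, b):
    """Hypothetical late-fusion step: concatenate the positional features
    extracted from an RGB frame and from its semantic-segmentation map,
    then score each candidate indoor position with a linear layer."""
    fused = np.concatenate([rgb_feat, seg_feat])  # (D_rgb + D_seg,)
    scores = w @ fused + b                        # one score per position
    return int(np.argmax(scores))                 # predicted position id

# Toy usage: 2-D features per branch, 3 candidate indoor positions
w = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0, 0.0]])
pos = fuse_and_classify(np.array([1.0, 0.0]), np.array([0.0, 1.0]), w, np.zeros(3))
print(pos)  # -> 1
```

The function name, feature dimensions, and linear-layer weights above are illustrative only; the patent's actual fusion is performed inside the network described below.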

[0033] The improved convolutional neural network structure in this embodiment combines U-Net, the first 13 layers of two VGG16Nets, and the last 3 layers of one VGG16Net; the resulting network is composed of five parts: U1, VGG2, VGG3, VGG4, and an ArcFace classifier.
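The patent text gives no layer-level dimensions. Assuming the "first 13 layers" of VGG16 refer to its 13 standard 3×3 convolutional layers (grouped into five blocks, each followed by 2×2 max-pooling) and a 224×224 RGB input, the feature-map shape bookkeeping for such a branch would look like this; this is a sketch of standard VGG16 geometry, not a claim about the patent's exact configuration:

```python
# Shape bookkeeping for a VGG16-style convolutional branch.
# Each tuple is (number of 3x3 conv layers, output channels) per block;
# the 13 conv layers are 2 + 2 + 3 + 3 + 3.
VGG16_BLOCKS = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]

def vgg16_feature_shape(h, w):
    """Trace (channels, height, width) through the 13 conv layers
    (3x3, stride 1, padding 1 -> spatial size preserved) and the
    2x2 stride-2 max-pool that follows each block."""
    channels = 3                    # RGB input
    for n_convs, out_ch in VGG16_BLOCKS:
        channels = out_ch           # convs change channels, not spatial size
        h, w = h // 2, w // 2       # one max-pool per block halves each side
    return channels, h, w

print(vgg16_feature_shape(224, 224))  # -> (512, 7, 7)
```

Under these standard assumptions, a 224×224 frame yields a 512×7×7 feature map at the end of the convolutional stack, which the final 3 fully connected VGG16 layers (or, here, the ArcFace classifier) would then consume.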



Abstract

The invention discloses a machine vision indoor positioning method based on an improved convolutional neural network structure, and mainly provides an improved convolutional neural network structure and a neural network model training method for that structure. Finally, the input video images are classified by the trained convolutional neural network to obtain the indoor position of a mobile robot equipped with an RGB camera. The functions of the convolutional neural network include extracting positional features of the semantic segmentation image and the RGB image, and determining the real-time indoor position of the mobile robot using the two types of positional features. The improved convolutional neural network structure is a combination of U-Net, the first 13 layers of two VGG16Nets, and the last 3 layers of one VGG16Net; the convolutional neural network is composed of five parts: U1, VGG2, VGG3, VGG4, and an ArcFace classifier. The invention can accurately realize real-time positioning of the indoor position of the mobile robot.
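The abstract names an ArcFace classifier as the final stage. As a minimal numpy sketch of the general ArcFace technique (not the patent's specific implementation), the classifier computes cosine-similarity logits between L2-normalized embeddings and class weights, adds an angular margin m on the ground-truth class, and scales by s; the scale and margin values below are conventional defaults, not values from the patent:

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """ArcFace-style logits: cos(theta) for non-target classes,
    cos(theta + m) for the target class, scaled by s.
    embeddings: (N, D); weights: (D, C); labels: (N,) int class ids."""
    # L2-normalize both sides so the dot product is a cosine similarity
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = np.clip(e @ w, -1.0, 1.0)        # (N, C) cosine similarities
    theta = np.arccos(cos)
    target = np.zeros_like(cos, dtype=bool)
    target[np.arange(len(labels)), labels] = True
    # Apply the additive angular margin only on the ground-truth class
    cos = np.where(target, np.cos(theta + m), cos)
    return s * cos

# Toy usage: 2 unit embeddings, 2 classes, each embedding aligned
# with its own class weight vector
out = arcface_logits(np.eye(2), np.eye(2), np.array([0, 1]))
```

At training time these logits feed a standard softmax cross-entropy loss; the margin forces each embedding to sit closer (in angle) to its own class center, which is why ArcFace is favored for fine-grained classification tasks like the per-location classes used here.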

Description

technical field

[0001] The present invention relates to the technical fields of artificial intelligence, deep learning algorithm research, and image processing algorithm research, and in particular to a machine vision indoor positioning method based on an improved convolutional neural network structure.

background technique

[0002] With the continuing development of artificial intelligence technology, various types of robots have been widely applied in all walks of life. In the application of mobile robots, real-time detection and monitoring of the robot's position is a prerequisite for better service to humans, so mobile robot wireless positioning technology has gradually become a research hotspot. In outdoor environments, the global positioning system based on mobile signals, the Beidou navigation system, and cellular positioning technology can meet most positioning requirements, but these methods are not suitable for positioning in indoor e…

Claims


Application Information

Patent Timeline
no application
IPC(8): G06K 9/62; G06N 3/04
CPC: G06N 3/045; G06F 18/241; G06F 18/214
Inventor: 朱斌, 张建荣, 李健
Owner JIANGXI COLLEGE OF APPLIED TECH