
Binocular vision positioning method based on semantic target

A binocular vision positioning technology based on semantic targets, applied in image analysis, instrumentation, and computing. It addresses the problems of high layout cost, low positioning accuracy, and the inability to position indoors in unknown environments, achieving strong autonomy, high positioning accuracy, and fast, accurate positioning services.

Active Publication Date: 2021-06-01
HARBIN INST OF TECH +1
Cites: 6 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0003] The purpose of the present invention is to solve the problems that existing indoor positioning methods must know the scene layout before positioning, that the layout cost is high, that the positioning accuracy is low, and that indoor positioning cannot be performed in an unknown environment, by providing a binocular vision positioning method based on semantic targets.



Examples


Specific Embodiment 1

[0033] Specific Embodiment 1: This embodiment is described with reference to Figure 1. The specific process of the binocular vision positioning method based on semantic targets in this embodiment is as follows:

[0034] The method is divided into two modules: image semantic segmentation and binocular vision positioning;

[0035] Image Semantic Segmentation Module:

[0036] Step 1. The user captures the currently viewed scene with a binocular camera, obtaining a left image and a right image;

[0037] Step 2. The left and right images captured by the binocular camera are input into the trained R-FCN semantic segmentation network, which identifies the semantic targets contained in the current left and right images and the corner coordinates corresponding to each semantic target;

[0038] Step 3. The user selects, from among the identified semantic targets, a semantic target common to the left and right images, and establishes a three-dimensional coordinate system of the target based on the corner ...
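The coordinate-system construction in Step 3 (truncated above) can be sketched from three triangulated corner points of a planar target. The axis conventions below (origin at one corner, x along an edge, z as the plane normal) are an illustrative assumption; the patent does not fix a particular convention here.

```python
import numpy as np

def target_frame(p0, p1, p2):
    """Build an orthonormal 3-D frame anchored on a planar semantic target
    from three of its corner points (camera-frame coordinates).

    p0 is taken as the origin; the x-axis runs along the edge p0 -> p1,
    the z-axis is the plane normal, and y completes a right-handed frame.
    """
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    x = p1 - p0
    x /= np.linalg.norm(x)
    v = p2 - p0                      # second in-plane direction
    z = np.cross(x, v)               # plane normal
    z /= np.linalg.norm(z)
    y = np.cross(z, x)               # completes the right-handed triad
    R = np.column_stack([x, y, z])   # columns are the target's axes
    return R, p0

def to_target_coords(point, R, origin):
    """Express a camera-frame point in the target's coordinate system."""
    return R.T @ (np.asarray(point, dtype=float) - origin)
```

With the frame in hand, the user's camera-frame position (recovered in step 6) can be re-expressed relative to the target via `to_target_coords`.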

Specific Embodiment 2

[0043] Specific Embodiment 2: This embodiment differs from Specific Embodiment 1 in that the specific training process of the trained R-FCN semantic segmentation network in step 2 is:

[0044] The R-FCN semantic segmentation network consists of a fully convolutional network (FCN), a region proposal network (RPN), and an ROI sub-network;

[0045] The purpose of using semantic segmentation in the present invention is to identify the semantic targets contained in the images captured by the user and, from these targets, to judge the user's position in the indoor environment; this matches how people entering an unknown place use surrounding landmark buildings to identify their own location. The semantic segmentation network used here is R-FCN, a two-stage object detection model developed from Faster R-CNN. It follows the idea of the fully convolutional network (FCN) and solves the prob...
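R-FCN's distinguishing operation, position-sensitive ROI pooling, can be sketched as follows. This is a simplified NumPy illustration of the idea (one bank of per-class score maps for each cell of a k × k grid, averaged into a per-class vote), not the patent's implementation; all names and shapes are illustrative.

```python
import numpy as np

def ps_roi_pool(score_maps, roi, k=3):
    """Position-sensitive ROI pooling (simplified R-FCN sketch).

    score_maps: shape (k*k*C, H, W) -- one bank of C class-score maps per
                spatial cell of the k x k grid.
    roi: (x0, y0, x1, y1) in feature-map coordinates.
    Returns per-class scores of shape (C,), averaged ("voted") over cells.
    """
    n_maps, H, W = score_maps.shape
    C = n_maps // (k * k)
    x0, y0, x1, y1 = roi
    xs = np.linspace(x0, x1, k + 1).astype(int)   # column bin edges
    ys = np.linspace(y0, y1, k + 1).astype(int)   # row bin edges
    cell_scores = np.zeros((k, k, C))
    for i in range(k):
        for j in range(k):
            # cell (i, j) pools ONLY from its own bank of C maps --
            # this is what makes the pooling position-sensitive
            bank = score_maps[(i * k + j) * C:(i * k + j + 1) * C]
            region = bank[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                             xs[j]:max(xs[j + 1], xs[j] + 1)]
            cell_scores[i, j] = region.mean(axis=(1, 2))
    return cell_scores.mean(axis=(0, 1))          # vote over the k*k cells
```

Because all per-ROI computation reduces to this cheap pooling over shared convolutional score maps, R-FCN avoids running a per-ROI sub-network as Faster R-CNN does.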

Specific Embodiment 3

[0073] Specific Embodiment 3: This embodiment differs from Embodiment 1 or 2 in that, in step 6, based on the difference between the pixel coordinates of the semantic target's corner points in the left and right images obtained in step 4, and on the binocular camera calibrated in step 5, a binocular vision positioning algorithm solves the current user's position coordinates and steering angle relative to the target in the three-dimensional coordinate system (indoor scene) established in step 3, thereby realizing the positioning of the user; the specific process is:

[0074] After R-FCN has identified the semantic target contained in the user's images and the pixel coordinates corresponding to the target's corner points have been calculated, the difference between the corner coordinates in the left and right images is used to solve the distance between the current user and the target, and then to recover the user's three-dimensional coordinates and steeri...
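The disparity-to-depth computation described above can be sketched for a rectified pinhole stereo pair. The parameter names and the steering-angle convention below are assumptions for illustration; the patent's actual positioning algorithm may differ in detail.

```python
import math

def triangulate(xl, yl, xr, f, B, cx, cy):
    """Recover the 3-D position of a corner point from its pixel coordinates
    in a rectified stereo pair (pinhole model).

    xl, yl : pixel coordinates of the point in the left image
    xr     : x pixel coordinate of the same point in the right image
    f      : focal length in pixels;  B : baseline in metres
    cx, cy : principal point of the left camera
    """
    d = xl - xr            # disparity (positive for points in front)
    Z = f * B / d          # depth, by similar triangles
    X = (xl - cx) * Z / f  # back-project through the left camera
    Y = (yl - cy) * Z / f
    return X, Y, Z

def steering_angle(X, Z):
    """Horizontal angle from the camera's optical axis to the target (radians)."""
    return math.atan2(X, Z)
```

For example, with f = 800 px and B = 0.12 m, a 48-pixel disparity corresponds to a depth of 800 × 0.12 / 48 = 2 m.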



Abstract

The invention discloses a binocular vision positioning method based on a semantic target, and relates to the binocular vision positioning method based on the semantic target. The invention aims to solve the problems that the existing indoor positioning methods need to know the scene layout before positioning, the layout cost is high, the positioning accuracy is low, and indoor positioning cannot be carried out in an unknown environment. The method comprises the steps of 1, acquiring a left image and a right image; 2, identifying semantic targets contained in the current left and right images and angular point coordinates corresponding to each semantic target; 3, selecting, by a user, a common semantic target contained in the left image and the right image in the semantic targets, and building a three-dimensional coordinate system of the target based on the angular point coordinates corresponding to the semantic target; 4, determining corresponding pixel coordinates of the angular point of the selected semantic target in the left image and the right image; 5, calibrating a binocular camera; and 6, solving the position coordinates and the steering angle of the current user relative to the target to realize the positioning of the user. The invention belongs to the field of image processing.

Description

Technical field

[0001] The invention belongs to the field of image processing, and is a method for realizing binocular vision positioning based on semantic targets by using technologies such as digital image processing, deep learning, and visual imaging.

Background technique

[0002] With the rapid development of society, more and more indoor places such as shopping malls, exhibition halls, and office buildings have sprung up, and people spend most of their time indoors every day. When people enter a completely unfamiliar indoor place, they need to know their current specific location within it, and hope to obtain a series of services based on that location. Because the indoor environment is complex and changeable, and signals are blocked by obstacles such as walls, traditional GPS technology is not applicable. At present, positioning methods for the indoor environment are mainly divided into four categories, namely indoor positioning methods based on wireless signals, indoor ...

Claims


Application Information

IPC(8): G06T7/73, G06T7/80, G06K9/62
CPC: G06T7/85, G06T7/73, G06F18/214
Inventors: 马琳, 董赫, 张忠旺, 刘晟, 周剑琦, 叶亮, 何晨光
Owner HARBIN INST OF TECH