RGB-D video based robot target recognition and localization method and system

A target recognition and robotics technology applied in the field of robot target recognition and localization. It addresses the lack of a systematic method for target recognition and precise localization in RGB-D video, the complexity of robot working scenes, and high computational complexity, so as to achieve enhanced spatial-level perception, guaranteed target identity and association, and high recognition and localization accuracy.

Active Publication Date: 2017-07-04
HUAZHONG UNIV OF SCI & TECH


Problems solved by technology

However, owing to the complexity of robot working scenes, the high computational complexity, and the large amount of computation involved, there is as yet no systematic method for target recognition and precise localization in RGB-D video.



Examples


Example Embodiment

[0042] Figure 1 is a schematic diagram of the overall flow of the method according to an embodiment of the present invention. As can be seen from Figure 1, the method comprises two major steps: target recognition and precise target localization, where target recognition is a prerequisite for precise localization. The specific implementation is as follows:
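A minimal sketch of this two-stage flow is given below, assuming a pinhole camera model and an abstract RGB detector. The function names, the intrinsics argument, and the median-depth back-projection are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def localize_from_depth(depth, box, fx, fy, cx, cy):
    """Estimate the 3D camera-frame position of a detected 2D box from the depth channel."""
    x0, y0, x1, y1 = box
    patch = depth[y0:y1, x0:x1].astype(np.float32)
    valid = patch[patch > 0]                        # ignore missing depth readings
    if valid.size == 0:
        return None
    z = float(np.median(valid))                     # robust depth of the target region
    u, v = (x0 + x1) / 2.0, (y0 + y1) / 2.0         # box centre in pixel coordinates
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def process_keyframe(rgb, depth, detector, intrinsics):
    """Step 1: target recognition on the RGB image; step 2: precise localization via depth."""
    fx, fy, cx, cy = intrinsics
    detections = detector(rgb)                      # [(label, score, (x0, y0, x1, y1)), ...]
    return [(label, score, localize_from_depth(depth, box, fx, fy, cx, cy))
            for label, score, box in detections]
```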

[0043] (1) Acquire the RGB-D video frame sequence of the scene containing the target to be recognized and localized;

[0044] Preferably, in an embodiment of the present invention, the RGB-D video sequence of the scene containing the target to be recognized and localized may be captured by a depth vision sensor such as a Kinect; alternatively, RGB image pairs may be captured by a binocular imaging device, and the scene depth computed by disparity estimation may be used as the depth channel to synthesize the RGB-D video input.
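As a hedged illustration of the second input path, the sketch below computes a depth channel from a rectified stereo pair with OpenCV's StereoSGBM matcher and stacks it with the left image into an RGB-D frame; the focal length and baseline values are placeholders, not parameters taken from the patent.

```python
import cv2
import numpy as np

def rgbd_from_stereo(left_bgr, right_bgr, focal_px=700.0, baseline_m=0.12):
    """Synthesize an RGB-D frame from a rectified stereo RGB pair via disparity estimation."""
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d

    return np.dstack([left_bgr.astype(np.float32), depth])    # 4-channel RGB-D frame
```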

[0045] (2) Extract the key video frames in the RGB-D video frame sequence, e...
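The paragraph above is truncated, so the key-frame selection criterion is not visible here. Purely as an assumed example, the sketch below selects a new key frame whenever the mean gray-level change since the previous key frame exceeds a threshold; this is not necessarily the rule used by the invention.

```python
import cv2
import numpy as np

def select_keyframes(frames, diff_threshold=12.0):
    """frames: iterable of BGR images; returns the indices of selected key frames."""
    keyframe_indices, last_gray = [], None
    for i, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Keep the frame if it is the first one or differs enough from the last key frame.
        if last_gray is None or np.mean(np.abs(gray - last_gray)) > diff_threshold:
            keyframe_indices.append(i)
            last_gray = gray
    return keyframe_indices
```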



Abstract

The invention discloses an RGB-D video based robot target recognition and localization method and system. The target category is determined and an accurate spatial position is obtained in a scene through steps including target candidate extraction, recognition, temporal-consistency-based confidence estimation, target segmentation optimization, and position estimation. The invention exploits the depth information of the scene to enhance the spatial-level perception ability of the recognition and localization algorithm, and adopts keyframe-based long- and short-term spatio-temporal consistency constraints to guarantee the identity and association of a target in long-sequence recognition and localization tasks while improving video processing efficiency. In the localization process, collaborative target localization across multiple information modalities is realized by accurately segmenting the target in the image plane and evaluating the position consistency of the same target in the depth information space. The method and system require little computation, offer good real-time performance and high recognition and localization accuracy, and can be applied to robot tasks based on online visual information parsing and understanding.
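As a hedged sketch of the temporal-consistency-based confidence estimation mentioned above, the example below associates detections across consecutive key frames by bounding-box IoU and smooths each track's confidence over time; the IoU matching rule and the smoothing factor are illustrative assumptions rather than the patent's exact formulation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def smooth_confidences(tracks, detections, alpha=0.6, iou_min=0.5):
    """tracks: {track_id: (box, conf)} from the previous key frame; detections: [(box, conf), ...]."""
    updated = {}
    next_id = max(tracks, default=-1) + 1
    for box, conf in detections:
        match = max(tracks.items(), key=lambda kv: iou(kv[1][0], box), default=None)
        if match is not None and iou(match[1][0], box) >= iou_min:
            tid, (_, prev_conf) = match
            # Same identity as an existing track: blend old and new confidences.
            updated[tid] = (box, alpha * prev_conf + (1 - alpha) * conf)
        else:
            # No consistent predecessor: start a new track with the raw confidence.
            updated[next_id] = (box, conf)
            next_id += 1
    return updated
```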

Description

technical field
[0001] The invention belongs to the technical field of computer vision, and more specifically relates to a method and system for robot target recognition and localization based on RGB-D video.
Background technique
[0002] In recent years, with the rapid development of robot technology, machine vision technology for robot tasks has also received extensive attention from researchers. Among them, the recognition and precise localization of targets is an important part of the robot vision problem and a prerequisite for performing subsequent tasks.
[0003] Existing object recognition methods generally include two steps: extracting the information of the target to be recognized as the basis for recognition, and matching it against the scene to be recognized. Traditional representations of the target to be recognized generally include geometric shape, target appearance, and local feature extraction. Such methods often have shortcomings such as poor versatility...


Application Information

IPC(8): G06T7/215, G06T7/285
CPC: G06T2207/10016
Inventors: 陶文兵, 李坤乾
Owner: HUAZHONG UNIV OF SCI & TECH