
Method of Target Object Recognition and Positioning Based on Color Image and Depth Image

A technique for recognizing target objects from color images, applicable to scene recognition, character and pattern recognition, program-controlled manipulators, and related fields, with the stated effect of improved efficiency.

Active Publication Date: 2019-05-31
JIANGSU R & D CENTER FOR INTERNET OF THINGS

Problems solved by technology

However, when other objects around the target occlude it, the SURF algorithm cannot reliably distinguish the target object, resulting in false detection of feature points.




Embodiment Construction

[0030] The present invention is further described below with reference to the accompanying drawings.

[0031] The method for identifying and locating a target object based on a color image and a depth image of the present invention comprises the following steps:

[0032] (1) The mobile robot identifies the target object using HSV color recognition at long range and SURF feature-point detection at short range, and removes occluding obstacles using the depth image;
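The long-range HSV color recognition of step (1) can be illustrated with a minimal sketch: convert each RGB pixel to HSV and keep those whose hue falls in the target's color band. The hue range, saturation and value floors, and the `hsv_mask` helper are all hypothetical illustrations, not values from the patent.

```python
import colorsys

def hsv_mask(rgb_image, hue_range_deg, s_min=0.4, v_min=0.3):
    """Flag pixels whose hue lies in the target colour band and whose
    saturation/value are high enough to be a confident colour reading."""
    lo, hi = hue_range_deg
    mask = []
    for row in rgb_image:
        mask_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            mask_row.append(lo <= h * 360.0 <= hi and s >= s_min and v >= v_min)
        mask.append(mask_row)
    return mask

# Tiny synthetic 1x3 RGB image: a red, a green, and a blue pixel;
# searching for a red target (hue roughly 0-20 degrees).
img = [[(220, 30, 30), (30, 200, 30), (30, 30, 220)]]
print(hsv_mask(img, hue_range_deg=(0, 20)))  # → [[True, False, False]]
```

A real implementation would operate on whole camera frames (e.g. with vectorized image operations) rather than per-pixel Python loops, but the thresholding logic is the same.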

[0033] Specifically, as shown in Figure 1: to overcome the limited working space caused by a fixed camera position, the camera and the robotic arm are mounted on the mobile platform and move together with the robot. On receiving an instruction to grab an object, the robot searches the surrounding environment for the target object through the camera. When the target object is still far away, its feature points are not prominent enough, and the r...
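Step (1) also mentions removing obstacles through the depth image. One plausible reading, sketched below under that assumption, is to discard detected feature points whose measured depth differs too much from the target's depth, so points on occluders or the background do not corrupt the match. The function name, tolerance, and toy depth map are all hypothetical.

```python
def filter_keypoints_by_depth(keypoints, depth_map, target_depth_m, tol_m=0.15):
    """Keep only keypoints whose depth is within tol_m of the target depth,
    dropping points that belong to occluding or background objects.
    keypoints are (u, v) pixel coordinates; depth_map[v][u] is metres."""
    kept = []
    for u, v in keypoints:
        d = depth_map[v][u]
        if d > 0 and abs(d - target_depth_m) <= tol_m:  # d == 0 means no reading
            kept.append((u, v))
    return kept

depth = [[0.9, 0.9, 2.0],
         [0.9, 0.5, 2.0]]        # metres; the 0.5 m pixel is an occluder
pts = [(0, 0), (1, 1), (2, 0)]   # candidate feature-point locations
print(filter_keypoints_by_depth(pts, depth, target_depth_m=0.9))  # → [(0, 0)]
```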



Abstract

The present invention relates to a method for identifying and locating a target object based on a color image and a depth image. (2) When the robot reaches the vicinity of the target area, it obtains the RGB feature information of the target object through SURF feature-point detection and matches it against the pre-stored RGB feature information of the target object; if it conforms to an existing object model, the target object is recognized. (3) The RGB color image is projected onto the imaging plane to obtain the two-dimensional coordinates of the target object in that plane, and the relative distance between the target object and the camera is obtained from the depth image, thereby yielding the three-dimensional coordinates of the target object. The invention can quickly judge the object category and determine its three-dimensional coordinates.
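Step (3) combines the 2-D pixel coordinates with the depth reading to recover 3-D coordinates. A standard way to do this, shown here as a sketch rather than the patent's exact formulation, is pinhole-model back-projection; the intrinsics (`fx`, `fy`, `cx`, `cy`) below are hypothetical values typical of a Kinect-class RGB-D sensor.

```python
def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into camera-frame
    3-D coordinates using the pinhole camera model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical intrinsics; target seen at pixel (420, 300), 1.2 m away.
print(pixel_to_3d(420, 300, 1.2, fx=525.0, fy=525.0, cx=320.0, cy=240.0))
```

Given the 3-D point in the camera frame, the robot would then transform it into the arm's base frame (using the known camera-to-arm mounting transform) before planning a grasp.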

Description

Technical field

[0001] The invention relates to a method for identifying and locating a target object, in particular to a method based on a color image and a depth image.

Background technique

[0002] As the demands placed on robot functionality grow, it has become a development trend for mobile robots to have visual capabilities and to work "hand-eye" together with a robotic arm mounted on the mobile platform. However, traditional visual recognition and positioning methods use monocular or binocular vision, which is easily affected by illumination changes and requires a large amount of computation. Monocular vision in particular must obtain three-dimensional space coordinates by comparing images of the same object taken from different angles, which makes the calculation relatively complicated. Therefore, using infrared cameras and receivers to obtain object d...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): B25J9/16 G06K9/00 G06K9/46
CPC: B25J9/1664 B25J9/1697 G06V20/10 G06V20/58 G06V10/56 G06V10/462
Inventors: 宋少博, 赵旦谱, 台宪青
Owner JIANGSU R & D CENTER FOR INTERNET OF THINGS