RGB-D object identification method

An RGB-D object recognition technology, applied in the fields of stereo vision and deep learning, which addresses the problems of imperfect feature description and low recognition accuracy in existing methods and achieves improved recognition performance.

Inactive Publication Date: 2018-04-20
TIANJIN UNIV


Problems solved by technology

[0007] Aiming at the problems of low recognition accuracy and imperfect feature description in current RGB-D object recognition technology, the present invention proposes an RGB-D object recognition method.



Examples


Embodiment 1

[0024] The embodiment of the present invention combines multiple data modes such as color images, depth images, grayscale images, and surface normal vectors, and proposes an RGB-D object recognition method. The specific operation steps are as follows:

[0025] 101: Obtain a grayscale image generated from a color image and a surface normal vector generated from a depth image, and use the color image, grayscale image, depth image, and surface normal vector together as multi-data mode information;
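Step 101 can be sketched as follows. The two conversions below are illustrative stand-ins, since the patent gives no formulas here: grayscale via the standard BT.601 luminance weights, and surface normals via finite differences on the depth map treated as a height field (a full reconstruction would also use the camera intrinsics).

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an HxWx3 RGB image to grayscale (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def depth_to_normals(depth):
    """Estimate per-pixel surface normals from a depth map via finite differences.

    Treats (x, y, depth) as a height field z = f(x, y); the normal direction is
    (-dz/dx, -dz/dy, 1), normalized to unit length.
    """
    dzdy, dzdx = np.gradient(depth.astype(np.float64))
    normals = np.dstack((-dzdx, -dzdy, np.ones_like(depth, dtype=np.float64)))
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / norm

# Toy inputs standing in for a Kinect color/depth pair.
rgb = np.random.randint(0, 256, (4, 4, 3)).astype(np.float64)
depth = np.random.rand(4, 4)
gray = rgb_to_gray(rgb)          # (4, 4)
normals = depth_to_normals(depth)  # (4, 4, 3), unit vectors
```

Together with the original color and depth images, `gray` and `normals` form the four-modality input of step 101.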

[0026] 102: Extract high-level features of the color image, grayscale image, and surface normal vector through a convolutional-recurrent neural network;

[0027] 103: Use a convolutional-Fisher vector-recurrent neural network to extract high-level features of the depth image;
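The Fisher vector stage of step 103 can be illustrated with a minimal first-order Fisher vector encoding over a diagonal-covariance GMM. This is a generic sketch of the standard Fisher vector technique, not the patent's exact formulation: all parameter names and sizes are invented, and the second-order (variance) terms of the full Fisher vector are omitted for brevity.

```python
import numpy as np

def fisher_vector_means(X, pis, mus, sigmas):
    """First-order Fisher vector: gradients w.r.t. the GMM means only.

    X: (N, D) local descriptors; pis: (K,) mixture weights;
    mus: (K, D) component means; sigmas: (K, D) per-dimension std devs.
    """
    N, D = X.shape
    K = pis.shape[0]
    # Posterior responsibilities gamma_{nk} under the diagonal-covariance GMM.
    log_prob = np.empty((N, K))
    for k in range(K):
        diff = (X - mus[k]) / sigmas[k]
        log_prob[:, k] = (np.log(pis[k])
                          - 0.5 * np.sum(np.log(2 * np.pi * sigmas[k] ** 2))
                          - 0.5 * np.sum(diff ** 2, axis=1))
    log_prob -= log_prob.max(axis=1, keepdims=True)  # numerical stability
    gamma = np.exp(log_prob)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # Normalized gradient of the log-likelihood w.r.t. each component mean.
    fv = np.empty((K, D))
    for k in range(K):
        fv[k] = gamma[:, k] @ ((X - mus[k]) / sigmas[k]) / (N * np.sqrt(pis[k]))
    return fv.reshape(K * D)

# Toy descriptors and a hand-set 2-component GMM (all values invented).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
pis = np.array([0.5, 0.5])
mus = np.stack([np.zeros(5), np.ones(5)])
sigmas = np.ones((2, 5))
fv = fisher_vector_means(X, pis, mus, sigmas)  # length K*D = 10
```

In the patent's pipeline such an encoding would sit between the convolutional features of the depth image and the recurrent stage.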

[0028] 104: Perform feature fusion of the above-mentioned high-level features to obtain the total feature of the object, and input the total feature into the feature classifier to realize the object recognition task.
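Step 104 can be sketched as concatenation of the per-modality features followed by a classifier. The patent does not specify the classifier at this point, so a nearest-centroid classifier stands in for it below; the feature dimensions and toy labels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-level features for 12 objects from the four modalities;
# the 64-dim size per modality is invented for illustration.
modalities = ["color", "gray", "depth", "normal"]
feats = {m: rng.normal(size=(12, 64)) for m in modalities}
labels = np.repeat([0, 1, 2], 4)  # three toy object categories

# Step 104: fuse by concatenation into one "total feature" per object.
total = np.concatenate([feats[m] for m in modalities], axis=1)  # (12, 256)

# Minimal stand-in for the (unspecified) feature classifier:
# assign each fused feature to the nearest class centroid.
centroids = np.stack([total[labels == c].mean(axis=0) for c in range(3)])

def classify(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

preds = np.array([classify(x) for x in total])
```

Any discriminative classifier (e.g. a softmax layer or an SVM) could replace the centroid rule without changing the fusion step.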

Embodiment 2

[0031] The scheme in Embodiment 1 is further described below in conjunction with specific calculation formulas and examples; see the following description for details:

[0032] 201: Obtain multi-data mode information;

[0033] In order to more fully mine the information in the color image and depth image, the embodiment of the present invention adds two data modes, namely the grayscale image generated from the color image and the surface normal vector generated from the depth image, so as to provide more useful information for object recognition. Specifically, the depth image and surface normal vector provide the geometric information of the object, while the color image and grayscale image provide its texture information.

[0034] 202: Use a convolutional-recurrent neural network to extract high-level features of the color image, grayscale image, and surface normal vector;

[0035] Among them, the Convolutional-Recursive Neural Network (CNN-RNN) model co...
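A minimal sketch of the convolutional-recursive idea (in the spirit of Socher-style CNN-RNN models, which this description appears to reference) is given below: one convolution-plus-ReLU layer produces feature maps, and a recursive layer with a single shared weight matrix repeatedly merges 2x2 blocks of feature columns until one feature vector remains. All sizes and random weights are invented for illustration.

```python
import numpy as np

def conv_layer(img, filters):
    """'Valid' cross-correlation of a 2D image with K filters, then ReLU."""
    K, fh, fw = filters.shape
    H, W = img.shape
    out = np.empty((K, H - fh + 1, W - fw + 1))
    for k in range(K):
        for i in range(H - fh + 1):
            for j in range(W - fw + 1):
                out[k, i, j] = np.sum(img[i:i+fh, j:j+fw] * filters[k])
    return np.maximum(out, 0)

def rnn_pool(fmap, W):
    """Recursively merge 2x2 blocks with one shared weight matrix W (K x 4K)
    until a single K-dimensional feature vector remains."""
    K, H, Wd = fmap.shape
    while H > 1 or Wd > 1:
        newH, newW = H // 2, Wd // 2
        out = np.empty((K, newH, newW))
        for i in range(newH):
            for j in range(newW):
                block = fmap[:, 2*i:2*i+2, 2*j:2*j+2].reshape(4 * K)
                out[:, i, j] = np.tanh(W @ block)
        fmap, H, Wd = out, newH, newW
    return fmap.reshape(K)

rng = np.random.default_rng(1)
img = rng.normal(size=(9, 9))       # toy single-channel input
filters = rng.normal(size=(4, 2, 2))  # K = 4 random filters
W = rng.normal(size=(4, 16))          # shared recursive weights, K x 4K
fmap = conv_layer(img, filters)       # (4, 8, 8)
feat = rnn_pool(fmap, W)              # (4,) high-level feature
```

In the CNN-RNN literature the recursive weights are often left random and untrained, which is part of what makes the model fast; whether the patent does the same is not stated in this excerpt.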

Embodiment 3

[0061] The feasibility of the schemes in Embodiments 1 and 2 is verified below in conjunction with Figure 2; see the following description for details:

[0062] Figure 2 gives the visualization result of this method, that is, the confusion matrix of the recognition results. The horizontal axis of the confusion matrix represents the predicted object category, 51 categories in total, such as apple, bowl, and cereal_box, and the vertical axis represents the true object category in the data set.

[0063] The diagonal elements of the confusion matrix give the recognition accuracy of this method for each category, while an off-diagonal element in row a and column b gives the percentage of objects of class a that are misidentified as class b. From Figure 2 it can be seen that this method obtains good recognition results, and most categories achieve high recognition rates.
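The confusion matrix described above can be computed as follows. The toy labels are invented; entries are expressed as row-normalized percentages so that the diagonal directly gives per-class accuracy.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Row = true class, column = predicted class; entries are percentages."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    row_sums = cm.sum(axis=1, keepdims=True)
    return 100.0 * cm / np.maximum(row_sums, 1)  # guard empty rows

# Toy ground truth and predictions over 3 classes.
y_true = [0, 0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 0, 1, 1, 1, 2, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
per_class_acc = np.diag(cm)  # diagonal = recognition accuracy per category
```

Here `cm[0, 1]` is the percentage of class-0 objects misidentified as class 1, matching the reading of Figure 2 given above.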



Abstract

The invention discloses an RGB-D object identification method which comprises the following steps: acquiring a grayscale image generated from a color image and a surface normal vector generated from a depth image, and using the color image, the grayscale image, the depth image, and the surface normal vector together as multi-data mode information; respectively extracting high-level features of the color image, the grayscale image, and the surface normal vector through a convolutional-recurrent neural network; extracting the high-level feature of the depth image by means of a convolutional-Fisher vector-recurrent neural network; and performing feature fusion on the plurality of high-level features, obtaining a total feature of the object, and inputting the total feature of the object into a feature classifier to realize the object identification task. According to the RGB-D object identification method, a plurality of data modes are combined, more accurate RGB-D object features are extracted, and object identification accuracy is thereby improved.

Description

Technical field

[0001] The invention relates to the technical fields of deep learning and stereo vision, and in particular to an RGB-D object recognition method.

Background technique

[0002] Object recognition is one of the key technical issues in the field of computer vision, with important research value and broad application prospects. With the further development and application of sensing technology, the Kinect camera, which can simultaneously acquire color images and depth images, has gradually become a new generation of mainstream imaging device. Usually, the color image can provide information such as the texture and color of the target, and the depth image can provide effective information such as depth and shape. The two kinds of information complement each other and further enhance the performance of various visual tasks. How to fully mine the depth information in RGB-D data, explore the relationship between depth and color data, and further improve the target...

Claims


Application Information

IPC(8): G06K9/46; G06K9/62
CPC: G06V10/443; G06F18/241; G06F18/253
Inventor: 雷建军, 倪敏, 丛润民, 侯春萍, 陈越, 牛力杰
Owner: TIANJIN UNIV