
Three-dimensional object recognition method combining view importance network and self-attention mechanism

A three-dimensional object recognition technology, applied in the field of computer vision, which solves the problem of losing three-dimensional object view information and achieves the effects of avoiding a decline in recognition accuracy and enhancing feature expression.

Pending Publication Date: 2022-05-27
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

However, the MVCNN method aggregates views with max pooling, which discards most of the view information of the 3D object. Multi-view-based 3D object recognition methods therefore warrant further research.
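For context, here is a minimal sketch of the MVCNN-style aggregation being criticized, written in PyTorch (the framework used later in the embodiment). The shapes, random stand-in features, and variable names are illustrative assumptions, not taken from the patent; the point is that element-wise max pooling keeps, for each feature channel, the response of a single view and discards the rest.

```python
import torch

# Illustrative MVCNN-style aggregation: n per-view feature vectors
# (random stand-ins for shared-CNN outputs) are collapsed by
# element-wise max pooling over the view dimension.
n_views, feat_dim = 12, 512
view_features = torch.randn(n_views, feat_dim)       # one row per rendered view

shape_descriptor, argmax = view_features.max(dim=0)  # (feat_dim,)

# For each feature channel only one view "survives"; every other view's
# response for that channel is discarded -- the information loss the
# patent attributes to max pooling.
surviving_views = argmax.unique().numel()
print(f"{surviving_views} of {n_views} views contribute at least one channel")
```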



Detailed Description of the Embodiments

[0026] The invention is implemented with the open-source deep learning framework PyTorch, and the network model is trained on an NVIDIA GTX3090 GPU.

[0027] The composition of each module of the method of the present invention is further described below with reference to the accompanying drawings and specific embodiments. All equivalent modifications of the present invention made by those skilled in the art fall within the scope defined by the appended claims of this application.

[0028] The composition and flow of the network framework of the present invention are shown in Figure 1 and include the following steps:

[0029] Step 1: Project the three-dimensional object model from n perspectives to obtain n rendered views of the object V = {v_1, v_2, ..., v_n}, where v_i is the i-th view of the object. In this experiment n is set to 12, i.e., 12 perspectives are used for 3D object recognition.
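As an illustration of this projection step, the sketch below places n = 12 virtual cameras around the object. The equal 30-degree azimuth spacing, the camera radius, and the elevation angle are assumptions borrowed from common multi-view rendering setups; the excerpt only states that 12 perspectives are used.

```python
import math

# Hypothetical camera setup: 12 virtual cameras placed every 30 degrees
# on a circle around the object (a common multi-view convention; the
# excerpt only specifies that n = 12 perspectives are used).
n = 12
radius, elevation = 2.0, 30.0  # assumed distance (units) and elevation (degrees)

camera_positions = []
for i in range(n):
    azimuth = 2.0 * math.pi * i / n          # 30-degree increments
    x = radius * math.cos(azimuth)
    y = radius * math.sin(azimuth)
    z = radius * math.tan(math.radians(elevation))
    camera_positions.append((x, y, z))

# Each camera renders one view v_i; V = {v_1, ..., v_n} is the set of
# rendered images fed to the shared CNN in the next step.
for i, pos in enumerate(camera_positions, start=1):
    print(f"v_{i}: camera at ({pos[0]:+.2f}, {pos[1]:+.2f}, {pos[2]:+.2f})")
```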

[0030]...



Abstract

The invention discloses a three-dimensional object recognition method combining a view importance network and a self-attention mechanism. The method comprises the following steps: a three-dimensional object to be recognized is projected from n different viewing angles to obtain n different two-dimensional views, where n is greater than or equal to 2; feature extraction is performed on the n views through a basic CNN model to obtain feature maps of the corresponding views; the importance of each of the n views for three-dimensional object recognition is judged through a view importance network, and the features are enhanced to different degrees according to that importance to obtain view-enhanced feature maps; the view-enhanced feature maps are processed with a self-attention mechanism to obtain a three-dimensional shape descriptor; and the three-dimensional shape descriptor is input into a fully connected network for multi-view object recognition, thereby realizing three-dimensional object recognition. The method highlights important views that are beneficial to three-dimensional object recognition while suppressing the interference of unimportant views, improving three-dimensional object recognition accuracy.
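Read as an algorithm, the abstract maps onto a compact PyTorch module. The sketch below is a reconstruction under stated assumptions: the tiny convolutional backbone stands in for the unspecified "basic CNN model", the view importance network is assumed to be a small MLP producing one sigmoid weight per view, the self-attention is assumed single-head over the view dimension, and per-view features are pooled to vectors rather than kept as full feature maps for brevity. None of these choices is confirmed by the excerpt.

```python
import torch
import torch.nn as nn

class MultiViewRecognizer(nn.Module):
    """Sketch of the pipeline described in the abstract (assumed details)."""

    def __init__(self, feat_dim=512, n_classes=40):
        super().__init__()
        # Shared per-view CNN backbone (stand-in for the "basic CNN model").
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # View importance network: scores each view's usefulness (assumed MLP).
        self.importance = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4), nn.ReLU(),
            nn.Linear(feat_dim // 4, 1),
        )
        # Self-attention across views (assumed single-head form).
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=1, batch_first=True)
        # Fully connected classification head.
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, views):                   # views: (B, n, 3, H, W)
        B, n = views.shape[:2]
        f = self.backbone(views.flatten(0, 1))  # (B*n, feat_dim)
        f = f.view(B, n, -1)                    # per-view features

        # Weight each view by its predicted importance (view-enhanced features).
        w = torch.sigmoid(self.importance(f))   # (B, n, 1), one weight per view
        f = f * w

        # Self-attention fuses the enhanced view features; mean-pooling over
        # views then yields a single 3D shape descriptor.
        fused, _ = self.attn(f, f, f)           # (B, n, feat_dim)
        descriptor = fused.mean(dim=1)          # (B, feat_dim)
        return self.classifier(descriptor)

# Usage: 12 views per object, as in the embodiment.
model = MultiViewRecognizer()
logits = model(torch.randn(2, 12, 3, 64, 64))   # (2, n_classes)
```

Gating with a sigmoid suppresses unimportant views without discarding them outright, which is one plausible reading of the abstract's claim that interference from non-important views is restrained while every view still contributes to the attention step.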

Description

Technical field

[0001] The invention belongs to the technical field of computer vision and relates to a three-dimensional object recognition method combining a view importance network and a self-attention mechanism.

Background technique

[0002] In recent years, with the development of indoor robots and computer vision, it has become practical for indoor robots to actively locate and grasp objects indoors on behalf of humans. Accurately recognizing three-dimensional objects is one of the fundamental problems in this field. Numerous approaches have emerged in the field of 3D object recognition since Princeton University open-sourced the ModelNet project, which provides researchers with a comprehensive and clean collection of 3D object models. According to the type of input data, 3D object recognition methods fall into three categories: point cloud-based, voxel-based, and multi-view-based 3D object recognition.

[0003] The point cl...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06V20/64; G06V10/44; G06V10/764; G06V10/82; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/047; G06N3/045; G06F18/2415; G06F18/241
Inventor: 马伟, 徐儒常
Owner: BEIJING UNIV OF TECH