3D Model Segmentation Method Based on Fusion of Multi-view Features by Projective Fully Convolutional Network

A fully convolutional network combined with multi-view technology, applied in the field of projective fully convolutional network 3D model segmentation, which can solve problems such as time-consuming viewpoint selection, prolonged framework training time, and low practicality.

Active Publication Date: 2020-04-17
NANJING UNIV

AI Technical Summary

Problems solved by technology

[0007] 4. Analysis methods must be robust to noise, downsampling, and the diversity of similar models.
However, this method's viewpoint selection is too time-consuming, and the CRF optimization at the end of training prolongs the training time of the entire framework, making it less practical.



Examples


Embodiment

[0104] The object task of the present invention is shown in Figures 1a and 1b: Figure 1a is the unsegmented original model, and Figure 1b shows the colored rendering of the labels after semantic segmentation. The structure of the whole method is shown in Figure 2. Each step of the present invention is described below according to an embodiment.

[0105] Step (1), collect data from the input 3D mesh model data set S. Taking a model s as an example, this is divided into the following steps:

[0106] Step (1.1), select the 14 viewpoints out of the 42 fixed viewpoints that maximize the coverage of the model's patches; one plausible reading of this selection is sketched below.
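The excerpt does not spell out how the 14 viewpoints are picked; a natural reading of "maximize the model patch coverage" is a greedy set-cover over visible patches. A minimal sketch under that assumption, where the `visibility` mapping is hypothetical and would come from something like an ID-buffer rendering pass recording which faces each viewpoint sees:

```python
def select_viewpoints(visibility, k=14):
    """Greedily choose k viewpoints whose union of visible faces is largest.

    visibility : dict mapping viewpoint index -> set of visible face indices
                 (hypothetical input, e.g. from an ID-buffer rendering pass)
    Returns the chosen viewpoint indices and the set of covered faces.
    This is a sketch of one plausible reading of step (1.1), not the
    patent's verbatim procedure.
    """
    vis = dict(visibility)
    covered, chosen = set(), []
    for _ in range(min(k, len(vis))):
        # Pick the viewpoint that adds the most not-yet-covered faces.
        best = max(vis, key=lambda i: len(vis[i] - covered))
        chosen.append(best)
        covered |= vis.pop(best)
    return chosen, covered
```

Greedy selection does not guarantee the optimal 14-viewpoint subset, but for set-cover-style objectives it is the standard cheap approximation and avoids testing all C(42, 14) combinations.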

[0107] Step (1.1.1), set 42 fixed viewpoints, as shown in Figure 3. The distance from each viewpoint to the coordinate origin is chosen so that the model's projection images in all viewpoint directions fill the rendering window as much as possible. In the experiments, the rendering window size is set to 320×320 pixels. There ...
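The excerpt cuts off before describing how the 42 viewpoints are distributed. One standard construction that yields exactly 42 near-uniform directions is a once-subdivided icosahedron: its 12 vertices plus the midpoints of its 30 edges, all projected onto a sphere. This is an assumption, not confirmed by the patent text:

```python
import itertools
import numpy as np

def fixed_42_viewpoints(radius=2.5):
    """42 candidate viewpoints on a sphere: the 12 icosahedron vertices
    plus its 30 edge midpoints, normalized to the given radius.
    The radius value is a placeholder; per the patent it should be set so
    the model's projections fill the 320x320 rendering window."""
    phi = (1 + 5 ** 0.5) / 2
    verts = []
    for a, b in itertools.product((-1.0, 1.0), (-phi, phi)):
        # Cyclic permutations of (0, +/-1, +/-phi): the 12 icosahedron vertices.
        verts += [(0.0, a, b), (a, b, 0.0), (b, 0.0, a)]
    verts = np.array(verts)                      # shape (12, 3)
    # The 30 icosahedron edges are exactly the vertex pairs at distance 2.
    mids = [(verts[i] + verts[j]) / 2
            for i, j in itertools.combinations(range(12), 2)
            if np.isclose(np.linalg.norm(verts[i] - verts[j]), 2.0)]
    points = np.vstack([verts, np.array(mids)])  # 12 + 30 = 42 directions
    return radius * points / np.linalg.norm(points, axis=1, keepdims=True)
```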



Abstract

The invention discloses a projective fully convolutional network 3D model segmentation method based on the fusion of multi-view features, comprising: step 1, collecting data from an input 3D mesh model data set; step 2, using an FCN (fully convolutional network) that fuses multi-view features to perform semantic segmentation on the model's projection-rendered images, obtaining the probability that each pixel of the rendered image in each viewpoint direction is predicted as each label; step 3, back-projecting the semantic segmentation probability maps of the rendered images in each viewpoint direction and merging them by maximum view pooling to obtain the probability that each model patch is predicted as each label; and step 4, optimizing with the Graph Cut algorithm to obtain the final predicted label of each model patch.
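To make steps 3 and 4 concrete, here is a rough sketch of back-projection with maximum view pooling. All names are hypothetical: `prob_maps` stands for the FCN's per-pixel softmax outputs, and `face_id_maps` is assumed to come from an ID-rendering pass recording which mesh patch each pixel depicts; the final Graph Cut optimization is only indicated in a comment, not implemented:

```python
import numpy as np

def max_view_pooling(prob_maps, face_id_maps, num_faces, num_labels):
    """Pool per-pixel label probabilities onto mesh patches (step 3).

    prob_maps    : list of (H, W, num_labels) softmax maps, one per viewpoint
    face_id_maps : list of (H, W) int maps giving the patch index rendered
                   at each pixel (-1 for background)
    Returns a (num_faces, num_labels) array where each patch keeps, per
    label, the maximum probability seen over all views and pixels,
    i.e. maximum view pooling.
    """
    face_probs = np.zeros((num_faces, num_labels))
    for probs, ids in zip(prob_maps, face_id_maps):
        mask = ids >= 0
        # Running elementwise maximum over every (patch, label) pair;
        # ufunc.at handles repeated patch indices correctly.
        np.maximum.at(face_probs, ids[mask], probs[mask])
    return face_probs

# Step 4 then refines labels = face_probs.argmax(axis=1) by minimizing a
# Graph Cut energy of the usual form over the patch adjacency graph:
#   E(L) = sum_f -log P(l_f) + lambda * sum_{adjacent f,g} [l_f != l_g] * w_fg
# (solver not sketched here).
```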

Description

Technical Field

[0001] The invention belongs to the fields of computer image processing and computer graphics, and in particular relates to a projective fully convolutional network 3D model segmentation method based on the fusion of multi-view features.

Background

[0002] In recent years, with the emergence of more and more 3D modeling software and the wide deployment of depth sensors such as Kinect on depth-data acquisition platforms, 3D model data has exploded on the Internet, and 3D models now have many representations, such as point clouds, voxels, and patches. This trend has made the analysis of 3D models a hot research field. Research in image analysis has already achieved fruitful results, and the introduction of deep learning frameworks has further improved its performance. However, convolution operations on 2D images cannot be directly applied to 3D models, making it difficult to apply deep learning me...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T15/00, G06T15/50, G06N3/04
CPC: G06T15/00, G06T15/506, G06N3/045
Inventors: 张岩, 水盼盼, 王鹏宇, 胡炳扬, 甘渊, 余锋根, 刘琨, 孙正兴
Owner: NANJING UNIV