
Projection full-convolution network three-dimensional model segmentation method based on fusion of multi-view-angle features

A fully convolutional network and 3D model technology, applied in the field of projection fully convolutional network 3D model segmentation, which can solve the problems of prolonged framework training time, time-consuming viewpoint selection, and low practicability.

Active Publication Date: 2018-08-10
NANJING UNIV

AI Technical Summary

Problems solved by technology

[0007] 4. Analysis methods must be robust to noise, downsampling, and the diversity of similar models
However, that method's viewpoint selection is too time-consuming, and the CRF optimization at the end of its pipeline prolongs the training time of the entire framework, making it less practical.



Examples


Embodiment

[0104] The objective task of the present invention is illustrated in Figure 1a and Figure 1b: Figure 1a shows the original, unsegmented model, and Figure 1b shows the color rendering of the semantic segmentation labels. The structure of the whole method is shown in Figure 2. The steps of the present invention are described below through examples.

[0105] Step (1): Collect data from the input three-dimensional mesh model data set S. Taking a model s as an example, this step can be divided into the following sub-steps:

[0106] Step (1.1), choose 14 viewpoints from the 42 fixed viewpoints so as to maximize coverage of the model's patches (a sketch of one possible selection procedure follows step (1.1.1));

[0107] Step (1.1.1), set 42 fixed viewpoints, as shown in Figure 3. The distance between each viewpoint and the origin of coordinates is chosen so that the projection images of the model in all viewpoint directions fill the rendering window as much as possible. The size of the rendering window in this experiment is set to 320×320, and th...
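This excerpt does not spell out how the 14 viewpoints are picked, so the following is a minimal sketch of one plausible reading of "maximize coverage of the model's patches": a greedy set-cover selection over a precomputed visibility matrix. The `visibility` matrix, its construction (e.g. rendering patch IDs into an item buffer at 320×320), and all function names are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def select_viewpoints(visibility, k=14):
    """Greedily pick k viewpoints that together cover the most patches.

    visibility : (num_viewpoints, num_patches) boolean array, where
        visibility[v, p] is True if mesh patch p is visible in the
        projection rendered from viewpoint v. (Assumed input, not
        specified in the patent excerpt.)
    Returns the chosen viewpoint indices and the covered fraction.
    """
    num_views, num_patches = visibility.shape
    covered = np.zeros(num_patches, dtype=bool)
    chosen = []
    for _ in range(k):
        # Marginal gain of each viewpoint = patches it would newly cover.
        gains = (visibility & ~covered).sum(axis=1)
        gains[chosen] = -1              # never pick the same view twice
        best = int(np.argmax(gains))
        if gains[best] <= 0:            # no viewpoint adds new coverage
            break
        chosen.append(best)
        covered |= visibility[best]
    return chosen, covered.mean()

# Hypothetical usage: 42 fixed viewpoints, a mesh with 5000 patches.
vis = np.random.rand(42, 5000) < 0.3    # stand-in for real visibility data
views, coverage = select_viewpoints(vis, k=14)
```

Greedy selection is the standard approximation for this kind of coverage maximization (the objective is submodular), which fits the patent's stated aim of avoiding more expensive viewpoint-selection schemes, but the exact criterion used in the invention may differ.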



Abstract

The invention discloses a projection fully convolutional network three-dimensional model segmentation method based on the fusion of multi-view features. The method comprises the steps of: 1, collecting data from the input three-dimensional mesh model data set; 2, carrying out semantic segmentation of the model's projection rendering maps with an FCN (fully convolutional network) that fuses multi-view features, obtaining the probability that each pixel of the projection rendering map in every viewpoint direction is predicted as each label; 3, back-projecting the semantic segmentation probability map of each viewpoint direction onto the model and applying maximum view pooling, obtaining the probability that each model patch is predicted as each label; 4, optimizing with the Graph Cut image segmentation algorithm to obtain the final predicted label of each model patch.
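Steps 3 and 4 are the 3D-specific part of the pipeline. As a rough illustration of step 3 only, the sketch below aggregates per-pixel FCN probabilities into per-patch probabilities by back-projection and maximum view pooling. The `pixel_to_patch` map (e.g. produced by rendering patch IDs into an item buffer) and all names are assumptions for illustration; the patent's actual implementation details are in the full description.

```python
import numpy as np

def max_view_pooling(prob_maps, pixel_to_patch, num_patches):
    """Back-project per-pixel label probabilities onto mesh patches,
    keeping the maximum over all views and pixels (max view pooling).

    prob_maps      : (V, H, W, L) softmax output of the FCN for V views.
    pixel_to_patch : (V, H, W) int array giving the index of the mesh
        patch visible at each pixel, or -1 for background pixels.
        (Assumed input, not specified in the abstract.)
    Returns a (num_patches, L) array of per-patch label probabilities.
    """
    num_labels = prob_maps.shape[-1]
    patch_probs = np.zeros((num_patches, num_labels))
    for v in range(prob_maps.shape[0]):
        pid = pixel_to_patch[v].ravel()
        probs = prob_maps[v].reshape(-1, num_labels)
        valid = pid >= 0                         # drop background pixels
        # Unbuffered per-(patch, label) maximum across pixels and views.
        np.maximum.at(patch_probs, pid[valid], probs[valid])
    return patch_probs

# Step 4 would then refine patch_probs.argmax(axis=1) with Graph Cut.
```

Patches invisible from every selected viewpoint keep all-zero probabilities in this sketch; resolving their labels falls naturally to the smoothness term of the Graph Cut optimization in step 4.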

Description

Technical field

[0001] The invention belongs to the fields of computer image processing and computer graphics, and particularly relates to a projection fully convolutional network three-dimensional model segmentation method based on the fusion of multi-view features.

Background technique

[0002] In recent years, with the emergence of more and more 3D modeling software, and with depth sensors such as the Kinect widely deployed on platforms that collect depth data, three-dimensional model data has exploded on the Internet. 3D models also appear in a large number of representations, such as point clouds, voxels, and patches. This trend has made the analysis of 3D models a hot research field. Research in image analysis has already achieved fruitful results, and the introduction of deep learning frameworks has further improved its effect. However, convolution operations on 2D images cannot be directly applied to 3D models, making it difficult to apply deep l...


Application Information

IPC (8): G06T15/00, G06T15/50, G06N3/04
CPC: G06T15/00, G06T15/506, G06N3/045
Inventors: 张岩, 水盼盼, 王鹏宇, 胡炳扬, 甘渊, 余锋根, 刘琨, 孙正兴
Owner: NANJING UNIV