
Multi-category multi-view data generation method based on a deep convolutional generative adversarial network

A data generation technology based on deep convolution, applied in the fields of deep learning and image processing, which addresses the problems of small data volume and high acquisition cost, and achieves the effects of reducing the number of nodes, eliminating interference, and providing tight connectivity.

Active Publication Date: 2018-01-19
ZHEJIANG UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0004] At present, image data generation is mostly single-image generation based on large-scale data sets. In some practical object classification problems, such as pearl classification, multi-view data can achieve better results, but the available data volume is small and the acquisition cost is high. There is therefore an urgent need to generate multi-category, multi-view data.

Method used



Examples


Detailed Description of the Embodiments

[0038] The present invention will be further described below with reference to the accompanying drawings and taking pearl data as an example.

[0039] Referring to Figure 1 to Figure 5, a multi-category multi-view data generation method based on a deep convolutional generative adversarial network comprises the following steps:

[0040] The starting point is a batch of pearl five-view data divided into 7 categories.

[0041] Step 1: Center cropping:

[0042] The five views consist of one top view and four side views, each with an original size of 300*300*3, containing the pearl in the center on a black background. Experiments show that cropping each picture to 250*250*3 does not cut into the pearl itself and saves nearly 30% of the pixels.
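A minimal sketch of this cropping step, assuming each view is loaded as a NumPy array in (height, width, channel) layout; the helper name `center_crop` is illustrative and not taken from the patent:

```python
import numpy as np

def center_crop(view: np.ndarray, size: int = 250) -> np.ndarray:
    """Crop a (H, W, C) view to a (size, size, C) patch around its center."""
    h, w, _ = view.shape
    top = (h - size) // 2
    left = (w - size) // 2
    return view[top:top + size, left:left + size, :]

# Example: a 300*300*3 view becomes 250*250*3,
# discarding roughly 30% of the pixels (1 - 250**2 / 300**2 ≈ 0.31).
cropped = center_crop(np.zeros((300, 300, 3), dtype=np.uint8))
assert cropped.shape == (250, 250, 3)
```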

[0043] Step 2: Multi-view superposition:

[0044] The five pearl views are superimposed along the channel dimension, as shown in Figure 3, forming a 250*250*15 multi-dimensional matrix that is treated as a single piece of image data, for a total of 10,500 pieces of image...
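A sketch of the superposition step under the same NumPy assumption; `stack_views` is a hypothetical helper, and the five inputs are the cropped 250*250*3 views from Step 1:

```python
import numpy as np

def stack_views(views):
    """Concatenate five 250*250*3 views along the channel axis -> 250*250*15."""
    assert len(views) == 5 and all(v.shape == (250, 250, 3) for v in views)
    return np.concatenate(views, axis=-1)

# One pearl sample: the top view plus four side views stacked into a single tensor.
sample = stack_views([np.zeros((250, 250, 3), dtype=np.uint8) for _ in range(5)])
assert sample.shape == (250, 250, 15)
```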



Abstract

The present invention provides a multi-category multi-view data generation method based on a deep convolutional generative adversarial network. The method comprises the following steps: 1. cropping the picture center; 2. superimposing the multiple views along the channel dimension; 3. extracting the multi-view category labels; 4. jointly training the DC-GAN network using the superimposed views, category labels, and high-dimensional random noise; 5. feeding high-dimensional random noise and custom labels into the trained network to generate superimposed multi-view data; and 6. splitting the output and filling the background to obtain multiple views of the original size. By superimposing the views and training the generative adversarial network with labels, a single model can generate data for any category and view simply by changing the input, and the generated data can be used to augment the training data and increase its diversity.
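Steps 4 and 5 hinge on conditioning the DC-GAN on category labels so that one trained model can produce samples of any of the 7 classes. The sketch below shows one way such a conditional generator could look in PyTorch; the noise dimension, layer widths, and the `ConditionalGenerator` name are assumptions (the excerpt does not specify the network architecture), and the tensors use PyTorch's channel-first layout rather than the 250*250*15 layout described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 7      # pearl categories (from the embodiment)
NOISE_DIM = 100      # assumed size of the high-dimensional random noise
OUT_CHANNELS = 15    # five stacked 250*250*3 views

class ConditionalGenerator(nn.Module):
    """Maps (noise, category label) to a stacked 15-channel multi-view sample."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(NOISE_DIM + NUM_CLASSES, 512 * 4 * 4)

        def up(cin, cout):
            # Each block doubles the spatial resolution.
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )

        # 4 -> 8 -> 16 -> 32 -> 64 -> 128 -> 256
        self.body = nn.Sequential(
            up(512, 256), up(256, 128), up(128, 64),
            up(64, 32), up(32, 16), up(16, 16),
        )
        self.to_views = nn.Conv2d(16, OUT_CHANNELS, kernel_size=3, padding=1)

    def forward(self, noise, labels):
        # Condition the generator by concatenating a one-hot label with the noise.
        onehot = F.one_hot(labels, NUM_CLASSES).float()
        x = self.fc(torch.cat([noise, onehot], dim=1)).view(-1, 512, 4, 4)
        x = self.body(x)
        # Resize 256 -> 250 to match the cropped view size (an assumed choice).
        x = F.interpolate(x, size=(250, 250), mode="bilinear", align_corners=False)
        return torch.tanh(self.to_views(x))

# Step 5: feed noise plus chosen category labels into the trained generator.
gen = ConditionalGenerator()
fake = gen(torch.randn(2, NOISE_DIM), torch.tensor([0, 6]))
assert fake.shape == (2, OUT_CHANNELS, 250, 250)
```

The generated 15-channel output would then be split back into five 3-channel views and padded with black background to recover the original 300*300*3 size, as described in step 6.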

Description

Technical field

[0001] The present invention relates to the fields of deep learning and image processing, and to data generation technology (especially for image data), in particular for single-object picture data with multiple categories and multiple perspectives, such as the multiple views of different types of pearls in the pearl industry.

Background technique

[0002] In recent years, with the continuous development of deep learning technology, great breakthroughs have been made in problems such as classification and object detection, and new multi-layer neural network structures continue to emerge. However, the more complex the neural network, the higher its demand for the quantity and diversity of training data, and over a wide range the final performance of a neural network is positively correlated with the richness of its training data.

[0003] To increase the richness of training data, the safest and most reliable method is to manually collect and label o...

Claims


Application Information

IPC(8): G06K9/62; G06N3/04; G06N3/08
Inventor: 宣琦, 陈壮志, 方宾伟, 王金宝, 刘毅
Owner: ZHEJIANG UNIV OF TECH