Unsupervised depth representation learning method and system based on image translation

A learning method and system, applied in the field of unsupervised deep representation learning, which solves the problems that unsupervised methods predicting image rotation cannot handle rotation-invariant images and that unsupervised methods predicting geometric transformations suffer from edge effects, and achieves excellent performance

Active Publication Date: 2021-06-01
ZHEJIANG NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to address the defects of the prior art and provide an unsupervised deep representation learning method and system based on image translation, which not only solves the problem that unsupervised methods predicting image rotation cannot handle rotation-invariant images, but also solves the problem of edge effects in unsupervised methods that predict geometric transformations.

Examples

Embodiment 1

[0050] This embodiment provides an unsupervised deep representation learning system based on image translation, as shown in Figure 2, including:

[0051] The image translation transformation module 11 is used to perform a random translation transformation on an image and to generate an auxiliary label;

[0052] The image masking module 12 is connected with the image translation transformation module 11 and is used to apply a mask to the translated image;

[0053] The deep neural network 13 is connected with the image masking module 12 and is used to predict the actual auxiliary label of the masked image and to learn the depth representation of the image;

[0054] The regression loss function module 14 is connected with the deep neural network 13 and is used to update the parameters of the deep neural network based on the loss function;

[0055] The feature extraction module 15 is connected with the deep neural network 13 and is used to extract the representation of the image.
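
The patent extract gives only the module-level description above. As a point of reference, the following is a minimal sketch of the five modules, assuming a PyTorch implementation in which the auxiliary label is the normalized (dx, dy) translation offset and the regression loss is mean squared error; all class names, parameter values, and the network architecture are illustrative and not taken from the patent.

```python
# Minimal sketch of modules 11-15 (illustrative, not the patented implementation).
import torch
import torch.nn as nn


class ImageTranslation:
    """Module 11: randomly translate an image and generate its auxiliary label."""

    def __init__(self, max_shift: int = 8):
        self.max_shift = max_shift

    def __call__(self, x: torch.Tensor):
        # x: (C, H, W) image tensor
        dx = int(torch.randint(-self.max_shift, self.max_shift + 1, (1,)))
        dy = int(torch.randint(-self.max_shift, self.max_shift + 1, (1,)))
        shifted = torch.roll(x, shifts=(dy, dx), dims=(1, 2))
        # Auxiliary label: the translation offsets, normalized to [-1, 1]
        label = torch.tensor([dx, dy], dtype=torch.float32) / self.max_shift
        return shifted, label


class CenterMask:
    """Module 12: mask the translated image so only a fixed central region is
    visible, suppressing boundary artifacts introduced by the translation."""

    def __init__(self, border: int = 8):
        self.border = border

    def __call__(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.zeros_like(x)
        b = self.border
        mask[:, b:-b, b:-b] = 1.0
        return x * mask


class TranslationPredictor(nn.Module):
    """Modules 13 and 15: a small CNN that predicts the auxiliary label (dx, dy)
    and whose pooled features serve as the learned image representation."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # regress (dx, dy)

    def forward(self, x: torch.Tensor):
        feats = self.features(x).flatten(1)  # representation extracted by module 15
        return self.head(feats), feats


# Module 14: the regression loss is taken here as mean squared error, e.g.
# torch.nn.functional.mse_loss(predicted_offsets, auxiliary_labels).
```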

Embodiment 2

[0080] The unsupervised deep representation learning system based on image translation provided in this embodiment differs from Embodiment 1 in the following respect:

[0081] In this embodiment, the method is compared with existing methods on multiple data sets to verify its effectiveness.

[0082] Data sets:

[0083] CIFAR10: This dataset contains 60,000 color images of size 32×32, evenly distributed across 10 categories, that is, each category contains 6,000 images. Of these, 50,000 images form the training set and the remaining 10,000 images form the test set.

[0084] CIFAR100: Similar to CIFAR10, it also contains 60,000 images, but evenly distributed across 100 categories, with each category containing 600 images. The ratio of training samples to test samples is likewise 5:1.

[0085] STL10: Contains 13,000 labeled color images, 5,000 for training and 8,000 for testing. The image size is 96×96, the number of categories is 10, and each category contains 1,300 images.
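
For reproducibility, all three benchmarks are available in torchvision; a minimal loading sketch with the standard splits (no patent-specific preprocessing assumed) is shown below.

```python
# Minimal sketch: loading CIFAR-10, CIFAR-100 and STL-10 with torchvision.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

cifar10_train = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)
cifar10_test = datasets.CIFAR10(root="data", train=False, download=True, transform=to_tensor)

cifar100_train = datasets.CIFAR100(root="data", train=True, download=True, transform=to_tensor)
cifar100_test = datasets.CIFAR100(root="data", train=False, download=True, transform=to_tensor)

# STL-10: 5,000 labeled training images and 8,000 labeled test images, 96x96 pixels.
stl10_train = datasets.STL10(root="data", split="train", download=True, transform=to_tensor)
stl10_test = datasets.STL10(root="data", split="test", download=True, transform=to_tensor)
```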

Embodiment 3

[0100] This embodiment provides an unsupervised deep representation learning method based on image translation, as shown in Figure 4, including:

[0101] S11. Perform a random translation transformation on the image and generate an auxiliary label;

[0102] S12. Apply a mask to the translated image;

[0103] S13. Predict the actual auxiliary label of the masked image, and learn the depth representation of the image;

[0104] S14. Update the parameters of the deep neural network based on the loss function;

[0105] S15. Extract the representation of the image.
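
A compact sketch of how steps S11 to S15 could be chained in one training iteration is given below; it reuses the illustrative components sketched under Embodiment 1 (ImageTranslation, CenterMask, TranslationPredictor), which are assumptions rather than the patented implementation.

```python
# Minimal sketch of one training iteration covering steps S11-S15
# (uses the illustrative classes from the Embodiment 1 sketch).
import torch
import torch.nn.functional as F

translate = ImageTranslation(max_shift=8)   # S11: random translation + auxiliary label
mask = CenterMask(border=8)                 # S12: mask the translated image
net = TranslationPredictor()                # S13: predict the label, learn the representation
optimizer = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9)


def train_step(batch: torch.Tensor) -> float:
    shifted, labels = zip(*(translate(img) for img in batch))   # S11
    masked = torch.stack([mask(img) for img in shifted])        # S12
    labels = torch.stack(labels)
    preds, _ = net(masked)                                      # S13
    loss = F.mse_loss(preds, labels)                            # S14: regression loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                            # S14: update parameters
    return loss.item()


def extract_features(images: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        _, feats = net(images)                                  # S15: extract representations
    return feats
```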

[0106] Further, in step S11, a random translation transformation is performed on the image, and the translated image is expressed as:

[0107] Here, given an image dataset containing N samples, each image x_i is represented by a C×W×H matrix, where C, W, and H are the number of image channels, the image width, and the image height, respectively; the translated image is denoted by ...
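
The closed-form expression for the translation transform is not reproduced in this extract. A standard way to write a 2-D translation of an image x_i by integer offsets (t_x, t_y), consistent with the C×W×H notation of paragraph [0107] but not taken verbatim from the patent, is:

```latex
% Standard 2-D image translation (illustrative notation, not the patent's formula):
% \tilde{x}_i is the translated image, (t_x, t_y) are the random integer offsets,
% and x_i is treated as zero outside its support (zero padding at the borders).
\tilde{x}_i(c, u, v) = x_i(c,\, u - t_x,\, v - t_y),
\qquad c \in \{1,\dots,C\},\; u \in \{1,\dots,W\},\; v \in \{1,\dots,H\}
```

Under this zero-padding convention, border pixels carry information about the applied shift, which is consistent with the role of the masking step (S12) in suppressing such edge effects.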

Abstract

The invention discloses an unsupervised deep representation learning system based on image translation. The system comprises an image translation transformation module, which is used for carrying out a random translation transformation on an image and generating an auxiliary label; an image masking module, which is connected with the image translation transformation module and is used for applying a mask to the translated image; a deep neural network, which is connected with the image masking module and is used for predicting the actual auxiliary label of the masked image and learning a depth representation of the image; a regression loss function module, which is connected with the deep neural network and is used for updating the parameters of the deep neural network based on a loss function; and a feature extraction module, which is connected with the deep neural network and is used for extracting the representation of the image. The method not only solves the problem that unsupervised methods predicting image rotation cannot process rotation-invariant images, but also solves the edge-effect problem of unsupervised methods predicting geometric transformations.

Description

Technical field

[0001] The present invention relates to the technical field of image representation learning, and in particular to an unsupervised deep representation learning method and system based on image translation.

Background technique

[0002] Deep neural networks have achieved great success in machine vision tasks such as image classification, segmentation, and object detection. However, a large amount of manually annotated data is required to achieve satisfactory performance. In reality, labeling data is an extremely time-consuming and labor-intensive task. In some domains, such as medical and aerospace, only domain experts can provide reliable annotations, making it almost impossible to collect large amounts of labeled data. Therefore, unsupervised learning has become an increasingly important research direction. Unsupervised deep representation learning does not rely on human-labeled labels as supervisory information, and only uses image data itself to train deep ...

Application Information

IPC (8): G06K9/62; G06T3/00; G06N3/04
CPC: G06N3/045; G06F18/214; G06T3/04; G06V10/82; G06V10/7753; G06T7/10; G06V20/70; G06V10/44
Inventors: 朱信忠 (Zhu Xinzhong), 徐慧英 (Xu Huiying), 郭西风 (Guo Xifeng), 董仕豪 (Dong Shihao), 赵建民 (Zhao Jianmin)
Owner: ZHEJIANG NORMAL UNIVERSITY