An unsupervised image recognition method based on parameter transfer learning

A transfer learning and image recognition technique, applied in the field of image recognition, that addresses the problems of long training times and the need for a large number of unlabeled samples, achieving the effects of reducing training time, solving the unsupervised recognition problem, and reducing dependence on labeled samples.

Publication status: Inactive; Publication date: 2019-04-05
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to solve the problem that traditional unsupervised image recognition methods require a large number of unlabeled samples, which leads to long training times.



Examples


Specific embodiment one

[0035] Specific embodiment one: As shown in figure 1, the unsupervised image recognition method based on parameter transfer learning described in this embodiment includes the following steps:

[0036] Step 1. Collect images with category labels from the auxiliary domain to form the auxiliary domain image set X_s; collect images without category labels from the application domain to form the application domain image set X_t. The application domain refers to the various fields to which the method of the present invention can be applied, and the auxiliary domain refers to a field whose sample content is similar to that of the application domain and which contains a large number of labels;

[0037] Step 2. Construct two convolutional neural networks with the same structure and use them as the auxiliary domain network and the application domain network, respectively, where the auxiliary domain network is denoted N_s and the application domain network is denoted N_t ...

Specific embodiment two

[0048] Specific embodiment two: This embodiment differs from specific embodiment one in that the specific process of step one is as follows:

[0049] Collect images with category labels from the auxiliary domain to form the auxiliary domain image set X_s; collect images without category labels from the application domain to form the application domain image set X_t, where the number of image samples in the application domain image set X_t is one tenth of the number of image samples in the auxiliary domain image set X_s;

[0050] All images in the auxiliary domain image set X_s and the application domain image set X_t are scaled to the same size.
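The data-collection step above can be illustrated with a short sketch. The following Python snippet is only an illustration under assumed conventions: labeled auxiliary images are assumed to live in per-class sub-directories of an auxiliary_domain/ folder, unlabeled application images in a flat application_domain/ folder, and 224x224 is an arbitrary choice for the common size (the embodiment only requires that all images be scaled to the same size); the one-tenth sampling ratio follows paragraph [0049].

```python
# Hypothetical data-preparation sketch for step 1 (not taken from the patent text).
from pathlib import Path
from PIL import Image
import numpy as np

def load_auxiliary_set(aux_dir, size=(224, 224)):
    """Build X_s, with category labels taken from sub-directory names."""
    images, labels = [], []
    for class_dir in sorted(Path(aux_dir).iterdir()):
        if not class_dir.is_dir():
            continue
        for img_path in class_dir.glob("*.jpg"):
            img = Image.open(img_path).convert("RGB").resize(size)
            images.append(np.asarray(img, dtype=np.float32) / 255.0)
            labels.append(class_dir.name)
    return np.stack(images), labels

def load_application_set(app_dir, n_auxiliary, size=(224, 224)):
    """Build X_t without labels, keeping roughly one tenth of |X_s| samples."""
    paths = sorted(Path(app_dir).glob("*.jpg"))[: max(1, n_auxiliary // 10)]
    images = [np.asarray(Image.open(p).convert("RGB").resize(size),
                         dtype=np.float32) / 255.0 for p in paths]
    return np.stack(images)

X_s, y_s = load_auxiliary_set("auxiliary_domain")
X_t = load_application_set("application_domain", n_auxiliary=len(X_s))
```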

Specific embodiment three

[0051] Specific embodiment three: This embodiment differs from specific embodiment one in that the specific process of step two is as follows:

[0052] Construct two convolutional neural networks with the same structure and use them as the auxiliary domain network and the application domain network, respectively, where the auxiliary domain network is denoted N_s and the application domain network is denoted N_t;

[0053] As shown in figure 2, each convolutional neural network includes five convolutional layers conv1 to conv5 and three fully connected layers fc1 to fc3, where the fully connected layers follow the convolutional layers;

[0054] The image classifier follows the fully connected layers and has C branches, where C represents the total number of recognizable image categories; the output y_i of the i-th branch of the image classifier is expressed as:

[0055]

[0056] Where: p(x ...
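The network structure described in paragraphs [0052] to [0054] can be sketched as follows. This is a hedged PyTorch illustration, not the patent's exact configuration: the channel widths, kernel sizes, and the assumed 224x224 input (matching the earlier data sketch) are borrowed from a standard AlexNet-style layout, and the softmax form of the branch output y_i is an assumption, since formula [0055] is not reproduced in this excerpt.

```python
import copy
import torch
import torch.nn as nn

class DomainCNN(nn.Module):
    """One of the two structurally identical networks (N_s or N_t)."""
    def __init__(self, num_classes):
        super().__init__()
        # conv1 .. conv5 (AlexNet-style widths; assumed, not from the patent)
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
        )
        # fc1 .. fc3, located behind the convolutional layers
        # (256 * 6 * 6 assumes a 224x224 input image)
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, 4096), nn.ReLU(),
        )
        # image classifier with C branches, one per recognizable category
        self.classifier = nn.Linear(4096, num_classes)

    def forward(self, x):
        logits = self.classifier(self.fc(self.features(x)))
        # Assumed form of the branch output y_i: a softmax class probability.
        return torch.softmax(logits, dim=1)

C = 10                          # hypothetical total number of image categories
N_s = DomainCNN(num_classes=C)  # auxiliary domain network
N_t = copy.deepcopy(N_s)        # application domain network, identical structure
```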



Abstract

The invention discloses an unsupervised image recognition method based on parameter transfer learning, and belongs to the technical field of image recognition. The method solves the problems that a traditional unsupervised image recognition method needs a large number of unlabeled samples and that the training time is long as a result. According to the invention, transfer learning is carried out directly on the parameters of the recognition model; only labeled samples in the auxiliary domain and a small number of unlabeled samples in the application domain are needed. The method thereby solves the problem that a traditional unsupervised image recognition method needs a large number of unlabeled samples, reduces the dependence on labeled samples, solves the unsupervised recognition problem, improves the learning efficiency of the model, and is better suited to application scenarios with large data scales. The method can be applied in the technical field of image recognition.
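The abstract's core idea, carrying out transfer learning directly on the parameters of the recognition model, might be illustrated as follows. The exact transfer rule is not given in this excerpt, so copying the convolutional parameters of a trained auxiliary network N_s into the application network N_t (names reused from the sketch above) is only an assumption made for illustration.

```python
import torch.nn as nn

def transfer_parameters(N_s: nn.Module, N_t: nn.Module, prefix="features"):
    """Copy N_s parameters whose names start with `prefix` into N_t (assumed rule)."""
    shared = {name: tensor for name, tensor in N_s.state_dict().items()
              if name.startswith(prefix)}
    # strict=False leaves N_t's remaining parameters (e.g. the classifier) untouched
    N_t.load_state_dict(shared, strict=False)
    return N_t
```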

Description

Technical field

[0001] The invention belongs to the technical field of image recognition, and specifically relates to an unsupervised image recognition method.

Background technique

[0002] Image recognition is a technology that detects objects of interest in static images or dynamic videos. An effective image recognition method is the prerequisite and basis for intelligent recognition tasks such as target tracking, scene analysis, and environment perception. In real life, image recognition technology has a very wide range of applications, such as pedestrian/vehicle detection in the field of automatic driving and face recognition in the field of security, all of which are realized on the basis of image recognition.

[0003] Most current image recognition technologies are designed and implemented based on machine learning theory. The main approach is to collect image samples containing category labels from application scenarios and train the recogni...


Application Information

IPC(8): G06K9/62, G06N3/04
CPC: G06N3/045, G06F18/214
Inventors: 杨春玲, 陈宇, 张岩, 李雨泽, 朱敏
Owner: HARBIN INST OF TECH