
Image recognition model training method and device and electronic equipment

An image recognition model training technology in the field of deep learning that addresses the problems of heavy computation, time-consuming and labor-intensive manual labeling, and low image recognition efficiency.

Pending Publication Date: 2020-07-03
MEGVII BEIJINGTECH CO LTD

AI Technical Summary

Problems solved by technology

[0002] In existing human action recognition tasks, training an image recognition model requires accurately locating the discriminative region in each sample image (i.e., the region where the action occurs) in order to obtain detailed features of the training samples. Traditional training methods rely mainly on manual labeling of discriminative regions, which is time-consuming and labor-intensive. To avoid this labeling cost, researchers have used self-supervised attention mechanisms to mine the discriminative regions in sample images and thereby obtain fine-grained features. However, mining discriminative regions with an attention mechanism requires multiple models during training, and the trained models must also be run in stages during recognition, so the amount of computation is large.
Therefore, image recognition models obtained by existing training methods still suffer from low image recognition efficiency due to this heavy computation.


Examples


Embodiment 1

[0029] First, an example electronic device 100 for implementing the image recognition model training method, device, and electronic equipment according to an embodiment of the present invention will be described with reference to figure 1.

[0030] As shown in figure 1, which is a schematic structural diagram of an electronic device, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, and an image acquisition device 110, interconnected through a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structure of the electronic device 100 shown in figure 1 are merely exemplary rather than limiting, and the electronic device may have other components and structures as required.

[0031] The processor 102 can be implemented in at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA), and a programmable logic...

Embodiment 2

[0038] This embodiment provides a training method for an image recognition model, which can be executed by an electronic device such as the one described above. Referring to the flow chart of the training method shown in figure 2, the method mainly includes the following steps S202 to S206:

[0039] Step S202, input the training samples pre-marked with sample labels into the image recognition model.

[0040] Since the image recognition model training method provided in this embodiment can use the image recognition model itself to determine the discriminative region in a training sample image, only the sample label of each sample image needs to be marked when labeling the training samples; there is no need to mark the discriminative region corresponding to the sample label. This greatly reduces the labeling work for training samples and saves labor costs. The above sample labels are the action types in the sample images, for example, t...
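As a hedged illustration of step S202 (all names below are my own, not from the patent), the key point of paragraph [0040] is that each training sample carries only an image-level action label, with no per-image region annotation:

```python
# Sketch only: each training sample has a sample label (the action type)
# but no manually annotated discriminative region; the model mines that
# region itself during training. Names here are illustrative.

def make_training_samples(pairs):
    """Build training samples from (image_path, action_label) pairs.

    Note the absence of any region/bounding-box field: the image
    recognition model is expected to locate the discriminative region
    on its own during training.
    """
    return [{"image": path, "label": label} for path, label in pairs]

samples = make_training_samples([
    ("img_001.jpg", "running"),
    ("img_002.jpg", "jumping"),
])
```

The saving described in the text is exactly this: annotators supply one label per image instead of a label plus a region box.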

Embodiment 3

[0071] On the basis of the foregoing embodiments, this embodiment provides an example of applying the foregoing image recognition model training method to train a fine-grained image recognition framework, wherein the fine-grained image recognition framework (Fine-Grain Feature Mining Network, FGFMNet) includes a main network (Main Network, MNet) and a teacher network (Teacher Network, TNet). The main network is equipped with a discriminative region mining module (Discriminate Region Mining Module, DRMM). Specifically, the method can be executed by referring to the following steps a to f:
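The patent text here does not disclose the internals of the DRMM. A common self-supervised approach to region mining is to aggregate a convolutional feature map into an attention map and threshold it; the following numpy sketch is entirely an assumption along those lines, not the patented module:

```python
import numpy as np

def mine_discriminative_region(feature_map, threshold=0.5):
    """Toy discriminative-region mining (assumed design, not the DRMM):
    aggregate channels into an attention map, min-max normalize it to
    [0, 1], and threshold to a binary region mask.

    feature_map: array of shape (C, H, W) from a backbone network.
    Returns (attention, mask), each of shape (H, W).
    """
    attention = feature_map.mean(axis=0)              # channel-wise average
    lo, hi = attention.min(), attention.max()
    attention = (attention - lo) / (hi - lo + 1e-8)   # min-max normalize
    mask = (attention >= threshold).astype(np.float32)
    return attention, mask

# Synthetic feature map with one strong activation in the top-left corner.
fmap = np.zeros((8, 4, 4), dtype=np.float32)
fmap[:, 0, 0] = 10.0
att, mask = mine_discriminative_region(fmap)
```

Under this toy scheme, the mask highlights only the strongly activated location, which plays the role of the mined discriminative region.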

[0072] Step a: Use the ImageNet database to pre-train the main network and the teacher network so as to initialize them, and input the training samples pre-labeled with sample labels into the initialized main network. Both the main network and the teacher network are convolutional neural networks.
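In a deep-learning framework, step a would typically mean loading the same ImageNet-pretrained backbone into two separate networks. The framework-free sketch below (illustrative values only, my assumption rather than the patent's procedure) shows the essential point: MNet and TNet start from identical weights but are thereafter updated independently:

```python
import copy

# Stand-in for ImageNet-pretrained backbone weights (illustrative values).
pretrained_weights = {"conv1": [0.1, 0.2], "fc": [0.3]}

# The main network and the teacher network each get an independent deep
# copy, so later training updates to one do not silently change the other.
main_net = copy.deepcopy(pretrained_weights)
teacher_net = copy.deepcopy(pretrained_weights)

main_net["conv1"][0] = 0.9   # simulate a training update to the main network
```

A shallow copy (or sharing the same weight object) would couple the two networks, which is why the independent-copy detail matters when one network later teaches the other.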

[0073] Step b: Extract the features of ...



Abstract

The invention provides an image recognition model training method, device, and electronic equipment in the technical field of deep learning. The method comprises the steps of: inputting a training sample marked in advance with a sample label into an image recognition model; during iterative training of the image recognition model, determining a fine-grained feature map corresponding to the training sample based on a network layer of the image recognition model, inputting the fine-grained feature map into a preset deep learning network so that the deep learning network learns fine-grained feature information from it, and distilling the learned fine-grained feature information into the image recognition model, where the fine-grained feature map is an image marked with the discriminative region corresponding to the sample label; and repeating the above training steps until training finishes, thereby obtaining a trained image recognition model. The invention can improve the image recognition efficiency of the trained image recognition model.
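The abstract describes distilling the fine-grained feature information learned by the deep learning (teacher) network into the image recognition model, but does not give the distillation loss. A standard generic choice for feature distillation is a mean-squared-error term between the two feature maps; the sketch below uses that choice purely as an assumption, not as the patent's disclosed loss:

```python
import numpy as np

def feature_distillation_loss(student_features, teacher_features):
    """MSE feature-distillation loss between student and teacher feature
    maps. A common generic choice, not the loss disclosed in the patent.
    """
    return float(np.mean((student_features - teacher_features) ** 2))

t = np.ones((8, 4, 4), dtype=np.float32)
s = np.zeros((8, 4, 4), dtype=np.float32)
loss_far = feature_distillation_loss(s, t)    # student far from teacher
loss_zero = feature_distillation_loss(t, t)   # perfectly matched features
```

Minimizing such a term pushes the recognition model's features toward the teacher's fine-grained features, which is the effect the abstract attributes to the distillation step.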

Description

Technical field
[0001] The present invention relates to the technical field of deep learning, and in particular to a training method, device, and electronic equipment for an image recognition model.
Background technique
[0002] In existing human action recognition tasks, training an image recognition model requires accurately locating the discriminative region in each sample image (i.e., the region where the action occurs) in order to obtain detailed features of the training samples. Traditional training methods rely mainly on manual labeling of discriminative regions, which is time-consuming and labor-intensive. To avoid this labeling cost, researchers have used self-supervised attention mechanisms to mine discriminative regions in sample images and thereby obtain fine-grained features. However, when using the attention mechanism to mine discriminative regions for ...

Claims


Application Information

IPC(8): G06K9/00, G06K9/46, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/20, G06V10/462, G06N3/045, G06F18/241
Inventor: 王彬
Owner: MEGVII BEIJINGTECH CO LTD