Image recognition method, device, terminal equipment and readable storage medium

An image recognition technology, applied in the field of image recognition, addressing problems such as the large amount of training data and long training time required by deep learning models, which prolong the development cycle of terminal equipment.

Pending Publication Date: 2020-04-10
GUANGDONG OPPO MOBILE TELECOMM CORP LTD


Problems solved by technology

In order to ensure that the deep learning model can extract more deep features that reflect image details, a large amount of training data and a long training time are required to train the model, which prolongs the development cycle of the terminal equipment.



Examples


Embodiment 1

[0037] The image recognition method provided by Embodiment 1 of this application is described below. Referring to Figure 1, the method includes:

[0038] In step S101, the image to be recognized is acquired, and the global depth feature of the image to be recognized is determined based on the first deep learning model;

[0039] At present, a convolutional neural network (Convolutional Neural Network, CNN) model is usually used to learn the features of an image: the entire image is input into the CNN model, and the global depth feature output by the CNN model is obtained. Commonly used CNN models include the AlexNet, VGGNet, Google Inception Net and ResNet models. The specific model architectures are existing technology and will not be repeated here.

[0040] In this step S101, the AlexNet, VGGNet, Google Inception Net or ResNet model commonly used in the prior art can be used to obtain the global depth feature of the image to be recognized.
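As a rough illustration of what step S101 consumes and produces, the sketch below maps a whole image to a fixed-length vector. The banded mean-pooling here is only a stand-in for the first deep learning model (a real system would run a pretrained CNN such as ResNet); the function name and the `dim` parameter are assumptions for illustration.

```python
from typing import List

def global_deep_feature(image: List[List[float]], dim: int = 4) -> List[float]:
    """Stand-in for the first deep learning model: maps the whole image
    to a fixed-length feature vector. Here we just average pixel
    intensities over `dim` horizontal bands; a real implementation
    would run a pretrained CNN backbone instead."""
    h = len(image)
    band = max(1, h // dim)
    feats = []
    for i in range(0, dim * band, band):
        rows = image[i:i + band]
        vals = [v for row in rows for v in row]
        feats.append(sum(vals) / len(vals) if vals else 0.0)
    return feats

# toy 8x8 image whose intensity grows toward the bottom-right
image = [[float(r * c) for c in range(8)] for r in range(8)]
vec = global_deep_feature(image)
print(len(vec))  # fixed-length global descriptor
```

Whatever model plays this role, the interface is the same: the entire image goes in, one fixed-length global descriptor comes out.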

Embodiment 2

[0066] Another image recognition method provided in Embodiment 2 of the present application is described below. Referring to Figure 6, the method includes:

[0067] In step S201, the image to be recognized is acquired, and the global depth feature of the image to be recognized is determined based on the first deep learning model;

[0068] In step S202, based on the above-mentioned image to be recognized, position indication information is determined, and the position indication information is used to indicate: if the above-mentioned image to be recognized contains a target object, the position of the target object in the above-mentioned image to be recognized;

[0069] In step S203, the depth features of the image region indicated by the position indication information in the image to be recognized are determined based on the second deep learning model, so as to obtain the local depth features of the image to be recognized;

[0070] The specific implementation ...
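Steps S202 and S203 can be sketched as a detector that produces the position indication information (a bounding box, or nothing when no target object is present), followed by a second feature extractor applied only to the cropped region. Everything below — `locate_target`, the non-zero-pixel heuristic, and the min/max/mean descriptor — is a hypothetical stand-in, since the patent does not fix the concrete models.

```python
from typing import List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (top, left, bottom, right)

def locate_target(image: List[List[float]]) -> Optional[Box]:
    """Stand-in detector producing the position indication information:
    the bounding box of all non-zero pixels, or None when the image
    contains no target object."""
    coords = [(r, c) for r, row in enumerate(image)
              for c, v in enumerate(row) if v > 0]
    if not coords:
        return None
    rs = [r for r, _ in coords]
    cs = [c for _, c in coords]
    return (min(rs), min(cs), max(rs) + 1, max(cs) + 1)

def local_deep_feature(image: List[List[float]], box: Box) -> List[float]:
    """Stand-in for the second deep learning model: crop the indicated
    region and describe only that region (here: min, max, mean)."""
    top, left, bottom, right = box
    crop = [row[left:right] for row in image[top:bottom]]
    vals = [v for row in crop for v in row]
    return [min(vals), max(vals), sum(vals) / len(vals)]

img = [[0.0] * 6 for _ in range(6)]
img[2][2] = 1.0      # a small "object"
img[3][4] = 1.0
box = locate_target(img)
print(box)  # (2, 2, 4, 5)
```

The point of the split is that the second model only ever sees the region the position indication points at, so its features reflect local detail rather than the whole scene.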

Embodiment 3

[0086] Embodiment 3 of the present application provides an image recognition device. For ease of description, only the parts relevant to the present application are shown. As shown in Figure 7, the recognition device 300 includes:

[0087] A global feature module 301, configured to acquire an image to be recognized, and determine the global depth feature of the image to be recognized based on the first deep learning model;

[0088] A position determination module 302, configured to determine position indication information based on the image to be recognized, where the position indication information is used to indicate: if the image to be recognized contains a target object, the position of the target object in the image to be recognized;

[0089] A local feature module 303, configured to determine, based on a second deep learning model, the depth feature of the image region indicated by the position indication information in the image to be recognized, so as to obtain the local depth features of the image to be recognized;
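One way to read the module list above is as a device that composes three injected callables plus a final classifier. The class below is a minimal sketch under that reading; the constructor arguments and the toy wiring are assumptions for illustration, not the patent's implementation.

```python
class ImageRecognitionDevice:
    """Sketch of recognition device 300: composes the three modules
    from the text plus a final category decision. The concrete models
    are injected, since the patent leaves them open."""

    def __init__(self, global_model, detector, local_model, classifier):
        self.global_feature = global_model   # module 301
        self.locate = detector               # module 302
        self.local_feature = local_model     # module 303
        self.classify = classifier           # fuses both feature sets

    def recognize(self, image) -> bool:
        g = self.global_feature(image)
        box = self.locate(image)
        l = self.local_feature(image, box) if box else []
        return self.classify(g + l)          # True if target category

# toy wiring: a "target" is any pixel brighter than 0.5
dev = ImageRecognitionDevice(
    global_model=lambda im: [sum(map(sum, im))],
    detector=lambda im: (0, 0, len(im), len(im[0])) if any(
        v > 0.5 for row in im for v in row) else None,
    local_model=lambda im, b: [max(max(row) for row in im)],
    classifier=lambda feats: len(feats) > 1 and feats[-1] > 0.5,
)
print(dev.recognize([[0.0, 0.9], [0.0, 0.0]]))  # True
```

Injecting the modules keeps the device itself model-agnostic, matching the embodiment's framing of modules 301–303 as interchangeable components.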



Abstract

The invention provides an image recognition method, an image recognition device, terminal equipment and a readable storage medium. The method comprises the following steps: an image to be recognized is obtained, and the global depth feature of the image to be recognized is determined; position indication information is determined based on the image to be recognized, wherein the position indication information is used for indicating the position of a target object in the image to be recognized if the image to be recognized contains the target object; the depth features of an image region indicated by the position indication information in the image to be recognized are determined, so that the local depth features of the image to be recognized are obtained; and whether the category of the image to be recognized is a target category is determined based on the global depth features and the local depth features. With this method, a deep learning model can be trained without a large amount of training data or a relatively long training duration, so that the development cycle of the terminal equipment is shortened to a certain extent.
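The final step of the abstract — deciding whether the image belongs to the target category from the global and local depth features together — could be sketched as concatenation followed by a linear score. The weights, bias and threshold below are hypothetical; the patent leaves the fusion method open.

```python
from typing import Sequence

def is_target_category(global_feat: Sequence[float],
                       local_feat: Sequence[float],
                       weights: Sequence[float],
                       bias: float = 0.0,
                       threshold: float = 0.0) -> bool:
    """Hedged sketch of the category decision: concatenate the global
    and local depth features and score them with a linear layer.
    `weights`, `bias` and `threshold` are hypothetical parameters."""
    fused = list(global_feat) + list(local_feat)
    score = sum(w * x for w, x in zip(weights, fused)) + bias
    return score > threshold

print(is_target_category([0.2, 0.8], [0.5],
                         weights=[1.0, 1.0, 1.0], bias=-1.0))  # True
```

A learned classifier head would replace the fixed weights, but the shape of the decision is the same: both feature sets contribute to one score.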

Description

Technical field

[0001] The present application belongs to the technical field of image recognition, and in particular relates to an image recognition method, a recognition device, a terminal device and a readable storage medium.

Background technique

[0002] At present, when identifying the category of an image, a deep learning model (for example, AlexNet, VGGNet or ResNet, etc.) is usually used to extract the deep features of the image, and the category of the image is determined based on these deep features.

[0003] When the images to be recognized are relatively similar, in order to be able to distinguish the categories of each image, the deep learning model is required to extract deep features that can reflect image details. In order to ensure that the deep learning model can extract more deep features that reflect image details, a large amount of training data and a long training time are required to train the deep learning model, which undoubtedly prolongs the development cycle of the terminal device.

Contents of the invention

[0004] In view of this ...


Application Information

IPC(8): G06K9/62, G06K9/46
CPC: G06V10/44, G06F18/241, G06F18/214, Y02T10/40
Inventor: 贾玉虎
Owner: GUANGDONG OPPO MOBILE TELECOMM CORP LTD