
Image processing method, device and mobile terminal

An image-processing technology, applied in the fields of instruments, computing, and character and pattern recognition. It addresses problems such as long processing time, reliance on a single large network, and heavy computational load.

Active Publication Date: 2021-07-13
GUANGDONG OPPO MOBILE TELECOMM CORP LTD

AI Technical Summary

Problems solved by technology

[0003] However, at present, fine classification of complex scenes through deep learning is mainly performed by a single large network, which requires heavy computation and long processing time, placing great pressure on mobile-terminal deployment.

Method used



Examples


First embodiment

[0025] Referring to Figure 1, which shows a schematic flowchart of the image processing method provided by the first embodiment of the present application. The image processing method uses a deep convolutional neural network to classify the primary scene in the image to be recognized, then uses a shallow convolutional neural network to classify the secondary scenes within each type of primary scene, and finally outputs fine scene-classification information for the image to be recognized. This avoids the heavy computation caused by using a single large network to finely classify small scenes, achieves a good balance between computation and accuracy, and makes deployment on a mobile terminal feasible. In a specific embodiment, the image processing method is applied to the image processing device 300 shown in Figure 3 and the mobile terminal 100 (Figure 5); the image processing method is used to improve the fine-classification efficiency of scenes wh...
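The two-stage cascade described above can be sketched in plain Python. The "networks" below are illustrative stand-in functions, and the scene names and thresholds are hypothetical, not taken from the patent; the point is the routing logic: only the one shallow classifier matching the coarse result ever runs.

```python
# Stage 1: a deep network assigns the image to one primary (coarse) scene.
# Stand-in: pretend mean brightness decides the coarse class.
def deep_primary_classifier(image):
    return "sky" if sum(image) / len(image) > 128 else "grass"

# Stage 2: one shallow network per primary scene refines the label.
SHALLOW_SECONDARY_CLASSIFIERS = {
    "sky": lambda image: "clouds" if max(image) > 200 else "clear_sky",
    "grass": lambda image: "green_grass" if min(image) > 50 else "yellow_grass",
}

def classify_scene(image):
    """Return (primary_scene, secondary_scene) for an image.

    Only the one shallow network selected by the coarse result runs,
    which is the computational saving the cascade targets.
    """
    primary = deep_primary_classifier(image)
    secondary = SHALLOW_SECONDARY_CLASSIFIERS[primary](image)
    return primary, secondary

print(classify_scene([210, 220, 230]))  # ('sky', 'clouds')
```

In a real deployment each stand-in would be a trained CNN, with the deep network larger (more layers) than each shallow network, matching the text's description.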

Second embodiment

[0040] Referring to Figure 2, which shows a schematic flowchart of the image processing method provided by the second embodiment of the present application. A mobile phone is used as the example below, and the flow shown in Figure 2 is described in detail. The image processing method may specifically include the following steps:

[0041] Step S201: Acquire an image to be recognized in an image acquisition mode.

[0042] In this embodiment, the image to be recognized may be an image captured by components such as the camera of a mobile phone while the phone is shooting in an image acquisition mode. To enable more refined post-processing of the image to be recognized, the steps of the image processing method provided in this embodiment may be performed before other image processing is applied to it.
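A captured camera frame is typically resized to the classifier's fixed input resolution before the steps above run. A minimal sketch of that preprocessing, using a hypothetical nearest-neighbor helper (not part of the patent), is:

```python
def nearest_neighbor_resize(frame, out_h, out_w):
    """Downsample a 2-D list of pixel values with nearest-neighbor sampling."""
    in_h, in_w = len(frame), len(frame[0])
    return [
        [frame[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# A toy 4x4 "frame" whose pixel value is x + y.
frame = [[x + y for x in range(4)] for y in range(4)]
small = nearest_neighbor_resize(frame, 2, 2)
print(small)  # [[0, 2], [2, 4]]
```

In practice this would be done by an optimized library routine on the mobile terminal, but the index arithmetic is the same.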

[0043] It can be understood that, in other implementations, the image to be recognized can als...

Third embodiment

[0082] Referring to Figure 3, which shows a block diagram of the image processing apparatus 300 provided by the third embodiment of the present application. The block diagram shown in Figure 3 is described below. The image processing device 300 includes a primary classification module 310, a secondary classification module 320, and an output module 330, wherein:

[0083] The primary classification module 310 is configured to identify the primary scene in the image to be recognized through a deep convolutional neural network and obtain a first classification result, where each type of primary scene includes at least one type of secondary scene.

[0084] The secondary classification module 320 is configured to identify a secondary scene within the first classification result through a shallow convolutional neural network and obtain a second classification result, where the number of layers of the deep convolutional neural network is greater than tha...
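The three-module structure of the device can be sketched as a class whose attributes mirror modules 310, 320, and 330. The callables passed in here are illustrative placeholders for the actual networks and output formatter, not the patent's implementation.

```python
class ImageProcessingDevice:
    """Sketch of the device's three-module structure."""

    def __init__(self, primary_net, secondary_nets, formatter):
        self.primary_classification = primary_net        # module 310
        self.secondary_classification = secondary_nets   # module 320: one shallow net per primary scene
        self.output = formatter                          # module 330

    def process(self, image):
        first = self.primary_classification(image)            # first classification result
        second = self.secondary_classification[first](image)  # second classification result
        return self.output(first, second)                     # fine scene-classification info

# Wiring with stand-in callables:
device = ImageProcessingDevice(
    primary_net=lambda img: "food",
    secondary_nets={"food": lambda img: "fruit"},
    formatter=lambda a, b: f"{a}/{b}",
)
print(device.process(None))  # food/fruit
```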



Abstract

The present application discloses an image processing method, device, and mobile terminal. The method includes: identifying a primary scene in the image to be recognized through a deep convolutional neural network to obtain a first classification result; identifying a secondary scene within the first classification result through a shallow convolutional neural network to obtain a second classification result; and, based on the second classification result, outputting fine scene-classification information for the image to be recognized. The method classifies large-category and small-category scenes in sequence through cascaded convolutional neural networks, avoiding the heavy computation caused by using a single large network to finely classify small-category scenes, and achieves a good balance between computation and accuracy, making deployment on the mobile side feasible.

Description

Technical field

[0001] The present application relates to the technical field of mobile terminals, and more specifically, to an image processing method, device, and mobile terminal.

Background technique

[0002] Existing scene classification methods classify both large and small categories of scenes. Large-category scenes include loosely related scenes such as sky, grass, and food. People, however, tend to pay more attention to small-category scenes: the sky includes clouds, the sun, the atmosphere, etc.; the grass includes pure green grass, yellow-green grass, etc.; and food includes fruits, vegetables, meat, etc. Recognizing the different sub-categories allows for more refined post-processing to improve the display effect of photos.

[0003] However, at present, fine classification of complex scenes through deep learning is mainly performed by a single large network, which requires heavy computation and long processing time, which has caused huge ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V20/10, G06F18/2413
Inventor: 张弓
Owner: GUANGDONG OPPO MOBILE TELECOMM CORP LTD