Scene and target identification method and device based on multi-task learning

A multi-task learning and target recognition technology, applied in character and pattern recognition, instruments, biological neural network models, etc., which addresses the problem that existing scene recognition is too inaccurate to be applied effectively in practice.

Inactive Publication Date: 2017-06-13
Applicant: 珠海习悦信息技术有限公司

AI Technical Summary

Problems solved by technology

However, because a scene is related to multiple layers of information, such as the targets, the background environment, and the spatial layout, scene features within the same category vary strongly, while scene features of different categories may overlap, which poses a great challenge to accurate recognition.
At present, whole-scene recognition based on cutting-edge deep learning technology achieves a top-1 accuracy of only about 50%, which makes it difficult to play an effective role in practical applications.

Method used



Examples


Embodiment 1

[0084] As shown in Figure 1, a method for scene and target recognition based on multi-task learning includes the following steps:

[0085] Step S1: Collect pictures containing different scenes and objects as image sample data;

[0086] Step S2: Manually label the image sample data to obtain the target category label and the scene category label;

[0087] Step S3: Construct a multi-layer convolutional neural network model and perform network initialization;

[0088] Step S4: Using the image sample data and the corresponding target category labels, pre-train the constructed model until convergence to obtain the target recognition model;

[0089] Step S5: Based on multi-task learning, add network branches at a specific layer of the target recognition model and initialize them randomly to obtain a multi-task network;

[0090] Step S6: Retrain the multi-task network by using image sample data and corresponding scene category labels and target category...
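Steps S3 through S6 can be sketched as follows. This is a minimal PyTorch sketch under assumptions of my own: the trunk layers, class counts (`n_targets`, `n_scenes`), and the summed cross-entropy loss are illustrative, since the patent does not fix a concrete architecture or loss here.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Sketch of steps S3-S5: a shared convolutional trunk (pre-trained
    on target labels in step S4) plus a scene branch added at a specific
    layer and randomly initialized (step S5)."""
    def __init__(self, n_targets=20, n_scenes=10):
        super().__init__()
        # shared trunk; in step S4 this would be pre-trained to convergence
        # on the target-category labels only
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.target_head = nn.Linear(32, n_targets)  # original target head
        self.scene_head = nn.Linear(32, n_scenes)    # new branch, random init

    def forward(self, x):
        feat = self.trunk(x)
        return self.target_head(feat), self.scene_head(feat)

def train_step(model, images, target_labels, scene_labels, opt):
    """Step S6 sketch: joint retraining with both label sets; a plain sum
    of the two cross-entropy losses is an assumed weighting."""
    ce = nn.CrossEntropyLoss()
    t_logits, s_logits = model(images)
    loss = ce(t_logits, target_labels) + ce(s_logits, scene_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because both heads share the trunk, the scene branch reuses the features learned during target pre-training, which is the mechanism by which multi-task retraining can lift the accuracy of each single task.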

Embodiment 2

[0093] As shown in Figure 1, a method for scene and target recognition based on multi-task learning includes the following steps:

[0094] Step S1: Collect pictures containing different scenes and objects as image sample data; including the following steps:

[0095] Step S11: an image collection step, using cameras and network resources to collect image data of different scenes and objects;

[0096] Step S12: an image screening step, performing a secondary screening of the image data, removing images with unsatisfactory picture quality or content, and using the remaining images as image sample data. At least 3,000 images remain; preferably, more than 20,000 remain.

[0097] Step S2: Manually label the image sample data to obtain the target category label and the scene category label; including the following steps:

[0098] Step S21: a target-category labeling step: mark N_ob target category labels for each image, and sto...
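The screening and labeling steps above can be sketched as follows. The quality criterion (`min_side`), the record layout, and the field names are illustrative assumptions; the patent only states that low-quality images are removed and that each image receives N_ob target labels plus a scene label.

```python
from dataclasses import dataclass

@dataclass
class LabeledSample:
    """Hypothetical record for one screened image (steps S12 and S21):
    up to N_ob target-category labels and one scene-category label."""
    path: str
    target_labels: list  # target categories visible in the image
    scene_label: str

def screen_images(candidates, min_side=64):
    """Step S12 sketch: keep only images whose shorter side meets an
    assumed resolution bar; each candidate is {"path": ..., "size": (w, h)}."""
    return [c for c in candidates if min(c["size"]) >= min_side]
```

A screened candidate would then be wrapped as `LabeledSample(path, ["dog", "person"], "park")` once annotators have assigned its labels.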



Abstract

The invention relates to a scene and target identification method and device based on multi-task learning. The method comprises the following steps: pictures containing different scenes and targets are collected as image sample data; the image sample data is manually labeled, and target class labels and scene class labels are obtained; a multi-layer convolutional neural network model is built, and network initialization is conducted; the image sample data and the corresponding target class labels are used to pre-train the built model until convergence, and a target identification model is obtained; based on multi-task learning, network branches are added at a specific layer of the target identification model and randomly initialized, and a multi-task network is obtained; the image sample data and the corresponding scene class labels and target class labels are used to retrain the multi-task network until convergence, and a multi-task learning model is obtained; new image data is input into the multi-task learning model, and classification results of scene and target identification for the images are obtained. Accordingly, the identification precision of each single task is improved.

Description

Technical Field

[0001] The invention relates to the fields of computer vision, image recognition, and deep learning, and in particular to a method and device for scene and target recognition based on multi-task learning.

Background Technique

[0002] With the rise of deep learning, more and more technologies use deep learning to perform image recognition on pictures or video streams. Compared with traditional methods, deep learning avoids the complexity of manual parameter tuning and manual feature selection; by building a deep network model that performs multi-layer analysis and abstract feature extraction on the data, it offers high accuracy, high reliability, and high adaptability. Common image recognition applications cover action recognition, face recognition, object recognition, scene recognition, etc. Among them, object recognition and scene recognition, as the basis of image retrieval, image classification, scene understanding, and environment perception,...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06K9/66; G06N3/04
CPC: G06V30/194; G06N3/045; G06F18/24; G06F18/214
Inventors: 王志鹏, 周文明, 马佳丽
Owner: 珠海习悦信息技术有限公司