A method and device for scene and target recognition based on multi-task learning

A multi-task learning and target recognition technology, applied in character and pattern recognition, instruments, biological neural network models, etc., can solve problems such as difficult and effective functions

Status: Inactive
Publication Date: 2020-08-21
珠海习悦信息技术有限公司
Cites: 2
Cited by: 0

AI Technical Summary

Problems solved by technology

However, because a scene is determined by multiple layers of information, such as the targets it contains, the background environment, and the spatial layout, scene features within the same category vary strongly, while scene features of different categories may overlap one another. This poses a great challenge to accurate recognition.
At present, whole-scene recognition based on cutting-edge deep learning technology reaches a top-1 accuracy of only about 50%, which makes it difficult for it to play an effective role in practical applications.



Examples


Embodiment 1

[0084] As shown in Figure 1, a method for scene and target recognition based on multi-task learning includes the following steps:

[0085] Step S1: Collect pictures containing different scenes and objects as image sample data;

[0086] Step S2: Manually label the image sample data to obtain the target category label and the scene category label;

[0087] Step S3: Construct a multi-layer convolutional neural network model and perform network initialization;

[0088] Step S4: Using the image sample data and the corresponding target category labels, pre-train the constructed model until convergence to obtain the target recognition model;

[0089] Step S5: Based on multi-task learning technology, add a network branch to a specific layer of the target recognition model and initialize it randomly to obtain a multi-task network (a code sketch of this branch addition follows the step list);

[0090] Step S6: Retrain the multi-task network by using image sample data and corresponding scene category labels and target category...
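
Steps S3 to S6 above amount to attaching a second, randomly initialized classification branch to a backbone that has already been pre-trained for target recognition, and then retraining both branches jointly. The following is a minimal sketch of that branch addition, assuming a PyTorch ResNet-18 backbone with the branch taken from the final pooled features; the framework, the backbone, the branch point, and the class counts are illustrative assumptions rather than details given in the patent.

    # Minimal sketch of steps S3-S5: a shared multi-layer CNN with a
    # target-recognition head (pre-trained in S4) and a randomly initialized
    # scene branch added in S5. PyTorch, ResNet-18 and the branch point are
    # illustrative assumptions, not details taken from the patent text.
    import torch
    import torch.nn as nn
    from torchvision import models

    class MultiTaskNet(nn.Module):
        def __init__(self, n_target_classes, n_scene_classes):
            super().__init__()
            backbone = models.resnet18(weights=None)          # S3: multi-layer CNN
            # Shared feature extractor: everything except the final fc layer.
            self.features = nn.Sequential(*list(backbone.children())[:-1])
            self.target_head = nn.Linear(512, n_target_classes)  # head trained in S4
            self.scene_head = nn.Linear(512, n_scene_classes)    # S5: added branch
            # S5: the new branch starts from random initialization.
            nn.init.normal_(self.scene_head.weight, std=0.01)
            nn.init.zeros_(self.scene_head.bias)

        def forward(self, x):
            feat = self.features(x).flatten(1)                # shared representation
            return self.target_head(feat), self.scene_head(feat)

    # Quick shape check with a dummy batch.
    model = MultiTaskNet(n_target_classes=80, n_scene_classes=30)
    target_logits, scene_logits = model(torch.randn(2, 3, 224, 224))
    print(target_logits.shape, scene_logits.shape)

In step S6 the shared layers and both heads would then be retrained together on images carrying both kinds of labels.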

Embodiment 2

[0093] As shown in Figure 1, a method for scene and target recognition based on multi-task learning includes the following steps:

[0094] Step S1: Collect pictures containing different scenes and objects as image sample data. This step includes the following sub-steps:

[0095] Step S11: an image collection step, using cameras and network resources to collect image data of different scenes and objects;

[0096] Step S12: an image screening step, in which the collected image data is screened a second time, images with unsatisfactory picture quality or picture content are removed, and the remaining images are used as the image sample data. At least 3,000 images are retained; preferably, more than 20,000 images are retained.

[0097] Step S2: Manually label the image sample data to obtain the target category labels and the scene category labels (a sketch of one possible annotation layout follows these sub-steps). This step includes the following sub-steps:

[0098] Step S21: mark the target category, mark N_ob target category labels for each image, and sto...
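
One convenient way to hold the screened and manually labeled samples produced in steps S11 through S21 is a per-image record containing the image file, its N_ob target-category labels, and a single scene-category label. The sketch below assumes a JSON annotation file read by a PyTorch-style Dataset; the file layout and the field names "file", "targets" and "scene" are hypothetical, chosen only for illustration.

    # Sketch of loading the labeled sample data from steps S12 and S21: each
    # record pairs a screened image with its multi-label target annotations
    # and a single scene label. The JSON layout and field names are
    # assumptions for illustration, not part of the patent.
    import json
    from pathlib import Path

    import torch
    from PIL import Image
    from torch.utils.data import Dataset
    from torchvision import transforms

    class SceneTargetDataset(Dataset):
        def __init__(self, annotation_file, image_dir, n_target_classes):
            self.records = json.loads(Path(annotation_file).read_text())
            self.image_dir = Path(image_dir)
            self.n_target_classes = n_target_classes
            self.tf = transforms.Compose([
                transforms.Resize((224, 224)),
                transforms.ToTensor(),
            ])

        def __len__(self):
            return len(self.records)

        def __getitem__(self, idx):
            rec = self.records[idx]
            img = self.tf(Image.open(self.image_dir / rec["file"]).convert("RGB"))
            # Multi-hot vector covering the N_ob target labels of this image.
            targets = torch.zeros(self.n_target_classes)
            targets[rec["targets"]] = 1.0
            scene = torch.tensor(rec["scene"])                # scene-category index
            return img, targets, scene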



Abstract

The invention relates to a method and device for scene and target recognition based on multi-task learning. The method includes: collecting pictures containing different scenes and targets as image sample data; manually labeling the image sample data to obtain target category labels and scene category labels; building a multi-layer convolutional neural network model and initializing the network; pre-training the constructed model with the image sample data and the corresponding target category labels until convergence to obtain a target recognition model; based on multi-task learning technology, adding a network branch to a specific layer of the target recognition model and initializing it randomly to obtain a multi-task network; retraining the multi-task network with the image sample data and the corresponding scene category labels and target category labels until convergence to obtain a multi-task learning model; and inputting new image data into the multi-task learning model to obtain the classification results of image scene and target recognition. The method improves recognition accuracy compared with single-task recognition.
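
Since the retraining stage uses the same images for both tasks, one natural reading (an assumption on our part, not the patent's verbatim procedure) is to minimize a weighted sum of a target-classification loss and a scene-classification loss over the shared network. A minimal sketch of one such training step, with illustrative loss choices and balancing weight, follows:

    # Sketch of one joint retraining step for the multi-task network: the two
    # task losses are combined with a balancing weight. The loss functions,
    # the optimizer interface and the weight `lam` are illustrative assumptions.
    import torch.nn as nn

    target_loss_fn = nn.BCEWithLogitsLoss()    # multi-label target recognition
    scene_loss_fn = nn.CrossEntropyLoss()      # single-label scene recognition
    lam = 0.5                                  # task-balancing weight (assumed)

    def train_step(model, optimizer, images, target_labels, scene_labels):
        optimizer.zero_grad()
        target_logits, scene_logits = model(images)
        loss = (target_loss_fn(target_logits, target_labels)
                + lam * scene_loss_fn(scene_logits, scene_labels))
        loss.backward()
        optimizer.step()
        return loss.item()

Repeating such steps over the image sample data until the combined loss converges yields the multi-task learning model, which is then applied to new image data to obtain both scene and target classification results.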

Description

Technical Field

[0001] The invention relates to the fields of computer vision, image recognition and deep learning, and in particular to a method and device for scene and target recognition based on multi-task learning.

Background

[0002] With the rise of deep learning, more and more technologies use deep learning to perform image recognition on pictures or video streams. Compared with traditional methods, deep learning avoids the complexity of manual parameter tuning and manual feature selection: by building a deep network model, it performs multi-layer analysis and abstract feature extraction on data, offering high accuracy, high reliability, and strong adaptability. Common image recognition applications cover action recognition, face recognition, object recognition, scene recognition, etc. Among them, object recognition and scene recognition, as the basis of image retrieval, image classification, scene understanding, and environment perception,...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62, G06K9/66, G06N3/04
CPC: G06V30/194, G06N3/045, G06F18/24, G06F18/214
Inventor: 王志鹏, 周文明, 马佳丽
Owner: 珠海习悦信息技术有限公司