
Deep convolutional neural network model improvement method, system and device and medium

A neural network model and deep convolution technology, applied in the field of deep convolutional neural network models, which addresses the problems of growing model parameter scale, increased computing and storage resource requirements, difficulty of deployment, and poor application effect, so as to reduce the amount of model parameters and computation, improve inference speed, and reduce the parameter count.

Pending Publication Date: 2021-10-01
CHENGDU SHULIANYUNSUAN TECH CORP

AI Technical Summary

Problems solved by technology

With the continuous deepening of research on detection and recognition models for intelligent targets (including image targets, sound targets, etc.), the detection and recognition performance of deep convolutional neural network models on such targets keeps improving, but the corresponding model parameter scale and the computing and storage resources required for training and optimization have also increased significantly.
[0003] However, the terminal devices (such as smartphones, smart cars, smart voice recorders, etc.) on which this type of model is ultimately deployed usually have strict constraints on hardware power consumption and physical size, so a large deep convolutional neural network model is difficult to put to practical use on front-end equipment.
This severely restricts the development of intelligent target detection and recognition applications; most such applications remain at the laboratory verification stage and are still some distance from mature terminal deployment.
[0004] The applicant found that, to deploy an existing deep convolutional neural network model on a terminal device, the model is typically redesigned and improved, for example by reducing its parameters and amount of computation. On the one hand, the effect of such lightweight improvement is mediocre, and the improvements reduce the accuracy of the deep convolutional neural network model, so it performs poorly on the terminal. On the other hand, when a high-accuracy deep convolutional neural network model is applied to the terminal device directly, the constraints of the terminal device make it difficult to apply.



Examples


Embodiment 1

[0055] Embodiment 1 provides a method for improving a deep convolutional neural network model, including steps:

[0056] Lightweighting the first deep convolutional neural network model to obtain a second deep convolutional neural network model;

[0057] Performing pruning processing on the second deep convolutional neural network model to obtain a third deep convolutional neural network model;

[0058] Using the third deep convolutional neural network model as a first student model and using the second deep convolutional neural network model as a first teacher model, performing knowledge distillation to obtain a fourth deep convolutional neural network model;

[0059] Reducing the parameter precision of the fourth deep convolutional neural network model.
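The extract above names the four stages but does not fix the concrete technique used in each one. The following is a minimal PyTorch sketch, assuming depthwise-separable convolution replacement for the lightweighting step, L1-magnitude pruning for the pruning step, a standard soft-target loss for knowledge distillation, and float32-to-float16 casting for the precision-reduction step; all function names are illustrative and are not taken from the patent.

```python
# Minimal sketch of the four-stage compression pipeline of Embodiment 1 (assumptions noted above).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune


def lightweight(model: nn.Module) -> nn.Module:
    """Step 1: replace each standard Conv2d with a depthwise + pointwise pair."""
    model = copy.deepcopy(model)
    for name, module in model.named_children():
        if isinstance(module, nn.Conv2d) and module.kernel_size != (1, 1):
            dw = nn.Conv2d(module.in_channels, module.in_channels,
                           module.kernel_size, module.stride, module.padding,
                           groups=module.in_channels, bias=False)
            pw = nn.Conv2d(module.in_channels, module.out_channels, 1,
                           bias=module.bias is not None)
            setattr(model, name, nn.Sequential(dw, pw))
        else:
            setattr(model, name, lightweight(module))
    return model


def prune_model(model: nn.Module, amount: float = 0.3) -> nn.Module:
    """Step 2: L1-magnitude pruning of convolution weights (sparsity made permanent)."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")
    return model


def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    """Step 3: knowledge distillation loss (teacher = second model, student = third model)."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard


def reduce_precision(model: nn.Module) -> nn.Module:
    """Step 4: reduce parameter precision (here: float32 -> float16)."""
    return model.half()
```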

[0060] The method flow of Embodiment 1 is shown in Figure 1: lightweight improvement and compression of the convolution-based deep neural network model are carried out. The overall improvement ...
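As a hypothetical end-to-end illustration of the flow described for Figure 1, the snippet below chains the helpers sketched above on a toy network; ToyNet, the random input and the label tensor are placeholders and do not come from the patent.

```python
# Illustrative end-to-end use of lightweight / prune_model / distillation_loss /
# reduce_precision from the sketch above; ToyNet is a stand-in for the first model.
import copy
import torch
import torch.nn as nn


class ToyNet(nn.Module):
    """Stand-in for the first deep convolutional neural network model."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        x = torch.relu(self.conv(x)).mean(dim=(2, 3))  # global average pooling
        return self.head(x)


model_1 = ToyNet()                                # first model
model_2 = lightweight(model_1)                    # second model (teacher)
model_3 = prune_model(copy.deepcopy(model_2))     # third model (student)

x, y = torch.randn(2, 3, 32, 32), torch.tensor([0, 1])
loss = distillation_loss(model_3(x), model_2(x).detach(), y)  # distillation step
model_4 = reduce_precision(model_3)               # fourth model, for deployment
```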

Embodiment 2

[0077] On the basis of Embodiment 1, when the deep convolutional neural network model is a DenseNet network model, that is, a DenseNet network model for a target classification and recognition task, the DenseNet network model is improved with the lightweight method of Embodiment 1. Experimental verification is carried out on the miniImageNet image dataset (100 categories, 60,000 color pictures, size 224x224), and the experimental results are shown in Table 1.

[0078] Table 1 DenseNet network model lightweight improvement and compression implementation results


[0081] In Table 1, Model denotes the model and Metric denotes the evaluation metric, i.e. the indicator being monitored. Table 1 compares three models, including the DenseNet-Baseline model, which is the DenseNet reference model, and ...
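As a rough illustration of how a parameter figure for the DenseNet-Baseline row of such a table can be obtained, the snippet below counts the parameters of torchvision's densenet121 configured for 100 classes (matching miniImageNet). The exact DenseNet variant, training setup and resulting numbers in the patent are not given in this extract, so this is a stand-in, not a reproduction of Table 1.

```python
# Count learnable parameters for a DenseNet baseline (stand-in for the Table 1 entry).
import torch
from torchvision.models import densenet121


def count_params(model: torch.nn.Module) -> int:
    """Total number of learnable parameters."""
    return sum(p.numel() for p in model.parameters())


baseline = densenet121(num_classes=100)  # 100 classes, as in miniImageNet
print(f"DenseNet-Baseline parameters: {count_params(baseline) / 1e6:.2f} M")

# The improved model would be obtained with the pipeline sketched in Embodiment 1,
# e.g. reduce_precision(prune_model(lightweight(baseline))), and counted the same way.
```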

Embodiment 3

[0094] On the basis of Embodiment 1, when the deep convolutional neural network model is the YOLO-V4 network model, the lightweight method proposed by the present invention is used to make the YOLO-V4 network lightweight. Experimental verification is carried out on the Pascal VOC2007-2012 image target detection dataset (20 categories, 21,503 color pictures). The experimental results are shown in Table 4.

[0095] Table 4


[0097] Among them, Top-1 mAP is the mAP index, representing the accuracy of the model; YOLOv4 (Bb-CSPDarknet53) is the original deep neural network, and YOLOv4-DSC (Bb-CSPDarknet53) is the artificially designed improved network. In the YOLOv4-DSC (Bb-CSPDarknet53) column, the left sub-column gives the parameters of the YOLOv4-DSC (Bb-CSPDarknet53) model, and the right sub-column gives the comparison of those parameters against the YOLOv4 (Bb-CSPDarknet53) model. From Table 4 it can be seen that the amount...
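The right sub-column described above is a relative comparison against the original YOLOv4 (Bb-CSPDarknet53) model. A minimal helper for producing such a comparison entry might look as follows; the example values in the comments are placeholders, not figures from Table 4.

```python
def relative_change(original: float, improved: float) -> str:
    """Signed percentage change of the improved model relative to the original."""
    return f"{(improved - original) / original * 100:+.1f}%"

# Placeholder usage (not values from the patent):
#   relative_change(64.0e6, 12.0e6)  ->  '-81.2%'   (parameter count reduction)
#   relative_change(0.83, 0.81)      ->  '-2.4%'    (Top-1 mAP change)
```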



Abstract

The invention discloses a deep convolutional neural network model improvement method, system, device and medium, and relates to the field of deep convolutional neural network models. The method comprises the steps of: carrying out lightweight processing on a first deep convolutional neural network model to obtain a second deep convolutional neural network model; performing pruning processing on the second deep convolutional neural network model to obtain a third deep convolutional neural network model; taking the third deep convolutional neural network model as a first student model and the second deep convolutional neural network model as a first teacher model, and performing knowledge distillation to obtain a fourth deep convolutional neural network model; and reducing the parameter precision of the fourth deep convolutional neural network model. Through this technical scheme, a deep convolutional neural network model that meets the application constraints of terminal equipment while maintaining model accuracy can be designed, and the lightweight improvement effect on the model is good.

Description

Technical field

[0001] The present invention relates to the field of deep convolutional neural network models, in particular to a method, system, device and medium for improving deep convolutional neural network models.

Background technique

[0002] In recent years, with the continuous development of deep learning technology, a large number of successful cases have emerged in the fields of finance, medical care, education, and industry. With the deepening of research on detection and recognition models for intelligent targets (including image targets, sound targets, etc.), the detection and recognition performance of deep convolutional neural network models on such targets keeps improving, but the corresponding model parameter scale and the computing and storage resources required for training and optimization have also increased significantly.

[0003] However, the terminal devices (such as smartphones, smart cars, smart voice recorders, etc.) on which this type of mo...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04, G06N3/08, G06N5/04
CPC: G06N3/082, G06N5/04, G06N3/045
Inventor: Not disclosed
Owner: CHENGDU SHULIANYUNSUAN TECH CORP