
Model compression method based on layer number sampling and deep neural network model

A neural network model compression technology, applied in the fields of biological neural network models, neural architecture, and neural learning methods. It addresses problems such as insufficient computing speed, poor suitability for smart-home application scenarios, and the resulting limits on the popularization of neural network technology, achieving the effects of optimized configuration and improved operating speed.

Publication Date: 2021-11-26 (Inactive)

AI Technical Summary

Problems solved by technology

However, online serving imposes strict latency requirements, and a large model such as BERT demands enormous computing resources; limited by its computing speed, it cannot fully meet the needs of the application.
These costs are especially unfavorable in smart-home application scenarios.
For reasons of economy and space utilization, the terminal devices users deploy in smart homes are usually small devices with limited computing power, such as smart speakers, smart gateways, and small home hosts. For such devices, the cost of neural network training or inference is unaffordable, which limits the further popularization and application of neural network technology in the lives of ordinary people.



Examples


Embodiment 1

[0035] Embodiment 1 of the present invention describes the model compression method of the first aspect of the invention in a training scenario. As shown in Figure 2, the method applies to a neural network composed of several cascaded sub-networks (Conformer layers) of identical structure, and includes: Step S1, generate a random positive integer uniformly distributed in a preset interval, where the extreme value of the interval is not greater than the total number of sub-networks; Step S2, select that random positive integer number of sub-networks and perform one training iteration. Steps S1 to S2 are repeated for each iteration until the model converges.

[0036] The several cascaded sub-networks in this embodiment share the same set of training parameters; that is, the neural network adopts a parameter-sharing strategy. In some other practical applications, the st...
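A minimal sketch of the training procedure described above, assuming the k sampled sub-networks are the first k in cascade order and using a standard Transformer encoder layer as a stand-in for the Conformer sub-network; the names SharedDepthModel and train_step are illustrative, not from the patent.

```python
# Sketch of layer-number sampling during training. Assumptions (not
# specified verbatim in the patent): the k sampled sub-networks are the
# first k in cascade order, and nn.TransformerEncoderLayer stands in for
# the Conformer sub-network. Parameter sharing means one set of weights
# is reused at every depth.
import random
import torch.nn as nn

class SharedDepthModel(nn.Module):
    def __init__(self, d_model=256, n_heads=4, max_layers=12):
        super().__init__()
        # One shared sub-network applied repeatedly (parameter sharing).
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True)
        self.max_layers = max_layers

    def forward(self, x, num_layers=None):
        k = num_layers if num_layers is not None else self.max_layers
        for _ in range(k):  # cascade the shared sub-network k times
            x = self.shared_layer(x)
        return x

def train_step(model, batch, optimizer, loss_fn, min_k=1):
    # Step S1: draw k uniformly from the preset interval; the upper
    # extreme does not exceed the total number of sub-networks.
    k = random.randint(min_k, model.max_layers)
    # Step S2: run one iteration through only k sub-networks.
    inputs, targets = batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs, num_layers=k), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because all depths share one set of weights, training at random depths prepares the same parameters to produce useful representations at any truncation point, which is what the inference-time pruning of Embodiment 2 exploits.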

Embodiment 2

[0043] As shown in Figure 3, Embodiment 2 elaborates, for inference, the layer-number-sampling model compression method provided by the second aspect of the present invention. It includes: Step 1, set a value, denoted n, where n is a positive integer less than the total number of sub-networks; Step 2, keep only the first n sub-networks of the neural network. Setting the value in Step 1 includes evaluating the performance of the neural network model and determining the ideal value of n.

[0044] The neural network of this embodiment is not sensitive to pruning of its layer count. In self-supervised learning tasks based on Transformer architectures, not all layers are used to encode context and capture high-level semantic information; the last few sub-networks mainly transform the hidden representations between layers into a space in which the original features are more predictable. This is an additional layer-coupling phenomenon. The ca...
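A minimal sketch of the inference-time method, reusing the SharedDepthModel interface from the training sketch and assuming a classification task so that accuracy can serve as the performance metric; evaluate, choose_depth, and the max_drop tolerance are illustrative assumptions, not terms from the patent.

```python
# Sketch of Embodiment 2: evaluate the model with only the first n
# sub-networks for each candidate n, and keep the smallest n whose
# accuracy stays within a tolerated drop from the full-depth model.
import torch

@torch.no_grad()
def evaluate(model, dataloader, n):
    # For this sketch, assumes model(...) returns class logits.
    correct = total = 0
    for inputs, targets in dataloader:
        preds = model(inputs, num_layers=n).argmax(dim=-1)
        correct += (preds == targets).sum().item()
        total += targets.numel()
    return correct / total

def choose_depth(model, val_loader, max_drop=0.01):
    full_acc = evaluate(model, val_loader, model.max_layers)
    # Step 1: determine the ideal value of n by sweeping candidate depths.
    for n in range(1, model.max_layers):
        if full_acc - evaluate(model, val_loader, n) <= max_drop:
            return n  # Step 2: keep only the first n sub-networks
    return model.max_layers
```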



Abstract

The invention provides, from two aspects, two model compression methods based on layer-number sampling. Each method applies to a neural network composed of several cascaded sub-networks of completely identical structure. The first method comprises: generating a random positive integer uniformly distributed in a preset interval, the extreme value of the interval being not greater than the total number of sub-networks; and selecting that random number of sub-networks to carry out one iteration. The second method comprises: evaluating the performance of the neural network model, determining an ideal value of n, and retaining the first n sub-networks of the neural network. The methods increase computing speed during training and inference, save computing resources, and preserve model performance. The deep neural network model provided by the invention gains the corresponding advantages by adopting these compression methods, which helps it to be deployed in a wider range of application scenarios.

Description

Technical Field

[0001] The invention belongs to the technical field of neural networks, and in particular relates to a model compression method based on layer-number sampling and a corresponding deep neural network model.

Background Technique

[0002] With the development of artificial intelligence, and of deep learning technology in particular, intelligent tasks based on neural networks have entered every aspect of ordinary people's daily lives. An artificial neural network (ANN), or simply neural network (NN), is a machine learning model comprising multiple hidden layers. Neural network technology has promoted the development of all walks of life and improved the convenience of people's life and production.

[0003] The application of a neural network generally has two parts: training and inference. Training usually refers to the process of continuously iterating and optimizing the parameters of the neural network according to the given data ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F30/27; G06N3/04; G06N3/08; G06F111/08
CPC: G06F30/27; G06N3/08; G06F2111/08; G06N3/045
Inventors: 黄羿衡, 陈桂兴
Owner: 江苏苏云信息科技有限公司