Neural network generation method and device, and computer readable storage medium

A neural network and neural network model technology, applied in the field of neural network generation and computer-readable storage media, which addresses the problem of large computing-power consumption and achieves the effects of reducing the number of training runs, reducing costs, and reducing computing-power requirements.

Pending Publication Date: 2021-12-24
ZTE CORP


Problems solved by technology

It can be clearly seen that both existing schemes consume a large amount of computing power.



Examples


Embodiment 1

[0107] As shown in Figure 6, the search space of the NAS algorithm is defined as the micro-unit space, each micro-unit has a single-input single-output structure, and the macro-unit topology is sequential. A network model composed of multiple micro-units is used to solve an image classification problem: multi-channel image data is the input and the actual classification of the image content is the output. The neural network generation method includes the following steps:

[0108] Step S610: Initialize the search space of the micro-unit network structure and apply the micro-search NAS algorithm to obtain the optimal micro-unit;

[0109] Step S620: Set the number of micro-units in the sequential topology and form a first network a-a-a-a-a-a-a-a of a predefined size;

[0110] Step S630: Train the first network using the picture data of the training dataset and the classification information corresponding to each image to obtain the teacher network a ...
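Steps S620-S630 stack copies of the searched micro-unit into a sequential chain. A minimal sketch of that macro-topology, with a toy function standing in for the trainable micro-unit (the function and helper names here are illustrative, not from the patent):

```python
from typing import Callable

def build_sequential_network(micro_unit: Callable[[float], float],
                             num_units: int) -> Callable[[float], float]:
    """Stack num_units copies of the optimal micro-unit in a
    single-input single-output sequential (chain) topology,
    as described for the first network in step S620."""
    def network(x: float) -> float:
        for _ in range(num_units):
            x = micro_unit(x)  # output of one unit feeds the next
        return x
    return network

# Toy stand-in for the searched cell; the real micro-unit would be
# a trainable sub-network found by the micro-search NAS algorithm.
micro = lambda x: 2 * x + 1

# First network "a-a-a-a-a-a-a-a": eight sequential micro-units.
first_network = build_sequential_network(micro, 8)
print(first_network(0))  # applies 2x+1 eight times: 255
```

The real first network would then be trained end to end (step S630) to yield the teacher network.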

Embodiment 2

[0115] As shown in Figure 7, the search space of the NAS algorithm is defined as the micro-unit space, and each micro-unit has a two-input single-output structure. A network model composed of multiple micro-units is used to solve an image classification problem: multi-channel image data is the input and the actual classification of the image content is the output. The neural network generation method includes the following steps:

[0116] Step S710: Initialize the search space of the micro-unit network structure, and apply the micro-search NAS algorithm to obtain the optimal micro-unit structure;

[0118] Step S720: Set the number of micro-units in the complex topology network and form a first network a-a-a-a of a predefined size;

[0119] Step S730: Apply the picture data in the training data set and the classification information corresponding to each image to train the model, obtaining the teacher network a1-a3-a5-a7 ...
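In this embodiment each micro-unit takes two inputs and produces one output, so the macro-topology is a small DAG rather than a chain. The exact wiring is not spelled out in the text above; the sketch below assumes, purely for illustration, that each unit consumes the two most recent outputs:

```python
from typing import Callable

def build_complex_network(unit: Callable[[float, float], float],
                          num_units: int) -> Callable[[float], float]:
    """Wire two-input single-output micro-units so that each unit
    consumes the two most recent activations (hypothetical wiring;
    the embodiment only states a 'complex topology' of such units)."""
    def network(x: float) -> float:
        prev2, prev1 = x, x  # seed both inputs with the network input
        out = x
        for _ in range(num_units):
            out = unit(prev2, prev1)
            prev2, prev1 = prev1, out  # slide the two-input window
        return out
    return network

# Toy stand-in for the searched two-input cell.
unit = lambda a, b: a + b

# First network "a-a-a-a": four two-input micro-units.
net = build_complex_network(unit, 4)
print(net(1))  # 1,1 -> 2 -> 3 -> 5 -> 8
```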

Embodiment 3

[0140] As shown in Figure 8, after a full set of neural network models has been obtained, the deployment constraint for the terminal is that the online inference latency must not exceed 500 ms. The neural network model testing process includes the following steps:

[0141] Step S810: Evaluate the prediction accuracy of all the neural network models with different numbers of micro-units on the test data set;

[0142] Step S820: Run all the neural network models with different numbers of micro-units on the target terminal to perform the same inference task, recording the latency of each model when the terminal performs that task;

[0143] Step S830: Based on the prediction accuracy and the latency, select the model with the highest inference accuracy under the preset inference-latency limit and deploy it on the terminal.
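The selection rule in step S830 is a constrained maximization: discard candidates over the latency budget, then take the most accurate of the rest. A minimal sketch, with illustrative field names and made-up numbers for the candidate models:

```python
def select_model(models, latency_limit_ms=500.0):
    """Step S830 sketch: keep the candidate models whose measured
    inference latency is within the budget, then return the one
    with the highest test-set accuracy. Field names are illustrative."""
    feasible = [m for m in models if m["latency_ms"] <= latency_limit_ms]
    if not feasible:
        raise ValueError("no model satisfies the latency constraint")
    return max(feasible, key=lambda m: m["accuracy"])

# Hypothetical (accuracy, latency) measurements from steps S810-S820.
candidates = [
    {"name": "5-unit",  "accuracy": 0.91, "latency_ms": 320.0},
    {"name": "8-unit",  "accuracy": 0.94, "latency_ms": 480.0},
    {"name": "12-unit", "accuracy": 0.95, "latency_ms": 750.0},
]
best = select_model(candidates)
print(best["name"])  # the 12-unit model is most accurate but too slow
```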

[0144] In this embodiment, the terminal runs a classification network composed of more than 5 micro-units, and t...



Abstract

The invention discloses a neural network generation method and device, and a computer-readable storage medium. The neural network generation method comprises: obtaining an optimal micro-unit and constructing a first network from it, so that the first network is powerful enough to meet the demands of the actual application; training the first network with a preset training data set to obtain a second network; establishing third networks, using the second network to obtain a second training data set, and training each micro-unit of all the third networks with the second training data set; and constructing the neural network model from the micro-units of the trained third networks, so that the functions of the micro-units of a third network correspond to the functions of the second network. Compared with the traditional method of training the micro-units one by one, the number of training runs can be reduced, the computing-power requirement can be effectively reduced, and the cost of generating the neural network model is therefore reduced.
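The abstract's key idea is that the trained second network supplies the training data for the third networks' micro-units, rather than training each unit from scratch. One plausible reading, sketched with toy stage functions standing in for trained sub-networks (all names here are illustrative, not from the patent text):

```python
def build_stage_datasets(stages, inputs):
    """Forward each sample through the trained second network,
    represented here as an ordered list of stage functions, and
    record the (input, output) pair seen at every stage. Each
    per-stage pair list can then serve as the training data for
    the matching micro-unit of a third network."""
    datasets = [[] for _ in stages]
    for x in inputs:
        for i, stage in enumerate(stages):
            y = stage(x)
            datasets[i].append((x, y))  # supervision for micro-unit i
            x = y                       # feed the next stage
    return datasets

# Toy two-stage "second network" and a tiny input set.
stages = [lambda v: v + 1, lambda v: v * 2]
ds = build_stage_datasets(stages, [0, 3])
print(ds)  # [[(0, 1), (3, 4)], [(1, 2), (4, 8)]]
```

All micro-units can then be trained in parallel on their own pair lists, which is one way the described method could cut the number of training runs versus training units one by one.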

Description

Technical field

[0001] The present invention relates to the field of computer technology, and in particular to a neural network generation method, device and computer-readable storage medium.

Background

[0002] In recent years, deep learning technology has developed rapidly with the support of greater computing power and has achieved great success in fields such as computer vision, speech recognition, and natural language processing. Researchers have found that, among the many elements that make up a deep learning model, structural differences in the artificial neural network have a huge impact on the final performance of the model. Generally speaking, designing a suitable neural network for a specific problem requires experienced algorithm engineers to spend a great deal of time and effort repeatedly adjusting and verifying the model; the adjustment is inefficient and the result is hard to guarantee. Therefore, neural network architecture search ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06N3/08; G06N3/04; G06K9/62
CPC: G06N3/08; G06N3/045; G06F18/214; G06N3/04; G06V10/82; G06N3/082; G06N3/084; G06N3/044
Inventor: 裘瑞涛, 杨玺坤, 骆庆开, 韩炳涛, 王永成, 屠要峰
Owner ZTE CORP