
109 results about "Quantized neural networks" patented technology

Quantized-CNN is a novel convolutional neural network (CNN) framework that performs computation acceleration and model compression simultaneously in the test phase.

Convolutional neural network quantification method and device, computer and storage medium

Status: Inactive · Publication: CN110363281A · Effects: acceleration effect; takes the compression effect into account · Technical fields: Neural architectures; Neural learning methods · Concepts: Computation complexity; Quantized neural networks
The invention provides a convolutional neural network quantization method, which comprises the steps of: training a full-precision model of the convolutional neural network to be quantized, and calculating the standard deviation of the weight and response distributions of each layer of the full-precision model; estimating scale factors for the parameters and features of the full-precision model from each layer's standard deviation and from hyper-parameters; for the convolutional neural network to be optimized, establishing a quantization module containing scale-factor-based forward calculation and backward gradient propagation functions to obtain a corresponding quantization network; carrying out fine-tuning training on the quantization network and determining the optimal scale factor; and retraining the quantization network generated with the optimal scale factor to obtain the final quantized neural network model. The invention further provides a convolutional neural network quantization device, a computer and a storage medium. The invention alleviates the problems of complex implementation and high computational complexity found in existing model quantization methods.
Owner:SHANGHAI JIAO TONG UNIV
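
To make the mechanism concrete, below is a minimal sketch of the kind of scale-factor quantization the abstract describes, assuming symmetric fake quantization and a straight-through estimator for the backward gradient propagation; the names ScaledQuantizer and estimate_scale and the hyper-parameter k are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn

class ScaledQuantizer(torch.autograd.Function):
    """Forward: scale, round, clamp. Backward: straight-through gradient."""

    @staticmethod
    def forward(ctx, x, scale, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
        return q * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient through unchanged.
        return grad_output, None, None


def estimate_scale(tensor, num_bits=8, k=3.0):
    """Estimate a per-layer scale factor from the standard deviation of the
    layer's weights or responses; k plays the role of the hyper-parameter."""
    std = tensor.detach().std()
    qmax = 2 ** (num_bits - 1) - 1
    return (k * std) / qmax


class QuantConv2d(nn.Conv2d):
    """Convolution whose weights are fake-quantized with a per-layer scale."""

    def forward(self, x):
        scale = estimate_scale(self.weight)
        w_q = ScaledQuantizer.apply(self.weight, scale)
        return self._conv_forward(x, w_q, self.bias)
```

Fine-tuning such a network then amounts to ordinary training with QuantConv2d layers in place of nn.Conv2d, while the scale factors are adjusted.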

Intelligent NIPS (Network Intrusion Prevention System) framework based on mobile agent (MA) and learning vector quantization (LVQ) neural network

The invention discloses an intelligent NIPS (Network Intrusion Prevention System) framework based on a mobile agent (MA) and a learning vector quantization (LVQ) neural network. The NIPS framework comprises a data preprocessing unit, a classifier construction unit, an expert system unit and a knowledge base. The data preprocessing unit collects network data streams and selects input samples and test samples for the neural network from the collected streams; the classifier construction unit uses the input and learning samples to build an MA-LVQ (Mobile Agent-Learning Vector Quantization) neural network classifier and performs classification tests to form the knowledge base; the expert system unit interacts with the knowledge base according to known security policies, comparing the actions presented by the data streams with the action descriptions in the knowledge base in order to classify them and determine the output result; and the knowledge base contains normal and abnormal action descriptions and is updated through interaction with the expert system unit. With this NIPS framework, a linear network can achieve a better classification effect, and the competition layer effectively removes the strong requirement that the data be linearly separable; the framework is therefore more practical and more widely applicable.
Owner:SHANGHAI DIANJI UNIV
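
As an illustration of the classifier-construction step, here is a minimal LVQ1 sketch in NumPy; the learning-rate schedule and data layout are assumptions, and the mobile-agent distribution described in the patent is not modeled.

```python
import numpy as np

def train_lvq(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """LVQ1: pull the nearest prototype toward same-class samples,
    push it away from different-class samples."""
    P = prototypes.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            d = np.linalg.norm(P - xi, axis=1)
            j = np.argmin(d)                  # winning prototype (competition layer)
            if proto_labels[j] == yi:
                P[j] += lr * (xi - P[j])
            else:
                P[j] -= lr * (xi - P[j])
        lr *= 0.95                            # simple decay schedule
    return P

def classify(x, prototypes, proto_labels):
    """Assign a sample to the class of its nearest prototype."""
    return proto_labels[np.argmin(np.linalg.norm(prototypes - x, axis=1))]
```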

Neural network quantization method and device, and electronic device

The invention provides a neural network quantization method and device, and an electronic device. The method comprises the steps of: in the iterative training process of a neural network, using the scaling factors of the neurons in the input layer to perform quantized calculation on the initial activation values of those neurons in each output channel of the input layer, and outputting the activation values of the neurons in the hidden layer following the input layer; taking each hidden layer of the neural network as the current layer one by one, and executing the following quantization operation on each current layer: determining a scaling factor for each neuron in the current layer based on its activation value, performing quantized calculation on the activation value of each neuron in each output channel of the current layer using that scaling factor, and outputting the activation values of the neurons in the layer following the current layer; and, when the iterative training is completed, taking the current neural network as the quantized neural network. The recognition precision of the neural network is thereby improved.
Owner:BEIJING KUANGSHI TECH +1
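
A minimal sketch of per-channel activation quantization of the kind the abstract describes, assuming post-ReLU (non-negative) activations in NCHW layout and max-based scaling factors; the function names are illustrative, not from the patent.

```python
import torch

def per_channel_scales(act, num_bits=8, eps=1e-8):
    """One scaling factor per output channel, derived from the activation
    statistics of the current layer (assumes NCHW, non-negative values)."""
    qmax = 2 ** num_bits - 1
    return (act.detach().amax(dim=(0, 2, 3)) + eps) / qmax

def quantize_activations(act, scales, num_bits=8):
    """Fake-quantize the activations channel-wise before they are fed to
    the next layer during iterative training."""
    qmax = 2 ** num_bits - 1
    s = scales.view(1, -1, 1, 1)
    return torch.clamp(torch.round(act / s), 0, qmax) * s
```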

Model construction method and device, image processing method and device, hardware platform and storage medium

The application relates to the technical field of deep learning and provides a model construction method and device, an image processing method and device, a hardware platform and a storage medium. The model construction method comprises the steps of: training a neural network model used for image processing, wherein the neural network model comprises at least one depthwise separable convolution module, and each depthwise separable convolution module comprises a layer-by-layer (depthwise) convolution layer, a point-by-point (pointwise) convolution layer, a batch normalization layer and an activation layer connected in sequence; and quantizing the trained neural network model to obtain a quantized neural network model. In this method, the model parameters are first quantized, so that the data volume of the parameters is effectively reduced and the model is suitable for deployment on NPU devices. Second, the depthwise separable convolution module differs from prior-art modules in that no batch normalization layer or activation layer is placed between the layer-by-layer convolution layer and the point-by-point convolution layer, so the values of the model parameters stay within a reasonable range and can be quantized with high precision.
Owner:成都佳华物链云科技有限公司
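
The described block ordering can be sketched in PyTorch as follows; the channel counts, kernel size and ReLU6 activation are assumptions rather than details taken from the patent.

```python
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    """Depthwise (layer-by-layer) conv followed directly by pointwise conv;
    batch normalization and activation come only after the pointwise conv,
    matching the ordering described in the abstract above."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x):
        x = self.depthwise(x)   # no BN/activation between the two convolutions
        x = self.pointwise(x)
        x = self.bn(x)
        return self.act(x)
```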

Modern tramcar hybrid energy storage system energy management method based on working condition analysis

Status: Pending · Publication: CN112668848A · Effects: realizes real-time optimal control; achieves the goal of electrical energy saving · Technical fields: Forecasting; Character and pattern recognition · Concepts: Neural network; Principal component analysis
The invention relates to an energy management method for a modern tramcar hybrid energy storage system based on working-condition analysis. The method comprises the steps of: 1) organizing the historical data obtained during tramcar operation, dividing the data into short trip segments, screening the segments that meet the conditions into a candidate segment library, and deleting the segments that do not; 2) extracting characteristic values from each short trip segment and selecting 13 characteristic values; 3) reducing the dimensionality of the characteristic values with principal component analysis; 4) classifying the operating conditions with cluster analysis; 5) identifying the actual operating condition online with a learning vector quantization (LVQ) neural network; and 6) based on the identified working condition, optimizing the energy management method separately for each condition class. The invention thereby achieves the goal of electrical energy saving in rail transit. Because the energy management strategy is optimized on the basis of the working-condition analysis, it can be tuned separately for the different modes in which the vehicle operates, improving the optimization effect of the strategy in each mode.
Owner:BEIJING JIAOTONG UNIV
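
A rough sketch of the offline condition-analysis pipeline using scikit-learn; PCA and k-means stand in for the principal component analysis and cluster analysis steps, and a nearest-centroid prediction stands in for the LVQ neural network that the patent uses for online identification.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_driving_conditions(features, n_components=3, n_clusters=4):
    """Reduce the 13 short-trip characteristic values with PCA, then group
    the trips into operating-condition classes with k-means."""
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(features)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(reduced)
    return pca, km, labels

def identify_condition(pca, km, new_trip_features):
    """Online identification of the current operating condition; the patent
    uses an LVQ neural network here, nearest-centroid is a simple stand-in."""
    reduced = pca.transform(np.atleast_2d(new_trip_features))
    return km.predict(reduced)[0]
```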

Composite insulator real-time segmentation method and system based on DeepLabV3+

The invention relates to a composite insulator real-time segmentation method and system based on DeepLabV3+. The method comprises the following steps: S1, obtaining infrared images of power equipment containing composite insulators taken during power inspection, and constructing an original data set; S2, performing data augmentation on the original data set to obtain a training data set; S3, constructing an improved DeepLabV3+ network: replacing the backbone network of DeepLabV3+ with the lightweight neural network MobileNetV2 to improve real-time performance, introducing the lightweight efficient channel attention (ECA) module to realize local cross-channel interaction without dimensionality reduction, and adding a Point fine segmentation module at the output end of DeepLabV3+ for post-processing to further improve the semantic segmentation result, then training the improved DeepLabV3+ network on the training data set to obtain a trained improved DeepLabV3+ network; and S4, processing newly captured infrared images of power equipment with the trained improved DeepLabV3+ network so as to segment the composite insulators in real time. The method and system improve both the real-time performance and the accuracy of composite insulator segmentation.
Owner:FUJIAN UNIV OF TECH
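
For reference, a standard ECA (Efficient Channel Attention) layer of the kind the abstract mentions can be sketched as follows; the fixed kernel size is an assumption (the original ECA formulation derives it adaptively from the channel count), and this is not necessarily the patent's exact implementation.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: local cross-channel interaction via a
    1-D convolution over pooled channel descriptors, with no dimensionality
    reduction."""

    def __init__(self, kernel_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # x: (N, C, H, W) -> channel descriptor (N, C, 1, 1)
        y = self.pool(x)
        # Treat the channels as a sequence for the 1-D convolution.
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        return x * self.sigmoid(y)
```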