Method for processing convolution neural network
A technology relating to convolutional neural networks and models, applied in the field of energy-saving convolutional neural network implementation
Examples
Embodiment
[0026] This embodiment discloses a quantization method in which the activation vector is represented in fixed-point notation, as described below.
[0027] When the dynamic fixed-point format is used to fully represent the 32-bit floating-point values of the activation vector (x), the scalar factor s is defined by Equation 3.
[0028]
[0029] Here, p denotes the quantization bit length. In Equation 3, the dynamic quantization range is [-max_v, max_v]. For the activation vector (x) used in the convolution operation and the fully connected operation, max_v is the statistical maximum of the input features collected over a large dataset; the statistical maximum value shown in Figure 1 can be used for this analysis.
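The paragraphs above describe estimating max_v as the statistical maximum of the input features over a large calibration dataset and deriving the scalar factor s from it for a p-bit dynamic fixed-point range. Since Equation 3 itself is not reproduced in this extract, the sketch below assumes a common definition, s = (2^(p-1) - 1) / max_v, which maps the range [-max_v, max_v] onto signed p-bit integers; the function names estimate_max_v and scalar_factor are illustrative rather than taken from the patent.

    import numpy as np

    def estimate_max_v(activation_batches):
        # Statistical maximum of the activation features over a calibration set.
        # activation_batches: iterable of float32 arrays collected from the
        # convolution and fully connected layers of the network.
        max_v = 0.0
        for batch in activation_batches:
            max_v = max(max_v, float(np.max(np.abs(batch))))
        return max_v

    def scalar_factor(max_v, p):
        # Assumed form of Equation 3: map max_v onto the largest signed
        # p-bit integer, so s = (2**(p - 1) - 1) / max_v.
        return (2 ** (p - 1) - 1) / max_v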
[0030] Based on Equation 3, s is a scalar factor that bridges the gap between the floating-point value and the fixed-point value. The scalar factor s is a real number represented in 32-bit floating-point format. Applying the scalar factor s to the activation vec...
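Continuing the sketch above, and still under the assumed scale definition rather than the patent's exact Equation 3, applying the scalar factor s to the activation vector might look as follows; quantize_activation and dequantize_activation are hypothetical helpers, and clipping to the representable p-bit range is one conventional way to handle values outside [-max_v, max_v].

    def quantize_activation(x, s, p):
        # Apply the scalar factor s to the float32 activation vector x and
        # round to signed p-bit fixed-point integers, clipping any overflow.
        q_max = 2 ** (p - 1) - 1
        return np.clip(np.round(x * s), -q_max, q_max).astype(np.int32)

    def dequantize_activation(q, s):
        # Recover an approximate 32-bit floating-point activation from q.
        return (q / s).astype(np.float32)

    # Example: 8-bit quantization of a small activation vector.
    x = np.array([0.12, -0.87, 0.55, 1.90], dtype=np.float32)
    s = scalar_factor(max_v=2.0, p=8)   # max_v taken from calibration statistics
    q = quantize_activation(x, s, p=8)
    x_hat = dequantize_activation(q, s)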