
Approximate floating-point multiplier for neural network processor and floating-point multiplication

A floating-point multiplier and multiplication technique applied in the field of neural network processors. It addresses problems such as unsatisfactory acceleration efficiency, increased operating power consumption, and obstacles to the application of neural network processors, achieving high performance and improved energy efficiency.

Active Publication Date: 2017-10-20
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

Multiplication and addition are the core operations in neural network computation. To reduce design complexity and improve operational efficiency, most dedicated hardware accelerators use fixed-point multipliers, whereas the weight data obtained from training are computed in a floating-point environment. This mismatch between the data storage and computation formats of the training environment and the hardware acceleration environment leads to a large difference between the hardware acceleration results and the training results.
However, if a traditional floating-point multiplier is used in the hardware accelerator, it causes reduced acceleration efficiency, high hardware overhead, and increased operating power consumption. This seriously hinders the application of neural network processors in embedded devices and fails to meet the demand for real-time data analysis and processing with neural network processors in future ultra-low-power IoT end nodes.




Detailed Description of Embodiments

[0039] In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below through specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.

[0040] Figure 1 is a schematic structural diagram of an approximate floating-point multiplier according to an embodiment of the present invention. The approximate floating-point multiplier comprises a sign bit operation unit, an exponent operation unit, a mantissa operation unit, a normalization unit and a shift unit. As shown in Figure 1, the floating-point multiplier receives two operands A and B to be multiplied and outputs their product (denoted C). Operands A and B and their product are all floating-point numbers, and each floating-point number is stored...
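
The patent describes this datapath only at block level; a minimal software model can make the data flow concrete. The following C sketch is a hypothetical rendering, not the patented circuit: it assumes IEEE-754 single precision, an assumed truncation width K, and it ignores zero, subnormal, NaN, infinity and exponent-overflow cases for brevity.

#include <stdint.h>
#include <string.h>

/* Hypothetical software model of the datapath described above: sign bit
 * unit, exponent unit, approximate mantissa unit, normalization unit and
 * shift unit. Assumes IEEE-754 single precision; K (the number of
 * high-order fraction bits kept) is an assumed parameter, K <= 14 so the
 * mantissa product fits in 32 bits. */

#define K 8  /* assumed truncation width */

static uint32_t bits_of(float f)     { uint32_t u; memcpy(&u, &f, 4); return u; }
static float    float_of(uint32_t u) { float f; memcpy(&f, &u, 4); return f; }

float approx_fmul(float a, float b)
{
    uint32_t ua = bits_of(a), ub = bits_of(b);

    /* Sign bit operation unit: XOR the two sign bits. */
    uint32_t sign = (ua ^ ub) & 0x80000000u;

    /* Exponent operation unit: add biased exponents, subtract one bias. */
    uint32_t exp = ((ua >> 23) & 0xFFu) + ((ub >> 23) & 0xFFu) - 127u;

    /* Mantissa operation unit: intercept the K high-order fraction bits
     * and wrap them with a leading 1 (the implicit bit) and a trailing 1
     * (standing in for the discarded low bits), as the abstract states. */
    uint32_t fa = (ua >> (23 - K)) & ((1u << K) - 1);
    uint32_t fb = (ub >> (23 - K)) & ((1u << K) - 1);
    uint32_t ma = (1u << (K + 1)) | (fa << 1) | 1u;   /* 1.<fa>1, K+2 bits */
    uint32_t mb = (1u << (K + 1)) | (fb << 1) | 1u;
    uint32_t prod = ma * mb;           /* in [1,4), with 2K+2 fraction bits */

    /* Normalization unit: bring the product back into [1,2). */
    if (prod & (1u << (2 * K + 3))) exp += 1;   /* product was >= 2       */
    else                            prod <<= 1; /* implicit 1 -> bit 2K+3 */
    uint32_t frac = prod & ((1u << (2 * K + 3)) - 1);  /* drop implicit 1 */

    /* Shift unit: zero-pad (or truncate) to the 23-bit result fraction. */
    int sh = (2 * K + 3) - 23;
    frac = (sh >= 0) ? (frac >> sh) : (frac << -sh);

    return float_of(sign | (exp << 23) | (frac & 0x7FFFFFu));
}

One consequence of this construction worth noting: with K = 8 the mantissa multiplier shrinks from a 24x24-bit array to a 10x10-bit one, which is plausibly where the energy and speed gains claimed in the abstract come from.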



Abstract

The invention discloses an approximate floating-point multiplier for a neural network processor, and a floating-point multiplication method. When the approximate floating-point multiplier performs the fraction (mantissa) multiplication for the operands, a number of bits are intercepted from the high-order bits of each operand's fraction according to a specified precision, and a 1 is appended before and after the intercepted bits to form two new fractions. These two new fractions are multiplied to obtain an approximate fraction of the product. After normalization, the approximate fraction is zero-padded at its low-order end so that its width matches that of the operands' fractions, yielding the fraction of the product. By adopting this approximate computation scheme and intercepting a different number of fraction bits according to the precision required by each multiplication, the multiplier reduces the energy cost of multiplication and increases its speed, making the neural network processing system more energy-efficient.
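
As a quick illustration of the accuracy trade-off the abstract describes, the fragment below (hypothetical, assuming the approx_fmul sketch from the embodiment section above is compiled alongside it, with the assumed K = 8) compares the approximate product against the exact one; the residual error stems from the discarded low-order fraction bits and the two appended 1s.

#include <stdio.h>

float approx_fmul(float a, float b);  /* sketch defined above */

/* Accuracy spot-check for the approx_fmul sketch (assumed K = 8). */
int main(void)
{
    const float pairs[][2] = { {1.5f, 2.0f}, {3.14159f, 2.71828f}, {-0.625f, 7.25f} };
    for (int i = 0; i < 3; i++) {
        float a = pairs[i][0], b = pairs[i][1];
        float approx = approx_fmul(a, b), exact = a * b;
        printf("%g * %g -> approx %g, exact %g, rel. err %.3f%%\n",
               a, b, approx, exact, 100.0f * (approx - exact) / exact);
    }
    return 0;
}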

Description

Technical Field

[0001] The present invention relates to neural network processors, and more particularly to multiplication operations within neural network processors.

Background Technique

[0002] At present, neural network processors usually use trained weight data as input signals to perform computation on neural network models. Multiplication and addition are the core operations in neural network computation. To reduce design complexity and improve operational efficiency, most dedicated hardware accelerators use fixed-point multipliers, whereas the weight data obtained from training are computed in a floating-point environment. This mismatch between the data storage and computation formats of the training environment and the hardware acceleration environment leads to a large difference between the hardware acceleration results and the training results. However, if the traditional floating-point mult...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06F7/57G06N3/063
CPCG06F7/57G06N3/063
Inventor 韩银和许浩博王颖
Owner INST OF COMPUTING TECH CHINESE ACAD OF SCI