
Floating-point multiplier and floating-point multiplication for neural network processor

A floating-point multiplier and floating-point multiplication technology in the field of neural network processors, addressing problems such as the inability to meet application requirements, reduced acceleration efficiency, and hindered application of neural network processors, and achieving high performance and improved computational accuracy.

Active Publication Date: 2017-10-24
INST OF COMPUTING TECHNOLOGY - CHINESE ACAD OF SCI


Problems solved by technology

Multiplication and addition are key operations in neural network computation. To reduce design complexity and improve operational efficiency, most dedicated hardware accelerators use fixed-point multipliers for multiplication, whereas the weight data obtained from training are computed in a floating-point environment. This mismatch between the data storage and computation formats of the training environment and the hardware acceleration environment leads to large differences between hardware-accelerated results and training results.

However, using a traditional floating-point multiplier in the hardware accelerator causes problems such as reduced acceleration efficiency, high hardware overhead, and increased operating power consumption. This seriously hinders the application of neural network processors in embedded devices and cannot satisfy the demand for real-time data analysis and processing by neural network processors in future ultra-low-power IoT end nodes.




Embodiment Construction

[0035] To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below through specific embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it.

[0036] Figure 1 is a structural schematic diagram of a floating-point multiplier according to an embodiment of the present invention. The floating-point multiplier comprises a sign bit operation unit, an exponent operation unit, a mantissa operation unit, and a normalization unit. As shown in Figure 1, the floating-point multiplier receives two operands A and B to be multiplied and outputs their product (which can be denoted C). The operands A and B and their product are all floating-point numbers, and each floating-point number is stored and represented in the form of "sign ...
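The division of work among the four units follows conventional floating-point multiplication: the sign unit XORs the sign bits, the exponent unit adds the biased exponents and subtracts one bias, the mantissa unit multiplies the significands, and the normalization unit shifts the double-width product back into range. A minimal Python sketch over IEEE 754 single-precision fields (normal numbers only, truncation instead of rounding; the names `fields` and `fp_mul` are illustrative, not taken from the patent):

```python
import struct

def fields(x: float):
    """Decompose a float32 into (sign, biased exponent, mantissa with hidden 1)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exp = (bits >> 23) & 0xFF
    man = (bits & 0x7FFFFF) | 0x800000  # restore the hidden leading 1
    return sign, exp, man

def fp_mul(a: float, b: float) -> float:
    """Multiply two normal float32 values field by field, mirroring the
    sign / exponent / mantissa / normalization units of the multiplier."""
    sa, ea, ma = fields(a)
    sb, eb, mb = fields(b)
    sign = sa ^ sb                # sign unit: XOR of the sign bits
    exp = ea + eb - 127           # exponent unit: add, remove one bias
    man = ma * mb                 # mantissa unit: 24x24-bit product
    # normalization unit: the product of two values in [1, 2) lies in [1, 4),
    # so at most one right shift is needed
    if man >= (1 << 47):
        man >>= 1
        exp += 1
    man = (man >> 23) & 0x7FFFFF  # drop hidden bit, truncate the tail
    bits = (sign << 31) | (exp << 23) | man
    return struct.unpack(">f", struct.pack(">I", bits))[0]
```

For products that are exactly representable, the sketch reproduces the hardware result, e.g. `fp_mul(2.0, 3.0)` gives `6.0`.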



Abstract

The present invention discloses a floating-point multiplier and a floating-point multiplication method for a neural network processor. The floating-point multiplier compares the mantissas of the two operands to be multiplied and selects among different operation modes to obtain the mantissa of the product. When the mantissas of the two operands match in the upper four bits, the mantissa of one of the operands is output directly. When the mantissas match in the upper three bits, part of the bits of each mantissa is first truncated, the high bit of the truncated numbers is complemented with a one, and the shortened values are then multiplied and the result output. If neither condition is satisfied, the two mantissas are multiplied in full to obtain the mantissa of the product. By combining approximate and exact calculation, and by substituting lower-energy operations such as data replacement and partial-bit multiplication, the multiplier improves the efficiency of the multiplication operation without sacrificing significant precision, making the neural network processing system more efficient.
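The three operation modes in the abstract can be sketched as a mode-selection function on integer mantissas (hidden leading 1 included). The truncated width of eight bits, and the reading of "complemented with a one" as forcing a 1 into the lowest kept bit, are assumptions made for illustration, not details stated in the text:

```python
def mantissa_product(ma: int, mb: int, frac_bits: int = 23) -> int:
    """Behavioral sketch of the three-mode mantissa path.

    ma, mb are (frac_bits+1)-bit mantissas with the hidden 1 set.
    All modes return a product in the same double-width frame so the
    (omitted) normalization step can treat them uniformly.
    Bit widths for the truncated mode are assumptions.
    """
    def top(m: int, n: int) -> int:
        # upper n bits of the fraction field (hidden bit excluded)
        return (m >> (frac_bits - n)) & ((1 << n) - 1)

    if top(ma, 4) == top(mb, 4):
        # Mode 1 (approximate, "data replacement"): reuse one operand's
        # mantissa directly, rescaled into the double-width product frame
        return ma << frac_bits
    if top(ma, 3) == top(mb, 3):
        # Mode 2 (approximate, "partial-bit multiplication"): truncate both
        # mantissas, force a 1 into the lowest kept bit to compensate for
        # the dropped tail, then do a short multiply
        keep = 8                            # assumed truncated width
        shift = frac_bits + 1 - keep
        sa = (ma >> shift) | 1
        sb = (mb >> shift) | 1
        return (sa * sb) << (2 * shift)     # rescale into the same frame
    # Mode 3 (exact): full-width mantissa multiplication
    return ma * mb
```

The point of the scheme is that the two approximate modes replace a full 24x24-bit multiply with either no multiply at all or an 8x8-bit one, trading a small mantissa error for energy and latency savings.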

Description

Technical Field

[0001] The present invention relates to neural network processors, and more particularly to multiplication operations within neural network processors.

Background Technique

[0002] At present, neural network processors usually use trained weight data as input signals to perform calculation operations on neural network models. Multiplication and addition are key operations in neural network computation. To reduce design complexity and improve operational efficiency, most dedicated hardware accelerators use fixed-point multipliers for multiplication, whereas the weight data obtained from training are computed in a floating-point environment. This mismatch between the data storage and computation formats of the training environment and the hardware acceleration environment leads to large differences between hardware-accelerated results and training results. However, if the traditional floating-point mult...


Application Information

IPC(8): G06F7/57; G06N3/02
CPC: G06F7/57; G06N3/02
Inventors: 韩银和, 许浩博, 王颖
Owner INST OF COMPUTING TECHNOLOGY - CHINESE ACAD OF SCI