Floating-point dot-product hardware with wide multiply-adder tree for machine learning accelerators

A floating-point, processor-based technology for machine learning that addresses increased power and performance constraints, namely reduced performance, increased latency, and added cost and/or power consumption.

Status: Pending | Publication Date: 2020-12-08
INTEL CORP
Cites: 0 | Cited by: 1
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

A global search for the maximum exponent introduces latency (e.g., reducing performance).
Also, alignment can involve a relatively large amount of hardware (such as aligning shifter stages), which adds latency, cost, and/or power consumption.
Indeed, as ML applications transition from standard number formats (e.g., 16-bit floating point / FP16, with a 5-bit exponent) to more optimized number formats (e.g., 16-bit Brain floating point / bfloat16, with an 8-bit exponent), these power and performance limitations may grow.
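
To make the bottleneck concrete, here is a minimal Python sketch (names are hypothetical, not from the patent) of the conventional flow: the summation cannot begin until a global search across every exponent completes, and each term then needs a full-width aligning shift.

```python
# Hedged sketch of the conventional dot-product alignment the patent
# identifies as the bottleneck. Each term is an (exponent, integer
# mantissa) product; names are illustrative.
def conventional_aligned_sum(terms):
    max_exp = max(e for e, _ in terms)   # global search: serial latency
    # Full-width aligning shift per term; shifter depth grows with the
    # exponent width (5 bits for FP16, 8 bits for bfloat16).
    return sum(m >> (max_exp - e) for e, m in terms)
```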

Examples

Example 1

[0039] Example 1 includes a performance-enhanced computing system comprising a network controller and a processor coupled to the network controller, the processor including logic coupled to one or more substrates, the logic to: conduct a first alignment between a plurality of floating-point numbers based on a first subset of exponent bits; conduct, at least partially in parallel with the first alignment, a second alignment between the plurality of floating-point numbers based on a second subset of exponent bits, wherein the first subset of exponent bits are least significant bits (LSBs) and the second subset of exponent bits are most significant bits (MSBs); and add the aligned plurality of floating-point numbers to one another.
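
As a hedged illustration of the claim (one plausible reading, not Intel's actual circuit), the following Python sketch splits each exponent into K LSBs and the remaining MSBs. The LSB-based shift uses only local information, so hardware could perform it while the max-exponent search, which now only compares MSB fields, is still in flight; `K`, `C`, and the function names are assumptions.

```python
K = 2                 # assumed number of exponent LSBs in the first subset
C = (1 << K) - 1      # predetermined constant for the first alignment

def split_aligned_sum(terms):
    """terms: list of (exponent, integer mantissa) pairs."""
    # First alignment: shift by (C - exponent LSBs); purely local, so it
    # can proceed in parallel with the max search below.
    stage1 = [(e, m >> (C - (e & C))) for e, m in terms]
    # Second alignment: shift by the MSB difference, in units of 2^K.
    max_msb = max(e >> K for e, _ in terms)   # search over MSB fields only
    # Result is referenced to exponent (max_msb << K) + C >= every exponent.
    return sum(m >> ((max_msb - (e >> K)) << K) for e, m in stage1)
```

Because right-shifting by a and then by b equals a single right shift by a + b, the two stages together align every term to the common reference exponent (max_msb << K) + C.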

Example 2

[0040] Example 2 includes the computing system of Example 1, wherein the first alignment is based on each exponent relative to a predetermined constant.

Example 3

[0041] Example 3 includes the computing system of Example 1, wherein the second alignment is based on each exponent relative to the largest exponent among all of the exponents.
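
As a hedged numeric illustration of Examples 2 and 3 (values chosen here, not taken from the patent): with two exponent LSBs and the predetermined constant set to 3, an operand with exponent 01101b (MSB field 011b, LSBs 01b) receives a first-stage shift of 3 - 1 = 2; if the largest exponent's MSB field is 100b, the second stage contributes (4 - 3) x 4 = 4, for a total right shift of 6, exactly the shift needed to align the operand to the common reference exponent 10011b.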

Abstract

The invention relates to floating-point dot-product hardware with a wide multiply-adder tree for machine learning accelerators. Systems, apparatuses, and methods may provide for technology that conducts a first alignment between a plurality of floating-point numbers based on a first subset of exponent bits. The technology may also conduct, at least partially in parallel with the first alignment, a second alignment between the plurality of floating-point numbers based on a second subset of exponent bits, where the first subset of exponent bits are LSBs and the second subset of exponent bits are MSBs. In one example, the technology adds the aligned plurality of floating-point numbers to one another. With regard to the second alignment, the technology may also identify the individual exponents of the plurality of floating-point numbers, identify a maximum exponent across the individual exponents, and conduct a subtraction of the individual exponents from the maximum exponent, where the subtraction is conducted from MSB to LSB.
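
The MSB-to-LSB ordering suggests that the coarsest stage of an aligning barrel shifter can switch as soon as the top bit of the shift amount resolves. Below is a small hedged sketch of such a staged shifter; the 4-bit shift width and the function name are assumptions for illustration.

```python
# Hedged sketch of a staged (barrel) right shifter evaluated MSB-first,
# mirroring the abstract's MSB-to-LSB ordering.
def barrel_shift_msb_first(mantissa, shift, width=4):
    for i in reversed(range(width)):      # coarsest stage first
        if (shift >> i) & 1:
            mantissa >>= (1 << i)         # this stage shifts by 2^i
    return mantissa
```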

Description

Technical Field

[0001] Embodiments relate generally to machine learning. More specifically, embodiments relate to floating-point dot-product hardware with wide multiply-adder trees for machine learning accelerators.

Background

[0002] Deep neural networks (DNNs) are commonly used in machine learning (ML) workloads to perform matrix multiplication and convolution operations, which tend to be the most power- and performance-limiting operations in those workloads. Although hardware accelerators with dot-product computation units have been proposed to improve the area and energy efficiency of these operations (e.g., using various dataflow architectures and data types), there is still considerable room for improvement. For example, a traditional floating-point (FP) dot-product hardware solution may first find the largest exponent among the floating-point products, and then use the largest exponent and each corresponding individual exponent to align each product mantissa (e.g., significand, coef...

Claims

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F 7/544; G06F 7/487; G06F 7/483; G06N 3/063
CPC: G06F 7/5443; G06F 7/4876; G06F 7/4836; G06N 3/063; G06F 7/485; G06F 17/16; G06N 3/045; G06N 20/00; G06F 17/15; G06F 5/012
Inventors: Himanshu Kaul, Mark Anders
Owner INTEL CORP