Block floating point computations using reduced bit-width vectors

A block floating-point vector technology, applied in computations that use number-system representations, which addresses problems such as the adverse effect of reduced precision on accuracy.

Status: Pending. Publication Date: 2020-12-11
MICROSOFT TECH LICENSING LLC

AI Technical Summary

Problems solved by technology

While reduced precision can improve the performance of different functions of neural networks (including the speed at which classification and regression tasks are performed for object recognition, lip reading, speech recognition, detecting unusual transactions, text prediction, and many others), accuracy may be adversely affected.

Method used

A block floating-point vector is decomposed into multiple smaller bit-width block floating-point vectors, operations are performed on those vectors, and the results are combined to construct the higher bit-width result.



Embodiment Construction

[0015] Computing devices and methods described herein are configured to perform block floating-point calculations using reduced bit-width vectors. For example, a block floating-point vector is decomposed into multiple smaller bit-width block floating-point vectors, on which operations are performed. The higher bit-width block floating-point result is then constructed by combining the results of the operations performed on the smaller bit-width block floating-point vectors. Fusing the precision of block floating-point numbers in this way reduces the computational burden while increasing accuracy, by allowing high-precision mathematical operations to be performed on low-precision hardware (e.g., low-precision hardware accelerators).
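As a concrete illustration of the decomposition, the following is a minimal Python sketch, not the patent's implementation; the function names and the 8-bit/4-bit split are illustrative assumptions. Signed mantissas are broken into a high part and a low part that recombine exactly:

# A sketch assuming signed 8-bit mantissas split into 4-bit pieces;
# names are illustrative, not from the patent.
def split_mantissa(m8, low_bits=4):
    # Split so that m8 == high * 2**low_bits + low, with the sign carried
    # by the high part and the low part kept non-negative.
    high = m8 >> low_bits            # arithmetic shift preserves the sign
    low = m8 - (high << low_bits)    # remainder in [0, 2**low_bits)
    return high, low

def combine(high_result, low_result, low_bits=4):
    # Recombine results of any operation that is linear in the mantissas.
    return (high_result << low_bits) + low_result

# Example: -77 splits into high=-5, low=3, since -5*16 + 3 == -77.
high, low = split_mantissa(-77)
assert combine(high, low) == -77

Because the split is exact and dot products are linear in the mantissas, results computed on the narrow pieces can be shifted and summed back into the full-width result.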

[0016] According to various examples of the present disclosure, neural networks, such as deep neural networks (DNNs), can be trained and operated more efficiently using smaller bit-width block floating-point vectors that allow high-precision arithmetic on lower-precision bl...



Abstract

A system for block floating-point computation in a neural network receives a block floating-point number comprising a mantissa portion. The bit-width of the block floating-point number is reduced by decomposing it into a plurality of numbers, each having a mantissa portion with a bit-width smaller than that of the original mantissa portion. One or more dot product operations are performed separately on each of the plurality of numbers to obtain individual results, which are summed to generate a final dot product value. The final dot product value is used to implement the neural network. The reduced bit-width computations allow higher-precision mathematical operations to be performed on lower-precision processors with improved accuracy.
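To make the flow in the abstract concrete, here is a minimal Python sketch under the same illustrative assumptions as above (a 4-bit split, invented names, and the same inlined high/low decomposition): one operand's mantissas are decomposed, a narrow dot product is taken per piece, and the scaled partial results are summed.

import numpy as np

def bfp_dot(m_a, exp_a, m_b, exp_b, low_bits=4):
    # Dot product of two BFP vectors, decomposing m_a into 4-bit pieces
    # so each partial dot product only needs low-precision multipliers.
    a_high = m_a >> low_bits
    a_low = m_a - (a_high << low_bits)
    high_part = np.dot(a_high.astype(np.int64), m_b.astype(np.int64))
    low_part = np.dot(a_low.astype(np.int64), m_b.astype(np.int64))
    # Sum the individual results, then apply the shared exponents.
    mantissa_dot = (high_part << low_bits) + low_part
    return float(mantissa_dot) * 2.0 ** (exp_a + exp_b)

# Usage: mantissas with a shared exponent of -5 per vector.
m_x = np.array([16, -40, 96, 24])   # represents [0.5, -1.25, 3.0, 0.75]
m_w = np.array([32, 16, -8, 64])    # represents [1.0, 0.5, -0.25, 2.0]
print(bfp_dot(m_x, -5, m_w, -5))    # 0.625, equal to the exact float dot

Splitting only one operand keeps the sketch short; splitting both operands adds cross terms (four partial dot products instead of two) but follows the same shift-and-sum pattern.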

Description

Background technique

[0001] The block floating-point number format allows dynamic range and precision to be scaled independently. By reducing the precision, the system performance of a processor (such as a hardware accelerator) can be increased. However, reduced precision may affect system accuracy. For example, the block floating-point number format can be used for neural networks, which are applied in many areas for tasks such as computer vision, robotics, speech recognition, medical image processing, computer games, augmented reality, and virtual reality. While reduced precision can improve the performance of different functions of neural networks (including the speed at which classification and regression tasks are performed for object recognition, lip reading, speech recognition, detecting unusual transactions, text prediction, and many others), accuracy may be adversely affected.

Contents of the invention

[0002] This Summary is provided t...
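As background for the format itself, the following is a minimal sketch (an illustrative assumption, not the patent's method) of quantizing a block of floats so that all values share one exponent. The shared exponent sets the dynamic range while the mantissa width sets the precision, which is what lets the two scale independently:

import numpy as np

def bfp_quantize(block, mantissa_bits=8):
    # Pick one shared exponent so the largest magnitude fits the signed
    # mantissa range, then round every value to an integer mantissa.
    max_abs = float(np.max(np.abs(block)))
    if max_abs == 0.0:
        return np.zeros(len(block), dtype=np.int32), 0
    shared_exp = int(np.floor(np.log2(max_abs))) + 1 - (mantissa_bits - 1)
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(block / 2.0 ** shared_exp), lo, hi)
    return mantissas.astype(np.int32), shared_exp

def bfp_dequantize(mantissas, shared_exp):
    return mantissas.astype(np.float64) * 2.0 ** shared_exp

# Usage: one exponent is shared across the whole block; these are the
# same mantissas used in the dot-product sketch above.
m, e = bfp_quantize(np.array([0.5, -1.25, 3.0, 0.75]))
print(m, e)  # [ 16 -40  96  24] -5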


Application Information

IPC(8): G06F7/487; G06F7/53; G06F7/544
CPC: G06F7/4876; G06F7/5324; G06F7/5443; G06F7/483; G06F17/16; G06N3/04
Inventors: D. Lo, E. S. Chung, D. C. Burger
Owner: MICROSOFT TECH LICENSING LLC