
Floating-point number conversion circuit

A floating-point number conversion circuit technology, applied in the computer field, that addresses the problem of low neural-network training efficiency and achieves the effects of reducing storage resources, reducing data bit width, and improving training efficiency.

Pending Publication Date: 2020-06-19
NANJING UNIV
3 Cites · 0 Cited by

AI Technical Summary

Problems solved by technology

[0005] The invention provides a floating-point number conversion circuit to solve the problem of low neural-network training efficiency caused by the use of single-precision floating-point numbers based on the IEEE 754 specification.

Method used


Image

  • Floating-point number conversion circuit (three drawings)

Examples


Embodiment Construction

[0045] The parameters of the Posit data format in the technical solution of the present invention include N and es, where N is the total bit width of the entire data representation and es is the bit width of the exponent segment; both parameters must be determined before the data can be expressed. N can take any positive integer value, such as 5, 8, and so on. In this embodiment, N denotes the preset total bit width and es denotes the preset exponent bit width; the preset exponent bit width is chosen according to the actual requirements on floating-point numbers in the Posit data format, for example 2, 3, or 4. Figure 1 is a schematic diagram of the specific data representation of the single-precision floating-point number based on the IEEE 754 specification provided by the present invention; such a floating-point number consists of three parts: the sign segment S, the exponent segment E1, and the mantissa segment F, ...
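
As an informal illustration (not the circuit of the invention), the following C sketch shows how the three segments named above, the sign segment S, the exponent segment E1, and the mantissa segment F, can be separated from a 32-bit IEEE 754 single-precision word; the struct and function names are assumptions made for the example only.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative decomposition of an IEEE 754 single-precision word into the
 * three segments described above: sign S (1 bit), exponent E1 (8 bits) and
 * mantissa F (23 bits).  Names are assumptions for illustration only.      */
typedef struct {
    uint32_t s;   /* sign segment S      */
    uint32_t e1;  /* exponent segment E1 */
    uint32_t f;   /* mantissa segment F  */
} ieee754_fields;

static ieee754_fields split_ieee754(float x) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);      /* reinterpret the 32-bit pattern */
    ieee754_fields r;
    r.s  = bits >> 31;                   /* bit 31      */
    r.e1 = (bits >> 23) & 0xFFu;         /* bits 30..23 */
    r.f  = bits & 0x7FFFFFu;             /* bits 22..0  */
    return r;
}

int main(void) {
    /* 0.15625 = 1.25 * 2^-3, so E1 = 127 - 3 = 124 and F = 0x200000. */
    ieee754_fields r = split_ieee754(0.15625f);
    printf("S=%u E1=%u F=0x%06X\n", r.s, r.e1, r.f);
    return 0;
}

A hardware implementation would realise the same split with simple wiring rather than software, but the segment boundaries are the same; a converter would then derive the posit regime, exponent, and fraction from these segments.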



Abstract

The invention discloses a floating-point number conversion circuit which can convert a single-precision floating-point number based on the IEEE 754 specification into a single-precision floating-point number in the posit data format, namely a second floating-point number. In the training of many neural networks, the operation data approximately obeys a normal distribution and can be concentrated near 0 through transformation; the precision of the single-precision floating-point number in the posit data format of the invention can therefore be ensured near 0 during neural network training. Moreover, the preset total bit width of the single-precision floating-point number in the posit data format can be regulated and controlled, so that the data bit width can be reduced to a great extent, the resources required for storage and the resources consumed in the read-write process are reduced, and the neural-network training efficiency is improved.
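
As an informal illustration of the conversion and bit-width reduction described in the abstract (and not the patented circuit itself), the sketch below models a simplified IEEE 754 single-precision to posit(N, es) conversion in C. It assumes normal, finite inputs, es >= 2 (so the staged bit string fits in 64 bits), N <= 32, and truncation in place of the rounding a real converter would perform; the function name float_to_posit is an assumption for the example.

#include <stdint.h>
#include <string.h>

/* Simplified software model of converting an IEEE 754 single-precision value
 * into an N-bit posit with es exponent bits.  Assumptions (not taken from the
 * patent): normal finite inputs only, 2 <= es, N <= 32, truncation instead of
 * round-to-nearest, no special handling of overflow, underflow or NaR.        */
static uint32_t float_to_posit(float x, int N, int es) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    if ((bits << 1) == 0)                       /* +0 and -0 map to posit zero */
        return 0;

    uint32_t sign = bits >> 31;
    int scale     = (int)((bits >> 23) & 0xFFu) - 127;  /* unbiased exponent */
    uint32_t frac = bits & 0x7FFFFFu;                   /* 23-bit fraction   */

    int k = scale >> es;              /* regime value; arithmetic shift = floor */
    int e = scale & ((1 << es) - 1);  /* residual exponent                      */

    /* Stage regime, exponent and fraction as one left-aligned bit string. */
    uint64_t body;
    int len;
    if (k >= 0) {                     /* regime: k+1 ones followed by a zero */
        body = ((1ull << (k + 1)) - 1) << 1;
        len  = k + 2;
    } else {                          /* regime: -k zeros followed by a one  */
        body = 1;
        len  = 1 - k;
    }
    body = (body << es) | (uint32_t)e;    len += es;
    body = (body << 23) | frac;           len += 23;

    /* Keep the top N-1 magnitude bits (truncating), then apply the sign as a
     * two's complement of the whole N-bit word, as posit encoding specifies.  */
    uint32_t mag = (len >= N - 1) ? (uint32_t)(body >> (len - (N - 1)))
                                  : (uint32_t)(body << ((N - 1) - len));
    if (!sign)
        return mag;
    return ((~mag + 1u) & ((1u << (N - 1)) - 1u)) | (1u << (N - 1));
}

With, for example, N = 8 and es = 2, a 32-bit input is compressed to 8 bits, which is the kind of adjustable, reduced data bit width the abstract attributes to the preset total bit width of the posit representation.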

Description

Technical Field
[0001] The invention relates to the technical field of computers, and in particular to a floating-point number conversion circuit.
Background Technique
[0002] A neural network is an algorithmic mathematical model that imitates the behavioral characteristics of animal neural networks and performs distributed parallel information processing. Such a network depends on the complexity of the system and achieves the purpose of processing information by adjusting the interconnection relationships among a large number of internal nodes. In recent years, with the rapid development of deep learning technology, the training of neural networks has become common and important, and the speed and resource consumption of neural network training have become important indicators for evaluating deep learning.
[0003] In previous neural network training processes, most floating-point numbers used the normalized single-precision floating-point number format base...

Claims


Application Information

IPC(8): G06N 3/063; G06F 7/483
CPC: G06N 3/063; G06F 7/483
Inventors: 王中风, 徐铭阳, 方超, 林军
Owner: NANJING UNIV