
Method and device for distributing storage resources in GPU in big integer calculating process

A storage-resource allocation technology, applied in the field of storage resource allocation, which addresses problems such as high computing overhead and achieves the effects of increasing execution speed, reducing register usage, and improving parallel execution efficiency.

Status: Inactive
Publication Date: 2014-03-12
DATA ASSURANCE & COMM SECURITY CENT CHINESE ACADEMY OF SCI

AI Technical Summary

Problems solved by technology

The data bit width of large integers in public-key cryptography algorithms is generally 128 to 2048 bits, so the computational overhead of large-integer calculation is very large.
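To make the scale of that overhead concrete, the sketch below (an illustration, not taken from the patent) represents a 256-bit operand, a typical SM2 width, as eight 32-bit limbs; even a single addition expands into a chain of word-level additions with carry propagation, and multiplication and modular reduction are costlier still. The limb count and function name are illustrative assumptions.

#include <stdint.h>

#define LIMBS 8   /* 8 x 32 bits = 256 bits, a typical SM2 operand width (assumed here) */

/* r = a + b over multi-limb integers: each limb addition produces a carry
   that must be propagated into the next limb, so one big-integer add costs
   LIMBS native additions plus carry handling. */
__device__ void bigint_add(uint32_t r[LIMBS],
                           const uint32_t a[LIMBS],
                           const uint32_t b[LIMBS])
{
    uint32_t carry = 0;
    for (int i = 0; i < LIMBS; ++i) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;  /* one native add per limb */
        r[i]  = (uint32_t)t;                         /* low 32 bits of the partial sum */
        carry = (uint32_t)(t >> 32);                 /* carry into the next limb */
    }
}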




Embodiment Construction

[0122] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.

[0123] In order to improve the execution speed of large-integer calculations when a GPU is used for the large-integer calculations of the SM2 algorithm, the present invention allocates registers to each thread while GPU threads execute the various large-integer calculations: the operands and intermediate calculation variables of the large-integer calculation a thread is currently executing are stored in the registers allocated to that thread, while the modulus of the currently executed large-integer calculation and other data not specific to that calculation are stored in memory. In this way, when the operands and intermediate calculation variables need to be called during a large-integer calculation in a GPU thread, they can be obtained directly by repeatedly calling the allocated registers.
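As a concrete illustration of this allocation scheme, the following CUDA sketch (a minimal sketch under stated assumptions, not the patent's actual implementation) performs a modular addition: each thread keeps its operands and intermediate values in small fixed-size arrays that the compiler can promote to registers, while the modulus, which is common to all threads, is read from constant memory. The limb count, kernel name, and choice of constant memory are illustrative assumptions.

#include <stdint.h>

#define LIMBS 8   /* hypothetical 256-bit operands, 8 x 32-bit limbs */

/* The modulus is shared by all threads and not specific to the calculation a
   thread is currently executing, so it stays in (constant) memory. */
__constant__ uint32_t g_mod[LIMBS];

__global__ void modadd_kernel(const uint32_t *in_a, const uint32_t *in_b,
                              uint32_t *out, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    /* Operands and intermediates of this thread's calculation live in small
       fixed-size arrays; with fully unrolled, compile-time indexing the
       compiler can keep them entirely in registers, avoiding repeated loads. */
    uint32_t a[LIMBS], b[LIMBS], s[LIMBS];

    #pragma unroll
    for (int i = 0; i < LIMBS; ++i) {
        a[i] = in_a[tid * LIMBS + i];
        b[i] = in_b[tid * LIMBS + i];
    }

    /* s = a + b, limb-wise with carry propagation */
    uint32_t carry = 0;
    #pragma unroll
    for (int i = 0; i < LIMBS; ++i) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;
        s[i]  = (uint32_t)t;
        carry = (uint32_t)(t >> 32);
    }

    /* Trial-subtract the modulus, read from constant memory rather than cached
       in registers; use the result if s >= modulus or the addition overflowed. */
    uint32_t d[LIMBS], borrow = 0;
    #pragma unroll
    for (int i = 0; i < LIMBS; ++i) {
        uint64_t t = (uint64_t)s[i] - g_mod[i] - borrow;
        d[i]   = (uint32_t)t;
        borrow = (uint32_t)((t >> 32) & 1);   /* 1 if this limb borrowed */
    }
    int use_d = carry || !borrow;             /* no final borrow means s >= modulus */

    #pragma unroll
    for (int i = 0; i < LIMBS; ++i)
        out[tid * LIMBS + i] = use_d ? d[i] : s[i];
}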



Abstract

The invention discloses a method and device for allocating storage resources in a GPU during large-integer calculation. In the method, while GPU threads execute the various large-integer calculations, registers are allocated to each thread; during a thread's execution of a large-integer calculation, the operands and intermediate calculation variables of the calculation that thread is currently executing are stored in the registers allocated to that thread, while the modulus of that calculation and other data are stored in memory. In this way, when the operands and intermediate calculation variables need to be called during a large-integer calculation in a GPU thread, they can be obtained directly by repeatedly calling the allocated registers. On the one hand, the register usage of the GPU threads is reduced and the parallel execution efficiency of the threads is improved; on the other hand, the necessary registers are reserved for the large-integer calculation, so its execution speed can be increased more effectively.
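The occupancy trade-off described here (fewer registers per thread allows more threads to stay resident on an SM) can be expressed in CUDA through the __launch_bounds__ qualifier; the sketch below is a hypothetical illustration of that mechanism and is not part of the patent. The kernel name, launch limits, and placeholder body are assumptions.

__global__ void
__launch_bounds__(256 /* max threads per block */, 4 /* min resident blocks per SM */)
bigint_worker(const unsigned int *in, unsigned int *out, int n)
{
    /* Asking the compiler to respect these bounds caps per-thread register
       usage so more warps can run concurrently on each SM. */
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < n)
        out[tid] = in[tid];   /* placeholder body; real big-integer work goes here */
}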

Description

Technical field
[0001] The present invention relates to large-integer calculation technology, and in particular to a method and device for allocating storage resources when performing large-integer calculations in a graphics processing unit (GPU, Graphics Processing Unit).
Background technique
[0002] Multi-core parallel computing is an important way to improve processor performance, and GPUs containing massively parallel computing units have emerged for this reason. GPUs have developed into high-performance general-purpose processors with high parallelism, multi-threading, fast computation, and large memory bandwidth. The GPU architecture is divided into three layers: the first layer consists of several thread processing clusters (TPC, Thread Processing Cluster); the second layer consists of multiple streaming multiprocessors (SM, Streaming Multiprocessor); the third layer consists of the stream processors (SP, Stream Processor) that make up an SM, which can also be called thread processors...
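For readers unfamiliar with this hierarchy, the minimal CUDA sketch below (an illustration, not from the patent) shows how it maps onto the programming model: each thread block is scheduled onto a streaming multiprocessor (SM), and each thread within a block executes on one of its stream processors (SP). The launch shape is an arbitrary example.

#include <cstdio>

__global__ void hello_hierarchy(void)
{
    /* blockIdx selects the block (scheduled onto some SM); threadIdx selects
       the thread within that block (executed on an SP of that SM). */
    printf("block %d, thread %d\n", blockIdx.x, threadIdx.x);
}

int main(void)
{
    /* Hypothetical launch shape: 4 blocks of 64 threads each. */
    hello_hierarchy<<<4, 64>>>();
    cudaDeviceSynchronize();   /* wait for the kernel and flush device printf */
    return 0;
}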


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/50
Inventors: 荆继武, 潘无穷, 顾青, 向继, 赵原, 李淼, 谢超
Owner: DATA ASSURANCE & COMM SECURITY CENT CHINESE ACADEMY OF SCI