A GPU-accelerated batch processing method for multiplying full vectors by homogeneous sparse matrices

A sparse-matrix processing method, applied in processor architecture/configuration, complex mathematical operations, etc. It addresses the problem that multiplication of homogeneous sparse matrices by full vectors has not previously been batch-processed, so programs cannot fully exploit the advantages of GPUs; the achieved effects are improved parallelism and greatly reduced computing time for the solution.

Active Publication Date: 2019-01-29
SOUTHEAST UNIV
Cites: 7 · Cited by: 0

AI Technical Summary

Problems solved by technology

Scholars at home and abroad have begun to study GPU-accelerated iterative solution of sparse linear equations, but no dedicated research has been done on accelerating the key module of sparse matrix–full vector multiplication, nor on batch processing of homogeneous sparse matrix–full vector multiplications; as a result, programs cannot take full advantage of the GPU.

Method used



Examples


Embodiment Construction

[0034] As shown in Figure 3, the GPU-accelerated batch processing method for multiplying homogeneous sparse matrices by full vectors of the present invention performs a large number of homogeneous sparse matrix–full vector multiplications A_1·x_1 = b_1, ..., A_bs·x_bs = b_bs, where x_1~x_bs are the multiplied full vectors, b_1~b_bs are the result full vectors, and bs is the number of matrices to be batch-processed. The method includes:

[0035] (1) In the CPU, all matrices A_1~A_bs are stored in compressed sparse row (CSR) format. The matrices A_1~A_bs share the same row-offset array CSR_Row and column-index array CSR_Col; element CSR_Row[k] stores the total number of non-zero elements before the k-th row of the matrix, with k ranging from 1 to n+1. The numeric values of each matrix are stored in its own value array CSR_Val_1~CSR_Val_bs, the multiplied vectors are stored in arrays x_1~x_bs, and the result full vectors are stored in arrays b_1~b_bs...



Abstract

The invention discloses a GPU-accelerated method for batch processing of homogeneous sparse matrices multiplied by full vectors. The method comprises the following steps: (1) storing all matrices A<1>-A<bs> in compressed sparse row format in a CPU; (2) transmitting the data required by the GPU kernel function from the CPU to the GPU; (3) distributing the full-vector multiplication tasks of matrices A<1>-A<bs> to GPU threads and optimizing the memory access mode; and (4) executing the batched homogeneous sparse matrix–full vector kernel function spmv_batch in the GPU, calling the kernel function to compute the batch of sparse matrix–full vector products in parallel. In the disclosed method, the CPU is responsible for controlling the whole program flow and preparing data, while the GPU is responsible for the compute-intensive vector multiplications; the batch processing mode increases algorithm parallelism and memory-access efficiency, greatly reducing the computing time of batched sparse matrix–full vector multiplication.

Description

Technical field

[0001] The invention belongs to the field of high-performance computing applications in power systems, and in particular relates to a GPU-accelerated batch processing method for multiplying homogeneous sparse matrices by full vectors.

Background technique

[0002] Power flow calculation is the most widely used, most basic and most important electrical calculation in power systems. In the study of power system operation modes and planning schemes, power flow calculation is required to compare the feasibility, reliability and economy of operation modes or planned power supply schemes. At the same time, in order to monitor the operating status of the power system in real time, a large number of fast power flow calculations are also required. Therefore, offline power flow calculation is adopted in system planning and design and in arranging the system's operation mode; in real-time monitoring of the power system's operating status, online power flow calcul...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F17/16; G06T1/20
CPC: G06F17/16; G06T1/20
Inventors: 周赣, 孙立成, 秦成明, 张旭, 柏瑞, 冯燕钧, 傅萌
Owner: SOUTHEAST UNIV