
Big data appliance realizing method based on CPU-GPU heterogeneous cluster

A technology relating to heterogeneous clusters and their implementation, applied in the field of cloud computing, which addresses the problem of the low operation efficiency of massive data computing.

Active Publication Date: 2015-04-22
SHENZHEN INST OF ADVANCED TECH
View PDF · 3 Cites · 22 Cited by

AI Technical Summary

Problems solved by technology

[0005] In view of this, an embodiment of the present invention provides a method for implementing a big data all-in-one machine based on a CPU-GPU heterogeneous cluster, so as to run Hadoop on a CPU-GPU heterogeneous cluster and solve the problem of the low operation efficiency of massive data computation.



Examples


Embodiment 1

[0019] Figure 1 shows the implementation flow of the CPU-GPU heterogeneous cluster-based big data all-in-one machine realization method provided by Embodiment 1 of the present invention. The method is described in detail as follows:

[0020] In step S101, a computer cluster is set up, with each computer in the cluster serving as a node. The nodes comprise a Master node equipped with a CPU and Slave nodes equipped with both CPU and GPU processors. The Master node schedules and controls tasks according to a predetermined task scheduling policy, and the Slave nodes perform the Map and Reduce computing operations.
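For concreteness, the following is a minimal sketch, using the standard org.apache.hadoop.mapreduce (Java) API, of how a job could be submitted so that the Master node schedules Map tasks onto the Slave nodes. The class name ClusterJobDriver and the identity Mapper/Reducer placeholders are illustrative assumptions, not code from the patent; in the patent's scheme the placeholders would be replaced by the heterogeneous Map/Reduce classes.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ClusterJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "cpu-gpu-cluster-job");
        job.setJarByClass(ClusterJobDriver.class);

        // One Map task is started per input split; the Master dispatches
        // these tasks to the Slave nodes for execution.
        job.setMapperClass(Mapper.class);   // identity mapper, placeholder only
        job.setReducerClass(Reducer.class); // identity reducer, placeholder only
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```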

[0021] In the embodiment of the present invention, the nodes may communicate through a network connection; exemplarily, each node may communicate through an InfiniBand network connection. Each node has its own independent memory and disk. During disk access, each node can access both its own disk and disk...



Abstract

The invention belongs to the technical field of cloud computing and provides a method for realizing a big data appliance based on a CPU-GPU heterogeneous cluster. The method includes the following steps: a computer cluster is set up, comprising a Master node provided with a CPU and Slave nodes provided with both CPUs and GPUs; CUDA is installed on each Slave node; the MapReduce model provided by Hadoop is selected, a Map task is started for each task block, and the Map tasks are transmitted to the Slave nodes for computation; each Slave node divides the received Map tasks according to a corresponding proportion, dispatches them to its CPU or GPU to execute the Map and Reduce operations, and transmits the results to the Master node; the Master node receives the results fed back by all the Slave nodes and completes the processing of all tasks.
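As a rough illustration of the proportional split described in the abstract, the sketch below shows a Slave-side Hadoop Mapper that routes each record either to a CPU path or to a GPU path. The split ratio GPU_SHARE and the gpuProcess stub (standing in for a JNI/CUDA binding) are assumptions made purely for illustration; the patent does not specify them.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SplitRatioMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    // Assumed fraction of records handled by the GPU on this Slave node.
    private static final double GPU_SHARE = 0.7;
    private final List<String> cpuBatch = new ArrayList<>();
    private final List<String> gpuBatch = new ArrayList<>();

    @Override
    protected void map(LongWritable key, Text value, Context context) {
        // Route each record to the CPU or GPU batch according to the ratio.
        if (Math.random() < GPU_SHARE) {
            gpuBatch.add(value.toString());
        } else {
            cpuBatch.add(value.toString());
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // CPU path: process records directly in the JVM.
        for (String record : cpuBatch) {
            context.write(new Text(record), new LongWritable(1));
        }
        // GPU path: hypothetical native call (e.g. via JNI into a CUDA kernel).
        for (String record : gpuProcess(gpuBatch)) {
            context.write(new Text(record), new LongWritable(1));
        }
    }

    // Stand-in for a JNI/CUDA binding; here it simply returns the batch unchanged.
    private List<String> gpuProcess(List<String> batch) {
        return batch;
    }
}
```

In practice the ratio would be chosen by the Master's scheduling policy rather than hard-coded, but the structure above is enough to show where the CPU and GPU portions of a Map task diverge.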

Description

Technical field

[0001] The invention belongs to the technical field of cloud computing, and in particular relates to a method for implementing a big data all-in-one machine based on a CPU-GPU heterogeneous cluster.

Background technique

[0002] Hadoop is a distributed computing platform that users can easily build and use. MapReduce is the core component of Hadoop and provides two important operations: 1) the Map operation, which processes key-value pairs and generates intermediate results; 2) the Reduce operation, which reduces the values sharing the same key and produces the final result. Distributed computing programs are easy to write on the Hadoop platform using the Map and Reduce operations.

[0003] A graphics processing unit (Graphics Processing Unit, GPU) is a many-core processor equipped with a large number of computing units. Compared with a CPU, it offers higher computing throughput and higher memory bandwidth.

[0004] However, the existing Hadoop can ...
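To make the two operations in [0002] concrete, the following is a minimal, self-contained word-count style Mapper/Reducer pair using the standard Hadoop MapReduce API. It is only an illustrative example of the Map and Reduce operations described above, not code from the patent.

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountExample {

    public static class TokenMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Map operation: emit the intermediate pair (word, 1) for each word.
            for (String word : line.toString().split("\\s+")) {
                if (!word.isEmpty()) {
                    context.write(new Text(word), ONE);
                }
            }
        }
    }

    public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text word, Iterable<LongWritable> counts, Context context)
                throws IOException, InterruptedException {
            // Reduce operation: combine all values that share the same key.
            long sum = 0;
            for (LongWritable count : counts) {
                sum += count.get();
            }
            context.write(word, new LongWritable(sum));
        }
    }
}
```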


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F15/16
Inventors: 田盼, 喻之斌, 刘勇, 杨洋, 曾永刚, 贝振东, 须成忠
Owner: SHENZHEN INST OF ADVANCED TECH