Multiple-GPU based random forest training method
A training method and random forest technology, applied in the field of multi-GPU random forest training. The method addresses the problems of changing computing requirements, the inability to make full use of multi-GPU resources, and low random forest training efficiency, achieving the effect of improved training efficiency.
Examples
Embodiment 1
[0024] As shown in Figure 1, assume there are N samples, each sample having d features, and that the random forest RF contains M decision trees. Without loss of generality, the number of GPU units G in this example is less than N. Treating each sample's decision as a training task allows multiple GPU units to be used in parallel to the maximum extent.
[0025] A) Multiple GPUs are controlled to compute the first decision tree, with each GPU unit computing the decision for one sample;
[0026] B) As the tree depth increases, samples reach leaf nodes of the decision tree and their computation stops;
[0027] C) The GPU units at leaf nodes are released;
[0028] D) The second decision tree is started, and the released GPU units compute and train it according to steps A-C;
[0029] E) Other GPU units released by the first decision tree join the computation of the second decision tree;
[0030] F) Similarly, the GPU units released at the leaves of the second decision tree start the computation...
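Steps A-F describe a scheduler in which each GPU unit carries one sample down a tree and, on reaching a leaf, is immediately reassigned to the next tree. A minimal sketch of that scheduling policy follows; the function name, the time-step model, and the `leaf_depths` input (the depth at which each sample reaches a leaf, known here up front for illustration only) are assumptions made for this example and are not part of the patent.

```python
from collections import deque

def schedule_training(num_samples, num_gpus, num_trees, leaf_depths):
    """Simulate the per-sample GPU scheduling of steps A-F.

    leaf_depths[t][s] is the depth at which sample s reaches a leaf in
    tree t (a hypothetical input; in real training it emerges as the
    tree is grown). Returns the list of active (tree, sample) tasks at
    each time step (one step = one tree level).
    """
    # Pending (tree, sample) tasks: all of tree 0 first, then tree 1, ...
    pending = deque((t, s) for t in range(num_trees)
                    for s in range(num_samples))
    active = {}  # gpu_id -> [tree, sample, remaining_depth_to_leaf]
    free_gpus = list(range(num_gpus))
    timeline = []

    while pending or active:
        # Released GPU units pick up the next tree's sample tasks (steps D/E).
        while free_gpus and pending:
            gpu = free_gpus.pop()
            tree, sample = pending.popleft()
            active[gpu] = [tree, sample, leaf_depths[tree][sample]]
        timeline.append(sorted((t, s) for t, s, _ in active.values()))
        # Advance one tree level; tasks that hit a leaf free their GPU (steps B/C).
        for gpu in list(active):
            active[gpu][2] -= 1
            if active[gpu][2] <= 0:
                del active[gpu]
                free_gpus.append(gpu)
    return timeline
```

With 4 samples, 2 GPU units, and 2 trees, the timeline shows a GPU released by a shallow leaf of tree 0 joining tree 1 while tree 0 is still being computed, which is the overlap the patent exploits.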
Embodiment 2
[0033] As shown in Figure 2, similarly assume there are N samples, each sample having d features, and that the random forest RF contains M decision trees. In this example, the number of GPU units G is greater than N. To keep the GPUs fully loaded, the GPUs are partitioned into groups; without loss of generality, take two groups as an example and assume G/2 <= N.
[0034] A) Within each group, multiple GPUs are controlled to compute one decision tree, with each GPU unit computing the decision for one sample; the groups are controlled to run synchronously;
[0035] B) As the tree depth increases, the first group's decision tree reaches its leaves and computation stops;
[0036] C) The GPU units at the first group's leaf nodes are released;
[0037] D) Start the third d...
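The grouping in Embodiment 2 can be sketched as two small helpers: one partitions the G GPU units into groups of at most N units (so each GPU in a group can hold one sample), and one assigns trees to groups round-robin, so a group whose tree finishes moves on to the next unclaimed tree. Both function names and the round-robin assignment are illustrative assumptions; the patent's example fixes two groups with G/2 <= N.

```python
def make_gpu_groups(num_gpus, num_samples):
    """Partition GPU unit ids into groups of at most num_samples units
    each (Embodiment 2: G > N), so every GPU in a group can compute the
    decision for one sample."""
    gpu_ids = list(range(num_gpus))
    return [gpu_ids[start:start + num_samples]
            for start in range(0, num_gpus, num_samples)]

def assign_trees(groups, num_trees):
    """Round-robin tree assignment (an illustrative assumption): group g
    trains trees g, g + len(groups), ..., so the groups advance through
    the forest synchronously."""
    return {g: list(range(g, num_trees, len(groups)))
            for g in range(len(groups))}
```

For example, with G = 8 GPU units and N = 4 samples, the GPUs split into two groups of four, and with M = 5 trees the first group trains trees 0, 2, 4 while the second trains trees 1, 3.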