Method and apparatus for scheduling precoding system based on code book

A scheduling method and apparatus for a codebook-based precoding system, applicable to baseband system components, synchronization/start-stop systems, orthogonal multiplexing systems, and the like, addressing the problem that the PF scheduling algorithm cannot be applied to users working in space division multiple access (SDMA) mode.

Inactive Publication Date: 2008-12-17
NTT DOCOMO INC

AI Technical Summary

Problems solved by technology

However, for users working in space division multiple access (SDMA) mode, there is no way to use the PF scheduling algorithm to realize scheduling.



Examples


Example 1

[0130] In the first embodiment of the present invention, all users work in SDMA mode (that is, only users 1 to 12 in the cell), and the PF scheduling algorithm is used in steps 412, 413, and 414.

[0131] The method of the first embodiment of the present invention specifically includes the following steps:

[0132] In step A1, users with the same PMI and PVI are grouped together. The grouping result is shown in the user feedback information table for users working in SDMA mode: users 1, 2, and 3 form one group, corresponding to the first vector of the first matrix; users 4, 5, and 6 form one group, corresponding to the second vector of the first matrix; users 7, 8, and 9 form one group, corresponding to the first vector of the second matrix; and users 10, 11, and 12 form one group, corresponding to the second vector of the second matrix.
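As an informal illustration of step A1, the sketch below groups users by their reported (PMI, PVI) pair; the feedback values assigned to users 1 through 12 are assumptions chosen to reproduce the grouping described above, not data taken from the patent.

```python
from collections import defaultdict

# Hypothetical feedback table for the 12 SDMA users: each entry maps a user id
# to its reported (PMI, PVI) pair, chosen to reproduce the grouping in step A1.
feedback = {
    1: (1, 1),  2: (1, 1),  3: (1, 1),    # first matrix, first vector
    4: (1, 2),  5: (1, 2),  6: (1, 2),    # first matrix, second vector
    7: (2, 1),  8: (2, 1),  9: (2, 1),    # second matrix, first vector
    10: (2, 2), 11: (2, 2), 12: (2, 2),   # second matrix, second vector
}

def group_by_pmi_pvi(feedback):
    """Step A1: users reporting the same (PMI, PVI) pair form one group."""
    groups = defaultdict(list)
    for user, key in feedback.items():
        groups[key].append(user)
    return dict(groups)

print(group_by_pmi_pvi(feedback))
# {(1, 1): [1, 2, 3], (1, 2): [4, 5, 6], (2, 1): [7, 8, 9], (2, 2): [10, 11, 12]}
```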

[0133] In step A2, the PF algorithm is used to perform multi-user scheduling within each group, assuming that 2 users...
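A minimal sketch of the proportional fair (PF) selection used within each group follows; the PF metric (instantaneous rate divided by long-term average throughput) is the standard formulation of the algorithm, and the per-user rate numbers are purely hypothetical.

```python
def pf_metric(inst_rate, avg_rate, eps=1e-9):
    """Proportional fair metric: instantaneous rate over long-term average throughput."""
    return inst_rate / (avg_rate + eps)

def pf_select(group, inst_rate, avg_rate, num_selected=2):
    """Pick the users in `group` with the largest PF metrics."""
    ranked = sorted(group, key=lambda u: pf_metric(inst_rate[u], avg_rate[u]), reverse=True)
    return ranked[:num_selected]

# Hypothetical per-user rates for the group {1, 2, 3}.
inst = {1: 2.0, 2: 3.5, 3: 1.0}   # achievable rates in the current interval
avg  = {1: 1.0, 2: 4.0, 3: 0.5}   # long-term average throughputs
print(pf_select([1, 2, 3], inst, avg))   # -> [1, 3]
```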

Example 2

[0143] In the second embodiment of the present invention, all users work in SDMA mode (that is, only users 1 to 12), and the PF, MaxC/I, and PF scheduling algorithms are used in steps 412, 413, and 414, respectively.
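For contrast with the PF metric, the MaxC/I criterion used at step 413 in this embodiment simply picks the candidate with the highest instantaneous channel quality; a minimal sketch follows, with hypothetical CQI values.

```python
def max_ci_select(candidates, cqi):
    """MaxC/I selection: return the candidate with the largest instantaneous C/I."""
    return max(candidates, key=lambda u: cqi[u])

# Hypothetical channel quality indicators for the per-group candidates.
cqi = {2: 12.0, 5: 8.5, 7: 15.0, 11: 9.0}
print(max_ci_select([2, 5, 7, 11], cqi))   # -> 7
```

Unlike PF, MaxC/I maximizes instantaneous throughput without normalizing by each user's average rate, so it trades fairness for peak rate.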

[0144] The method of the second embodiment of the present invention specifically includes the following steps:

[0145] In step B1, users with the same PMI and PVI are divided into the same group. The grouping result is shown in the user feedback information table for users working in SDMA mode: users 1, 2, and 3 form one group, corresponding to the first vector of the first matrix; users 4, 5, and 6 form one group, corresponding to the second vector of the first matrix; users 7, 8, and 9 form one group, corresponding to the first vector of the second matrix; and users 10, 11, and 12 form one group, corresponding to the second vector of the second matrix.

[0146] In step B2, the PF algorithm is used to perform multi-user scheduling within each group, assuming that 2 users...

Example 3

[0151] In the third embodiment of the present invention, all users work in SDMA mode (that is, only users 1 to 12), and the PF, PF, and MaxC/I scheduling algorithms are used in steps 412, 413, and 414, respectively.

[0152] The method of the third embodiment of the present invention specifically includes the following steps:

[0153] In step C1, users with the same PMI and PVI are grouped together. The grouping result is shown in the user feedback information table for users working in SDMA mode: users 1, 2, and 3 form one group, corresponding to the first vector of the first matrix; users 4, 5, and 6 form one group, corresponding to the second vector of the first matrix; users 7, 8, and 9 form one group, corresponding to the first vector of the second matrix; and users 10, 11, and 12 form one group, corresponding to the second vector of the second matrix.

[0154] In step C2, the PF algorithm is used to perform multi-user scheduling within each group, assuming that ...



Abstract

The invention discloses a scheduling method and apparatus for a codebook-based precoding system, comprising: step 21, receiving user feedback information that the user terminals derive from the codebook; step 22, dividing users operating in the same working mode into the same user group; step 23, performing independent scheduling on the users according to the user feedback information using a scheduling algorithm, wherein at least one user or user group is selected from each working-mode group; and step 24, selecting, from among the users or user groups scheduled in step 23, the one with the largest scheduling metric as the final scheduling result. The invention solves the problem of scheduling users that work in a plurality of modes while taking overall system performance into account.
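Taken together, steps 22 through 24 amount to grouping users by working mode, scheduling each group independently, and then keeping the candidate with the largest scheduling metric. The sketch below illustrates that flow under assumed data structures; the `mode`, `cqi`, `group_scheduler`, and `metric` names are hypothetical stand-ins, not identifiers from the patent.

```python
from collections import defaultdict

def schedule(users, mode, group_scheduler, metric):
    """Illustrative flow of steps 22-24."""
    # Step 22: users in the same working mode form one group.
    groups = defaultdict(list)
    for u in users:
        groups[mode[u]].append(u)
    # Step 23: independent scheduling inside each group yields one candidate per group.
    candidates = [group_scheduler(g) for g in groups.values()]
    # Step 24: the candidate with the largest scheduling metric is the final result.
    return max(candidates, key=metric)

# Hypothetical example: two working modes, best-CQI selection within each group.
mode = {1: "SDMA", 2: "SDMA", 3: "SU-MIMO", 4: "SU-MIMO"}
cqi  = {1: 10.0, 2: 13.0, 3: 11.5, 4: 9.0}
winner = schedule([1, 2, 3, 4], mode,
                  group_scheduler=lambda g: max(g, key=lambda u: cqi[u]),
                  metric=lambda u: cqi[u])
print(winner)   # -> 2
```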

Description

Technical Field

[0001] The present invention relates to multi-user scheduling in a codebook-based precoding system, and in particular to a scheduling method and a scheduling apparatus applied to the downlink of a multi-user multiple-input multiple-output (MU-MIMO) system, in which multi-user scheduling is performed using codebook-based precoding.

Background Technique

[0002] Precoding is an effective technique for improving system performance. By preprocessing the signal at the transmitting end, it can reduce the complexity of the receiving end and improve system performance.

[0003] In precoding, different terminals feed back channel information to the base station in real time, and the base station computes the optimal precoding from this feedback. In a linear precoding system, different data streams are linearly weighted at the transmitting end, and these data streams may be data ...
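Paragraph [0003] describes the transmitter linearly weighting the data streams with a precoding matrix chosen from a codebook. The sketch below is a minimal numerical illustration of that idea; the codebook construction, matrix dimensions, and symbol values are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook of unit-norm precoding matrices (4 Tx antennas, 2 streams each),
# built by orthonormalizing random complex matrices.
codebook = [np.linalg.qr(rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2)))[0]
            for _ in range(4)]

def precode(streams, pmi):
    """Linear precoding: apply the codebook matrix selected by the fed-back PMI
    to the stacked data streams (linear weighting at the transmitter)."""
    W = codebook[pmi]      # precoding matrix chosen from the codebook
    return W @ streams     # transmitted vector across the 4 antennas

streams = np.array([1 + 1j, -1 + 0j])   # two hypothetical modulation symbols
x = precode(streams, pmi=2)
print(x.shape)                          # -> (4,)
```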


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04B7/06; H04L1/06; H04L25/03; H04L25/49; H04J11/00; H04J99/00; H04W16/28; H04W72/04; H04W72/12
Inventors: 刘竞秀, 佘小明, 陈岚
Owner: NTT DOCOMO INC