User plane data back-transmission method for realizing cross-base-station handover

A technology for user plane data back-transmission in cross-base-station handover, applied in digital transmission systems, data switching networks, transmission systems, etc. It addresses the problems of a long data processing path and increased handover processing delay, and has the effects of shortening the processing latency, reducing repeated protocol processing, and reducing data copying between layers.

Active Publication Date: 2010-06-23
ZTE CORP
Cites: 0 | Cited by: 9

AI-Extracted Technical Summary

Problems solved by technology

[0011] The problem with this processing is that when data is back-transmitted it must undergo fallback and reverse protocol analysis, which makes the data processing path too long...

Method used

In the embodiment of the present invention, as shown in Figure 4B, by establishing the mapping relationship of messages between layers, the data back-transmission operation during handover is moved up to the GTPU layer of the TNL; through the linkage of PDCP and GTPU, it effecti...

Abstract

The present invention discloses a user plane data back-transmission method for realizing cross-base-station handover, which comprises the following steps: the user plane receives data; the upper layer stores the protocol data units (PDUs) of the messages and notifies the lower layer to perform protocol processing on them; after a message is sent successfully, the upper layer deletes the PDU of the corresponding message according to the trigger from the lower layer; during handover, the upper layer of the source side base station sends the stored PDUs of the messages that need to be back-transmitted to the target base station. Thus, when data needs to be back-transmitted, the data is sent directly from the upper layer to the target base station through the linkage of the upper and lower layers, so the data processing path is greatly shortened and the processing delay in cross-base-station handover is further reduced. On the premise of complying with the existing protocol, the back-transmission of handover data is moved up to the GTPU layer of the TNL. This not only realizes lossless transmission of the handover data, but also effectively reduces the data back-transmission delay, largely avoids repeated protocol processing and inter-layer data copying, and effectively shortens the back-transmission delay of user plane data.

Application Domain

Wireless network protocols, Data switching networks

Technology Topic

Time delays, Data needs +5

Examples

  • Experimental program(1)

Example Embodiment

[0030] In the present invention, when the user plane receives data, the upper layer stores the PDU of the message and informs the lower layer to perform protocol processing on the message; after the message is sent successfully, the upper layer deletes the PDU of the corresponding message according to the trigger from the lower layer; during handover, the upper layer of the source side base station sends the stored PDUs of the messages that need to be back-transmitted to the target side base station.
[0031] Figure 1 is the flow chart of user plane data back-transmission during cross-base-station handover in the present invention. As shown in Figure 1, the implementation process of user plane data back-transmission for cross-base-station handover includes the following steps:
[0032] Step 101: When the user plane receives data, a mapping relationship among the GTPU layer, the PDCP layer and the RLC layer is established layer by layer. Base stations from different manufacturers may use different specific methods for establishing the mapping relationship.
[0033] Establishing the mapping relationship between messages layer by layer may proceed as follows: the upper-layer protocol entity stores the PDU of the message and informs the lower layer to process the message, attaching the position index of the message in its buffer pool to the message information; after the lower-layer protocol entity receives the message, it records the position index and associates it with the PDU processed at its own layer. In this implementation, each protocol layer can establish a mapping relationship between messages, and a copy of the lower layer's SDU is kept in the buffer pool of the upper-layer entity, ready for back-transmission during handover at any time.
[0034] Alternatively, the mapping relationship between messages can be established layer by layer as follows: when the lower layer processes a message and an inter-layer mapping needs to be established, it obtains the corresponding upper-layer message information via the position pointer by calling the inter-layer interface; after the lower-layer transmission is completed, the now-useless lower-layer PDU is deleted and the inter-layer interface is called to trigger the message deletion operation of the upper-layer protocol entity, which deletes the corresponding message. In this implementation, while the mapping relationship is maintained, the message deletion operations of the protocol entities of each layer are kept synchronized, ensuring that copies which no longer need to be back-transmitted are not retained.
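The following minimal Python sketch illustrates the kind of inter-layer linkage described in paragraphs [0033] and [0034]: the upper-layer entity keeps its PDU in a buffer pool and passes the position index down with the message, and the lower layer calls back through an inter-layer interface after a successful transmission so that the upper-layer copy is deleted in step. All class and method names (BufferPool, UpperLayer, LowerLayer, on_send_success and so on) are illustrative assumptions, not names taken from the patent or from any real protocol stack.

```python
# Illustrative sketch only; the names below are assumptions, not the patent's API.

class BufferPool:
    """Holds a layer's PDUs and returns a position index for each stored copy."""
    def __init__(self):
        self._slots = {}
        self._next_index = 0

    def store(self, pdu):
        index = self._next_index
        self._slots[index] = pdu
        self._next_index += 1
        return index

    def fetch(self, index):
        return self._slots[index]

    def delete(self, index):
        self._slots.pop(index, None)


class UpperLayer:                        # plays the role of, e.g., the GTPU entity
    def __init__(self, lower):
        self.pool = BufferPool()
        self.lower = lower
        lower.upper = self               # wire the inter-layer interface upward

    def send(self, pdu):
        index = self.pool.store(pdu)     # keep a copy for possible back-transmission
        self.lower.process(pdu, index)   # attach the buffer-pool index to the message

    def on_lower_sent(self, index):
        self.pool.delete(index)          # [0034]: drop the copy once it is useless

    def backhaul(self, indices):
        return [self.pool.fetch(i) for i in indices]   # copies ready for handover


class LowerLayer:                        # plays the role of, e.g., the PDCP entity
    def __init__(self):
        self.upper = None
        self._index_of = {}              # own PDU identifier -> upper-layer buffer index

    def process(self, upper_pdu, upper_index):
        own_id = upper_index             # stand-in for this layer's own PDU identifier
        self._index_of[own_id] = upper_index   # [0033]: record the position index
        # ... this layer's protocol processing and hand-down would happen here ...

    def on_send_success(self, own_id):
        upper_index = self._index_of.pop(own_id)
        self.upper.on_lower_sent(upper_index)  # synchronize the deletion upward
```

In this toy model, a call such as lower.on_send_success(...) after a confirmed transmission plays the role of the inter-layer deletion trigger of paragraph [0034], while the copies still held in UpperLayer.pool are exactly the candidates for back-transmission at handover.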
[0035] Step 102: During the handover, the GTPU layer sends the GTPU PDUs that need to be back-transmitted to the target base station according to the mapping relationship.
[0036] Step 103: The GTPU layer sends the original data from the core network directly to the target side base station. Since the original data is sent at the GTPU layer, it is a GTPU PDU.
[0037] Step 104: The target side base station receives the GTPU PDU from the GTPU layer of the source side base station.
[0038] Figure 2 is the processing flow chart of the source side base station before handover in the present invention. As shown in Figure 2, the specific processing of the source side base station before handover includes the following steps:
[0039] Step 201: The GTPU layer stores the GTPU PDUs of the messages at this layer and notifies the lower PDCP layer to process the messages.
[0040] Step 202: The PDCP layer reads the GTPU PDU from the GTPU layer and records the index value of the GTPU PDU in the GTPU layer buffer pool; it converts the GTPU PDU into a PDCP PDU and stores it, then notifies the lower RLC layer to process the message.
[0041] Step 203: The RLC layer assembles the PDCP PDUs into RLC PDUs, records the boundary range of the PDCP PDUs according to the boundary report from the PDCP layer so that the specific position of each RLC PDU within the PDCP PDUs is known, and then sends the RLC PDUs to the lower layer.
[0042] Steps 204 to 206: Once the transmission is confirmed successful, the RLC layer triggers the PDCP layer to delete the corresponding PDCP PDUs; the PDCP layer deletes the stored, successfully transmitted PDCP PDUs according to the trigger from the RLC layer and the mapping relationship, and in turn triggers the GTPU layer to delete the corresponding GTPU PDUs; the GTPU layer deletes the stored, successfully sent GTPU PDUs according to the trigger from the PDCP layer and the mapping relationship.
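A compact sketch of the flow in steps 201 to 206, again with purely hypothetical names and data structures: each layer keeps its own copy plus a reference to the copy one layer up, so a positive RLC acknowledgement can cascade the deletions from RLC through PDCP up to GTPU.

```python
# Hypothetical sketch of steps 201-206; pool names and PDU formats are assumptions.

gtpu_pool = {}          # gtpu_idx -> stored GTPU PDU
pdcp_pool = {}          # pdcp_sn  -> (PDCP PDU, gtpu_idx)
rlc_queue = {}          # rlc_sn   -> list of pdcp_sn carried in that RLC PDU

def gtpu_send(gtpu_idx, gtpu_pdu):
    gtpu_pool[gtpu_idx] = gtpu_pdu              # step 201: store, then notify PDCP
    pdcp_send(gtpu_idx, gtpu_pdu)

def pdcp_send(gtpu_idx, gtpu_pdu):
    pdcp_sn = len(pdcp_pool)                    # stand-in for the real PDCP sequence number
    pdcp_pdu = b"PDCP|" + gtpu_pdu              # stand-in for header compression + ciphering
    pdcp_pool[pdcp_sn] = (pdcp_pdu, gtpu_idx)   # step 202: record the GTPU buffer index
    rlc_send(pdcp_sn, pdcp_pdu)

def rlc_send(pdcp_sn, pdcp_pdu):
    rlc_sn = len(rlc_queue)
    rlc_queue[rlc_sn] = [pdcp_sn]               # step 203: note which PDCP PDUs it carries
    # ... segmentation/concatenation and hand-down to MAC would happen here ...

def rlc_ack(rlc_sn):
    # Steps 204-206: an acknowledged RLC PDU releases the PDCP copy,
    # which in turn releases the GTPU copy via the recorded index.
    for pdcp_sn in rlc_queue.pop(rlc_sn):
        _pdcp_pdu, gtpu_idx = pdcp_pool.pop(pdcp_sn)
        gtpu_pool.pop(gtpu_idx, None)

# Example: two downlink PDUs; the first RLC PDU is acknowledged, releasing the
# matching PDCP and GTPU copies, while the second stays buffered for handover.
gtpu_send(0, b"payload-0")
gtpu_send(1, b"payload-1")
rlc_ack(0)
assert 0 not in gtpu_pool and 1 in gtpu_pool
```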
[0043] Figure 3 is the processing flow chart of the source side base station during handover in the present invention. As shown in Figure 3, the specific processing of the source side base station during handover includes the following steps:
[0044] Step 301: When handover occurs, the PDCP layer triggers the back-transmission action of the GTPU layer and, according to the mapping relationship, reports the index values of the GTPU PDUs to be back-transmitted together with the corresponding PDCP sequence numbers.
[0045] Step 302: According to the trigger from the PDCP layer, the GTPU layer retrieves the GTPU PDUs stored at this layer, encapsulates the GTPU PDUs corresponding to the reported index values as back-transmission data and sends them to the target side base station through the X2 interface tunnel, carrying the corresponding PDCP sequence numbers in the GTPU extension header so that they are provided to the target base station.
[0046] Step 303: During the handover, the GTPU layer also sends the GTPU PDUs that would otherwise have been handed to the PDCP layer for processing to the target side base station through the X2 interface tunnel.
[0047] Step 304: The GTPU layer of the source side base station sends the original data arriving from the core network over the S1 interface directly to the target side base station through the X2 interface tunnel of the GTPU layer.
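The back-transmission itself (steps 301 to 304) can then be sketched as below. The pools mirror those in the previous sketch and are pre-filled with sample data; x2_tunnel_send(), the two-byte extension-header encoding and all function names are assumptions made purely for illustration, not the actual GTPU extension header format defined by the transport network layer specifications.

```python
# Hypothetical sketch of steps 301-304; names, data and formats are not the patent's.

gtpu_pool = {0: b"GTPU-PDU-0", 1: b"GTPU-PDU-1"}             # gtpu_idx -> stored GTPU PDU
pdcp_pool = {7: (b"PDCP-PDU-7", 0), 8: (b"PDCP-PDU-8", 1)}   # pdcp_sn -> (PDCP PDU, gtpu_idx)

def pdcp_trigger_backhaul():
    """Step 301: report (gtpu_idx, pdcp_sn) for every message still awaiting delivery."""
    return [(gtpu_idx, pdcp_sn) for pdcp_sn, (_pdu, gtpu_idx) in pdcp_pool.items()]

def gtpu_backhaul(report, x2_tunnel_send):
    """Step 302: fetch each stored GTPU PDU by its index and forward it over the X2
    tunnel, carrying the PDCP sequence number so the target eNB can resume in order."""
    for gtpu_idx, pdcp_sn in report:
        pdu = gtpu_pool[gtpu_idx]
        extension_header = pdcp_sn.to_bytes(2, "big")   # stand-in for the GTPU extension header
        x2_tunnel_send(extension_header + pdu)

def gtpu_forward_during_handover(s1_pdu, x2_tunnel_send):
    """Steps 303-304: PDUs arriving from the S1 interface during handover are not handed
    down to PDCP at all; they go straight to the target base station over the X2 tunnel."""
    x2_tunnel_send(s1_pdu)

# Example: print what would be sent over the X2 tunnel.
if __name__ == "__main__":
    gtpu_backhaul(pdcp_trigger_backhaul(), x2_tunnel_send=print)
    gtpu_forward_during_handover(b"fresh-S1-PDU", x2_tunnel_send=print)
```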
[0048] Comparing the existing scheme in Figure 4A with the implementation of the present invention in Figure 4B fully highlights the superiority of the present invention.
[0049] In the existing technology, as shown in Figure 4A, obtaining the back-transmission messages at the GTPU layer requires reverse processing through each protocol layer, so the data processing path is too long. Taking a message for which the RLC layer has not received the UE's reception acknowledgement as an example, it must pass through the PDCP layer to have its PDCP header removed, be deciphered and have its header decompressed before it can be handed to the GTPU layer for back-transmission. The ciphering/deciphering and header compression algorithms of the user plane PDCP layer are very time-consuming; empirical data shows that introducing algorithms such as header compression causes at least a 30% performance degradation on the user plane. In addition, message copying between protocol layers is unavoidable. Repeated protocol processing and inter-layer data copying therefore greatly increase the handover processing delay.
[0050] In the embodiment of the present invention, as shown in Figure 4B, by establishing the mapping relationship of messages between layers, the data back-transmission operation during handover is moved up to the GTPU layer of the TNL. Through the linkage of PDCP and GTPU, the processing path of back-transmitted data is effectively shortened, inter-layer data copying and reverse protocol processing are avoided, the data back-transmission delay during cross-base-station handover is effectively shortened, the handover process is accelerated, and the user experience is enhanced.
[0051] The above are only the preferred embodiments of the present invention, and are not used to limit the protection scope of the present invention.
