
Hardware abstraction layer message forwarding method based on cache dynamic allocation

A hardware abstraction layer and message forwarding technology, applied to multi-program devices, inter-program communication, and program control design. It addresses problems such as increased workload, high coupling, and error-prone adjustment, and achieves a higher degree of generalization, reduced coupling, and simpler adjustment.

Active Publication Date: 2022-03-18
NAT UNIV OF DEFENSE TECH

AI Technical Summary

Problems solved by technology

For future waveform applications, the deployment of waveform components and the communication relationships between them cannot be predicted in advance. It is therefore impossible to determine which processors will act as receiving processors (and to reserve DMA buffer space for them), and equally impossible to determine which processors will act as transmitting processors.
Consequently, under the current method, whenever a new waveform application is deployed to the platform, the hardware abstraction layer developer must re-allocate the DMA buffer space according to the processors on which the waveform components are deployed and the communication relationships between components, and then recompile the hardware abstraction layer software of every processor involved. This not only increases the workload but also destabilizes the hardware abstraction layer software version. With the rapid development of software radio and the growing variety of waveform applications, a more generalized hardware abstraction layer is desired, one that can carry more types of waveform applications without modification of its software;
[0007] 2) The difficulty of adjusting the DMA cache space increases. Once a processor on the hardware platform needs to move its DMA cache space within memory (for example, when a new waveform application needs the memory currently occupied by the DMA cache), the developer must find every processor on the hardware platform that sends SRIO data to that DMA cache and modify the corresponding DMA cache start address inside its hardware abstraction layer software. This is a cumbersome and error-prone process; any mistake adds considerable workload to subsequent waveform component debugging and troubleshooting;
[0008] 3) The scalability of the hardware platform is weakened. As chip technology develops, the hardware platform must be continuously upgraded; a scientifically and reasonably designed platform should be open and support the integration of hardware modules developed by third parties. A processor newly integrated into the hardware platform must communicate with the original processors, which requires the hardware abstraction layer developers of the original processors and of the new processor to jointly negotiate the allocation of DMA cache space according to the layout of the waveform components and the data interactions between components, and then to compile the negotiated DMA cache space addresses into the hardware abstraction layer software. This expansion process involves many developers and a high degree of coupling between processors ("pull one hair and the whole body moves"), which greatly increases the difficulty and workload of hardware platform expansion.



Examples


Embodiment Construction

[0030] As shown in figure 2, the hardware abstraction layer message forwarding method based on cache dynamic allocation of the present invention comprises the following steps:

[0031] (10) Initialize the LD-PD table: the source waveform component registers the mapping relationship between the LD and the PD of the target waveform component in the LD-PD table in the hardware abstraction layer. The source waveform component is the waveform component that calls the hardware abstraction layer interface to send data; the target waveform component is the waveform component that receives the data sent by the source waveform component, and the two run on different processors. The LD is the logical address of a waveform component; the PD is the SRIO port address of the processor on which the waveform component runs;

[0032] The step (10) of initializing the LD-PD table comprises:

[0033] (11) LD value acquisition: the source waveform compon...
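The LD-PD table of step (10) can be pictured as a small registry mapping a component's logical address (LD) to the SRIO port address (PD) of its host processor. The following is a minimal illustrative sketch in C; the table layout, capacity, and the names `ld_pd_register` / `ld_pd_lookup` are assumptions for illustration, not taken from the patent.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical LD-PD table: maps a waveform component's logical
 * address (LD) to the SRIO port address (PD) of the processor on
 * which that component runs.  Sizes and names are illustrative. */

#define LD_PD_MAX 64

typedef struct {
    uint16_t ld;  /* logical address of the target waveform component */
    uint16_t pd;  /* SRIO port address of its host processor          */
} ld_pd_entry;

static ld_pd_entry ld_pd_table[LD_PD_MAX];
static int ld_pd_count = 0;

/* Register (or update) an LD -> PD mapping; returns 0 on success,
 * -1 when the table is full. */
int ld_pd_register(uint16_t ld, uint16_t pd) {
    for (int i = 0; i < ld_pd_count; i++) {
        if (ld_pd_table[i].ld == ld) { ld_pd_table[i].pd = pd; return 0; }
    }
    if (ld_pd_count >= LD_PD_MAX) return -1;
    ld_pd_table[ld_pd_count].ld = ld;
    ld_pd_table[ld_pd_count].pd = pd;
    ld_pd_count++;
    return 0;
}

/* Look up the PD for a given LD; returns 0 and fills *pd_out on a
 * hit, -1 when the LD has not been registered. */
int ld_pd_lookup(uint16_t ld, uint16_t *pd_out) {
    for (int i = 0; i < ld_pd_count; i++) {
        if (ld_pd_table[i].ld == ld) { *pd_out = ld_pd_table[i].pd; return 0; }
    }
    return -1;
}
```

Because the mapping is registered at run time by the source component, no DMA buffer address needs to be compiled into the hardware abstraction layer software in advance, which is the decoupling effect the patent claims.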



Abstract

The invention discloses a hardware abstraction layer message forwarding method based on cache dynamic allocation, which not only avoids hardware abstraction layer message overwriting under SRIO bus transmission conditions, but also improves the degree of generalization of the hardware abstraction layer and enhances the scalability of the hardware platform. The method comprises the following steps: (10) initializing the LD-PD table, (20) initializing the PD table, (30) sending the MHAL message, (40) retrieving the LD-PD table, (50) retrieving the PD table, (60) forwarding the MHAL message.
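Steps (30) through (60) amount to a two-stage lookup on the forwarding path: the message's target LD is resolved to a PD, and the PD is then resolved to an outgoing SRIO route. The sketch below illustrates that chain under stated assumptions: the contents of the PD table are not detailed in this excerpt, so the PD-to-route mapping, the table layouts, and the name `mhal_resolve_route` are all hypothetical.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the forwarding path, steps (40)-(60):
 * LD -> PD via the LD-PD table, then PD -> outgoing route via the
 * PD table.  All table contents here are illustrative assumptions. */

typedef struct { uint16_t ld; uint16_t pd; } ld_pd_pair;
typedef struct { uint16_t pd; int route;  } pd_route_pair;

static const ld_pd_pair    ld_pd[]  = { {0x0010, 0x02}, {0x0011, 0x03} };
static const pd_route_pair pd_tab[] = { {0x02, 7}, {0x03, 9} };

/* Resolve the outgoing route for a message addressed to `ld`.
 * Returns the route id, or -1 if either lookup fails. */
int mhal_resolve_route(uint16_t ld) {
    int pd = -1;
    for (unsigned i = 0; i < sizeof ld_pd / sizeof ld_pd[0]; i++)
        if (ld_pd[i].ld == ld) pd = ld_pd[i].pd;        /* step (40) */
    if (pd < 0) return -1;                              /* unknown LD */
    for (unsigned i = 0; i < sizeof pd_tab / sizeof pd_tab[0]; i++)
        if (pd_tab[i].pd == (uint16_t)pd)
            return pd_tab[i].route;                     /* steps (50)-(60) */
    return -1;                                          /* unknown PD */
}
```

Keeping both lookups inside the hardware abstraction layer means a sending component only ever names a logical address, so re-deploying a component onto a different processor changes table entries rather than compiled-in buffer addresses.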

Description

Technical field

[0001] The invention belongs to the technical field of wireless communication, and in particular to a hardware abstraction layer message forwarding method based on cache dynamic allocation with a high degree of generalization and strong scalability.

Background technique

[0002] Software Communication Architecture (SCA) has been widely adopted as an important architecture in the field of software radio. To improve the portability of waveform components, the SCA Modem Hardware Abstract Layer (MHAL, hereinafter "hardware abstraction layer") standard was proposed. The hardware abstraction layer is middle-layer software that shields the communication details of the underlying hardware and provides standard interfaces to the upper layer; a waveform component realizes data interaction with other components by calling the hardware abstraction layer standard interface. Two addresses are define...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G06F9/54, G06F13/28
CPC: G06F9/546, G06F13/28
Inventor: 王彦刚, 范建华, 俞石云, 杨霖, 王康, 赵框
Owner NAT UNIV OF DEFENSE TECH