
30 results about "Quick path interconnect" patented technology

Memory extending system and memory extending method

Active · CN103488436A · Benefits: large memory capacity; avoids the problem of redundant processing power · Tags: input/output to record carriers, computer architecture, quick path interconnect
An embodiment of the invention discloses a memory extending system and a memory extending method. The system comprises processors, extended memories, extension chips and multiple processor installation positions, and a memory installation position is arranged at each processor installation position. The processor installation positions are connected to one another through QPI (quick path interconnect) interfaces; at least one processor installation position is provided with a processor, and at least one of the remaining installation positions serves as an extension installation position. The extension chips are installed in the extension installation positions, and the extended memories are installed in the memory installation positions connected with the extension chips. The advantage of the memory extending system is that extension chips mounted in the other processor installation positions take the place of processors, so the existing processors can access, through the extension chips, the extended memories carried by those chips. Memory capacity is thus increased without adding processing capacity, which solves the prior-art problem of processing-capacity redundancy caused by extending memory through the addition of processors.
Owner: XFUSION DIGITAL TECH CO LTD
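
The arrangement above lends itself to a small model: a socket position holds either a CPU or an extension chip, and a CPU's usable capacity is its own memory plus the extended memory behind extension chips it can reach over QPI. The following Python sketch is only illustrative; the class, function, and socket names are assumptions, not part of the patent.

# Minimal sketch of the memory-extension idea: processor installation
# positions sit on a QPI mesh; a position holds either a CPU or an
# extension chip that contributes only memory. All names are illustrative.
from dataclasses import dataclass

@dataclass
class InstallationPosition:
    name: str
    role: str            # "cpu" or "extension_chip"
    local_memory_gb: int

def total_accessible_memory(positions, qpi_links):
    """Memory a CPU can reach: its own DIMMs plus extended memory behind
    extension chips reachable over direct QPI links."""
    by_name = {p.name: p for p in positions}
    reachable = {name: set() for name in by_name}
    for a, b in qpi_links:                  # QPI links are point-to-point and symmetric
        reachable[a].add(b)
        reachable[b].add(a)
    totals = {}
    for p in positions:
        if p.role != "cpu":
            continue
        extended = sum(by_name[n].local_memory_gb
                       for n in reachable[p.name]
                       if by_name[n].role == "extension_chip")
        totals[p.name] = p.local_memory_gb + extended
    return totals

# Example: socket0 holds a CPU, socket1 holds an extension chip instead of a CPU.
positions = [InstallationPosition("socket0", "cpu", 256),
             InstallationPosition("socket1", "extension_chip", 512)]
print(total_accessible_memory(positions, qpi_links=[("socket0", "socket1")]))
# {'socket0': 768} -- capacity grows without adding processing power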

Node partition dividing method, device and server

The embodiment of the invention discloses a node partition dividing method, device and server, relating to the field of electronic information technology. With the invention, a partition scheme with a minimum QPI (quick path interconnect) hop count can be derived automatically from the topological structure of the system, so the influence of subjective factors inherent in manual partition division is avoided, and the loss of operation speed, and hence of system performance, caused by improper partitioning is reduced. The method comprises the following steps: obtaining the topological structure of the system and the number of nodes participating in the partition, wherein the system comprises at least three nodes and each node comprises at least two CPUs (central processing units); determining connection information according to the topological structure of the system, wherein the connection information includes the connection relationship between each CPU and the other CPUs in the system; and determining the partition scheme according to the number of participating nodes and the connection information, wherein the number of nodes in the partition scheme equals the number of nodes participating in the partition. The method, device and server disclosed by the invention are applicable to server partitioning.
Owner: HUAWEI TECH CO LTD
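
The abstract describes an optimization over the QPI topology: enumerate candidate node subsets of the requested size and keep the one whose CPUs are closest to each other in QPI hops. The sketch below is a minimal, hedged interpretation of that idea; the graph representation, the best_partition helper, and the example topology are assumptions rather than the patented procedure.

# Hedged sketch of the partition-selection idea: given CPU-to-CPU QPI
# connection information, enumerate candidate node subsets of the requested
# size and keep the one with the smallest total QPI hop count between its CPUs.
from itertools import combinations
from collections import deque

def hops(graph, src, dst):
    """Shortest QPI hop count between two CPUs (BFS over the connection graph)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        cpu, d = queue.popleft()
        if cpu == dst:
            return d
        for nxt in graph.get(cpu, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def best_partition(node_cpus, graph, nodes_in_partition):
    """Pick the subset of nodes whose CPUs are closest to each other on QPI."""
    best, best_cost = None, float("inf")
    for subset in combinations(node_cpus, nodes_in_partition):
        cpus = [c for n in subset for c in node_cpus[n]]
        cost = sum(hops(graph, a, b) for a, b in combinations(cpus, 2))
        if cost < best_cost:
            best, best_cost = subset, cost
    return best, best_cost

# Example: three nodes with two CPUs each; pick the 2-node partition with fewest hops.
node_cpus = {"n0": ["c0", "c1"], "n1": ["c2", "c3"], "n2": ["c4", "c5"]}
graph = {"c0": ["c1", "c2"], "c1": ["c0"], "c2": ["c0", "c3", "c4"],
         "c3": ["c2"], "c4": ["c2", "c5"], "c5": ["c4"]}
print(best_partition(node_cpus, graph, 2))   # (('n0', 'n1'), 10)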

Heterogeneous hybrid memory server architecture

The invention discloses a heterogeneous hybrid memory server architecture comprising a CPU (central processing unit) computation board and an NVM (non-volatile memory) board. CPU chips are arranged on the CPU computation board and are connected with DRAM (dynamic random access memory) chips; master FPGA (field programmable gate array) chips are arranged on the NVM board and are connected with DRAM chips and NVM memory modules; and the CPU chips are connected with the master FPGA chips by a QPI (quick path interconnect) bus. The master FPGA chips maintain global cache consistency for the non-volatile memories, so the global memory can be shared. The architecture has the advantages that low-power, high-capacity NVM serves as far-end memory and low-capacity, high-speed DRAM serves as near-end memory, so a high-capacity, low-power heterogeneous hybrid memory system can be constructed; the heterogeneous memories are addressed in a unified manner, which solves the coupling and speed-matching problems of heterogeneous memory systems and maintains global data consistency; and the architecture offers high memory capacity, high CPU access efficiency and low power consumption.
Owner: ZHENGZHOU YUNHAI INFORMATION TECH CO LTD
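
One way to picture the unified addressing described above is a single flat address range split between near-end DRAM and far-end NVM behind the master FPGA. The Python sketch below is a simplified model under that assumption; the sizes, the route_access function, and the return format are illustrative only.

# Illustrative sketch of the unified addressing idea: DRAM on the CPU side is
# the near-end memory, NVM behind the master FPGA is the far-end memory, and
# one flat physical address space covers both. Sizes and names are made up.
DRAM_SIZE = 64 << 30          # 64 GiB near-end DRAM on the CPU computation board
NVM_SIZE  = 1 << 40           # 1 TiB far-end NVM behind the master FPGA

def route_access(phys_addr):
    """Map a unified physical address onto the device that backs it."""
    if phys_addr < DRAM_SIZE:
        return ("DRAM", phys_addr)                      # served locally, lowest latency
    if phys_addr < DRAM_SIZE + NVM_SIZE:
        return ("NVM-via-FPGA", phys_addr - DRAM_SIZE)  # forwarded over QPI to the FPGA
    raise ValueError("address outside the unified memory space")

print(route_access(4096))              # ('DRAM', 4096)
print(route_access(DRAM_SIZE + 4096))  # ('NVM-via-FPGA', 4096)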

Data processing method and device

The invention discloses a data processing method and device. The method comprises the following steps: when a first processor among a plurality of processors of a first server is scheduled, the first processor performs data communication with a second server through a physical network card of the first server; the first processor and a second processor each send data to and/or receive data from the physical network card through a PCIE (Peripheral Component Interconnect Express) bus, and every processor communicates with the physical network card through the PCIE bus; the plurality of processors communicate with one another through a QPI (quick path interconnect) bus and belong to different NUMA (Non Uniform Memory Access) architectures; and the processor fixedly connected with the physical network card is a processor other than the first processor among the plurality of processors.
Owner: CHINA MOBILE COMM LTD RES INST +1
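
One reading of the abstract is a path-selection question: the NIC is fixedly attached to one processor's PCIe root, and a scheduled processor on another NUMA node reaches it either over its own PCIE link or via a QPI hop to the NIC's home processor. The sketch below illustrates that reading only; the names and path strings are assumptions, not the patented method.

# Rough sketch of the traffic-path decision implied above: the physical NIC is
# fixedly attached to one processor, so a processor scheduled on a different
# NUMA node may need to cross the QPI bus before the data reaches the NIC
# over PCIe. Names and path strings are illustrative only.
NIC_HOME_CPU = "cpu1"   # processor whose PCIe root complex hosts the physical NIC

def tx_path(scheduled_cpu):
    """Describe how data from the scheduled processor reaches the physical NIC."""
    if scheduled_cpu == NIC_HOME_CPU:
        return [scheduled_cpu, "PCIe", "NIC"]
    # Cross-NUMA case: hop over the QPI bus to the NIC's home processor first.
    return [scheduled_cpu, "QPI", NIC_HOME_CPU, "PCIe", "NIC"]

print(tx_path("cpu0"))   # ['cpu0', 'QPI', 'cpu1', 'PCIe', 'NIC']
print(tx_path("cpu1"))   # ['cpu1', 'PCIe', 'NIC']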