38 results for "Reduce write latency" patented technology

A method, device, equipment, and medium for processing large-block data writes in a solid-state drive

The invention discloses a method, device, equipment, and medium for writing large-block data to a solid-state drive. The method includes: screening large-block write requests from the write requests sent by the host; using the DM module to split the large-block data to be written into multiple small blocks and mount them on a linked list; having the DM module send a single write request to the LKM module, whose content includes the first LBA of the large block to be written and the total number of small blocks; having the LKM module receive the write request, perform cache allocation, generate several write-request results, and return them to the DM module; having the DM module, according to the returned results, retrieve the context of each small block from the mounted linked list, one small block at a time, and initiate a data-transfer request to the DMA module; and, in response to the DMA module completing the transfer, releasing the DM-module and LKM-module resources. The scheme reduces the number of inter-module communications, optimizes large-block write efficiency, reduces write latency, and increases write bandwidth.
Owner:SHANDONG YINGXIN COMP TECH CO LTD
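The splitting step described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the chunk structure, the `SMALL_BLOCK_SECTORS` granularity, and the function name `split_large_write` are all assumptions; the abstract only states that the DM module splits the large block into small blocks, mounts them on a linked list, and reports the first LBA plus the total small-block count to the LKM module.

```c
#include <stdlib.h>
#include <assert.h>

#define SMALL_BLOCK_SECTORS 8   /* assumed split granularity, in sectors */

/* One small-block node on the DM module's linked list (sketch). */
typedef struct chunk {
    unsigned lba;            /* start LBA of this small block */
    unsigned sectors;        /* sectors in this small block   */
    struct chunk *next;
} chunk_t;

/* Split a large-block write into small blocks and return the list head.
 * *count receives the total number of small blocks; per the abstract,
 * the DM module would send the LKM module a single request carrying
 * the first LBA and this count. */
chunk_t *split_large_write(unsigned lba, unsigned sectors, unsigned *count)
{
    chunk_t head = { 0, 0, NULL }, *tail = &head;
    *count = 0;
    while (sectors > 0) {
        unsigned n = sectors < SMALL_BLOCK_SECTORS ? sectors
                                                   : SMALL_BLOCK_SECTORS;
        chunk_t *c = malloc(sizeof *c);
        c->lba = lba;
        c->sectors = n;
        c->next = NULL;
        tail->next = c;
        tail = c;
        lba += n;
        sectors -= n;
        (*count)++;
    }
    return head.next;
}
```

For example, a 20-sector write starting at LBA 100 splits into three small blocks of 8, 8, and 4 sectors, and only the pair {100, 3} crosses the module boundary.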

Processing method, device, equipment, and medium for writing bulk data in a solid-state disk

The invention discloses a processing method, device, equipment, and medium for writing bulk data in a solid-state disk. The method comprises the following steps: bulk-data write requests are screened from the write requests sent by the host; the DM module splits the to-be-written bulk data corresponding to each such request into multiple pieces of small data and mounts them on a linked list; the DM module sends a write request to the LKM module, the content of which comprises the initial LBA of the bulk data and the total number of small-data pieces; the LKM module receives the write request, performs cache allocation, generates a plurality of write-request results, and returns them to the DM module; the DM module, according to the returned results, retrieves the context of each piece of small data from the mounted linked list, one piece at a time, and initiates a data-migration request to the DMA module; and the DM-module and LKM-module resources are released upon completion of the DMA module's data migration. Under this scheme, the number of inter-module communications is reduced, bulk-write efficiency is optimized, write latency is reduced, and write bandwidth is improved.
Owner:SHANDONG YINGXIN COMP TECH CO LTD
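The claimed reduction in inter-module communication comes from aggregating the DM-to-LKM traffic into a single request per bulk write. A hedged comparison, assuming an 8-sector small-block granularity (the abstracts do not state the actual value):

```c
/* Assumed split granularity, in sectors (not stated in the abstract). */
#define SMALL_BLOCK_SECTORS 8

/* Messages the DM module would send to the LKM module if it issued
 * one write request per small-data piece. */
unsigned messages_per_chunk(unsigned sectors)
{
    return (sectors + SMALL_BLOCK_SECTORS - 1) / SMALL_BLOCK_SECTORS;
}

/* Messages under the abstract's scheme: one request carrying the
 * initial LBA and the total small-data count, however large the write. */
unsigned messages_aggregated(unsigned sectors)
{
    (void)sectors;
    return 1;
}
```

Under these assumptions, a 256-sector bulk write would need 32 per-chunk messages but only 1 aggregated message; the per-chunk DMA transfers still happen, but the module-boundary handshake happens once.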