
Block level cache prefetching optimization method and system based on deep learning

An optimization method based on deep-learning technology, applied in memory systems, instruments, and computing. It addresses problems such as lack of support in many systems, prefetch accuracy that varies widely across access patterns, and limited ability to mine data-access characteristics, achieving improved prefetch efficiency and accuracy, reduced memory overhead, and better prediction efficiency.

Active Publication Date: 2019-09-17
HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0004] However, the above block-level prefetching methods have defects that cannot be ignored. Both the fixed prefetching algorithm and the sequential prefetching algorithm exploit only the locality principle of data access, so their prefetch efficiency and accuracy are relatively low. The prefetching algorithm based on application hints requires an interface through which upper-layer applications convey prefetch instructions, which many systems do not support. The prefetching algorithm based on data mining has a limited ability to mine data-access characteristics, so its prefetch accuracy varies greatly across different access patterns.




Embodiment Construction

[0040] To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments described below can be combined with each other as long as they do not conflict.

[0041] As shown in Figure 1, the present invention provides a block-level cache prefetch optimization method based on deep learning, comprising the following steps:

[0042] (1) Obtain input/output (IO) data in units of bytes from the test data set, and convert it into IO data in units of blocks;
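As a minimal sketch of step (1), a byte-addressed request can be mapped to the block numbers it touches; the 4 KiB block size is an assumption, since the excerpt does not fix one:

```python
BLOCK_SIZE = 4096  # assumed block size; the excerpt does not specify one

def bytes_to_blocks(offset, length, block_size=BLOCK_SIZE):
    """Map a byte-addressed IO request to the block numbers it touches."""
    first = offset // block_size
    last = (offset + length - 1) // block_size
    return list(range(first, last + 1))
```

For example, with 4 KiB blocks, a 10 KiB read starting at byte offset 6144 touches blocks 1 through 3.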

[0043] Specifically, the storage block si...


Abstract

The invention discloses a block-level cache prefetching optimization method based on deep learning. The method comprises: obtaining IO data in units of bytes from a test data set and converting it into IO data in units of blocks; judging whether the converted IO data hits in the cache; if not, performing sequential prediction on the converted IO data to obtain a plurality of storage blocks, storing the converted IO data in an IO queue in memory, and judging whether the IO queue is full; if it is, inputting all the IO data in the queue into the trained Seq2Seq model to obtain predicted IO data, and obtaining the corresponding storage blocks from the predicted IO data. The method mines the correlation of IOs through deep learning, completes prediction of the IO sequence with an LSTM-based Seq2Seq model, and finally combines IO-sequence prediction with sequential prediction to complete cache prefetching and increase the cache hit rate.
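The control flow described in the abstract can be sketched as follows. The queue length, the sequential-prediction depth, whether the queue restarts after a model invocation, and the model's `predict` interface are all assumptions; the excerpt does not fix them:

```python
from collections import deque

class Prefetcher:
    """Sketch of the described flow: on a cache miss, fall back to sequential
    prediction, buffer the miss in an in-memory IO queue, and once the queue
    is full hand the whole window to a trained Seq2Seq model."""

    def __init__(self, model, cache, seq_depth=2, queue_len=32):
        self.model = model          # predictor with a predict(window) method (assumed interface)
        self.cache = cache          # set of resident block numbers
        self.seq_depth = seq_depth  # blocks fetched by sequential prediction (assumed value)
        self.queue_len = queue_len  # history-window length (assumed value)
        self.queue = deque(maxlen=queue_len)

    def on_request(self, block):
        """Return the block numbers to prefetch for one block-level request."""
        if block in self.cache:
            return []                                  # cache hit: nothing to prefetch
        sequential = [block + i for i in range(1, self.seq_depth + 1)]
        self.queue.append(block)
        predicted = []
        if len(self.queue) == self.queue_len:          # queue full: consult the model
            predicted = list(self.model.predict(list(self.queue)))
            self.queue.clear()                         # assumption: restart the window
        return sequential + predicted
```

The combination in the last line reflects the abstract's point that sequential prediction and Seq2Seq-based IO-sequence prediction are used together, not as alternatives.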

Description

Technical Field

[0001] The invention belongs to the technical field of computer storage, and more particularly relates to a block-level cache prefetch optimization method and system based on deep learning.

Background Technique

[0002] In the era of big data, the performance of the storage system is becoming more and more important. To further improve it, block-level accesses to the local disk need to be prefetched.

[0003] Current prefetching methods for block-level access mainly include fixed prefetching algorithms, sequential prefetching algorithms, prefetching algorithms based on application hints, and prefetching algorithms based on data mining. Among them, the fixed prefetch algorithm is the simplest to implement: it only needs to prefetch the n data blocks following the currently accessed data. The sequential prefetch algorithm improves on the fixed prefetch algorithm in that it can automatically adjust the pre...
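For contrast with the learned approach, the two baseline algorithms mentioned above can be sketched roughly as follows; the depths and the doubling ramp-up policy are illustrative assumptions, not the patent's definitions:

```python
def fixed_prefetch(block, n=4):
    """Fixed prefetch: always fetch the next n blocks after the accessed one."""
    return [block + i for i in range(1, n + 1)]

class SequentialPrefetcher:
    """Sequential prefetch: widen the window only while accesses stay sequential."""

    def __init__(self, max_depth=8):
        self.last = None            # previously accessed block, if any
        self.depth = 1              # current prefetch depth
        self.max_depth = max_depth  # cap on the window size (assumed)

    def on_access(self, block):
        if self.last is not None and block == self.last + 1:
            self.depth = min(self.depth * 2, self.max_depth)  # ramp up on a sequential hit
        else:
            self.depth = 1                                    # reset on a non-sequential jump
        self.last = block
        return [block + i for i in range(1, self.depth + 1)]
```

Both sketches rely purely on access locality, which is exactly the limitation paragraph [0004] raises against them.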


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F12/0862
CPC: G06F12/0862; G06F12/0868
Inventors: 周可, 王桦, 石星, 何铭健, 张霁, 冉忞玮
Owner HUAZHONG UNIV OF SCI & TECH