
PM-based database page caching method and system

A page-cache and database technology, applied in the field of data processing, which solves problems such as write amplification and restricted effective disk throughput, and achieves improved flushing efficiency, low management overhead, and improved performance.

Pending Publication Date: 2021-11-16
ZTE CORP

AI Technical Summary

Problems solved by technology

However, flushing at page granularity leads to write amplification: even if a transaction modifies only a few bytes of a page, the entire dirty page must be flushed to disk during persistence, which severely restricts the effective throughput of the disk.
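To make the problem concrete, a quick calculation (assuming an 8 KB page, a common default in relational databases, and a 16-byte modification — both figures are illustrative, not from the patent) shows the amplification factor of page-granularity flushing:

```python
# Illustration of write amplification under page-granularity flushing.
# Page and modification sizes are assumed for the example.
PAGE_SIZE = 8 * 1024      # assumed 8 KB database page
MODIFIED_BYTES = 16       # a transaction touches only 16 bytes

# The whole dirty page is written out, not just the modified bytes.
amplification = PAGE_SIZE / MODIFIED_BYTES
print(amplification)      # 512.0: 512x more data written than modified
```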



Examples


Embodiment 1

[0092] Figure 3 is a flowchart of the PM-based database page caching method according to the present invention. The method will now be described in detail with reference to Figure 3.

[0093] First, at step 301, the table file is mapped to the DRAM memory.

[0094] In the embodiment of the present invention, the table file in PM memory is mapped into DRAM memory using the Persistent Memory Development Kit (PMDK).
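The mapping step can be sketched as follows. This is a minimal stand-in using Python's `mmap` to show the byte-addressable access model; the patent itself uses PMDK (e.g. libpmem's `pmem_map_file`/`pmem_persist`), and the file name and size here are assumptions:

```python
import mmap
import os
import tempfile

# Stand-in for step 301: map a table file so it can be accessed with
# plain loads/stores, as PMDK does for a PM-resident file.
path = os.path.join(tempfile.mkdtemp(), "table.dat")  # hypothetical table file
with open(path, "wb") as f:
    f.truncate(8 * 1024)            # one assumed 8 KB table page

f = open(path, "r+b")
view = mmap.mmap(f.fileno(), 0)     # byte-addressable view of the table file
view[0:4] = b"PAGE"                 # direct store, no read()/write() syscalls
view.flush()                        # analogous to pmem_persist() on real PM
print(view[0:4])                    # b'PAGE'
```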

[0095] At step 302, the cache page is divided into regions and a corresponding page descriptor is set.

[0096] In the embodiment of the present invention, the database buffer page is divided into regions: each cache page is split into fixed-size areas, the area size is set to one cache line, and the data page is thus split into multiple cache-line blocks. The region-divided database cache page, as shown in Figure 1, includes a page header and page data, where

[0097] the page header includes: Checksum, Lower, Upper, and Linp, where
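A minimal sketch of the region division and page descriptor described above, under stated assumptions (8 KB page, 64-byte cache line; the bitmap representation and method names are illustrative, not from the patent):

```python
PAGE_SIZE = 8 * 1024                   # assumed cache page size
CACHELINE = 64                         # region size: one cache line
NUM_REGIONS = PAGE_SIZE // CACHELINE   # 128 cache-line blocks per page

class PageDescriptor:
    """Tracks which cache-line blocks of one cache page are modified."""
    def __init__(self):
        self.dirty = 0                 # one bit per region: low overhead

    def mark_dirty(self, offset, length):
        # Mark every region overlapped by the byte range [offset, offset+length).
        first = offset // CACHELINE
        last = (offset + length - 1) // CACHELINE
        for r in range(first, last + 1):
            self.dirty |= 1 << r

    def dirty_regions(self):
        return [r for r in range(NUM_REGIONS) if self.dirty >> r & 1]

desc = PageDescriptor()
desc.mark_dirty(100, 8)       # an 8-byte modification near offset 100
print(desc.dirty_regions())   # [1]: only one 64-byte block needs flushing
```

The design point is that a per-page bitmap of cache-line blocks costs only 16 bytes for an 8 KB page, which matches the "low management overhead" claim.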

...

Embodiment 2

[0123] Figure 4 is a flowchart of inserting a record into a data page and flushing to disk according to the present invention; the insert-modify-and-flush flow of the present invention will be described below with reference to Figure 4. First, at step 401, the record is inserted into a free block of the cache page.

[0124] In the embodiment of the present invention, the cache page is scanned from the tail, a free slot is located in the cache page's free space, and the record is inserted there.

[0125] At step 402, the Upper pointer and the corresponding Linp array entry are adjusted.

[0126] In the embodiment of the present invention, the Upper pointer is adjusted to point to the head of the newly inserted record, at offset PTR, and the corresponding Linp array entry is adjusted to point to the record.

[0127] At step 403, the regions covered by the inserted record are calculated and the page descriptor is modified.

[0128] In the embodiment of the present invention, it is calculated which regions...
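Steps 401 to 403 can be sketched as a slotted page in the Lower/Upper/Linp style named above. The concrete header size, line-pointer width, and dirty-region bookkeeping here are illustrative assumptions, not the patent's exact layout:

```python
CACHELINE = 64
PAGE_SIZE = 8 * 1024

class CachePage:
    """Slotted page sketch: the Linp array grows down from the header,
    records grow up from the tail, and Upper marks the end of free space."""
    def __init__(self):
        self.data = bytearray(PAGE_SIZE)
        self.lower = 24               # assumed header size; Linp ends here
        self.upper = PAGE_SIZE        # records sit at and above this offset
        self.linp = []                # line pointers: (offset, length)
        self.dirty_regions = set()    # stands in for the page descriptor

    def _mark(self, offset, length):
        for r in range(offset // CACHELINE,
                       (offset + length - 1) // CACHELINE + 1):
            self.dirty_regions.add(r)

    def insert(self, record: bytes):
        # Step 401: place the record in free space, scanning from the tail.
        ptr = self.upper - len(record)
        self.data[ptr:ptr + len(record)] = record
        self._mark(ptr, len(record))
        # Step 402: Upper points at the new record's head (offset PTR),
        # and a Linp entry is added pointing at the record.
        self.upper = ptr
        self.linp.append((ptr, len(record)))
        self.lower += 4               # assumed 4-byte line pointer
        # Step 403: the header changed too, so its region is also dirty.
        self._mark(0, self.lower)
        return ptr

page = CachePage()
ptr = page.insert(b"x" * 32)
print(ptr, sorted(page.dirty_regions))   # 8160 [0, 127]
```

Note that a 32-byte insert dirties only two cache-line blocks (the header block and the tail block), so only 128 bytes need flushing instead of the whole 8 KB page.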

Embodiment 3

[0137] Figure 5 shows the architecture of a PM-based database page cache system. As shown in Figure 5, the PM-based database page cache system includes PM memory 501, DRAM memory 502, a region division module 503, a page description module 504, and a flush module 505, where

[0138] PM memory 501, which is used to store table files.

[0139] In the embodiment of the present invention, the PM memory 501 has its table file mapped into DRAM memory 502 through the Persistent Memory Development Kit (PMDK), accepts instructions from the flush module 505, and updates the table file.

[0140] DRAM memory 502, which stores the database buffer pages and page descriptors, and flushes the modified regions to PM memory 501 on instruction from the flush module 505.

[0141] Region division module 503, which divides the database buffer pages into regions.

[0142] In the embodiment of the present invention, the database buffer page is divided into regions, including: each cache page is divided into fixed-size ar...
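The flush module's job can be sketched as follows: copy only the dirty cache-line blocks from the DRAM page to its PM image, flushing blocks in parallel as the Abstract describes. The thread pool and plain byte copy are assumptions standing in for PMDK's persistent copy (e.g. `pmem_memcpy_persist`):

```python
from concurrent.futures import ThreadPoolExecutor

CACHELINE = 64

def flush_dirty_regions(dram_page, pm_page, dirty):
    """Copy only modified cache-line blocks from DRAM to PM, in parallel.
    A stand-in for per-block pmem_memcpy_persist calls."""
    def flush_one(r):
        off = r * CACHELINE
        pm_page[off:off + CACHELINE] = dram_page[off:off + CACHELINE]
    # Regions are disjoint, so the copies can proceed concurrently.
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(flush_one, sorted(dirty)))

dram = bytearray(b"\xab" * 8192)   # modified cache page in DRAM
pm = bytearray(8192)               # its stale image in PM memory
flush_dirty_regions(dram, pm, {0, 127})
print(pm[:4], pm[64:68])           # only regions 0 and 127 were copied
```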



Abstract

A PM-based database page caching method comprises the following steps: mapping a table file into memory; dividing the cache page into regions; setting a corresponding page descriptor for each region-divided cache page; obtaining the modified regions and updating the corresponding page descriptor; and flushing the data of the modified regions to PM memory. The invention further provides a PM-based database page caching system. Write amplification is reduced while the overhead of recording page change points stays low, performance is improved, cache-line blocks are flushed in parallel so flushing efficiency is further improved, and the service life of the hardware is prolonged.

Description

Technical field

[0001] The present invention relates to data processing technology, and in particular to a PM-based database page caching method and system.

Background technique

[0002] Existing relational database systems are disk-oriented database systems, mostly built on a two-layer memory hierarchy of HDD and DRAM. Because relational database data sets are large, not all data can be kept in memory, and memory itself is not persistent; data pages must therefore be persisted to disk.

[0003] Conventional storage media are block devices whose smallest unit of read and write is the block; their I/O latency ranges from milliseconds down to microseconds, whereas DRAM latency is on the nanosecond scale. For performance reasons, the storage engine does not modify data pages directly on the physical disk. Instead, the data page to be modified is first read into the memory buffer (the image of the data page in the buffer is herein referred to as the cache page), and then the cached pag...

Claims


Application Information

IPC(8): G06F3/06, G06F12/0882
CPC: G06F3/0613, G06F3/0676, G06F3/0644, G06F12/0882, G06F3/06
Inventor 闫宗帅屠要峰陈河堆郭斌黄震江韩银俊解海波王涵毅
Owner ZTE CORP