161 results about "Dirty page" patented technology

Dirty pages are pages in memory whose contents have not yet been committed to the hard drive. (Note that this is distinct from dirty reads in database transaction isolation; here "dirty page" means data that exists in memory but has not yet been flushed to disk.)
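The bookkeeping described above can be sketched in a few lines; this is a minimal illustrative Python model, not any particular system's implementation, and all names are hypothetical:

```python
# Minimal sketch of dirty-page tracking in a page cache.
class PageCache:
    def __init__(self):
        self.pages = {}     # page_id -> data currently held in memory
        self.dirty = set()  # page_ids modified since their last flush

    def read(self, page_id, load_from_disk):
        # Load the page into memory on a miss; reads never dirty a page.
        if page_id not in self.pages:
            self.pages[page_id] = load_from_disk(page_id)
        return self.pages[page_id]

    def write(self, page_id, data):
        # The in-memory copy now differs from the on-disk copy.
        self.pages[page_id] = data
        self.dirty.add(page_id)

    def flush(self, write_to_disk):
        # Commit every dirty page to disk, after which all pages are clean.
        for page_id in sorted(self.dirty):
            write_to_disk(page_id, self.pages[page_id])
        self.dirty.clear()
```

A page becomes dirty only on write and becomes clean again only after a flush, which is exactly the distinction the cache-replacement and migration patents below exploit.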

Data page caching method for file system of solid-state hard disc

The invention discloses a data page caching method for a file system on a solid-state disk, implemented in the following steps: (1) establishing a buffer linked list in the high-speed cache for caching data pages; (2) caching data pages read from the solid-state disk in the buffer linked list for access, and classifying the pages in the list in real time into cold clean pages, hot clean pages, cold dirty pages and hot dirty pages according to their read-access and write-access states; (3) when the buffer linked list has no free space, searching it for a page to replace in the priority order cold clean pages, hot clean pages, cold dirty pages, hot dirty pages, and replacing that page with the new data page read from the solid-state disk. The method fully exploits the characteristics of solid-state disks, effectively relieves the external-storage performance bottleneck, and improves the system's storage processing performance; it also offers good I/O performance, low page-replacement cost, low overhead, and a high hit rate.
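The four-class victim-selection priority in step (3) can be sketched as follows; the heat threshold and field names are assumptions for illustration, not taken from the patent:

```python
# Victim selection: evict cold clean pages first, hot dirty pages last.
PRIORITY = ["cold_clean", "hot_clean", "cold_dirty", "hot_dirty"]

def classify(page):
    # Hypothetical classification: "hot" after 2+ accesses (assumed threshold),
    # "dirty" if the page has been written since it was cached.
    heat = "hot" if page["accesses"] >= 2 else "cold"
    state = "dirty" if page["written"] else "clean"
    return f"{heat}_{state}"

def pick_victim(buffer_list):
    # Scan the buffer once per priority class, in order.
    for wanted in PRIORITY:
        for page in buffer_list:
            if classify(page) == wanted:
                return page["id"]
    return None  # buffer empty
```

The ordering reflects SSD cost asymmetry: evicting a clean page needs no writeback, and cold pages are the least likely to be re-referenced.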
Owner:NAT UNIV OF DEFENSE TECH

Virtual machine migration method and device, and physical host

The invention discloses a virtual machine migration method and device, and a physical host, which partly solve the difficulty of deciding when to switch from pre-copy to post-copy in hybrid-copy live migration. In some embodiments, the migration method comprises the following steps: first, iteratively copying the memory data of the target virtual machine to be migrated from the source physical host to the target physical host using a pre-copy method; second, after each copy round, calculating the dirty-page change rate, i.e. the rate at which the number of remaining memory dirty pages in the current round changed relative to the previous round; third, judging whether the dirty-page change rate falls within a threshold range; fourth, counting how many rounds the rate has fallen within that range and judging whether the count reaches a threshold; finally, if it does, migrating the target virtual machine to the target physical host using the post-copy method.
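The switch criterion above can be expressed compactly; this is an illustrative sketch with hypothetical function names, not the patent's formula:

```python
def change_rate(prev_remaining, cur_remaining):
    # Relative change in remaining dirty pages between consecutive
    # pre-copy rounds; near zero means convergence has stalled.
    return (prev_remaining - cur_remaining) / prev_remaining

def should_switch(rates, low, high, count_threshold):
    # Count the rounds whose change rate fell inside [low, high];
    # switch to post-copy once that count reaches the threshold.
    hits = sum(1 for r in rates if low <= r <= high)
    return hits >= count_threshold
```

Requiring several consecutive in-range rounds, rather than one, guards against switching on a transient dip in the workload's write rate.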
Owner:Changzhou Hengtang Technology Industry Co., Ltd.

Virtual machine migration method and device

The invention provides a virtual machine migration method and a virtual machine migration device. The method comprises the following steps: determining the virtual machine migration tasks on each server in a network, where each task comprises the identifier of the virtual machine to be migrated, the identifiers of its source and target servers, and the delay time for migrating that virtual machine from the source server to the target server; determining the migration order of the virtual machines to be migrated from those identifiers and delay times; determining migration paths from the migration order, where each path comprises at least two migration tasks with consecutive migration orders; and, at a first set time, increasing the migration transmission bandwidth and/or reducing the dirty-page generation speed for the migrating virtual machines on the path whose migration tasks have the largest total delay time.
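Selecting the path with the largest summed delay, which then receives extra bandwidth or dirty-page throttling, can be sketched as below; the task structure is an assumption for illustration:

```python
def busiest_path(paths):
    # paths: list of migration paths, each a list of task dicts with a
    # per-task "delay" (migration delay time).  Returns the index of the
    # path with the largest total delay, i.e. the one whose in-flight VMs
    # should get more bandwidth and/or dirty-page rate limiting.
    totals = [sum(task["delay"] for task in path) for path in paths]
    return totals.index(max(totals))
```

Boosting only the worst path concentrates the scarce resources where they shorten the overall migration makespan the most.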
Owner:CHINA UNITED NETWORK COMM GRP CO LTD

Method, device and system for achieving live migration of a virtual machine

The invention discloses a method, a device and a system for live migration of a virtual machine. In the method, the source virtual machine management device on the source physical machine determines the non-temporary-data memory pages of the virtual machine on the source physical machine; those pages are copied from the source physical machine to the target physical machine; dirty pages produced during the copy are then copied iteratively from source to target until the ratio of uncopied dirty pages to non-temporary-data memory pages falls below a preset value; and the virtual machine is migrated once that ratio is below the preset value. By classifying the processes and memory pages of a multi-process system and excluding temporary-data memory pages from the iterative dirty-page copy, the method reduces waste of CPU cycles and network bandwidth and improves user experience.
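The stopping condition on the dirty-page ratio can be sketched like this; the function names and round representation are hypothetical:

```python
def below_threshold(uncopied_dirty, non_temp_total, threshold):
    # Migration proceeds once uncopied dirty pages are a small enough
    # fraction of the non-temporary-data memory pages.
    return uncopied_dirty / non_temp_total < threshold

def copy_rounds(dirty_per_round, non_temp_total, threshold):
    # dirty_per_round: remaining dirty-page count observed at the start of
    # each iterative copy round.  Returns how many rounds run before the
    # ratio drops below the preset value and the final switch happens.
    rounds = 0
    for remaining in dirty_per_round:
        if below_threshold(remaining, non_temp_total, threshold):
            break
        rounds += 1
    return rounds
```

Because temporary-data pages are excluded from `non_temp_total` and from copying altogether, the ratio converges faster than in a plain pre-copy loop.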
Owner:Sichuan Huakun Zhenyu Intelligent Technology Co., Ltd.

Solid-state disk page-level cache area management method

The invention provides a solid-state disk page-level cache management method. The method divides the solid-state disk page-level cache into three parts: a hash index table area that records the historical access features of different data pages, a dirty-page area that caches hot dirty pages, and a clean-page area that caches hot clean pages. A hot-data recognition mechanism identifies hot data pages for each request from the historical access information recorded in the hash table, and the recognized hot pages are loaded into the buffer together with the spatially local neighbors of the access request. Finally, when a page replacement is needed in the buffer, an adaptive replacement mechanism dynamically selects suitable pages from the clean-page and dirty-page cache queues by weighing the current read/write access pattern against the actual underlying read/write costs. The method has good practicability and market prospects.
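The two mechanisms above, history-based hot-data recognition and cost-aware clean/dirty eviction, can be sketched as follows; thresholds and names are illustrative assumptions:

```python
def record_access(history, page_id):
    # The hash index table: page_id -> access count seen so far.
    history[page_id] = history.get(page_id, 0) + 1

def is_hot(history, page_id, threshold=2):
    # Hypothetical rule: a page is "hot" after `threshold` accesses.
    return history.get(page_id, 0) >= threshold

def choose_eviction(clean_queue, dirty_queue, read_cost, write_cost):
    # Evicting a dirty page costs a flash write before eviction; evicting
    # a clean page only risks a future re-read.  Prefer the cheaper side.
    if clean_queue and (not dirty_queue or read_cost <= write_cost):
        return ("clean", clean_queue[0])
    if dirty_queue:
        return ("dirty", dirty_queue[0])
    return None  # both queues empty
```

On flash, writes typically cost several times a read, so the adaptive rule usually drains the clean queue first but can flip when reads dominate the workload.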
Owner:HANGZHOU DIANZI UNIV

Method and device for data persistence processing, and database system

The invention discloses a method and a device for data persistence processing, and a database system. The method comprises: each time the database system's memory produces dirty pages, adding the page identifiers of those dirty pages to a checkpoint queue; at a preset checkpoint, determining an active set and a current set in the checkpoint queue and sequentially flushing the dirty pages identified by the active set to disk, where the active set consists of the page identifiers of the dirty pages about to be flushed and the current set is the set of identifiers subsequently inserted into the checkpoint queue; and, once all dirty pages of the active set have been flushed, determining the next active set in the checkpoint queue and sequentially flushing the pages it identifies to disk. The method improves dirty-page flushing efficiency while keeping the impact of flushing on normal business operation small.
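The active-set / current-set rotation can be sketched as a double-buffered queue; this is an illustrative model with hypothetical names, not the patented implementation:

```python
from collections import deque

class CheckpointQueue:
    # active: identifiers of dirty pages being flushed at this checkpoint;
    # current: identifiers of pages dirtied while that flush is in progress.
    def __init__(self):
        self.active = deque()
        self.current = deque()

    def page_dirtied(self, page_id):
        # New dirty pages always land in the current set, so they never
        # interfere with the flush that is already underway.
        self.current.append(page_id)

    def checkpoint(self, flush):
        # Promote the current set to the active set, then flush it
        # sequentially to disk.
        self.active, self.current = self.current, deque()
        while self.active:
            flush(self.active.popleft())
```

Swapping the sets instead of locking one shared queue is what keeps the flush from stalling normal write traffic.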
Owner:HUAWEI CLOUD COMPUTING TECH CO LTD

Page-level buffer improvement method based on classification strategy

The invention discloses a page-level buffer improvement method based on a classification strategy, comprising a request-type distinguishing module, a hot data page storage area module, a cold data page storage area module and a continuous data page storage area module. The method comprises the following steps: first, dividing the data page cache into the hot, cold and continuous data page storage area modules, used respectively to hold the data pages of frequently accessed requests, the data pages of infrequently accessed requests, and the data pages of requests with high spatial locality; second, filling the continuous data page storage area module by prefetching multiple consecutive data pages; and finally, when the data page cache is full, preferentially replacing the least recently used (LRU) clean pages in the cold data page storage area module, and replacing dirty pages if that module contains no clean pages. Compared with a page-level LRU algorithm, the method improves response performance for sequential workloads and effectively reduces flash read and write overheads.
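The replacement rule for the cold area, LRU clean page first and LRU dirty page only as a fallback, can be sketched as below; the page representation is an assumption for illustration:

```python
def pick_replacement(cold_area):
    # cold_area: pages of the cold data page storage area, ordered
    # least-recently-used first; each entry is {"id": ..., "dirty": bool}.
    for page in cold_area:
        if not page["dirty"]:
            return page["id"]  # prefer the LRU clean page: no writeback
    # No clean page available: fall back to the LRU dirty page.
    return cold_area[0]["id"] if cold_area else None
```

Evicting clean pages first avoids flash writes, which is where this scheme gains over plain page-level LRU.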
Owner:ZHEJIANG WANLI UNIV

Live migration method and device for virtual machine passthrough devices

The embodiment of the invention discloses a live migration method and device for virtual machine passthrough (direct-connection) devices. One specific embodiment of the method comprises the following steps: synchronizing the register state of the source virtual machine's passthrough device to the target virtual machine by calling a register-state synchronization interface; cancelling the passthrough state of the source device and migrating it to the target virtual machine over multiple iterative rounds of the following synchronization operations: calling the register-state synchronization interface again, capturing read/write operations on the source device's registers during migration, and replaying the captured operations on the target device's registers; and calling a DMA dirty-page transmission interface to execute a DMA dirty-page synchronization method that writes the data of the source device's DMA dirty pages into the target virtual machine's memory. The embodiment achieves live migration of passthrough devices without modifying the virtual machine kernel.
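The control flow of the iterative synchronization loop can be sketched abstractly; every callback name below is a hypothetical stand-in for the interfaces the abstract mentions, not a real API:

```python
def migrate_passthrough(rounds, sync_registers, capture_ops, replay_ops,
                        dma_dirty_pages, write_dirty):
    # Iterative phase: re-synchronize register state each round, and
    # replay on the target any register read/writes captured at the
    # source while migration is in progress.
    for _ in range(rounds):
        sync_registers()
        replay_ops(capture_ops())
    # Final phase: copy the device's DMA dirty pages into the target
    # virtual machine's memory via the dirty-page transmission interface.
    for page in dma_dirty_pages():
        write_dirty(page)
```

Separating register replay from DMA dirty-page transfer mirrors the two interfaces the method defines, which is what lets it work without kernel changes.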
Owner:BEIJING BAIDU NETCOM SCI & TECH CO LTD