
Distributed filesystem atomic flush transactions

A filesystem technology for atomic flush operations, applied in the fields of instruments, digital computers, and computing, addressing the problem that pre-fetching is performed only for well-behaved clients.

Inactive Publication Date: 2014-01-02
PITTS WILLIAM M
10 Cites · 11 Cited by

AI Technical Summary

Benefits of technology

The invention is a computing system that uses a file service proxy cache node to improve data processing efficiency. A data request is passed between an upstream site and the file service proxy cache node, which saves the flush data in a stable memory and, if necessary, dispatches a flushing request to a second file service proxy cache node. This keeps the data available for use, reduces processing delays, and improves the speed and reliability of data processing operations.

Problems solved by technology

Of course, pre-fetching is only performed for well-behaved clients.


Image

  • Distributed filesystem atomic flush transactions


Embodiment Construction

[0011]This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential characteristics of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

[0012]One example embodiment includes a computing system where a data request has been passed between an upstream site and a file service proxy cache node, the file service proxy cache node being a network node located between the upstream site and the origin file system node, and a non-transitory computer-readable storage medium including instructions that, when executed by the file service proxy cache node, perform the step of receiving a flush request from the upstream site. The flush request includes a request to save flush data contained in the flush request to a stable memory. The instructions also perform the step of storing the f...
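The embodiment above can be sketched in code. This is a minimal illustration, not the patent's implementation: the class and method names, the dictionary standing in for stable memory, and the return value are all assumptions introduced for the example.

```python
# Hypothetical sketch of the embodiment in [0012]: a file service proxy
# cache node receives a flush request from an upstream site, saves the
# flush data to stable (persistent) memory, and, when another proxy node
# sits between it and the origin file server, dispatches a flush request
# downstream. All names here are illustrative, not from the patent.

from dataclasses import dataclass
from typing import Optional


@dataclass
class FlushRequest:
    file_id: str
    offset: int
    data: bytes


class FileServiceProxyCacheNode:
    def __init__(self, downstream: Optional["FileServiceProxyCacheNode"] = None):
        self.downstream = downstream   # next node toward the origin file server
        self.stable_memory = {}        # stands in for persistent storage

    def receive_flush(self, req: FlushRequest) -> str:
        # Step 1: save the flush data to stable memory so it survives a
        # crash before reaching the origin file server.
        self.stable_memory[(req.file_id, req.offset)] = req.data
        # Step 2: if a node exists downstream, forward the flush toward
        # the origin; otherwise this node is adjacent to the origin.
        if self.downstream is not None:
            return self.downstream.receive_flush(req)
        return "flushed-to-origin"


# Usage: two proxy cache nodes chained between an upstream site and the origin.
origin_side = FileServiceProxyCacheNode()
edge = FileServiceProxyCacheNode(downstream=origin_side)
status = edge.receive_flush(FlushRequest("fileA", 0, b"modified data"))
```

The key property of the chain is that every intermediate node persists the data before (or while) forwarding it, so a failure at any hop leaves a stable copy upstream of the failure.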



Abstract

Large scale high performance file proxy caching sites may be configured to coalesce many client write operations into one very large assemblage of modified file data. At some point the caching site will flush all modified file data downstream towards the origin file server. In some instances the amount of modified data being flushed may be more than can be transferred in a single network request. When multiple network requests are required, the consistency guarantee provided by many filesystems requires that the file either be updated with the data contained in all of the network requests or not be modified at all. In addition, once the first flush request is processed no other file read or write requests can be serviced until the last flush request has been processed. This document discloses methods for performing atomic multi-request flush operations within a large geographically distributed filesystem environment.
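The all-or-nothing and blocking behavior described in the abstract can be illustrated with a small sketch. This is an assumption-laden toy, not the disclosed protocol: the class, the `last` flag marking the final request of a flush sequence, and the staging dictionary are invented for the example.

```python
# Hedged sketch of an atomic multi-request flush: modified data too large
# for one network request arrives as a sequence of flush requests. The
# receiving site stages the chunks and applies either all of them or none,
# and it refuses to service other reads on the file from the first flush
# request until the last one commits. Names are illustrative only.

class FlushInProgressError(Exception):
    """Raised when a file is accessed mid-way through a multi-request flush."""


class OriginFileServer:
    def __init__(self):
        self.files = {}      # committed contents: name -> bytes
        self._staged = {}    # name -> list of chunks for an open flush

    def flush_request(self, name: str, chunk: bytes, last: bool) -> None:
        # Stage the chunk; the committed file is untouched until the final
        # request of the sequence arrives (all-or-nothing guarantee).
        self._staged.setdefault(name, []).append(chunk)
        if last:
            # Final request: apply every staged chunk atomically.
            self.files[name] = b"".join(self._staged.pop(name))

    def abort_flush(self, name: str) -> None:
        # Discard all staged chunks; the committed file is unmodified.
        self._staged.pop(name, None)

    def read(self, name: str) -> bytes:
        # Other reads (and writes) cannot be serviced while a
        # multi-request flush is in flight on the file.
        if name in self._staged:
            raise FlushInProgressError(name)
        return self.files.get(name, b"")


# Usage: a two-request flush of one large modification.
server = OriginFileServer()
server.flush_request("report.dat", b"first half, ", last=False)
server.flush_request("report.dat", b"second half", last=True)
```

Between the two `flush_request` calls, `server.read("report.dat")` raises `FlushInProgressError`, mirroring the abstract's rule that no other file read or write is serviced until the last flush request is processed.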

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of and priority to U.S. Provisional Patent Application Ser. No. 61/666,597, filed on Jun. 29, 2012, which application is incorporated herein by reference in its entirety.

[0002] This application is related to co-pending U.S. application Ser. No. ______, filed on Jun. 28, 2013, and entitled “RECURSIVE ASCENT NETWORK LINK FAILURE NOTIFICATIONS” (Attorney Docket No. 10284.14), which application is incorporated herein by reference in its entirety.

[0003] This application is related to co-pending U.S. application Ser. No. ______, filed on Jun. 28, 2013, and entitled “USING PROJECTED TIMESTAMPS TO CONTROL THE SEQUENCING OF FILE MODIFICATIONS IN DISTRIBUTED FILESYSTEMS” (Attorney Docket No. 10284.16), which application is incorporated herein by reference in its entirety.

[0004] This application is related to co-pending U.S. application Ser. No. ______, filed on Jun. 28, 2013, and entitled, “METHOD OF CREATING PATH ...

Claims


Application Information

IPC(8): G06F15/167
CPC: G06F15/167; G06F16/172; G06F16/183; G06F16/182; H04L67/5682; H04L41/00; H04L65/40; H04L67/1097; G06F12/0804; G06F12/0891; H04L67/06
Inventor PITTS, WILLIAM M
Owner PITTS WILLIAM M