Retrieval algorithm for deleting repetitive data in cloud storage

A data deduplication and retrieval technology, applied in digital data processing, special data processing applications, computing, etc. It addresses problems such as wasted cloud resources, cumbersome retrieval, and the effect of sampling on similarity, and achieves a short running time, low system overhead, and a high deduplication rate.

Publication Date: 2017-05-03 (Status: Inactive)
Owner: SICHUAN YONGLIAN INFORMATION TECH CO LTD

AI Technical Summary

Problems solved by technology

[0003] In view of the duplicate data in cloud space, which complicates retrieval, wastes precious cloud resources, and generates additional overhead, and in order to address the effect of sampling on similarity, the present invention proposes a retrieval algorithm for deduplication in cloud storage.




Embodiment Construction

[0006] To make the purpose, technical solution, and advantages of the present invention clearer, the specific calculation steps of the technical solution are given below:

[0007] Step 1. The file data is divided into blocks, each file block is hashed, and the resulting hash value is that block's fingerprint.
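
By way of illustration, a minimal sketch of this step, assuming a fixed block size and SHA-1 as the hash function (neither choice is specified in the patent text):

```python
import hashlib

def fingerprint_blocks(data: bytes, block_size: int = 4096) -> list[str]:
    """Split file data into fixed-size blocks and hash each block.

    The digest of a block serves as that block's fingerprint. The block
    size and the SHA-1 hash are illustrative choices only.
    """
    fingerprints = []
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        fingerprints.append(hashlib.sha1(block).hexdigest())
    return fingerprints
```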

[0008] Step 2. Calculate the similarity between samples of different files. The specific calculation proceeds as follows:

[0009] Assume there is a file P in the storage space. Divide it into n file blocks according to the word length, hash each file block, and output the set of hash values A_P = (a_P1, a_P2, ..., a_Pn). Similarly, for file Q: A_Q = (a_Q1, a_Q2, ..., a_Qn).

[0010] If a_Pi = a_Qi, the two file blocks are identical. The number of identical blocks between files P and Q can then be expressed as Σ_i min(a_Pi, a_Qi), and the total number of blocks of the two files is: ...
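
The ratio completing this formula is truncated in the source. The sketch below counts identical blocks exactly as above, treating each file's fingerprints as a multiset; the max()-based total used as the denominator is an assumption, not taken from the patent text:

```python
from collections import Counter

def sample_similarity(fp_p: list[str], fp_q: list[str]) -> float:
    """Estimate file similarity from two lists of block fingerprints.

    sum(min(...)) over the per-fingerprint counts gives the number of
    identical blocks, as in paragraph [0010]; the max()-based total in
    the denominator is an assumed completion of the truncated formula.
    """
    count_p, count_q = Counter(fp_p), Counter(fp_q)
    identical = sum(min(count_p[f], count_q[f]) for f in count_p.keys() & count_q.keys())
    total = sum(max(count_p[f], count_q[f]) for f in count_p.keys() | count_q.keys())
    return identical / total if total else 0.0
```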


Abstract

The invention provides a retrieval algorithm for deleting repetitive data in cloud storage. When duplicate data needs to be removed, the whole storage system is first searched for files similar to a newly written file; once the similarity reaches a threshold value, the two files are compared precisely, the duplicate data is discarded, and only the differing data and its index information are retained. A certain amount of file fingerprint data is randomly sampled; taking the sampling method and the sample size into account, a file repetition rate function is constructed from the sample similarity, and redundant files are discarded by setting a repetition rate threshold. Deletion of repetitive files is thereby achieved and storage space is saved. The method is fast and achieves a high deletion rate, making it well suited to big data and cloud storage environments.
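
Read as pseudocode, the workflow described in the abstract could be wired together roughly as follows. The sample size, the threshold value, and the returned index layout are illustrative assumptions, and sample_similarity refers to the sketch given with Step 2 above:

```python
import random

def deduplicate_new_file(new_fps: list[str], stored_files: dict[str, list[str]],
                         sample_size: int = 64, threshold: float = 0.8) -> dict:
    """Sketch of the described workflow: sample the new file's fingerprints,
    find the most similar stored file, and once the repetition rate reaches
    the threshold keep only the differing blocks plus index information."""
    sample = random.sample(new_fps, min(sample_size, len(new_fps)))
    best_name, best_sim = None, 0.0
    for name, fps in stored_files.items():
        sim = sample_similarity(sample, fps)   # similarity sketch from Step 2
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_name is None or best_sim < threshold:
        # No sufficiently similar file found: store the new file in full.
        return {"duplicate_of": None, "blocks_to_store": new_fps}
    # Precise comparison: keep only blocks the similar file does not already hold.
    existing = set(stored_files[best_name])
    unique = [fp for fp in new_fps if fp not in existing]
    return {"duplicate_of": best_name, "blocks_to_store": unique}
```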

Description

Technical field

[0001] Data deduplication and retrieval in computer storage and cloud storage.

Background technique

[0002] With the development of information and network technology, big data and massive data have become the main business of data centers, and deduplication and compression are technologies that can save a large amount of data storage. Backup alone is not enough; deduplication and compression will soon become a must-have feature of primary storage. Data deduplication is a compression technique that minimizes the amount of data by identifying duplicate content, removing it, and leaving a pointer at the corresponding storage location; this pointer is created by hashing a data pattern of a given size. At present, only a few primary storage arrays offer deduplication as an additional product feature. For users who rent cloud space, a large amount of duplicate data floods the cloud space, which not only causes trouble for retrieval...
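
The pointer mechanism described above can be pictured with a toy content-addressed block store; the class and method names below are illustrative and not taken from the patent:

```python
import hashlib

class BlockStore:
    """Toy content-addressed store: each fixed-size data pattern is hashed,
    only the first copy of a block is kept, and later duplicates are replaced
    by the hash, which acts as a pointer to the single stored copy."""

    def __init__(self) -> None:
        self._blocks: dict[str, bytes] = {}

    def put(self, block: bytes) -> str:
        pointer = hashlib.sha1(block).hexdigest()   # hash of the data pattern
        self._blocks.setdefault(pointer, block)     # store one physical copy only
        return pointer

    def get(self, pointer: str) -> bytes:
        return self._blocks[pointer]
```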


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/30
Inventor: 范勇, 胡成华
Owner: SICHUAN YONGLIAN INFORMATION TECH CO LTD