159 results about "improve caching efficiency" patented technology

Method and device for reading data based on data cache

The invention discloses a method and a device for reading data based on a data cache. The method comprises the steps of: receiving a user's read-access request for data and extracting the keyword generated from it; if the cache database holds no data key value for the keyword, initializing the flag value of the data to be obtained; or, if the cache database holds a data key value for the keyword but the data it contains is invalid, setting the flag value of the data to be obtained to the flag value contained in that key value; querying the corresponding database sub-bank to obtain the data matching the keyword; obtaining the sub-bank's timestamp, assembling the key-value-pair information from the set flag value, the read timestamp, and the obtained data, and updating the cache database; and outputting the queried data. The method and device improve data caching efficiency and optimize the overall performance of data caching.
Owner:Sina Technology (China) Co., Ltd.
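
For illustration only, here is a minimal Python sketch of the read-through flow this abstract describes. The class names, the flag semantics, and the TTL-based validity check are assumptions, since the patent does not disclose concrete identifiers or its invalidation rule:

```python
import time

# Illustrative read-through cache keyed by the request keyword. CacheEntry,
# its fields, and the backing-store API are assumptions made for this
# sketch; the patent does not disclose concrete identifiers.
class CacheEntry:
    def __init__(self, data, flag, timestamp):
        self.data = data            # cached payload
        self.flag = flag            # validity flag from the abstract
        self.timestamp = timestamp  # timestamp of the source sub-bank

class ReadThroughCache:
    def __init__(self, backing_store):
        self.store = backing_store  # keyword -> (data, timestamp)
        self.cache = {}

    def read(self, keyword):
        entry = self.cache.get(keyword)
        if entry is None:
            flag = 0           # no cached key value yet: initialize the flag
        elif not self._is_valid(entry):
            flag = entry.flag  # cached but invalid: reuse the stored flag
        else:
            return entry.data  # valid hit: serve straight from the cache
        data, ts = self.store[keyword]                    # read the sub-bank
        self.cache[keyword] = CacheEntry(data, flag, ts)  # update the cache
        return data

    def _is_valid(self, entry, ttl=60.0):
        # TTL validity is an assumption; the patent only says cached
        # data may be "invalid".
        return time.time() - entry.timestamp < ttl

cache = ReadThroughCache({"user:42": ("profile-bytes", time.time())})
print(cache.read("user:42"))
```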

Heat management method based on cached data, server, and storage medium

The embodiment of the invention relates to the technical field of hot-spot data identification, and discloses a heat management method based on cached data, a server, and a storage medium. The method comprises the following steps: when data in the cache is hit, judging whether the current access interval of the hit data is greater than or equal to a preset interval, the current access interval being the time between the data's current access and its previous access; if the current access interval is greater than or equal to the preset interval, calculating the data's current heat decrement according to a preset heat-decay formula; subtracting that decrement from the data's heat value to obtain its current heat value; determining the target data queue to which the data belongs according to its current heat value; and, when the queue the data currently resides in is not that target queue, moving the data to the target queue. As a result, hot data can be identified more accurately and caching efficiency can be improved.
Owner:CHINANETCENT TECH
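
A minimal sketch of the hit-time decay and queue migration described above, assuming an exponential decay formula and a two-queue (hot/cold) layout; the patent specifies neither, only a "preset interval", a preset decay formula, and per-heat data queues:

```python
import math
import time

# PRESET_INTERVAL, the decay formula, and the hot/cold threshold are
# assumptions made for this sketch.
PRESET_INTERVAL = 10.0  # seconds
HOT_THRESHOLD = 50.0
DECAY_RATE = 0.1

class Item:
    def __init__(self, key, heat=100.0):
        self.key = key
        self.heat = heat
        self.last_access = time.time()

queues = {"hot": set(), "cold": set()}  # stand-ins for the data queues

def on_cache_hit(item):
    now = time.time()
    interval = now - item.last_access
    if interval >= PRESET_INTERVAL:
        # Assumed exponential decay over the elapsed interval.
        item.heat -= item.heat * (1 - math.exp(-DECAY_RATE * interval))
    item.last_access = now
    target = "hot" if item.heat >= HOT_THRESHOLD else "cold"
    for q in queues.values():   # remove from whatever queue the item is in,
        q.discard(item)         # then place it in its target queue
    queues[target].add(item)

item = Item("page:7")
on_cache_hit(item)
print(item.heat, [k for k, q in queues.items() if item in q])
```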

Multisource time-series data compression and storage method

The invention discloses a multisource time-series data compression and storage method, which comprises the following steps: grouping the deployment objects; dividing each deployment-object group into internal groups; allocating, for every internal group, a memory file used as a cache; when time-series data from a measuring point is received, performing first-level lossy compression, locating the memory file corresponding to the measuring point according to the deployment-object group and internal group to which the point belongs, and caching the compressed data in that memory file; and, when a memory file is full or reaches its preset time limit, mapping it to the hard disk, performing second-level lossy compression, and storing the compressed data blocks in a relational database. The method lets the corresponding memory file be found quickly when data is cached and storage positions be located quickly, which improves caching efficiency; the partitioned compression mode improves compression efficiency and effectively saves hard-disk capacity; and the relational database improves the data-reading speed.
Owner:ASAT CHINA TECH
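
A sketch of the two-level cache-then-flush flow described above. The buffer limit and group mapping are assumptions, and zlib (which is lossless) stands in for the patent's two lossy compression stages:

```python
import zlib
from collections import defaultdict

# BUFFER_LIMIT, group_of, and zlib standing in for the patent's lossy
# compression stages are all illustrative assumptions.
BUFFER_LIMIT = 64 * 1024  # flush threshold per internal group

buffers = defaultdict(bytearray)  # (group, internal group) -> memory file

def group_of(point_id):
    # Assumed mapping from a measuring point to its deployment-object
    # group and internal group; real deployments would use metadata.
    return (point_id // 100, (point_id % 100) // 10)

def on_sample(point_id, payload: bytes):
    key = group_of(point_id)            # locate the memory file directly
    stage1 = zlib.compress(payload, 1)  # first-level (fast) compression
    buffers[key].extend(stage1)
    if len(buffers[key]) >= BUFFER_LIMIT:
        flush(key)

def flush(key):
    stage2 = zlib.compress(bytes(buffers[key]), 9)  # second-level compression
    store_block(key, stage2)   # e.g. INSERT the block into a relational DB
    buffers[key].clear()

def store_block(key, block: bytes):
    print(f"stored {len(block)} bytes for group {key}")  # DB write placeholder

on_sample(123, b"t=0,v=1.5;" * 100)
```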

Anonymous area generation method and location privacy protection method

The invention discloses an anonymous-area generation method and a location privacy protection method. The anonymous-area generation method generates an anonymous area based on spatial K-anonymity. The location privacy protection method adopts a multi-level access protection mechanism and performs cache normalization: when a target user sends an LBS request, POI data is obtained, in order of priority, from the target user's local cache, from neighbour users in the network, and from the LBS server; when the POI data must be obtained from the LBS server, an anonymous area is generated by the anonymous-area generation method, the LBS request is sent to the LBS server with the anonymous area standing in for the user's location, and the POI data returned by the LBS server is output. By combining caching with spatial K-anonymity, the disclosed methods protect users' location privacy, reduce the number of query requests users send as far as possible, raise the users' privacy protection level, and thereby relieve excessive server and channel load and low information reuse.
Owner:BBK Electronic Commerce Co., Ltd.
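
A sketch of the three-tier POI lookup order described above (local cache, then neighbour users, then the LBS server queried with an anonymous area). Every function name and stub body here is an assumption for illustration, not the patent's API:

```python
local_cache = {}

def query_neighbours(query):
    # Stub: ask peer users in the network; None means no peer has it cached.
    return None

def generate_anonymous_area(location, k=5):
    # Stub for spatial K-anonymity: an area covering the user and at least
    # k-1 other users; here simply a fixed-size box around the point.
    x, y = location
    return (x - 1.0, y - 1.0, x + 1.0, y + 1.0)

def query_lbs_server(area, query):
    # Stub: the server sees only the anonymous area, never the exact point.
    return f"POIs for {query} within {area}"

def get_poi(location, query):
    if query in local_cache:                   # tier 1: local cache
        return local_cache[query]
    result = query_neighbours(query)           # tier 2: neighbour users
    if result is None:                         # tier 3: LBS server
        area = generate_anonymous_area(location, k=5)
        result = query_lbs_server(area, query)
    local_cache[query] = result                # keep for later reuse
    return result

print(get_poi((30.5, 114.3), "coffee"))
```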

Method for prediction-based optimal cache placement in a content-centric network

The invention belongs to the technical field of networks, and particularly relates to a method for prediction-based optimal cache placement in a content-centric network; the method can be used for data caching in such a network. The method includes the steps of: encoding cache placement schemes as binary symbol strings, where 1 stands for a cached object and 0 for a non-cached object, and randomly generating an initial population; calculating the profit value of each cache placement scheme, and finding the maximum profit value and storing it in an array max; performing selection based on individual-fitness division; performing crossover based on individual correlation; performing mutation based on gene blocks; generating a new population, namely a new set of cache placement schemes; and judging whether the array max has stabilized; if it has, the maximum-profit cache placement has been found. The method effectively reduces user access delay, lowers the duplicate-request rate for content and the network content redundancy, enhances the diversity of network data, remarkably improves the caching performance of the whole network, and achieves higher caching efficiency.
Owner:HARBIN ENG UNIV
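
A textbook-form genetic-algorithm sketch of the loop the abstract outlines: binary chromosomes mark which objects are cached and fitness is a profit value. The profit function, roulette selection, one-point crossover, single-bit mutation, and fixed generation count are simplifications standing in for the patent's fitness-division selection, correlation-based crossover, gene-block mutation, and max-array convergence test:

```python
import random

# N_OBJECTS, POP, GENS, the popularity prediction, and CACHE_CAP are
# assumptions made to keep the sketch self-contained.
N_OBJECTS, POP, GENS = 20, 30, 200
popularity = [random.random() for _ in range(N_OBJECTS)]  # assumed prediction
CACHE_CAP = 5

def profit(chrom):
    if sum(chrom) > CACHE_CAP:   # infeasible: exceeds cache capacity
        return 0.0
    return sum(p for p, bit in zip(popularity, chrom) if bit)

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_OBJECTS)] for _ in range(POP)]
    best = max(pop, key=profit)
    for _ in range(GENS):
        # roulette selection (stand-in for fitness-division selection)
        weights = [profit(c) + 1e-9 for c in pop]
        parents = random.choices(pop, weights=weights, k=POP)
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, N_OBJECTS)       # one-point crossover
            for c in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                if random.random() < 0.05:             # single-bit mutation
                    c[random.randrange(N_OBJECTS)] ^= 1
                nxt.append(c)
        pop = nxt
        best = max(pop + [best], key=profit)           # track the best scheme
    return best

print(evolve())
```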

System for displaying cached webpages, a server therefor, a terminal therefor, a method therefor and a computer-readable recording medium on which the method is recorded

The present invention relates to a system for displaying cached webpages, to a server therefor, to a terminal therefor, to a method therefor, and to a computer-readable recording medium on which the method is recorded. Provided is a system comprising: a web service server which stores at least one webpage; a caching server which collects links to webpages that match preset conditions and creates a caching-page list comprising at least one of the collected links; and a terminal which caches the webpages referenced by the links in the caching-page list and which, whenever a user inputs a link to call up a specific webpage, displays the input link together with the link to the cached webpage. Also provided is a variant system in which the terminal instead receives the caching-page list, displays its links, caches the webpages they reference, and displays the links so that cached webpages are distinguished from non-cached ones.
Owner:SK PLANET CO LTD
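
A sketch of the caching-server/terminal split described above: the server filters links by a preset condition into a caching-page list, and the terminal prefetches those pages and marks cached ones on display. The matching condition, URL list, and fetch mechanics are assumptions:

```python
import urllib.request

def build_caching_page_list(links, condition):
    # Caching server: keep only links matching the preset condition.
    return [url for url in links if condition(url)]

class Terminal:
    def __init__(self):
        self.cache = {}

    def prefetch(self, caching_page_list):
        for url in caching_page_list:
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    self.cache[url] = resp.read()
            except OSError:
                pass  # leave the page non-cached on failure

    def display(self, url):
        # Cached pages are marked so the user can tell them apart.
        marker = "[cached]" if url in self.cache else "[live]"
        print(marker, url)

links = ["https://example.com/a", "https://example.com/b?x=1"]
page_list = build_caching_page_list(links, lambda u: "?" not in u)
terminal = Terminal()
terminal.prefetch(page_list)   # network access; failures simply skip the page
terminal.display(links[0])
terminal.display(links[1])
```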

Feedback shared optical buffer device and buffer method based on an FDL (fiber delay line) loop

The invention provides a feedback shared optical buffer device and buffer method based on an FDL (fiber delay line) loop, and relates to the technical field of optical fiber communication. The FDL loop comprises four sub switching matrices and four FDL buffer groups. When an optical packet at an OPS (optical packet switching) node encounters contention, the conflicting packet can enter an extended input port of the FDL loop from any extended output port of the OPS node; an available FDL in the loop is selected to buffer the packet until an output port of the OPS node becomes free, whereupon the conflicting packet leaves the FDL loop through the extended output port of its FDL buffer group and is switched to the free output port through the OPS matrix. Because the device lets conflicting packets enter and leave the loop through the loop's extended input/output ports, and the feedback connection lets a packet pass through the FDL buffer groups several times, both the probability of successfully buffering a conflicting packet and the utilization rate of the FDL loop are improved. The problems of port contention in optical packet switching networks and of utilizing a finite number of FDLs are thus well addressed.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
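
A toy discrete-time simulation of the feedback idea: a contended packet recirculates through a fixed delay for up to a few passes rather than being dropped immediately. The delay length, arrival model, pass limit, and single-output-port abstraction are assumptions made to keep the sketch small:

```python
import random

# DELAY, MAX_PASSES, and the Bernoulli arrival model are assumptions;
# MAX_PASSES loosely models the finite number of FDLs available.
DELAY = 3        # slots added by one pass through an FDL
MAX_PASSES = 4   # the feedback connection allows several passes

def simulate(slots=2000, load=0.7, seed=1):
    rng = random.Random(seed)
    loop = []                     # (ready_slot, passes) packets in the loop
    delivered = dropped = 0
    for t in range(slots):
        candidates = [p for p in loop if p[0] <= t]  # done delaying
        if rng.random() < load:
            candidates.append((t, 0))                # new arrival this slot
        loop = [p for p in loop if p[0] > t]         # still delayed
        if candidates:
            delivered += 1                           # one packet takes the port
            for _, passes in candidates[1:]:         # the rest contend
                if passes + 1 <= MAX_PASSES:
                    loop.append((t + DELAY, passes + 1))  # one more FDL pass
                else:
                    dropped += 1                     # out of passes: lost
    return delivered, dropped

print(simulate())
```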

Cache management unit and cache management method

The invention discloses a cache management unit and a cache management method. The cache management unit comprises: a state calculation circuit which counts the valid data in a cache and derives the cache state from the valid-data count; a state indication circuit which outputs the cache state to a producer and/or a consumer; a write synchronization circuit which sends a write synchronization signal to perform a write synchronization operation on the state calculation circuit each time the producer actually writes a piece of data into the cache; a read synchronization circuit which sends a read synchronization signal to perform a read synchronization operation on the state calculation circuit each time the consumer receives a piece of data from the cache; a write pre-synchronization circuit which sends a write pre-synchronization signal to perform a write pre-synchronization operation on the state calculation circuit each time the producer issues a write command; and a read pre-synchronization circuit which sends a read pre-synchronization signal to perform a read pre-synchronization operation on the state calculation circuit each time the consumer issues a read command. The cache management unit and method help improve overall caching efficiency.
Owner:SHANGHAI MAGIMA DIGITAL INFORMATION
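
A software sketch of the counter logic described above: pre-synchronization reserves an entry when a command is issued, and synchronization commits it when the transfer actually completes. The patent describes hardware circuits, so this API and its reservation semantics are an assumption:

```python
class CacheState:
    def __init__(self, capacity):
        self.capacity = capacity
        self.valid = 0     # data actually in the cache
        self.reserved = 0  # writes issued but not yet completed
        self.claimed = 0   # reads issued but not yet completed

    def write_presync(self):
        # Producer issued a write command: reserve a slot if one is free.
        if self.valid + self.reserved < self.capacity:
            self.reserved += 1
            return True
        return False

    def write_sync(self):
        # Producer actually wrote the data: commit the reservation.
        self.reserved -= 1
        self.valid += 1

    def read_presync(self):
        # Consumer issued a read command: claim a valid entry if any remain.
        if self.valid - self.claimed > 0:
            self.claimed += 1
            return True
        return False

    def read_sync(self):
        # Consumer actually received the data: release the entry.
        self.claimed -= 1
        self.valid -= 1

    def state(self):
        # The "cache state" reported to producer and consumer.
        return {"valid": self.valid,
                "free": self.capacity - self.valid - self.reserved}

s = CacheState(capacity=4)
if s.write_presync():   # producer issues a write command
    s.write_sync()      # ...and later completes the write
print(s.state())
```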