399 results about "Hit rate" patented technology

Hit rate is a metric or measure of business performance traditionally associated with sales. It is defined as the number of sales of a product divided by the number of customers who go online, place a planned call, or visit a company to find out about the product.
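For illustration, the hit rate is simply the ratio of completed sales to customer inquiries; the snippet below is a minimal sketch with invented example figures, not data from this page:

```python
def hit_rate(sales: int, inquiries: int) -> float:
    """Hit rate = product sales / customer inquiries (online, planned call, or company visit)."""
    return sales / inquiries if inquiries else 0.0

# Hypothetical figures: 30 sales from 400 inquiries -> 7.5% hit rate
print(f"{hit_rate(30, 400):.1%}")
```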

Load balancing system based on content

Inactive · CN101605092A · Solve the problem of inconsistency · Save resources · Data switching networks · Data synchronization · The Internet
The invention relates to a content-based load balancing system comprising content-based load balancing, inspection of the content and health status of the backend servers, high availability of the load balancer itself, and the associated load balancing algorithm. Content-based load balancing means that the system works at the seventh layer of the OSI seven-layer model, i.e. it performs application-layer load balancing. For inspection of content and health status, a system resource inspection module checks the content and health status of the backend servers in real time. To avoid a single point of failure, the load balancing system uses a dual-system (active/standby) configuration for high availability: the main system and the standby system keep client connection information consistent through data synchronization, and when the main system fails, the standby load balancing system automatically takes over the services. The system replaces the coarse-grained mode of traditional IP-layer load balancing, so the load hit rate can be effectively increased. The load balancing system does not require the backend servers to store the same content, which greatly saves server resources.
Owner:LANGCHAO ELECTRONIC INFORMATION IND CO LTD
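The sketch below illustrates the layer-7 dispatch idea in the abstract above: requests are routed only to healthy backends that actually hold the requested content, and the active unit's connection state is copied to a standby unit for failover. All class and function names are illustrative assumptions, not taken from the patent:

```python
import random

class Backend:
    """A backend server that holds only a subset of the content."""
    def __init__(self, name, content_keys):
        self.name = name
        self.content_keys = set(content_keys)
        self.healthy = True

class ContentLoadBalancer:
    """Layer-7 (application-layer) dispatcher: route each request to a healthy
    backend that holds the requested content."""
    def __init__(self, backends):
        self.backends = backends
        self.connections = {}            # per-client state, synchronized to the standby unit

    def update_health(self, probe_results):
        # probe_results maps backend name -> bool, e.g. from a periodic inspection module
        for b in self.backends:
            b.healthy = probe_results.get(b.name, False)

    def route(self, client, content_key):
        candidates = [b for b in self.backends
                      if b.healthy and content_key in b.content_keys]
        if not candidates:
            raise RuntimeError(f"no healthy backend holds {content_key!r}")
        chosen = random.choice(candidates)
        self.connections[client] = chosen.name   # state the standby replicates
        return chosen

def sync_to_standby(active, standby):
    """Data synchronization: copy the active unit's connection table so the
    standby can take over transparently if the active unit fails."""
    standby.connections = dict(active.connections)

# Usage: two backends with different content, routed by requested key
active = ContentLoadBalancer([Backend("s1", {"/video"}), Backend("s2", {"/images"})])
standby = ContentLoadBalancer(active.backends)
active.update_health({"s1": True, "s2": True})
print(active.route("client-1", "/video").name)   # -> s1
sync_to_standby(active, standby)
```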

Cooperative caching method based on popularity prediction in named data networking

Active · CN106131182A · Fast push · Satisfy low frequency requests · Transmission · Cache hit rate · Distributed computing
The invention discloses a cooperative caching method based on popularity prediction in named data networking (NDN). When content is stored in NDN, strategies that cache at every node along the path or only at important nodes cause high redundancy of node data and low utilization of the in-network cache space. The method uses a "partial cooperative caching" mode: first, the future popularity of content is predicted; then an optimal proportion of each node's cache space is partitioned off as a local cache for high-popularity content, while the remaining part of each node stores lower-popularity content through neighborhood cooperation. The optimal space division proportion is calculated by considering the hop count of interest packets on node hits and the request hits at the server side in the network. Compared with conventional caching strategies, the method increases the utilization of the in-network cache space, reduces caching redundancy, improves the cache hit rate of the nodes, and improves the performance of the whole network.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
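A minimal sketch of the space partitioning described above: each node's cache is split into a local partition for content predicted to be popular and a cooperative partition for the rest. The split ratio, popularity threshold, and LRU eviction stand in for the values and policy the patent derives; all names are illustrative:

```python
from collections import OrderedDict

class PartitionedCache:
    """Per-node NDN cache split into a local partition (high predicted popularity)
    and a cooperative partition (lower-popularity content shared with neighbours)."""
    def __init__(self, capacity, local_ratio):
        self.local_cap = max(1, int(capacity * local_ratio))
        self.coop_cap = capacity - self.local_cap
        self.local = OrderedDict()    # LRU order, most recent at the end
        self.coop = OrderedDict()

    def insert(self, name, predicted_popularity, threshold):
        # High-popularity content goes to the local partition, the rest to the cooperative one
        store, cap = ((self.local, self.local_cap)
                      if predicted_popularity >= threshold
                      else (self.coop, self.coop_cap))
        store[name] = predicted_popularity
        store.move_to_end(name)
        if len(store) > cap:
            store.popitem(last=False)          # evict the least recently used entry

    def lookup(self, name):
        return name in self.local or name in self.coop

# Usage: 60% of a 10-slot cache reserved for content predicted to be popular
cache = PartitionedCache(capacity=10, local_ratio=0.6)
cache.insert("/videos/a", predicted_popularity=0.9, threshold=0.5)
cache.insert("/docs/b", predicted_popularity=0.1, threshold=0.5)
print(cache.lookup("/videos/a"), cache.lookup("/docs/b"))   # True True
```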

Base station caching method based on minimized user delay in edge caching network

The invention discloses a base station caching method based on minimizing user delay in an edge caching network, belonging to the field of wireless mobile communication. In an edge caching network scenario, a connection matrix between users and base stations is first established; at the same time, a strategy for the base stations to cache files is generated and a relationship matrix between base stations and files is established; then, the average hit rate with which all users in the whole network obtain files from the base stations is calculated; a constraint is set so that the minimum average network user delay is achieved while a reasonable average user hit-rate threshold is satisfied; a traversal method is then used to find the optimal base-station content storage strategy; small base stations are deployed according to this strategy; and finally, users connect to the small base stations to obtain files. The method fully considers the trade-off between average delay and average hit rate when designing the file caching strategy of the small base stations in an edge caching network, and thus achieves the goal of minimizing the average user download delay while satisfying a given average user hit rate.
Owner:BEIJING UNIV OF POSTS & TELECOMM
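To make the traversal step concrete, the sketch below exhaustively enumerates per-base-station cache contents and keeps the placement with the lowest average delay among those that meet the hit-rate threshold. The delay constants, toy topology, and function names are assumptions for illustration, not values from the patent:

```python
from itertools import combinations, product

def evaluate(placement, connect, popularity, d_bs=1.0, d_server=10.0):
    """Average delay and average hit rate for one placement.
    placement[b] = files cached at base station b; connect[u] = base stations reachable by user u."""
    delay = hits = total = 0.0
    for u, bss in connect.items():
        for f, p in popularity.items():
            total += p
            if any(f in placement[b] for b in bss):
                hits += p
                delay += p * d_bs       # file served from an edge base station
            else:
                delay += p * d_server   # file fetched from the remote server
    return delay / total, hits / total

def best_placement(base_stations, files, cache_size, connect, popularity, hit_threshold):
    """Traverse all per-base-station cache contents; keep the placement with the
    lowest average delay that still meets the average hit-rate threshold."""
    per_bs_choices = list(combinations(files, cache_size))
    best = None
    for combo in product(per_bs_choices, repeat=len(base_stations)):
        placement = {b: set(c) for b, c in zip(base_stations, combo)}
        delay, hit = evaluate(placement, connect, popularity)
        if hit >= hit_threshold and (best is None or delay < best[0]):
            best = (delay, hit, placement)
    return best

# Usage with toy numbers (all illustrative): 2 base stations, 3 files, 1 cache slot each
result = best_placement(
    base_stations=["bs1", "bs2"], files=["f1", "f2", "f3"], cache_size=1,
    connect={"u1": ["bs1"], "u2": ["bs1", "bs2"]},
    popularity={"f1": 0.5, "f2": 0.3, "f3": 0.2}, hit_threshold=0.5)
print(result[0], result[1])   # average delay and hit rate of the best placement
```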

Method for establishing access by fusing multiple levels of cache directories

The invention relates to a method for establishing access by fusing multiple levels of cache directories, in which a hierarchically fused cache directory mechanism is established. In the method, multiple CPU and GPU processors form a Quart computing element; a Cuckoo directory is established hierarchically in the caches built into the CPU or GPU processors; and an area directory and an area directory controller are established outside the Quart computing element. This effectively reduces the bus communication bandwidth, lowers the frequency of arbitration conflicts, and allows directory entries for data blocks of the three-level fused Cache to be cached, which improves the access hit rate of the three-level fused Cache. The hierarchically fused Cache directory mechanism inside and outside the Quart lowers the Cache miss rate, reduces the on-chip bus bandwidth, and lowers the power consumption of the system; no new Cache block states need to be added, so the mechanism is well compatible with Cache coherence protocols, and it provides a new approach to constructing scalable, high-performance heterogeneous single-chip multi-core processor systems.
Owner:UNIV OF SHANGHAI FOR SCI & TECH
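The sketch below illustrates the Cuckoo-directory idea mentioned in the abstract above: each cached block address maps to two candidate slots, and insertions displace existing entries rather than growing a set-associative bucket. The table size, hash functions, and sharer representation are assumptions for illustration, not details from the patent:

```python
import random

class CuckooDirectory:
    """Toy cuckoo-hash cache directory: a block's sharer set lives in one of two
    candidate slots; on insertion, occupied slots displace their victims."""
    def __init__(self, size=8, max_kicks=16):
        self.size = size
        self.max_kicks = max_kicks
        self.table = [None] * size            # each slot holds (block_addr, sharer_set)

    def _slots(self, addr):
        # Two independent hash functions over the block address
        return hash((addr, 0)) % self.size, hash((addr, 1)) % self.size

    def lookup(self, addr):
        for s in self._slots(addr):
            entry = self.table[s]
            if entry and entry[0] == addr:
                return entry[1]               # set of cores/caches sharing the block
        return None                           # directory miss -> consult the outer (area) directory

    def insert(self, addr, sharers):
        entry = (addr, set(sharers))
        for _ in range(self.max_kicks):
            slots = self._slots(entry[0])
            for s in slots:
                if self.table[s] is None or self.table[s][0] == entry[0]:
                    self.table[s] = entry
                    return True
            # Both candidate slots occupied: displace one victim and re-insert it
            victim_slot = random.choice(slots)
            self.table[victim_slot], entry = entry, self.table[victim_slot]
        return False                          # give up after max_kicks displacements

# Usage: record which cores share two cached blocks, then look one up
d = CuckooDirectory()
d.insert(0x40, {"cpu0"})
d.insert(0x80, {"gpu1", "cpu2"})
print(d.lookup(0x40))    # -> {'cpu0'}
```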