119 results about How to "Avoid overhead" patented technology

Method and Apparatus for Encrypting Data Messages after Detecting Infected VM

For a host that executes one or more guest virtual machines (GVMs), some embodiments provide a novel encryption method for encrypting the data messages sent by the GVMs. The method initially receives a data message to send for a GVM executing on the host. It then determines whether it should encrypt the data message based on a set of one or more encryption rules. When it determines that it should encrypt the received data message, it encrypts the data message and forwards the encrypted data message to its destination; otherwise, it forwards the received data message unencrypted. In some embodiments, the host encrypts the data messages of different GVMs executing on the host differently. When two different GVMs belong to two different logical overlay networks implemented on a common network fabric, the method in some embodiments encrypts the data messages exchanged between the GVMs of one logical network differently from those exchanged between the GVMs of the other. In some embodiments, the method can also encrypt different types of data messages from the same GVM differently. Also, in some embodiments, the method can dynamically enforce encryption rules in response to dynamically detected events, such as malware infections.
Owner:NICIRA
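The per-message decision the abstract describes — match a message against a rule set, encrypt with a per-logical-network key on a hit, forward in plaintext otherwise — can be sketched as follows. This is a minimal illustration, not the patented implementation: the `EncryptionRule` fields, the per-network key table, and the toy XOR cipher are all hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EncryptionRule:
    # Match criteria; None acts as a wildcard. Fields are illustrative.
    src_vm: Optional[str] = None
    logical_network: Optional[str] = None
    msg_type: Optional[str] = None
    key_id: str = "default"

    def matches(self, msg: "Message") -> bool:
        return all(
            crit is None or crit == getattr(msg, attr)
            for attr, crit in (("src_vm", self.src_vm),
                               ("logical_network", self.logical_network),
                               ("msg_type", self.msg_type)))

@dataclass
class Message:
    src_vm: str
    logical_network: str
    msg_type: str
    payload: bytes

# One key per logical overlay network (toy single-byte keys).
KEYS = {"net-A": b"\x01", "net-B": b"\x02"}

def xor_encrypt(payload: bytes, key: bytes) -> bytes:
    # Toy keyed transform standing in for a real cipher.
    return bytes(b ^ key[0] for b in payload)

def process(msg: Message, rules: list) -> bytes:
    # First matching rule wins; its key_id selects the network key.
    for rule in rules:
        if rule.matches(msg):
            return xor_encrypt(msg.payload, KEYS[rule.key_id])
    return msg.payload  # no rule matched: forward unencrypted
```

Because rules can match on source VM, logical network, or message type, the same mechanism covers all three per-GVM, per-network, and per-type cases the abstract mentions; a dynamically detected event (such as a malware infection) would simply prepend a new rule to the list.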

Query method based on regional bitmap indexes in cloud environment

The invention provides a query method based on regional bitmap indexes in a cloud environment. The method comprises the following steps: 1) establishing the regional bitmap indexes: 1.1) partitioning the index attribute of a data table in the cloud environment into ranges to generate a global ordering table of attribute values, which orders the tuples by a set rule; 1.2) establishing an indicating bitmap on each data node according to the range partitioning, which records which attribute values are stored locally; and 1.3) establishing a local bitmap index on each data node according to the architecture of the cloud environment, completing the regional bitmap indexes; and 2) on receiving a query condition, the master node builds a condition bitmap covering all values that satisfy the condition and distributes it to the data nodes; each data node executes its retrieval task concurrently, and the master node collects the per-node results and returns their union to the user. By establishing regional bitmap indexes, the configurable parallel computing resources of the cloud environment are fully utilized, and queries whose condition is a value comparison receive a fast response.
Owner:PEKING UNIV
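The master/data-node interplay described above might look like the following sketch: global value ranges define the buckets, each node keeps a local per-bucket bitmap over its rows, the master marks the buckets a predicate can touch, and the final answer is the union of the node results. The bucket boundaries, node layout, and helper names are invented for illustration.

```python
# Global range partitioning of the indexed attribute: sorted (lo, hi) buckets.
RANGES = [(0, 10), (10, 20), (20, 30), (30, 40)]

def condition_bitmap(predicate):
    """Master node: set a bit for every range bucket the predicate may hit."""
    return [1 if predicate(lo, hi) else 0 for lo, hi in RANGES]

class DataNode:
    def __init__(self, rows):
        self.rows = rows  # local tuples: (row_id, attribute_value)
        # Local bitmap index: for each global range, a bitmap over local rows.
        self.index = [[1 if lo <= v < hi else 0 for _, v in rows]
                      for lo, hi in RANGES]

    def scan(self, cond_bm):
        # Only buckets flagged in the condition bitmap are scanned.
        hits = set()
        for range_bit, bitmap in zip(cond_bm, self.index):
            if range_bit:
                hits.update(rid for (rid, _), b in zip(self.rows, bitmap) if b)
        return hits

def query(nodes, predicate):
    cbm = condition_bitmap(predicate)
    result = set()
    for node in nodes:  # would run concurrently across nodes in the cloud
        result |= node.scan(cbm)
    return result
```

For example, the condition "value >= 20" becomes the bucket predicate `lambda lo, hi: hi > 20`, which flags only the last two buckets, so rows in the first two buckets are never touched.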

Method and system for estimating capacity of server

The invention discloses a method and a system for estimating the capacity of a server. The method comprises the following steps: counting the amount of business the server processes per unit time and the length of the daily peak period; calculating the percentage of the daily workload handled during the peak period and a complexity parameter; adjusting the server redundancy factor and the CPU utilization ratio; and estimating the server capacity from the per-unit-time throughput, the daily peak-period length, the peak-period percentage of the daily workload, the complexity parameter, the redundancy factor, and the CPU utilization ratio. The invention also discloses a corresponding system. By introducing the complexity parameter, differences between application systems are eliminated, yielding a universal server-capacity estimation method applicable to any application system and improving estimation efficiency; and because the estimation uses actual data from the running online application system, the error between the estimated capacity and the actual capacity is reduced.
Owner:STATE GRID CORP OF CHINA +1

High-performance mass storage system and high-performance mass storage method

The invention discloses a high-performance mass storage system and method, belonging to the technical field of storage. The system comprises a file server and a network disk array group. The group comprises a plurality of network disk arrays, each mounted onto the file server through a peripheral channel in DAS mode; the disk arrays and the file server are then connected to a switching network through network channels. A network user connects to the switching network and interacts with the file server and the disk arrays through it. The file server monitors all operations in the mass storage system and provides data descriptions to the network user; the network disk arrays handle the user's data access; and the peripheral channel is an SCSI bus, an FC bus, or a network channel. The system and method combine the advantages of centralized and distributed storage, realizing centralized file management together with distributed data storage.
Owner:LANGCHAO ELECTRONIC INFORMATION IND CO LTD

Heterogeneous multi-core thread scheduling method, heterogeneous multi-core thread scheduling system and heterogeneous multi-core processor

The invention relates to a heterogeneous multi-core thread scheduling method. The method generates ranking lists for threads and cores from the dynamic characteristics of the program, finds the optimal stable matching between threads and cores according to those lists, and schedules threads according to the stable matching. Specifically, the method receives the characteristic vectors of the threads running on the cores and, from these vectors, builds each thread's preference ranking over the cores and each core's ranking over the threads; it then computes the stable matching between threads and cores, and the operating system schedules each thread onto its matched core. The large overhead of sampling-based scheduling is avoided; the method accounts for more complex factors affecting performance and power consumption, and only relative orderings rather than absolute values need to be predicted, so model complexity is reduced while scheduling precision is improved.
Owner:INST OF COMPUTING TECHNOLOGY - CHINESE ACAD OF SCI
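The "optimal stable match" step described above is the classic stable-matching problem and can be computed with the Gale-Shapley algorithm. A sketch, assuming each side supplies a full preference list (the function and variable names are ours, not the patent's):

```python
def stable_match(thread_prefs, core_prefs):
    """Gale-Shapley: threads propose to cores in preference order.

    thread_prefs -- {thread: [cores, best first]}
    core_prefs   -- {core: [threads, best first]}
    Returns {core: thread}. Assumes equally many threads and cores.
    """
    # Precompute each core's rank of each thread for O(1) comparisons.
    rank = {c: {t: i for i, t in enumerate(prefs)}
            for c, prefs in core_prefs.items()}
    free = list(thread_prefs)          # threads not yet assigned
    next_choice = {t: 0 for t in thread_prefs}
    engaged = {}                       # core -> thread
    while free:
        t = free.pop()
        c = thread_prefs[t][next_choice[t]]  # t's best core not yet tried
        next_choice[t] += 1
        if c not in engaged:
            engaged[c] = t                   # core was free: accept
        elif rank[c][t] < rank[c][engaged[c]]:
            free.append(engaged[c])          # core prefers t: bump old thread
            engaged[c] = t
        else:
            free.append(t)                   # core rejects t
    return engaged
```

The result is stable in the standard sense: no thread and core both prefer each other over their assigned partners, which is exactly the property the scheduling method relies on — only the relative orderings matter, never absolute performance predictions.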

Bidirectional forwarding detection system and detection time configuration method of bidirectional forwarding detection

The invention discloses a bidirectional forwarding detection (BFD) system and a detection-time configuration method for BFD. During bidirectional forwarding detection, the network delay between the local BFD end and the remote BFD end is measured, and the detection times of the two ends are then configured to the same value based on the measured delay. That is, the detection time is configured from the measured network delay between the two ends rather than, as at present, only from the packet transmit/receive intervals and detection multipliers configured on the network equipment at the two ends. This avoids the problem that the hardware cannot meet the detection time negotiated by the central processing unit, which breaks the BFD session and causes frequent link flapping and unnecessary system overhead, thereby improving the accuracy and resource utilization of bidirectional forwarding detection.
Owner:ZTE CORP
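A hedged sketch of how the measured delay might enter the detection-time computation: standard BFD (RFC 5880) derives the detection time as the detect multiplier times the larger negotiated transmit interval, and the patent's addition is to fold the measured network delay into the value both ends configure. The padding formula below is an assumption; the patent does not publish its exact rule.

```python
def bfd_detection_time(local_tx_ms, remote_tx_ms, detect_mult, rtt_ms):
    """Detection time (ms) for one BFD session endpoint.

    local_tx_ms / remote_tx_ms -- negotiated transmit intervals of both ends
    detect_mult                -- BFD detect multiplier
    rtt_ms                     -- measured round-trip delay between the ends
    """
    # Standard RFC 5880 detection time: multiplier * max negotiated interval.
    base = detect_mult * max(local_tx_ms, remote_tx_ms)
    # Assumed adjustment: pad by the estimated one-way delay so both ends,
    # computing from the same measurement, converge on the same value.
    return base + rtt_ms / 2
```

With both ends applying the same measurement, the configured detection times match, avoiding the mismatch that would otherwise tear down the session.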

Processing and optimizing method and device for dynamic language, equipment and storage medium

The invention provides a processing and optimizing method, device, equipment, and storage medium for a dynamic language. The method comprises: parsing the program code in an intermediate file to obtain an abstract syntax tree; parsing the annotation information in the intermediate file to obtain type description information for the corresponding program code, and attaching that information to the corresponding nodes of the abstract syntax tree; and traversing the abstract syntax tree to derive, from the relationships between nodes and their type description information, the attributes and types owned by each object with a class structure, generating a hidden class and attaching it to the class-definition node corresponding to the object, so that optimized code is generated from the abstract syntax tree carrying the hidden classes. By pre-generating hidden classes at compile time, the object layout is known in advance, so the source file can be optimized against the pre-generated hidden classes, avoiding the runtime overhead of creating hidden classes and expanding the object layout as attributes are added, and increasing the running speed.
Owner:BANMA ZHIXING NETWORK HONGKONG CO LTD
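The core idea — fix the attribute layout up front from the type descriptions so that attribute assignment never reshapes the object at runtime — can be illustrated with a simplified stand-in. The names and mechanism below are ours, not the patent's; a real engine would do this inside the compiler, not with Python classes.

```python
def make_hidden_class(name, typed_attrs):
    """Build a class with a fixed slot layout from {attr_name: type},
    as if gathered from annotations at compile time (a simplified
    stand-in for the patent's AST pass over type description info)."""
    slot_of = {a: i for i, a in enumerate(typed_attrs)}

    class Hidden:
        __slots__ = ("_storage",)   # no per-instance dict: layout is fixed
        _layout = slot_of

        def __init__(self, **kwargs):
            # Storage is allocated at full size once; assigning attributes
            # later never grows or reshapes it.
            object.__setattr__(self, "_storage", [None] * len(slot_of))
            for k, v in kwargs.items():
                setattr(self, k, v)

        def __getattr__(self, k):
            return self._storage[self._layout[k]]

        def __setattr__(self, k, v):
            self._storage[self._layout[k]] = v

    Hidden.__name__ = name
    return Hidden
```

Every instance reads and writes attributes through a precomputed name-to-slot map, which is the property the optimization exploits: the layout is known before any object exists, so no hidden class has to be created or extended while the program runs.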