61,486 results for patented technology involving "utilization rate"

In business, the utilization rate is an important number for firms that charge their time to clients and for those that need to maximize the productive time of their employees. It can reflect the billing efficiency or the overall productive use of an individual or a firm. Viewed simply, there are two common ways to calculate it: divide billable hours by the total hours actually worked, or divide billable hours by a fixed capacity such as a standard 40-hour week.
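As a minimal, hedged illustration of those two calculations (exact definitions vary by firm; the 40-hour capacity below is an assumption), a short Python sketch:

```python
# Minimal sketch of the two common utilization-rate calculations.
# Definitions vary by firm; the 40-hour capacity here is an assumption.

def utilization_vs_hours_worked(billable_hours: float, total_hours_worked: float) -> float:
    """Method 1: billable hours as a share of all hours actually worked."""
    return billable_hours / total_hours_worked


def utilization_vs_capacity(billable_hours: float, capacity_hours: float = 40.0) -> float:
    """Method 2: billable hours as a share of a fixed capacity (e.g., a 40-hour week)."""
    return billable_hours / capacity_hours


if __name__ == "__main__":
    # An employee bills 32 hours while working 45 hours in a standard 40-hour week.
    print(f"vs. hours worked: {utilization_vs_hours_worked(32, 45):.0%}")  # ~71%
    print(f"vs. 40-hour capacity: {utilization_vs_capacity(32):.0%}")      # 80%
```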

System and method for managing virtual servers

A management capability is provided for a virtual computing platform. In one example, this platform allows interconnected physical resources such as processors, memory, network interfaces, and storage interfaces to be abstracted and mapped to virtual resources (e.g., virtual mainframes, virtual partitions). Virtual resources contained in a virtual partition can be assembled into virtual servers that execute a guest operating system (e.g., Linux). In one example, the abstraction is unique in that any resource is available to any virtual server regardless of the physical boundaries that separate the resources. For example, any number of physical processors or any amount of physical memory can be used by a virtual server even if these resources span different nodes. The platform allows for the creation, deletion, modification, control (e.g., start, stop, suspend, resume), and status (i.e., events) of the virtual servers that execute on it, and the management capability provides controls for these functions.

In a particular example, such a platform allows the number and type of virtual resources consumed by a virtual server to be scaled up or down while the virtual server is running. For instance, an administrator may scale a virtual server manually or may define one or more policies that scale it automatically. Further, using the management API, a virtual server can monitor itself and scale itself up or down depending on its need for processing, memory, and I/O resources. For example, a virtual server may monitor its CPU utilization and invoke controls through the management API to allocate a new processor for itself when its utilization exceeds a specific threshold; conversely, it may scale down its processor count when its utilization falls.

Policies can be used to execute one or more management controls. More specifically, the management capability allows policies to be defined using a management object's properties, events, and/or method results. A management policy may also incorporate external data (e.g., an external event) in its definition. A triggered policy causes the management server or another computing entity to execute an action, which may use one or more management controls or access external capabilities such as sending a notification e-mail or a text message to a telephone paging system. Further, management controls may be executed as a discrete transaction referred to as a "job," and a series of management controls may be assembled into a job using one or more management interfaces. Errors that occur when a job is executed may cause the job to be rolled back, allowing affected virtual servers to return to their original state.
Owner:ORACLE INT CORP
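As a rough sketch of the self-scaling behavior described in the abstract above (a virtual server watching its own CPU utilization and requesting processors through the management API), the Python below uses a hypothetical `ManagementAPI` client, utilization probe, and thresholds; none of these names come from the patent.

```python
import time


# Hypothetical stand-in for the platform's management API; the patent does not
# specify this interface, so the class and method names are illustrative only.
class ManagementAPI:
    def allocate_processor(self, server_id: str) -> None:
        print(f"allocating an additional processor to {server_id}")

    def release_processor(self, server_id: str) -> None:
        print(f"releasing a processor from {server_id}")


def current_cpu_utilization(server_id: str) -> float:
    """Placeholder probe returning average CPU utilization in [0.0, 1.0]."""
    return 0.5


def self_scaling_loop(api: ManagementAPI, server_id: str,
                      high: float = 0.85, low: float = 0.30,
                      interval_s: float = 60.0, iterations: int = 10) -> None:
    """Scale processor count up above `high` utilization and down below `low`."""
    for _ in range(iterations):
        util = current_cpu_utilization(server_id)
        if util > high:
            api.allocate_processor(server_id)   # scale up
        elif util < low:
            api.release_processor(server_id)    # scale down
        time.sleep(interval_s)
```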

Artificial intelligence early warning system

The invention relates to an artificial intelligence early warning system. The system comprises an intelligent Internet of Things and risk factor data acquisition system (100), a risk factor management system (200), a cloud computing device (300), a cloud storage device (400), a cloud database (500), an artificial intelligence early warning operation system (600), an artificial intelligence early warning server (700), an internet + distributed early warning police booth (800), a five-level artificial intelligence early warning system (900), a four-level artificial intelligence early warning system (1000), a three-level artificial intelligence early warning system (1100), a two-level artificial intelligence early warning system (1200), and a one-level artificial intelligence early warning system (1300). The system collects, compares, analyzes, infers, and evaluates risk factors and carries out cloud computing, cloud storage, graded alarming, and prevention and control, so that round-the-clock monitoring of the points around the police booth is achieved, users can share information, the utilization rate of information resources is increased, and security for maintaining stability in the border regions is strengthened.
Owner:苏州闪驰数控系统集成有限公司
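A very rough sketch of how the graded alarming might map a computed risk score onto the five warning tiers, assuming level 1 is the most severe; the score ranges are invented for illustration and do not come from the patent.

```python
# Illustrative mapping from a risk score in [0, 1] to the five warning tiers.
# Assumes level 1 is the most severe; the thresholds are invented for the sketch.
def warning_level(risk_score: float) -> int:
    if risk_score >= 0.9:
        return 1
    if risk_score >= 0.7:
        return 2
    if risk_score >= 0.5:
        return 3
    if risk_score >= 0.3:
        return 4
    return 5


print(warning_level(0.82))  # -> 2
```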

Managing power consumption based on utilization statistics

The present invention, in various embodiments, provides techniques for managing system power. In one embodiment, system compute loads and/or system resources invoked by services running on the system consume power. To better manage power consumption, the spare capacity of a system resource is measured periodically, and if this spare capacity falls outside a predefined range, the resource's operation is adjusted (e.g., the CPU speed is increased or decreased) so that the spare capacity returns to the range. Further, the spare capacity is kept as close to zero as practical and is determined from the statistical distribution of a number of resource utilization values, which are also sampled periodically. The spare capacity is also calculated with regard to the probability that the system resources are saturated. In one embodiment, to maintain the services required by a Service Level Agreement (SLA), a correlation between an SLA parameter and a resource utilization is determined; in addition to other factors and this correlation, the spare capacity of the resource utilization is adjusted based on the spare capacity of the SLA parameter. Various embodiments include optimizing system performance before calculating system spare capacity, saving power for system groups or clusters, and saving power under special conditions such as brown-outs or high temperature.
Owner:HEWLETT-PACKARD ENTERPRISE DEV LP
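A minimal sketch of the statistical idea in the abstract above: estimate spare capacity from periodically sampled utilization values and nudge the CPU speed so the spare capacity stays inside a target range. The percentile, range, and speed-adjustment step are assumptions for illustration, not values from the patent.

```python
# Estimate spare capacity from sampled utilization and adjust CPU speed so the
# spare capacity stays within a target range. Percentile, range, and step size
# are illustrative assumptions.
def spare_capacity(utilization_samples: list[float], percentile: float = 0.95) -> float:
    """1 minus a high quantile of recent utilization, so saturation stays unlikely."""
    samples = sorted(utilization_samples)
    idx = min(int(percentile * len(samples)), len(samples) - 1)
    return 1.0 - samples[idx]


def adjust_cpu_speed(current_speed: float, spare: float,
                     low: float = 0.05, high: float = 0.20,
                     step: float = 0.1) -> float:
    """Raise speed when spare capacity is too small, lower it when it is large."""
    if spare < low:
        return current_speed * (1 + step)
    if spare > high:
        return current_speed * (1 - step)
    return current_speed


samples = [0.62, 0.70, 0.55, 0.81, 0.66, 0.74, 0.69, 0.78]
spare = spare_capacity(samples)
print(f"spare capacity: {spare:.2f}, new speed: {adjust_cpu_speed(2.4, spare):.2f} GHz")
```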

Energy-saving method of cloud data center based on virtual machine migration and load perception integration

The invention relates to system-level virtualization and energy-saving technology in the field of computer system architecture, and discloses an energy-saving method for a cloud data center based on virtual machine migration and load-aware consolidation. Under the unified coordination and control of a load-aware optimized consolidation strategy management module and a virtual machine online migration control module, the method monitors the resource utilization rate of the physical machines and virtual machines and the current resource usage of each physical server, dynamically completes the migration and reconsolidation of virtual machine load in the cloud data center, and turns off physical servers that run without load, so that overall server resource utilization is improved and energy is saved. The method effectively realizes energy saving for the cloud data center based on virtual machine online migration and load-aware consolidation, reduces the number of physical servers actually required by the cloud data center, and achieves green energy saving.
Owner:ZHEJIANG UNIV
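A simplified sketch of the consolidation step: pack per-VM CPU demand onto as few hosts as possible and leave the rest to be powered off. Live-migration mechanics, multi-resource constraints, and the actual strategy-module interfaces from the patent are omitted; the function and data shapes are hypothetical.

```python
# Simplified first-fit-decreasing consolidation of VM CPU demands onto hosts.
# Hosts not returned in the placement carry no load and can be turned off.
# A real system would also weigh memory, I/O, and migration cost.
def consolidate(vm_loads: dict[str, float], host_capacity: float) -> list[list[str]]:
    hosts: list[tuple[float, list[str]]] = []  # (used capacity, assigned VMs)
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for i, (used, vms) in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] = (used + load, vms + [vm])
                break
        else:
            hosts.append((load, [vm]))
    return [vms for _, vms in hosts]


placement = consolidate({"vm1": 0.4, "vm2": 0.3, "vm3": 0.5, "vm4": 0.2}, host_capacity=1.0)
print(placement)  # [['vm3', 'vm1'], ['vm2', 'vm4']] -> two hosts stay on, two can be powered off
```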

Distributed type dynamic cache expanding method and system supporting load balancing

The invention discloses a distributed dynamic cache expanding method and system supporting load balancing, which belong to the technical field of software. The method comprises the following steps: 1) each cache server monitors its own resource utilization rate at regular intervals; 2) each cache server calculates its weighted load value Li from the currently monitored resource utilization rate and sends Li to the cache cluster manager; 3) the cache cluster manager calculates the current average load value of the distributed cache system from the weighted load values Li, executes an expansion operation when the current average load value is higher than a set threshold thremax, and executes a shrink operation when it is lower than a set threshold thremin. The system comprises the cache servers, a cache client, and the cache cluster manager, where the cache servers are connected with the cache client and the cache cluster manager through the network. The invention ensures uniform distribution of network traffic among the cache nodes, optimizes the utilization rate of system resources, and addresses the problems of ensuring data consistency and continuous service availability.
Owner:济南君安泰投资集团有限公司
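A minimal sketch of the cluster manager's decision in steps 2) and 3) above: average the weighted load values Li reported by the cache servers and compare the result against the two thresholds. The weighting coefficients and the thremax / thremin values are illustrative assumptions.

```python
# Sketch of the cache cluster manager's scaling decision. The weights and the
# thremax / thremin thresholds are illustrative assumptions, not patent values.
def weighted_load(cpu: float, memory: float, network: float,
                  weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted load value Li computed by each cache server from its utilization."""
    return weights[0] * cpu + weights[1] * memory + weights[2] * network


def scaling_decision(load_values: list[float],
                     thremax: float = 0.8, thremin: float = 0.3) -> str:
    avg = sum(load_values) / len(load_values)
    if avg > thremax:
        return "expand"   # add a cache node
    if avg < thremin:
        return "shrink"   # remove a cache node
    return "hold"


loads = [weighted_load(0.95, 0.80, 0.70), weighted_load(0.90, 0.85, 0.80)]
print(scaling_decision(loads))  # -> "expand"
```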

Techniques for placing applications in heterogeneous virtualized systems while minimizing power and migration cost

N applications are placed on M virtualized servers having power management capability. A time horizon is divided into a plurality of time windows, and, for each given one of the windows, a placement of the N applications is computed, taking into account power cost, migration cost, and performance benefit. The migration cost refers to the cost to migrate from a first virtualized server to a second virtualized server for the given one of the windows. The N applications are placed onto the M virtualized servers, for each of the plurality of time windows, in accordance with the placement computed for that window. In an alternative aspect, power cost and performance benefit, but not migration cost, are taken into account; there are a plurality of virtual machines; and the computing step includes, for each of the windows, determining a target utilization for each of the servers based on a power model for each given one of the servers; picking the server with the least power increase per unit increase in capacity, until enough capacity has been allocated to fit all the virtual machines; and employing a first-fit decreasing bin-packing technique to compute the placement of the applications on the virtualized servers.
Owner:IBM CORP
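A rough sketch of the alternative placement pass described above: order servers by the least power increase per unit of added capacity, keep picking until the total virtual-machine demand fits, then place the virtual machines with first-fit decreasing. The linear power slopes and data shapes are assumptions for illustration, not the patent's power model.

```python
# Pick servers by least marginal power per unit of capacity until total demand
# fits, then place VMs with first-fit decreasing. The power slopes below are
# illustrative assumptions, not the patent's power model.
def pick_servers(power_slope: dict[str, float], capacity: dict[str, float],
                 total_demand: float) -> list[str]:
    chosen, allocated = [], 0.0
    for name in sorted(power_slope, key=power_slope.get):
        chosen.append(name)
        allocated += capacity[name]
        if allocated >= total_demand:
            break
    return chosen


def first_fit_decreasing(demands: dict[str, float], chosen: list[str],
                         capacity: dict[str, float]) -> dict[str, list[str]]:
    placement = {s: [] for s in chosen}
    remaining = {s: capacity[s] for s in chosen}
    for vm, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        for s in chosen:
            if remaining[s] >= demand:
                placement[s].append(vm)
                remaining[s] -= demand
                break
    return placement


slopes = {"s1": 120.0, "s2": 90.0, "s3": 150.0}  # watts per unit of utilization
caps = {"s1": 1.0, "s2": 1.0, "s3": 1.0}
vms = {"a": 0.6, "b": 0.5, "c": 0.4}
chosen = pick_servers(slopes, caps, total_demand=sum(vms.values()))
print(first_fit_decreasing(vms, chosen, caps))  # {'s2': ['a', 'c'], 's1': ['b']}
```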

Industrial treatment method and industrial treatment device for oil field waste

The invention provides an industrial treatment method for oil field waste. The method comprises the following steps: carrying out sampling analysis on the oil field waste; preheating it to 80-300 °C using high-temperature steam or conduction oil; carrying out microwave pyrolysis treatment while controlling the pressure at -5000 to -100 Pa to obtain solid treatment products and gas; and condensing, separating, and purifying the gas, which is finally recycled to obtain water, oil, and non-condensable gas. An industrial treatment device for oil field waste comprises a feed hopper, a dryer, a microwave heating cavity, a microwave generator, a heating device, and a condensation, separation, and purification device. The feed hopper is connected to the dryer, which is connected to the microwave heating cavity; a steam or conduction oil outlet of the heating device is connected to a steam or conduction oil outlet of the dryer; the gas outlets of the dryer and the microwave heating cavity are connected to the condensation, separation, and purification device; and the microwave generator is connected to the microwave heating cavity. The method and device provide good treatment results, high energy utilization, and good economic efficiency.
Owner:RUIJIE ENVIRONMENTAL PROTECTION TECH CO LTD

Arrangement structure for mechanical arm of minimally invasive surgery robot

An arrangement structure for the mechanical arm of a minimally invasive surgery robot comprises a main operation end part, an auxiliary operation end part, a driven adjusting arm, and a driving operation arm combination. The main operation end part and the auxiliary operation end part are connected front to back through a vertical column to form a whole, and a vertically sliding block is arranged on the front end face of the vertical column. The driven adjusting arm comprises a first connection rod and a second connection rod; the driving operation arm combination comprises a driving arm support platform, at least three driving arm seats, and at least three driving operation arms, and the at least three identical driving arms are respectively rotatably connected with the at least three driving arm seats. By integrating the main operation end with the driven operation end, the robot can be shifted conveniently. The arrangement structure supports and adjusts a plurality of driving operation arms, reduces the total volume of the robot, and improves the utilization rate of space in the operating room; it not only saves operating room space but also can be moved quickly and does not require a special operating room.
Owner:周宁新 +1