
1794 results about "Resource pool" patented technology

Authentication and authorization methods for cloud computing security platform

An authentication and authorization plug-in model for a cloud computing environment enables cloud customers to retain control over their enterprise information when their applications are deployed in the cloud. The cloud service provider provides a pluggable interface for customer security modules. When a customer deploys an application, the cloud environment administrator allocates a resource group (e.g., processors, storage, and memory) for the customer's application and data. The customer registers its own authentication and authorization security module with the cloud security service, and that security module is then used to control what persons or entities can access information associated with the deployed application. The cloud environment administrator, however, is typically not registered (as a permitted user) within the customer's security module; thus, the cloud environment administrator is not able to access (or release to others, or to the cloud's general resource pool) the resources assigned to the cloud customer (even though the administrator itself assigned those resources) or the associated business information. To further balance the rights of the various parties, a third-party notary service protects the privacy and the access rights of the customer when its application and information are deployed in the cloud.
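The core idea of the plug-in model can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the class names (`SecurityModule`, `CloudSecurityService`) and the single `is_authorized` hook are assumptions made for clarity. The key property shown is that the cloud service delegates every access decision to the customer-registered module, so an administrator who is not listed there is denied:

```python
from abc import ABC, abstractmethod

class SecurityModule(ABC):
    """Customer-supplied authentication/authorization plug-in."""
    @abstractmethod
    def is_authorized(self, principal: str, action: str) -> bool: ...

class CustomerSecurityModule(SecurityModule):
    """A trivial allow-list module the customer registers."""
    def __init__(self, permitted_users):
        self.permitted = set(permitted_users)
    def is_authorized(self, principal, action):
        return principal in self.permitted

class CloudSecurityService:
    """Cloud-side service that delegates all decisions to the plug-in."""
    def __init__(self):
        self.modules = {}  # customer_id -> SecurityModule
    def register(self, customer_id, module: SecurityModule):
        self.modules[customer_id] = module
    def access(self, customer_id, principal, action) -> bool:
        module = self.modules.get(customer_id)
        return module is not None and module.is_authorized(principal, action)

svc = CloudSecurityService()
svc.register("acme", CustomerSecurityModule({"alice", "bob"}))
assert svc.access("acme", "alice", "read")
# the administrator who allocated the resources is not a permitted user:
assert not svc.access("acme", "cloud-admin", "release")
```

Because the decision logic lives entirely in the customer's module, the provider can operate the infrastructure without ever being able to grant itself access.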

Resource scheduling method and system

Embodiments of the invention are concerned with scheduling resources to perform tasks requiring a plurality of capabilities (or of capabilities and capacities), and have particular application to highly changeable or uncertain environments in which the status and the composition of tasks and/or resources change frequently. Embodiments provide a method for use in a scheduling process for scheduling allocation of resources to a task, each resource having a plurality of attributes, wherein the task has one or more operational constraints, including a required plurality of capabilities, and a performance condition associated therewith. The method comprises: receiving data indicative of a change to the status of the scheduling process; in response to receipt of the status data, reviewing the attributes of individual resources so as to identify combinations of resources able to collectively satisfy said capability requirements of the task; evaluating each identified combination of resources in accordance with a performance algorithm so as to identify an associated performance cost; selecting a combination of resources whose identified performance cost meets the performance condition; and scheduling said task on the basis of said selected combination of resources. In embodiments of this aspect of the invention, changes to resource configurations are effected as part of the scheduling process. These changes can be made dynamically, in response to the occurrence of events that have a bearing on the scheduling process, and involve aggregating resources together so as to create, essentially, a new resource pool from which selection can be made.
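The evaluate-and-select loop described above can be illustrated with a brute-force sketch. This is an assumption-laden toy, not the patented performance algorithm: resources are modelled as capability sets, the cost function and the `max_cost` performance condition are placeholders, and an exhaustive search over combinations stands in for whatever search the real scheduler uses:

```python
from itertools import combinations

def schedule(resources, required_caps, cost_fn, max_cost):
    """Find the cheapest combination of resources that collectively
    covers required_caps and whose cost meets the condition.

    resources: dict name -> set of capabilities
    returns (combo_tuple, cost) or None if no combination qualifies.
    """
    best = None
    names = list(resources)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            caps = set().union(*(resources[n] for n in combo))
            if required_caps <= caps:                 # covers the task
                c = cost_fn(combo)
                if c <= max_cost and (best is None or c < best[1]):
                    best = (combo, c)
    return best

resources = {"crane": {"lift"}, "truck": {"haul"}, "multi": {"lift", "haul"}}
per_resource_cost = lambda combo: len(combo)  # toy performance cost
print(schedule(resources, {"lift", "haul"}, per_resource_cost, max_cost=2))
# -> (('multi',), 1): one multi-capability resource beats the crane+truck pair
```

Aggregating resources into combinations, as in the inner loop, is exactly the "new resource pool" idea: a pair like `("crane", "truck")` is treated as one selectable unit with the union of its members' capabilities.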

Method and system for self-adaptive on-demand resource allocation in a virtualized environment

The invention discloses a system for adaptively allocating resources on demand in a virtualized environment. The system comprises a dynamic perception request distribution module, a 1-physical-machine (PM):N-virtual-machine (VM) module, and a data center global management module. The 1-PM:N-VM module allocates resources on a PM according to user-experience metrics collected in real time; the dynamic perception request distribution module distributes loads to suitable VMs, and responds to requests, according to monitored application-request load information and VM capacity information; and the data center global management module judges, from collected PM resource-load information, whether VMs need to migrate between PMs to be re-placed, and, when PMs are in excess or short supply, whether a PM should be released to or acquired from an idle resource pool so as to quit or enter application service. The invention also discloses a method for adaptively allocating resources on demand in the virtualized environment, comprising an adaptive VM dynamic-capacity-aware request distribution strategy, a 1-PM:N-VM resource allocation strategy, and a VM migration strategy. The invention has application prospects in the technical field of computers.
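The global management module's decision logic — migrate VMs off hot PMs, release a PM to the idle pool when the cluster is underloaded, acquire one when it is saturated — can be sketched as a threshold policy. The thresholds (0.85/0.20), the action names, and the one-PM-at-a-time policy are illustrative assumptions, not values from the patent:

```python
def global_manage(pm_loads, high=0.85, low=0.20):
    """Decide management actions from per-PM resource loads.

    pm_loads: dict pm_id -> load fraction in [0, 1]
    returns a list of (action, pm_id) decisions.
    """
    actions = []
    for pm, load in pm_loads.items():
        if load > high:
            # hot PM: re-place some of its VMs elsewhere
            actions.append(("migrate_vm_from", pm))
    if all(l < low for l in pm_loads.values()) and len(pm_loads) > 1:
        # cluster-wide underload: release the least-loaded PM to the idle pool
        idle = min(pm_loads, key=pm_loads.get)
        actions.append(("release_to_idle_pool", idle))
    elif all(l > high for l in pm_loads.values()):
        # cluster-wide overload: bring a PM in from the idle pool
        actions.append(("acquire_from_idle_pool", None))
    return actions

print(global_manage({"pm1": 0.92, "pm2": 0.40}))
# -> [('migrate_vm_from', 'pm1')]
```

A real controller would also hysterese these thresholds (separate enter/exit bands) so a PM oscillating around 0.85 does not trigger a migration storm.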

Network slice manager and management method thereof

The invention discloses a network slice manager and a management method thereof, and belongs to the technical field of communication. The manager comprises a creation module, an expansion module, a deletion module, a user request table, a network slice parameter table, a network slice state table, and a plurality of external interfaces. The creation module is connected with the user request table and the network slice parameter table; the expansion module and the deletion module are each connected with the network slice parameter table and the network slice state table; the creation module is connected with a core network, an access network, a physical resource pool, and a user via the external interfaces; the expansion module is connected with an existing network slice and the physical resource pool via the external interfaces; and the deletion module is connected with the existing network slice via the external interfaces. The management method of the manager comprises three parts: creation, expansion, and deletion of network slices. With the network slice manager and management method disclosed by the invention, network slices that meet requirements can be created according to user demands, and the utilization rate of each network slice is effectively maximized, so that the utilization rate of network resources is improved.
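The create/expand/delete lifecycle against a shared physical resource pool can be sketched as follows. This is a simplification under stated assumptions: the pool is reduced to a single scalar capacity, and the parameter and state tables are plain dictionaries; the real manager tracks far richer slice parameters and talks to the core and access networks through its external interfaces:

```python
class SliceManager:
    """Toy network-slice manager over a single-capacity resource pool."""

    def __init__(self, pool_capacity: int):
        self.pool = pool_capacity   # free capacity in the physical resource pool
        self.params = {}            # slice parameter table: slice_id -> allocation
        self.state = {}             # slice state table: slice_id -> "active"/"deleted"

    def create(self, slice_id: str, demand: int) -> bool:
        if slice_id in self.state or demand > self.pool:
            return False            # duplicate slice or insufficient resources
        self.pool -= demand
        self.params[slice_id] = demand
        self.state[slice_id] = "active"
        return True

    def expand(self, slice_id: str, extra: int) -> bool:
        if self.state.get(slice_id) != "active" or extra > self.pool:
            return False
        self.pool -= extra
        self.params[slice_id] += extra
        return True

    def delete(self, slice_id: str) -> bool:
        if self.state.get(slice_id) != "active":
            return False
        self.pool += self.params.pop(slice_id)   # return resources to the pool
        self.state[slice_id] = "deleted"
        return True

mgr = SliceManager(pool_capacity=10)
mgr.create("slice-emb", 6)   # embedded-broadband slice takes 6 units
mgr.expand("slice-emb", 2)   # demand grows: expand within the remaining pool
mgr.delete("slice-emb")      # resources flow back to the pool on deletion
```

Returning freed capacity to the pool on deletion is what lets the manager keep slice utilization high instead of stranding resources in dead slices.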

Multi-queue multi-priority big data task management system and method for achieving big data task management by utilizing system

The invention relates to a multi-queue multi-priority big data task management system and a method for achieving big data task management by utilizing the system. The method comprises the following steps: S1. a client side launches a task request and sets a task queue tag and a priority for the task; S2. a resource manager receives the task request launched by the client side and assigns it to the corresponding task queue; S3. each node resource manager reports the resources owned by the distributed node on which it is positioned to the corresponding resource manager so as to form a resource pool, and each resource manager allocates the resources among the task queues; and S4. each resource manager sequentially dispatches the current task needing to be executed in each of the task queues to the node resource managers, and after a task command is received, each node resource manager launches a container unit to execute the task. With the multi-queue multi-priority big data task management system and the method utilizing it, resources can be managed and dispatched in a unified way, tasks can be scheduled and executed under multi-queue, multi-priority conditions, the utilization rate of resources can be increased, and task execution can be sped up.
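Steps S1, S2, and S4 — tag-and-priority submission, queue assignment, and per-queue dispatch — can be sketched with per-queue priority heaps. This is a minimal illustration; the class name, the "lower number = higher priority" convention, and the simple round-robin over queues are assumptions, not details from the patent:

```python
import heapq
import itertools

class ResourceManager:
    """Toy multi-queue, multi-priority task manager."""

    def __init__(self):
        self.queues = {}                 # queue tag -> priority heap
        self._seq = itertools.count()    # tie-breaker keeps FIFO order

    def submit(self, queue_tag: str, priority: int, task: str):
        """S1/S2: client sets a queue tag and priority; the task is
        placed in the corresponding queue (lower number = higher priority)."""
        heap = self.queues.setdefault(queue_tag, [])
        heapq.heappush(heap, (priority, next(self._seq), task))

    def dispatch(self):
        """S4: take the highest-priority task from each queue in turn."""
        dispatched = []
        for tag, heap in self.queues.items():
            if heap:
                _, _, task = heapq.heappop(heap)
                dispatched.append((tag, task))
        return dispatched

rm = ResourceManager()
rm.submit("etl", priority=2, task="daily-report")
rm.submit("etl", priority=1, task="urgent-fix")
rm.submit("ml",  priority=1, task="train-model")
print(rm.dispatch())  # [('etl', 'urgent-fix'), ('ml', 'train-model')]
```

The `(priority, sequence, task)` tuple ordering is the standard heapq idiom: equal priorities fall back to submission order instead of comparing task payloads.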

Uniform resource scheduling method in cloud computing system

The invention discloses a uniform resource scheduling method in a cloud computing system. The method comprises the following steps: 1) establishing a physical resource pool and a virtual resource pool; 2) a system controller forwards a resource request to the corresponding component according to the request's requirement type; 3) on receiving the request, the physical resource pool management component selects a server from the physical resource pool, powers it on, initializes it, and returns an access address and a command to the user; on receiving the request, the virtual resource pool management component selects a physical resource from the virtual resource pool to create a virtual machine, and returns the access address and the command. When the utilization rate of the virtual resource pool exceeds a set threshold, a resource scheduler selects a server from the physical resource pool, deregisters it, transfers it to the virtual resource pool, and registers it there; when the utilization rate of the physical resource pool exceeds the set threshold, the resource scheduler selects a server from the virtual resource pool, deregisters it, transfers it to the physical resource pool, and registers it there. The uniform resource scheduling method achieves a high resource utilization rate and low energy consumption.
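The threshold-driven transfer of servers between the two pools can be sketched as a small rebalancing function. This is a simplified model under stated assumptions: pools are lists of server IDs, utilization is just used/total, and a single fixed threshold is used for both directions (the patent's thresholds could differ per pool):

```python
def rebalance(physical_pool, virtual_pool, phys_used, virt_used, threshold=0.8):
    """Move one idle server between pools when either pool's utilization
    exceeds the threshold. Pools are mutated in place.

    returns (direction, server_id) describing the transfer, or None.
    """
    def util(used, pool):
        return used / len(pool) if pool else 1.0  # empty pool counts as saturated

    if util(virt_used, virtual_pool) > threshold and physical_pool:
        server = physical_pool.pop()      # deregister from the physical pool
        virtual_pool.append(server)       # register in the virtual pool
        return ("to_virtual", server)
    if util(phys_used, physical_pool) > threshold and virtual_pool:
        server = virtual_pool.pop()       # deregister from the virtual pool
        physical_pool.append(server)      # register in the physical pool
        return ("to_physical", server)
    return None

physical = ["srv1", "srv2", "srv3"]
virtual = ["srv4"]
print(rebalance(physical, virtual, phys_used=1, virt_used=1))
# -> ('to_virtual', 'srv3'): the saturated virtual pool borrows a server
```

Because servers flow in both directions on demand, neither pool needs to be over-provisioned up front, which is where the claimed utilization and energy gains come from.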