
91 results about "Run queue" patented technology

In modern computers, many processes run at once. Active processes are placed in an array called a run queue, or runqueue. The run queue may contain a priority value for each process, which the scheduler uses to determine which process to run next. To ensure each program gets a fair share of resources, each one is run for a time period (quantum) before it is paused and placed back into the run queue. When a program is stopped to let another run, the program with the highest priority in the run queue is then allowed to execute.
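As a concrete (if simplified) illustration of the priority-and-quantum behaviour just described, the following sketch uses a heap-ordered run queue; the `Task` and `RunQueue` names and the quantum value are illustrative, not drawn from any of the patents below.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                          # lower value = higher priority here
    name: str = field(compare=False)
    remaining: int = field(compare=False)  # work units still needed

class RunQueue:
    """Toy run queue: the highest-priority task runs for one quantum, then re-queues."""
    def __init__(self, quantum=2):
        self.quantum = quantum
        self.heap = []

    def add(self, task):
        heapq.heappush(self.heap, task)

    def run(self):
        while self.heap:
            task = heapq.heappop(self.heap)          # pick the highest-priority task
            used = min(self.quantum, task.remaining)
            task.remaining -= used
            print(f"ran {task.name} for {used} unit(s)")
            if task.remaining > 0:                   # pause it and place it back in the queue
                heapq.heappush(self.heap, task)

rq = RunQueue(quantum=2)
rq.add(Task(priority=0, name="editor", remaining=3))
rq.add(Task(priority=1, name="backup", remaining=4))
rq.run()
```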

Computer system with dual operating modes

The present invention is a system that switches between non-secure and secure modes by making processes, applications and data for the non-active mode unavailable to the active mode. That is, non-secure processes, applications and data are not accessible when in the secure mode and vice versa. This is accomplished by creating dual hash tables, where one table is used for secure processes and one for non-secure processes. A hash table pointer is changed to point to the table corresponding to the mode. The path-name look-up function that traverses the path name tree to obtain a device or file pointer is also restricted to allow traversal only to secure devices and file pointers when in the secure mode and only to non-secure devices and files in the non-secure mode. The process thread run queue is modified to include a state flag for each process that indicates whether the process is a secure or non-secure process. A process scheduler traverses the queue and only allocates time to processes that have a state flag that matches the current mode. Running processes are marked to be idled and flagged as unrunnable, depending on the security mode, when they reach an intercept point. The switch operation validates the switch process and pauses the system for a period of time to allow all running processes to reach an intercept point and be marked as unrunnable. After all the processes are idled, the hash table pointer is changed, the look-up control is changed to allow traversal of the corresponding security-mode branch of the file name path tree, and the scheduler is switched to allow only threads that have a flag corresponding to the security mode to run. The switch process is then put to sleep and a master process, either secure or non-secure, depending on the mode, is then awakened.
Owner:MORGAN STANLEY +1
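To illustrate the scheduling portion of the abstract above, the sketch below keeps a per-process mode flag in the run queue and only grants time to processes whose flag matches the current mode; the `Mode`, `Process` and `Scheduler` names are illustrative assumptions, not the patent's implementation.

```python
from collections import deque
from enum import Enum

class Mode(Enum):
    SECURE = "secure"
    NON_SECURE = "non_secure"

class Process:
    def __init__(self, name, mode_flag):
        self.name = name
        self.mode_flag = mode_flag   # per-process state flag kept in the run queue
        self.runnable = True

class Scheduler:
    def __init__(self, mode=Mode.NON_SECURE):
        self.mode = mode
        self.run_queue = deque()

    def schedule_next(self):
        """Allocate time only to processes whose flag matches the current mode."""
        for _ in range(len(self.run_queue)):
            proc = self.run_queue.popleft()
            self.run_queue.append(proc)          # rotate the queue
            if proc.runnable and proc.mode_flag == self.mode:
                return proc
        return None                              # nothing eligible in this mode

    def switch_mode(self, new_mode):
        # Idle processes of the other mode, then flip the mode. In the patent the
        # hash-table pointer and the path-name look-up are also switched at this point.
        for proc in self.run_queue:
            proc.runnable = (proc.mode_flag == new_mode)
        self.mode = new_mode
```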

Webpage-crawling-based crawler technology

The invention relates to the field of crawler technology, and in particular to a webpage-crawling-based crawler technique. After initial URL (uniform resource locator) link addresses have been seeded, the technique comprises the following steps: (1) a crawler thread, assigned in a load-balanced manner, reads the URL link address at the head of the running queue; (2) it is judged whether the URL link address already exists; if it exists, crawling of that address is stopped, otherwise the address is crawled and placed in a completion queue; (3) the webpages corresponding to the URL link addresses placed in the completion queue are extracted; (4) the URL link addresses in the extracted webpages are filtered, the effective URL link addresses are kept and written into the running queue, and the process returns to step (1). With this technique, the corresponding resources are crawled from the Internet, and the URL link addresses are rewritten and stored so that Internet information can be acquired in a targeted way, based on the objects set by users and the tasks they create; in addition, multi-machine parallel crawling, multi-task scheduling, resumption of crawling from a breakpoint, distributed crawler management and crawler control can be implemented.
Owner:BEIJING INFCN INFORMATION TECH
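A minimal sketch of the running-queue / completion-queue loop in steps (1)-(4) above; the helpers `fetch_page`, `extract_links` and `is_valid` are hypothetical stand-ins for the crawling, extraction and filtering stages.

```python
from collections import deque

def crawl(seed_urls, fetch_page, extract_links, is_valid):
    running_queue = deque(seed_urls)         # URLs waiting to be crawled
    completed = set()                        # completion queue of crawled URLs

    while running_queue:
        url = running_queue.popleft()        # step (1): read from the head of the running queue
        if url in completed:                 # step (2): skip URLs that already exist
            continue
        page = fetch_page(url)               # crawl the page
        completed.add(url)                   # place the URL in the completion queue
        for link in extract_links(page):     # steps (3)-(4): extract and filter links
            if is_valid(link) and link not in completed:
                running_queue.append(link)   # write effective URLs back and repeat
    return completed
```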

Multiprocessor load balancing system for prioritizing threads and assigning threads into one of a plurality of run queues based on a priority band and a current load of the run queue

A method, system and apparatus for integrating a system task scheduler with a workload manager are provided. The scheduler is used to assign default priorities to threads and to place the threads into run queues, and the workload manager is used to implement policies set by a system administrator. One of the policies may be to have different classes of threads receive different percentages of a system's CPU time. This policy can be reliably achieved if threads from a plurality of classes are spread as uniformly as possible among the run queues. To do so, the threads are organized in classes. Each class is associated with a priority as per a use-policy. This priority is used to modify the scheduling priority assigned to each thread in the class as well as to determine in which band, or range of priority, the threads fall. Then, periodically, it is determined whether the number of threads in a band in one run queue exceeds the number of threads in the same band in another run queue by more than a pre-determined number. If so, the system is deemed to be load-imbalanced and is re-balanced by moving one thread in the band from the run queue with the greater number of threads to the run queue with the lower number of threads.
Owner:IBM CORP
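The band-based imbalance check described above might look roughly like the following sketch; the `threshold` value and the `(thread, band)` tuple layout are assumptions made for illustration.

```python
def rebalance_band(run_queues, band, threshold=1):
    """run_queues: list of run queues, each a list of (thread, band) tuples."""
    counts = [sum(1 for _, b in q if b == band) for q in run_queues]
    hi, lo = counts.index(max(counts)), counts.index(min(counts))
    # Deem the system load-imbalanced if one queue exceeds another
    # by more than the pre-determined number for this band.
    if counts[hi] - counts[lo] > threshold:
        for i, entry in enumerate(run_queues[hi]):
            if entry[1] == band:                       # move one thread in the band
                run_queues[lo].append(run_queues[hi].pop(i))
                break
```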

Method for scheduling virtual CPU (Central Processing Unit)

The invention discloses a method for scheduling virtual CPUs (Central Processing Units), belongs to the technical field of computer virtualization, and solves the problems that I/O processing cannot be responded to in time, that load characteristics cannot be satisfied, and that the load balancing strategy in a traditional scheduling algorithm is too simple. The method comprises the following steps: carrying out scheduling initialization, inserting into a virtual CPU running queue, carrying out virtual CPU operation, carrying out load balancing, updating the credit value and system load, reassigning the credit value, reassigning physical CPUs, and revising the type of the virtual machine. Virtual machines are divided into three classes; different types of load are dynamically isolated and bound to two groups of physical CPUs to run, and different time slices are given to virtual CPUs carrying different types of load, which improves operating efficiency and guarantees I/O performance. The virtual CPU scheduling method redesigns the load balancing strategy: besides guaranteeing the isolation of different types of load, it selects the strategy with the minimum migration expense, which solves the problem that the load balancing strategy in a traditional scheduling algorithm is too simple.
Owner:HUAZHONG UNIV OF SCI & TECH
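As a rough illustration of giving different time slices to virtual CPUs carrying different types of load, the credit-style pick loop below is a sketch only; the three class names and the slice values are invented for the example and are not the patent's parameters.

```python
# Assumed per-class time slices (milliseconds); purely illustrative values.
TIME_SLICE = {"io_bound": 5, "mixed": 15, "cpu_bound": 30}

def pick_next_vcpu(run_queue):
    """Pick the runnable vCPU with the most remaining credit and charge it one slice."""
    runnable = [v for v in run_queue if v["credit"] > 0]
    if not runnable:
        return None, 0
    vcpu = max(runnable, key=lambda v: v["credit"])
    slice_ms = TIME_SLICE[vcpu["vm_class"]]
    vcpu["credit"] -= slice_ms              # deduct the credit for the slice it receives
    return vcpu, slice_ms

run_queue = [
    {"name": "vcpu0", "vm_class": "io_bound", "credit": 20},
    {"name": "vcpu1", "vm_class": "cpu_bound", "credit": 60},
]
print(pick_next_vcpu(run_queue))            # vcpu1 runs first with a 30 ms slice
```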

Massive geoscientific data parallel processing method based on distributed file system

The invention discloses a massive geoscientific data parallel processing method based on a distributed file system. The method comprises the following steps: 1) taking the distributed file system, which has a unified name space, as the storage system for the geoscientific data and deploying it on a computing cluster; 2) the task scheduling system of the computing cluster stores received computing tasks in a waiting queue; 3) the scheduling system selects one computing task from the waiting queue and moves it into a running queue; 4) according to the information of the computing task, the scheduling system searches the metadata of the distributed file system for the computing nodes that hold the data files required to run the task; and 5) the task scheduling system selects the computing node possessing the most of the data required to run the task, remotely acquires any data files the task requires that the node does not possess, executes the computing task on that node, and returns the execution result. With this method, data-local computation is achieved to the maximum extent.
Owner:COMP NETWORK INFORMATION CENT CHINESE ACADEMY OF SCI
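Steps 4) and 5) amount to locality-aware placement: choose the node that already holds the most of the task's input data and fetch the rest remotely. A small sketch, assuming the metadata is available as a mapping from file name to the nodes holding a replica:

```python
def place_task(task_files, metadata):
    """Pick the node that already holds the most of the task's input files."""
    score = {}
    for f in task_files:
        for node in metadata.get(f, []):     # nodes holding a replica of f
            score[node] = score.get(node, 0) + 1
    if not score:
        return None, set(task_files)
    best = max(score, key=score.get)
    missing = {f for f in task_files if best not in metadata.get(f, [])}
    return best, missing                     # fetch `missing` remotely, run on `best`

metadata = {"a.grid": ["node1", "node2"], "b.grid": ["node2"], "c.grid": ["node3"]}
print(place_task(["a.grid", "b.grid", "c.grid"], metadata))   # ('node2', {'c.grid'})
```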

Control method for equilibrium operation of converter dust-removing flue cooling fan

The invention discloses a method for evenly controlling the running of cooling fans in a converter dust-removal flue. The control method comprises the following steps: when the temperature in the cooling flue rises and fan motors need to be started, standby motors in the queue are started in order, from the lowest queue number toward the tail of the queue, and brought into the running state; when the temperature in the cooling flue drops and motors need to be shut down, running motors are shut down starting from the head of the running queue, the motor entering the queue becomes the new tail of the queue, and the shut-down motors are returned to the standby state. The invention automatically controls the motors according to a first-started, first-shutdown principle: the motor that has been running longest is shut down first, so that each motor runs for a similar amount of time. Under fluctuating temperatures, the motors share start-up and shutdown cycles evenly, which prolongs the service life of the equipment and improves its reliability. The invention is also applicable to equipment that uses a plurality of motors for other controlled objects in need of adjustment.
Owner:BAOSHAN IRON & STEEL CO LTD
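The first-started, first-shutdown rotation described above can be modelled with a standby queue and a running queue, as in the sketch below; the temperature thresholds are illustrative assumptions.

```python
from collections import deque

standby = deque(["fan1", "fan2", "fan3", "fan4"])   # ordered by queue number
running = deque()                                    # head = longest-running motor

def on_temperature(temp, start_above=300, stop_below=250):
    if temp > start_above and standby:
        fan = standby.popleft()          # start the lowest-numbered standby motor
        running.append(fan)              # it joins the tail of the running queue
        print(f"start {fan}")
    elif temp < stop_below and running:
        fan = running.popleft()          # shut down the motor that has run longest
        standby.append(fan)              # it returns to the end of the standby queue
        print(f"stop {fan}")

for t in [310, 320, 240, 330, 230]:
    on_temperature(t)                    # fan1 and fan2 are the first started and the first stopped
```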

Method for receiving message passing interface (MPI) messages under circumstance of over-allocation of virtual machine

The invention discloses a method for receiving message passing interface (MPI) messages under the circumstance of over-allocation of a virtual machine, which comprises the following steps: in a blocking message-receiving process, polling the socket file descriptor set or the shared memory, invoking the sched_yield function, and releasing the virtual processor resource currently occupied by the process; having the guest operating system in the virtual machine that contains the virtual processor inquire the run queue of the virtual processor and select a schedulable process to carry out a scheduling operation; when the blocking message-receiving process is re-scheduled to run, judging whether the virtual machine manager needs to be notified to execute a rescheduling operation on the virtual processor; and having the virtual machine manager execute the rescheduling operation on the virtual processor through a hypercall issued by the blocking message-receiving process, then handling the received message in the blocking message-receiving process. The invention can reduce the performance loss caused by the 'busy waiting' phenomenon produced by the message receiving mechanism of the MPI library.
Owner:HUAZHONG UNIV OF SCI & TECH
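The core idea above is to bound busy-waiting by releasing the virtual processor after a short unsuccessful poll. A minimal sketch, with `poll_once` standing in for the socket / shared-memory poll and `os.sched_yield` standing in for the yield path (the hypercall into the virtual machine manager is not modelled):

```python
import os   # os.sched_yield is available on Unix-like systems

def blocking_receive(poll_once, yield_vcpu=os.sched_yield, spin_limit=100):
    """Poll for a message; after a bounded spin, yield instead of busy-waiting."""
    spins = 0
    while True:
        msg = poll_once()                # check the socket fd set / shared memory
        if msg is not None:
            return msg                   # hand the received message to the caller
        spins += 1
        if spins >= spin_limit:
            yield_vcpu()                 # release the currently occupied virtual processor
            spins = 0
```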