
466 results for patented technology related to "thread count"

Multi-thread parallel processing method based on multi-thread programming and message queue

Active · CN102902512A · Fast and efficient multi-threaded transformation · Reduce running time · Concurrent instruction execution · Computer architecture · Concurrent computation
The invention provides a multi-thread parallel processing method based on multi-thread programming and a message queue, in the field of high-performance computing. Traditional single-thread serial software is converted to run in parallel, taking advantage of modern multi-core CPU (Central Processing Unit) hardware, pthread-based multi-thread parallel computing, and message queues for inter-thread communication. The method comprises the following steps: on a single node, creating three types of pthread threads, namely reading threads, computing threads, and writing threads, where the number of threads of each type is freely configurable; using multiple buffers and establishing four queues for inter-thread communication; and allocating computing tasks and managing buffer space. The method applies broadly wherever multi-thread parallel processing is required; it guides software developers in converting existing software to multiple threads so as to make better use of system resources, noticeably raising hardware utilization and improving both the computational efficiency and the overall performance of the software.
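The abstract describes a read / compute / write pipeline with configurable thread counts and queues for inter-thread communication. Below is a minimal sketch of that pattern, assuming Java BlockingQueues in place of the pthread message queues used in the patent; the thread counts, queue sizes, and end-of-stream marker are illustrative assumptions.

```java
import java.util.concurrent.*;

// Minimal sketch of a read -> compute -> write pipeline with bounded queues
// as the inter-thread channels. Not the patented implementation.
public class PipelineSketch {
    private static final String POISON = "__EOF__";     // hypothetical end-of-stream marker

    public static void main(String[] args) throws InterruptedException {
        int readers = 1, workers = 4, writers = 1;       // configurable per thread type
        BlockingQueue<String> toCompute = new ArrayBlockingQueue<>(64);
        BlockingQueue<String> toWrite   = new ArrayBlockingQueue<>(64);
        ExecutorService pool = Executors.newFixedThreadPool(readers + workers + writers);

        // Reading threads: produce work items (here, synthetic records).
        for (int r = 0; r < readers; r++) pool.submit(() -> {
            try {
                for (int i = 0; i < 100; i++) toCompute.put("record-" + i);
                for (int w = 0; w < workers; w++) toCompute.put(POISON); // one marker per worker
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Computing threads: transform items and pass them on.
        for (int c = 0; c < workers; c++) pool.submit(() -> {
            try {
                for (String item; !(item = toCompute.take()).equals(POISON); )
                    toWrite.put(item.toUpperCase());
                toWrite.put(POISON);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Writing thread: drain results until every worker has signaled completion.
        pool.submit(() -> {
            try {
                int done = 0;
                while (done < workers) {
                    String out = toWrite.take();
                    if (out.equals(POISON)) done++; else System.out.println(out);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```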
Owner:LANGCHAO ELECTRONIC INFORMATION IND CO LTD

System and method of testing software and hardware in a reconfigurable instrumented network

A method of testing a computer system in a testing environment formed of a network of routers, servers, and firewalls. Performance of the computer system is monitored. A log is made of the monitored performance of the computer system. The computer system is subjected to hostile conditions until it no longer functions. The state of the computer system at the failure point is recorded. The performance monitoring is done with substantially no interference with the testing environment. The performance monitoring includes monitoring, over a sampling period, of packet flow, hardware resource utilization, memory utilization, data access time, or thread count. A business method entails providing a testing environment formed of a network of network devices including routers, servers, and firewalls, while selling test time to a customer on one or more of the network devices during purchased tests that test the security of the customer's computer system. The purchased tests are conducted simultaneously with other tests for other customers within the testing environment. Customer security performance data based on the purchased tests is provided without loss of privacy by taking security measures to ensure that none of the other customers can access the security performance data. The tests may also be directed to the scalability or reliability of the customer's computer system. Data about a device under test is gathered using a managed information kernel that is loaded into the device's operating memory before its operating system. The gathered data is prepared as managed information items.
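As an illustration of the kind of low-interference metrics listed above (thread count, memory utilization, sampling period), the following sketch samples JVM thread count and heap usage at a fixed interval. It is not the patent's managed information kernel, and the sampling period is an assumed value.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

// Periodic sampling of thread count and heap usage via the JMX platform beans.
public class MetricsSampler {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        MemoryMXBean memory  = ManagementFactory.getMemoryMXBean();

        long samplingPeriodMs = 1_000;          // hypothetical sampling period
        for (int sample = 0; sample < 10; sample++) {
            int  threadCount = threads.getThreadCount();
            long heapUsed    = memory.getHeapMemoryUsage().getUsed();
            // A real harness would append these samples to a durable log for failure analysis.
            System.out.printf("sample=%d threads=%d heapUsedBytes=%d%n",
                              sample, threadCount, heapUsed);
            Thread.sleep(samplingPeriodMs);
        }
    }
}
```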
Owner:AVANZA TECH

Server service request parallel processing method based on thread number limit and system thereof

Active · CN103516536A · Avoid "monopoly" situations · Increased distribution balance · Data switching networks · Engineering · Service efficiency
The present invention provides a server service request parallel processing method based on thread count limiting, and a corresponding system. Service requests are classified according to their processing time, and an upper limit is set on the number of threads the server may use to process each class of request in parallel. This prevents requests with long processing times from monopolizing server threads and guarantees that some threads remain available for requests with short processing times, so the distribution of the server's request-processing threads becomes more balanced, improving the server's overall request-processing efficiency and the service delivered to users. At the same time, the likelihood that a large number of computationally complex, long-running requests will occupy server system resources for an extended period is reduced, improving the server's allocation of system resources.
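One plausible way to realize the per-class thread cap described above is a semaphore per request class; the sketch below assumes two illustrative classes ("SHORT", "LONG") with invented limits and is not taken from the patent.

```java
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// Requests are classified by expected processing time; each class has an
// upper bound on concurrent worker threads so long requests cannot
// monopolize the pool. Class names and limits are illustrative.
public class ClassLimitedDispatcher {
    private final ExecutorService pool = Executors.newFixedThreadPool(16);
    private final Map<String, Semaphore> limits = Map.of(
            "SHORT", new Semaphore(12),   // fast requests get most of the pool
            "LONG",  new Semaphore(4));   // slow requests are capped

    public void dispatch(String requestClass, Runnable handler) {
        Semaphore cap = limits.get(requestClass);
        pool.submit(() -> {
            try {
                cap.acquire();            // blocks while the class is at its limit
                try { handler.run(); } finally { cap.release(); }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }

    public static void main(String[] args) {
        ClassLimitedDispatcher d = new ClassLimitedDispatcher();
        d.dispatch("LONG",  () -> System.out.println("slow report job"));
        d.dispatch("SHORT", () -> System.out.println("quick lookup"));
        d.pool.shutdown();
    }
}
```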
Owner:NEW SINGULARITY INT TECHN DEV

Multiprocessor load balancing system for prioritizing threads and assigning threads into one of a plurality of run queues based on a priority band and a current load of the run queue

A method, system and apparatus for integrating a system task scheduler with a workload manager are provided. The scheduler is used to assign default priorities to threads and to place the threads into run queues, and the workload manager is used to implement policies set by a system administrator. One of the policies may be to have different classes of threads get different percentages of a system's CPU time. This policy can be reliably achieved if threads from a plurality of classes are spread as uniformly as possible among the run queues. To do so, the threads are organized into classes. Each class is associated with a priority as per a use-policy. This priority is used to modify the scheduling priority assigned to each thread in the class as well as to determine into which band, or range of priorities, the threads fall. Periodically, it is determined whether the number of threads in a band in one run queue exceeds the number of threads in that band in another run queue by more than a pre-determined number. If so, the system is deemed to be load-imbalanced and is rebalanced by moving one thread in the band from the run queue with the greater number of threads to the run queue with the lower number of threads.
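A simplified sketch of the periodic band check follows: count the tasks of a priority band in each run queue and, if the fullest queue exceeds the emptiest by more than a tolerance, move one task across. Real kernel run queues, thread classes, and the workload manager's use-policies are not modeled; all values are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;

// Band-based balance check over simplified "run queues" of prioritized tasks.
public class BandBalancer {
    record Task(String name, int priority) {}

    // A priority band is a half-open range [low, high) of priorities.
    static long countInBand(Deque<Task> runQueue, int low, int high) {
        return runQueue.stream().filter(t -> t.priority() >= low && t.priority() < high).count();
    }

    // Periodically called: if one queue holds more than `tolerance` extra tasks
    // in the band, migrate one task from the fullest queue to the emptiest one.
    static void rebalanceBand(List<Deque<Task>> queues, int low, int high, int tolerance) {
        Deque<Task> fullest = queues.get(0), emptiest = queues.get(0);
        for (Deque<Task> q : queues) {
            if (countInBand(q, low, high) > countInBand(fullest, low, high)) fullest = q;
            if (countInBand(q, low, high) < countInBand(emptiest, low, high)) emptiest = q;
        }
        if (countInBand(fullest, low, high) - countInBand(emptiest, low, high) > tolerance) {
            Iterator<Task> it = fullest.iterator();
            while (it.hasNext()) {
                Task t = it.next();
                if (t.priority() >= low && t.priority() < high) {
                    it.remove();
                    emptiest.add(t);
                    break;               // move a single task per balancing pass
                }
            }
        }
    }

    public static void main(String[] args) {
        Deque<Task> q0 = new ArrayDeque<>(List.of(new Task("a", 10), new Task("b", 12), new Task("c", 11)));
        Deque<Task> q1 = new ArrayDeque<>();
        rebalanceBand(List.of(q0, q1), 10, 20, 1);
        System.out.println("q0=" + q0 + " q1=" + q1);
    }
}
```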
Owner:IBM CORP

Thread pool task processing method in high-availability cluster system

Inactive · CN107832146A · Improve real-time data transmission efficiency · Load balancing · Program initiation/switching · Resource allocation · Thread pool · Thread count
The invention discloses a thread pool task processing method for a high-availability cluster system. The method includes: first creating a certain number of idle worker threads, all of which block on a condition variable during initialization; forming a work task queue; having the main thread of the thread pool repeatedly cycle through looking for work tasks, examining the state of the thread pool, and assigning worker threads to tasks; taking the next task from the head of the work task queue; if a task is obtained, proceeding to the next step, and otherwise continuing to wait for one; when the proportion of busy threads in the pool exceeds a configured fraction of the total thread count, not processing the current task; checking the state of the pool and, when the number of idle threads falls below a minimum idle value, creating additional idle threads to keep the pool in balance; when the number of idle threads exceeds a maximum idle value, releasing some idle threads; and assigning a worker thread to the task to be processed.
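The sketch below illustrates the maintenance rules in the abstract (refuse work when the busy fraction is too high, keep the idle thread count above a minimum and below a maximum), assuming invented thresholds and a much-simplified worker loop; the cluster integration is omitted.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified elastic pool: tracks total and busy worker counts, refuses work
// above a busy-fraction limit, and keeps the idle count within [minIdle, maxIdle].
public class ElasticPoolSketch {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();
    private final AtomicInteger total = new AtomicInteger();
    private final AtomicInteger busy  = new AtomicInteger();
    private final int minIdle = 2, maxIdle = 8;      // illustrative bounds
    private final double busyLimit = 0.9;            // refuse work above 90% busy

    private void spawnWorker() {
        total.incrementAndGet();
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    Runnable task = tasks.take();    // idle workers block here
                    busy.incrementAndGet();
                    try { task.run(); } finally { busy.decrementAndGet(); }
                }
            } catch (InterruptedException e) {
                total.decrementAndGet();             // trimmed or shut down
            }
        });
        t.setDaemon(true);
        t.start();
    }

    public boolean submit(Runnable task) {
        if ((double) busy.get() / Math.max(1, total.get()) > busyLimit) {
            return false;                            // do not process the current task
        }
        maintain();
        return tasks.offer(task);
    }

    private void maintain() {
        int idle = total.get() - busy.get();
        while (idle < minIdle) { spawnWorker(); idle++; }
        if (idle > maxIdle) { /* interrupt surplus idle workers; omitted for brevity */ }
    }

    public static void main(String[] args) throws InterruptedException {
        ElasticPoolSketch pool = new ElasticPoolSketch();
        for (int i = 0; i < 5; i++) {
            int id = i;
            pool.submit(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        Thread.sleep(500);                           // let daemon workers finish
    }
}
```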
Owner:BEIJING INST OF COMP TECH & APPL

Fully-automatic subscribing method for broadband special line users based on process optimization

The invention relates to a fully-automatic subscribing method for broadband special line users based on process optimization. The method comprises the following steps: a special line automatic subscribing system receives a work order from the BSS, parses it, extracts information such as the equipment to be configured, looks up the related upstream and downstream equipment information, login method, and login password from the existing equipment data, and then logs in to the equipment and issues the relevant data configuration commands; the system is configured with a number of threads for processing work orders, starts a work order receiving module, and selects its operating mode, either a parent-child process mode or an automatic reset mode, by reading configuration files; socket communication is used to emulate telnet so as to log in to the equipment and issue commands, and during command issuance the equipment's port information and the execution result of each issued command are validated; if additional socket requests arrive at the same time, additional threads are assigned to handle them, providing long-lived asynchronous socket communication between the automatic subscribing system and the BSS.
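The following sketch shows only the threading structure implied above: a configurable number of work-order threads, each of which would log in to the target equipment and issue configuration commands. The socket/telnet interaction is reduced to a stub, and the hosts and commands are invented.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Work orders are dispatched to a configurable pool of threads; each thread
// would perform the login / issue-command / validate-result sequence.
public class ProvisioningDispatcher {
    private final ExecutorService orderThreads;

    public ProvisioningDispatcher(int threadCount) {         // thread count from configuration
        this.orderThreads = Executors.newFixedThreadPool(threadCount);
    }

    public void handleWorkOrder(String equipmentHost, String command) {
        orderThreads.submit(() -> {
            // In the real system this would open a socket, emulate a telnet login,
            // send the command, and validate port info and the execution result.
            System.out.printf("[%s] login %s, issue: %s, validate result%n",
                              Thread.currentThread().getName(), equipmentHost, command);
        });
    }

    public static void main(String[] args) {
        ProvisioningDispatcher dispatcher = new ProvisioningDispatcher(4);
        dispatcher.handleWorkOrder("10.0.0.1", "configure vlan 120");   // hypothetical order
        dispatcher.handleWorkOrder("10.0.0.2", "enable port 3");
        dispatcher.orderThreads.shutdown();
    }
}
```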
Owner:LINKAGE SYST INTEGRATION

Device and method for pumping and filtering air containing dust and/or fibre on the spinning machine

The device for the suction and filtration of dust- and/or fiber-loaded air on spinning machines (1) comprises a suction channel (7) serving the working places, a filtering device (8) with a filter forming a filtering surface (16), a system for removing the filter residue from the filtering surface, a vacuum source (10) for generating an induced draft, operating means for carrying out a filter cleaning process using the removal system, and a vacuum sensor arranged in the main suction channel for measuring the vacuum in a vacuum zone upstream and/or downstream of the filtering surface. The device contains a controller or regulator for the vacuum in this zone, based on the vacuum values measured by the vacuum sensor and on vacuum target values or target value ranges. The controller or regulator is connected to a drive mechanism for filter cleaning, and the vacuum is controlled or regulated by operating that drive mechanism. The spinning stations contain suction points (17) through which polluted air is extracted and fed via a central suction channel to the filter arrangement. The vacuum source contains an axial or radial fan. The filter residue is lifted or removed from the filtering surface by a removal or lifting device and fed to a collecting or disposal device. The controller contains a signal converter and a control device by which the measured values received from the vacuum sensor are compared with the vacuum target values or target value ranges; control or regulating signals are generated to correct the deviation of the actual value from the target value and act on an actuator and/or a final control element containing drive means. The filtering device contains a filter drum with a cylindrical, fixed or flexible filter surface arranged at the removal system. The drive mechanism comprises a drive system for turning the filter drum about the drum axis. Filter cleaning is carried out by continuous or sequential rotation of the filter drum, which brings the filter surface past the removal device so that the filter residue is removed. The drive mechanism contains a hydraulic or pneumatic piston drive, a linear motor or an electric cylinder, connected to the filter through a gear. A control is provided by which, depending on thread count and/or machine parameters, the pressure ratios in the channels and/or pipes are adjustable; the control includes means for changing the vacuum via the filter device and/or via the fan output. The filter device contains a continuous filter band enclosing a space, and an electric-motor drive mechanism for circulating the filter band, which forms a layered filter surface.
Independent claims are included for: (1) a spinning machine; and (2) a method for the suction and filtration of dust- and/or fiber-loaded air on spinning machines.
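As a minimal illustration of the compare-and-correct logic in the abstract above (measured vacuum compared with a target value range, with the filter-drum drive operated on deviation), consider the sketch below; the target band, units, and actuator interface are invented.

```java
// Deadband check: trigger filter cleaning when the measured vacuum leaves the
// target range (a clogging filter changes the pressure drop across it).
public class VacuumController {
    interface DrumDrive { void rotateStep(); }                // hypothetical actuator interface

    private final double targetLow, targetHigh;               // target value range (assumed Pa)

    public VacuumController(double targetLow, double targetHigh) {
        this.targetLow = targetLow;
        this.targetHigh = targetHigh;
    }

    /** Called for each sensor reading; operates the drive when out of the target band. */
    public void onMeasurement(double vacuumPa, DrumDrive drive) {
        if (vacuumPa < targetLow || vacuumPa > targetHigh) {
            drive.rotateStep();                                // clean a strip of the filter surface
        }
    }

    public static void main(String[] args) {
        VacuumController c = new VacuumController(800, 1200);
        c.onMeasurement(1350, () -> System.out.println("rotating filter drum"));  // out of band
        c.onMeasurement(1000, () -> System.out.println("rotating filter drum"));  // in band, no action
    }
}
```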
Owner:MASCHINENFABRIK RIETER AG

Multi-thread scheduling method and device based on thread pool

The embodiment of the invention discloses a multi-thread scheduling method based on a thread pool. The method comprises the following steps: a call to the thread scheduling function of the thread pool is detected, and the stack characteristic information of the corresponding call stack and the target task object are obtained; the call to the run function of the target task object is detected, and a counting lock and/or a countdown lock corresponding to the stack characteristic information is added to that run function; the locking state of the counting lock and/or the countdown lock is checked, the counting lock being released when the number of threads corresponding to the target task object reaches or exceeds the counting lock's threshold, and the countdown lock being released after a preset waiting period; while the counting lock and/or the countdown lock is in the locked state, execution of the run function of the target task object is suspended, and once the locked state is released, the run function is executed. By adopting this multi-thread scheduling method, the reproduction rate of program crashes can be increased.
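A hedged analogue of the counting lock and countdown lock described above can be built from standard java.util.concurrent primitives: a Semaphore bounding how many threads may run the task's run function at once, and a CountDownLatch released after a timed wait. Thread limits and wait times are invented values; this is not the patented implementation.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Wraps a task's run function behind a "counting lock" (Semaphore) and a
// "countdown lock" (CountDownLatch released after a preset wait).
public class GatedTask implements Runnable {
    private static final Semaphore countingLock = new Semaphore(3);        // per-task thread threshold
    private static final CountDownLatch countdownLock = new CountDownLatch(1);

    private final Runnable target;

    public GatedTask(Runnable target) { this.target = target; }

    @Override public void run() {
        try {
            countdownLock.await();            // blocked until the countdown lock is released
            countingLock.acquire();           // blocked while too many threads run this task
            try {
                target.run();                 // the task object's original run function
            } finally {
                countingLock.release();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 5; i++) {
            int id = i;
            new Thread(new GatedTask(() -> System.out.println("task " + id + " running"))).start();
        }
        TimeUnit.MILLISECONDS.sleep(200);     // the "preset period of time"
        countdownLock.countDown();            // release the countdown lock; gated tasks proceed
    }
}
```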
Owner:TENCENT TECH (SHENZHEN) CO LTD