
455 results about "Bottleneck" patented technology

In production and project management, a bottleneck is one process in a chain of processes whose limited capacity reduces the capacity of the whole chain. The results of having a bottleneck are stalls in production, supply overstock, pressure from customers, and low employee morale. There are both short-term and long-term bottlenecks. Short-term bottlenecks are temporary and are not normally a significant problem; an example would be a skilled employee taking a few days off. Long-term bottlenecks occur all the time and can cumulatively slow down production significantly; an example is a machine that is not efficient enough and consequently accumulates a long queue.
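The definition above can be made concrete with a minimal sketch: the throughput of a serial chain is capped by its slowest stage. The stage names and rates below are illustrative, not from the text.

```python
# A serial process chain can produce no faster than its slowest stage.
def chain_capacity(stage_rates):
    """Return (bottleneck_stage, rate) for a serial chain of stages.

    stage_rates: {stage_name: capacity in units/hour}
    """
    stage, rate = min(stage_rates.items(), key=lambda kv: kv[1])
    return stage, rate

stages = {"cutting": 120, "welding": 45, "painting": 90}  # units/hour
bottleneck, rate = chain_capacity(stages)
# The whole chain produces at most 45 units/hour, limited by welding;
# upstream stages overstock while downstream stages stall.
```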

Scheduling method for semiconductor production line based on multi-ant-colony optimization

The invention relates to a scheduling method for a semiconductor production line based on multi-ant-colony optimization. The method comprises the following steps: determining the bottleneck processing areas of the semiconductor production line, where processing areas whose average equipment utilization rate exceeds 70 percent are regarded as bottleneck areas; setting the number of ant colonies to the number of bottleneck processing areas and initializing a multi-ant-colony system; searching the scheduling schemes of all bottleneck processing areas in parallel, one ant colony system per area; integrating the per-area schemes, constrained by the procedure processing sequence, into one scheduling scheme for all bottleneck processing areas, and deriving the scheduling schemes of the remaining non-bottleneck areas using that scheme and the processing sequence as constraints, thereby obtaining the scheduling scheme of the whole production line; and judging whether the program's ending conditions are met: if so, outputting the best-performing scheduling scheme; otherwise, updating the ant colonies' pheromones with the current best scheme to guide a new round of searching. The method has practical value for solving the optimal dispatching problem of semiconductor production lines and instructional significance for improving the production management level of China's semiconductor enterprises.
Owner:TONGJI UNIV
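The first step of the abstract above can be sketched as follows. The 70 percent threshold comes from the abstract; the area names and utilization figures are illustrative assumptions.

```python
# Flag processing areas whose average equipment utilization exceeds 70%
# as bottleneck areas, then allocate one ant colony per bottleneck area.
UTILIZATION_THRESHOLD = 0.70

def find_bottleneck_areas(avg_utilization):
    """avg_utilization: {area_name: average equipment utilization in [0, 1]}"""
    return [area for area, u in avg_utilization.items()
            if u > UTILIZATION_THRESHOLD]

utilization = {"photolithography": 0.85, "etching": 0.72, "cleaning": 0.40}
bottlenecks = find_bottleneck_areas(utilization)
num_colonies = len(bottlenecks)  # one ant colony per bottleneck area
```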

Method and system for software performance testing with simulated concurrency

Inactive · CN103544103A · Avoid influence · Avoid interference of response time with each other · Software testing/debugging · User input · Software engineering
The invention relates to a method for testing software performance under simulated concurrency. The method comprises the steps that: (1) user configuration information input by a user is read; (2) a user-requirement structure is stored in a shared-memory module and a mapping is established; (3) service requests of concurrent users are received, and at least one test process is created according to the number of concurrent users and the user-requirement structure; (4) test threads are created; (5) each test thread processes the service request of a corresponding user and stops when a stopping condition is met; (6) after the test threads in each process finish in sequence, the threads are stopped and the run ends; (7) the relevant data of each service are stored, analyzed, and counted, and all processes are then terminated. The method and system show how to simulate user concurrency while avoiding bottlenecks, achieving a high-concurrency scenario with a small amount of hardware resources; concurrency stability is guaranteed, different user services are supported, and help is provided for problem positioning and for shortening the development cycle.
Owner:YANTAI ZHONGKE NETWORK TECH RES INST
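Steps (3) through (6) above can be sketched in a much-simplified form. The patent uses processes plus shared memory; this sketch uses threads only, and all names and parameters are illustrative assumptions.

```python
# One worker thread per simulated user; each loops issuing "requests"
# until a shared stop condition is set, then reports its count.
import threading
import time

def user_worker(user_id, stop_event, results):
    count = 0
    while not stop_event.is_set():
        count += 1            # stand-in for issuing one service request
        time.sleep(0.001)
    results[user_id] = count  # per-user request count for later statistics

def run_load_test(n_users, duration_s):
    stop = threading.Event()
    results = {}
    threads = [threading.Thread(target=user_worker, args=(i, stop, results))
               for i in range(n_users)]
    for t in threads:
        t.start()
    time.sleep(duration_s)    # let the simulated users run
    stop.set()                # stopping condition is met
    for t in threads:
        t.join()              # stop the threads in sequence, then aggregate
    return results

stats = run_load_test(n_users=4, duration_s=0.05)
```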

Management scheduling technology based on hyper-converged framework

The invention discloses a management scheduling technology based on a hyper-converged framework. The method comprises a hyper-converged system architecture design, resource-integrated management based on a hyper-converged architecture, unified computing virtualization oriented to a domestic heterogeneous platform, storage virtualization based on distributed storage, network virtualization based on software definition, and a container dynamic scheduling management technology oriented to a high-mobility environment. The technology improves the virtualization and management capabilities of the tactical cloud platform and provides key technical support for constructing a full-link ecology for an army's maneuvering tactical cloud. It provides an on-demand, elastic virtualized computing and storage resource pool and achieves heterogeneous fused computing virtualization; meanwhile, distributed storage technology is used to construct a storage resource pool and software-defined technology to construct a virtual network, forming a hyper-converged resource pool. Localized data and network access for application services are achieved, the I/O bottleneck of the traditional virtualization deployment mode is resolved, and service response performance is improved.
Owner:BEIJING INST OF COMP TECH & APPL

Log analysis-based micro-service performance optimization system and analysis method

Active · CN109756364A · Reduce workload · Quickly identify performance bottlenecks · Hardware monitoring · Data switching networks · Microservices · Service gateway
The invention discloses a micro-service performance optimization method based on log analysis, comprising the following steps: a key interface of a micro-service module records an access log of each interface call through a log SDK; a log-collection agent module collects performance monitoring information of the service system at regular intervals; a unified log analysis platform extracts and analyzes the access logs to obtain the system's performance bottleneck points; the micro-service gateway periodically updates the routing strategy of the intelligent routing module using the performance indexes of the micro-service modules; meanwhile, the API monitoring module extracts the external request count and throughput via the log analysis system, and derives the gateway's external flow-limiting weight from the request count, throughput, and bottleneck points. Through automatic extraction and analysis of logs, a complete call-chain topology is generated, hidden performance suspects are found, the system's performance bottleneck points are quickly identified, and the actual workload of development and operation-maintenance personnel is effectively reduced.
Owner:CHENGDU SEFON SOFTWARE CO LTD

Network access flow limiting control method and device and computer readable storage medium

Active · CN111030936A · Guaranteed uptime · Current limiting implementation · Data switching networks · Page view · Access frequency
The embodiment of the invention discloses a flow-limiting control method and device for network access and a computer-readable storage medium. The method comprises the steps of: when a service access request sent by a user is received and the current access is judged to be a first access, acquiring the accumulated page view of all service interfaces within a preset duration and the total number of access users; calculating the access frequency from the total number of access users and the accumulated page view; calculating an expected page view from the accumulated page view and the access frequency; if the expected page view exceeds a preset threshold, refusing the service access request; and otherwise, responding to the service access request according to its corresponding service logic. Under this scheme, the expected page view is estimated whenever a new request is received. When the page view is judged in advance to be close to the system's bottleneck, flow limiting is carried out and some new user accesses are refused to guarantee normal operation of the system; users already in the system are not affected by the flow limiting.
Owner:TENCENT CLOUD COMPUTING BEIJING CO LTD
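The admission check described above can be sketched as follows. The abstract does not give the exact formulas, so this sketch assumes the access frequency is the average number of views per user and that the expected page view adds one more such user's worth of views; both are labeled assumptions.

```python
# Hedged sketch: estimate the page view if one more user enters, and
# refuse the request when the estimate would cross the system threshold.
def should_admit(accumulated_views, total_users, threshold):
    if total_users == 0:
        return True
    # Assumed formula: average views contributed per user so far.
    frequency = accumulated_views / total_users
    # Assumed formula: projected views if one more user is admitted.
    expected_views = accumulated_views + frequency
    return expected_views <= threshold

# 900 views over 100 users -> 9 views/user; 909 projected stays under 1000.
admit = should_admit(accumulated_views=900, total_users=100, threshold=1000)
```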

Method and device for realizing persistence in flow calculation application

The invention discloses a method and a device for realizing persistence in a stream computing application. The method comprises the following steps: when the current batch of messages is successfully consumed, judging whether a persistence operation is needed according to the first initial offset and a preset persistence interval; when it is needed, carrying out persistence from the message position indicated by the second initial offset; and after persistence succeeds, updating both offsets to the initial offset of the next message batch. Because persistence is performed only once per persistence interval, the disk-persistence time interval is lengthened and real-time computing efficiency is greatly improved. During fault recovery, at most the messages within one persistence interval need to be consumed again; the performance bottleneck caused by frequent disk writes in existing synchronous persistence is avoided, real-time message throughput is improved by an order of magnitude, and the delay caused by fault recovery is reduced to the order of seconds, so real-time performance is not affected.
Owner:ALIBABA SOUTH CHINA TECH CO LTD
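The interval-based checkpointing rule above can be sketched as follows. The offset names follow the abstract; the class, method names, and numbers are illustrative assumptions.

```python
# Persist state only when consumption has advanced at least
# `persist_interval` messages past the start of the un-persisted range.
class IntervalPersister:
    def __init__(self, persist_interval):
        self.persist_interval = persist_interval
        self.first_offset = 0    # start of the current un-persisted range
        self.second_offset = 0   # position from which to persist
        self.persist_calls = 0

    def on_batch_consumed(self, batch_end_offset):
        # Judge whether persistence is needed from the first offset
        # and the preset persistence interval.
        if batch_end_offset - self.first_offset >= self.persist_interval:
            self._persist(self.second_offset)
            # After success, both offsets move to the next batch's start.
            self.first_offset = batch_end_offset
            self.second_offset = batch_end_offset

    def _persist(self, from_offset):
        self.persist_calls += 1  # stand-in for the actual disk write

p = IntervalPersister(persist_interval=100)
for end in (40, 80, 120, 250):
    p.on_batch_consumed(end)
# Only two disk writes despite four consumed batches.
```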

Method for solving performance bottleneck of network management system in communication industry based on cloud computing technology

Inactive · CN102624558A · Fix performance issues · Solve difficult system performance problems · Data switching networks · Virtualization · Third party
The invention provides a method for solving the performance bottleneck of a network management system in the communication industry based on cloud computing technology. In the method, cloud computing determines the guiding principles of the network management system. The method comprises the following steps: 1) for hardware and third-party software, a mainstream virtualization technology supporting a unified cloud computing implementation mode is adopted; and 2) for system software, a design adapted to distributed deployment is adopted. At three levels (hardware, middleware, and application software), cloud computing technology is used to effectively solve the performance problems of the network management system. By designing the architecture and deploying the application system on cloud computing, system performance problems that a traditional system can hardly solve are effectively addressed; at the same time, cloud computing brings advantages such as low cost and high scalability, and, from a macroscopic view, existing technologies can solve major problems without spending great time on certain technical details.
Owner:INSPUR TIANYUAN COMM INFORMATION SYST CO LTD

Log organization structure clustered based on transaction aggregation and method for realizing corresponding recovery protocol thereof

The invention discloses a log organization structure clustered by transaction and a recovery protocol based on it, applicable to the transactional data management system of a large computer. A log file is organized sequentially into a number of log fragments; each fragment stores the log content of a single transaction and records the transaction number as well as a pointer to that transaction's preceding log fragment; the data page numbers involved in the log entries of a fragment are stored as an array. While the system is running, each transaction writes only its own log fragment and flushes the fragment to the log file when the transaction commits. During recovery, the system is restored to a durable, consistent state by scanning all log fragments and redoing them, then rolling back (undoing) the log fragments of all transactions that were still active. The logging bottleneck of traditional transactional data management systems is resolved, and the system's log volume is effectively reduced.
Owner:TIANJIN SHENZHOU GENERAL DATA TECH CO LTD
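A much-simplified sketch of the recovery idea above: committed transactions' fragments are redone, while fragments of transactions still active at the crash are not replayed (in this toy redo-only model, skipping them plays the role of the undo pass). The data structures and names are illustrative assumptions, not the patent's format.

```python
# Toy recovery: redo log fragments of committed transactions only.
def recover(log_fragments, committed_txns):
    """log_fragments: list of (txn_id, page, value) writes in log order.

    Returns the reconstructed page state after recovery.
    """
    pages = {}
    for txn_id, page, value in log_fragments:
        if txn_id in committed_txns:   # redo committed work
            pages[page] = value
        # writes of transactions active at the crash are not replayed
    return pages

log = [("t1", "p1", "A"), ("t2", "p2", "B"), ("t1", "p3", "C")]
state = recover(log, committed_txns={"t1"})
# -> {"p1": "A", "p3": "C"}; t2 was active at the crash, so p2 is untouched
```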

Block chain parallel transaction processing method and system based on isomorphic multi-chain, and terminal

The invention relates to a parallel transaction processing method based on isomorphic multi-chains, comprising the following steps: constructing one or more sub-network chains, each with the same blockchain framework; dividing a logical transaction to be executed into at least one actual transaction; and distributing the actual transactions to the corresponding subnet chains for parallel processing. Transaction processing mainly covers one-way asset transfer, Dapp application compatibility, and asset aggregation and dispersion. The overall architecture is divided into a client and a blockchain platform: the client constructs optimized parallel transactions according to statistical information from the blockchain platform, comprehensively considering user requirements, which improves the overall performance of the system; meanwhile, it tracks the information of user accounts, maintains the related states, and achieves off-chain communication. To address the performance limits of a single chain, a parallel execution algorithm for logical transactions is proposed, solving the performance-optimization bottleneck of the original blockchain technical architecture and raising the upper limit of global transaction throughput.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI

Method and system for guaranteeing application service quality in distributed environment

Active · CN104486129A · Reduce overhead · Reduce request response time fluctuations · Data switching networks · QoS quality of service · Critical path method
The invention provides a method and a system for locating bottleneck nodes and ensuring application service quality in a distributed environment. The method for locating a bottleneck node comprises: calculating a delay fluctuation value for each node's processing stage on the critical path of a service, and determining the bottleneck node according to the delay fluctuation values. The service critical path is obtained by processing the critical paths of service requests over a period of time; the delay fluctuation value is obtained from the node's request processing times over that period. The method for ensuring application service quality comprises: locating the bottleneck node of a service exhibiting long-tail delay; checking whether the bottleneck node's delay fluctuation value exceeds a predefined threshold; and, according to the result, either carrying out fault diagnosis or slowing down or speeding up the service requests sent to the bottleneck node. The method and system reduce fluctuations in request response time, reduce long-tail delay, and also reduce the cost of optimizing nodes one by one, step by step.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
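The bottleneck-locating step above can be sketched as follows. The patent does not fix a concrete fluctuation metric, so using the standard deviation of per-request processing times is an assumption here, as are the node names and latency figures.

```python
# Hedged sketch: score each node on the critical path by the variability
# of its processing times, and pick the most variable node as the bottleneck.
import statistics

def fluctuation(latencies):
    """Assumed fluctuation metric: population std dev of processing times."""
    return statistics.pstdev(latencies)

def find_bottleneck_node(node_latencies):
    """node_latencies: {node: [processing times of recent requests, ms]}"""
    return max(node_latencies, key=lambda n: fluctuation(node_latencies[n]))

path = {
    "gateway": [10, 11, 10, 12],
    "auth":    [5, 5, 6, 5],
    "storage": [20, 80, 25, 120],  # highly variable: long-tail suspect
}
suspect = find_bottleneck_node(path)
```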

Lock-free, parallel remembered sets

A multi-threaded garbage collector operates in increments and maintains, for each of a plurality of car sections in which it has divided a portion of the heap, a respective remembered set of the locations at which it has found references to objects in those car sections. It stores the remembered sets in respective hash tables, whose contents it updates in a scanning operation, executed concurrently by multiple threads, in which it finds references and records their locations in the appropriate tables. Occasionally, one of the threads replaces the hash table for a given car section. Rather than wait for the replacement operation to be completed, a thread that has an entry to be made into that car section's remembered set accesses the old table to find out whether the entry has already been made. If so, no new entry is necessary. Otherwise, it places an entry into the old table and sometimes places an insertion record containing that entry into a linked list associated with that car section. When the reclaiming thread has finished transferring information from the old table to the new table, it transfers information from the linked list of insertion records into the new table, too. In this way, the replacement process is not a bottleneck to other threads' performing update operations.
Owner:ORACLE INT CORP
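The update path described above can be illustrated with a deliberately simplified, single-threaded sketch: while a replacement table is being built, a writer checks the old table, inserts there if the entry is missing, and leaves an insertion record that the replacer later merges. The real mechanism is lock-free and concurrent; this sketch only shows the bookkeeping, and all names are illustrative, not from the patent.

```python
# Single-threaded bookkeeping sketch of the remembered-set table swap.
class RememberedSet:
    def __init__(self):
        self.old_table = {}          # table being replaced
        self.new_table = {}          # replacement under construction
        self.insertion_records = []  # stand-in for the linked list
        self.replacing = True

    def record_reference(self, location):
        if location in self.old_table:
            return                   # entry already made: nothing to do
        self.old_table[location] = True
        if self.replacing:           # replacement in progress: leave a record
            self.insertion_records.append(location)

    def finish_replacement(self):
        # Transfer the old table, then the late insertion records, too.
        self.new_table.update(self.old_table)
        for loc in self.insertion_records:
            self.new_table[loc] = True
        self.old_table = self.new_table
        self.new_table = {}
        self.insertion_records = []
        self.replacing = False

rs = RememberedSet()
rs.record_reference(0x10)
rs.record_reference(0x10)            # duplicate reference: ignored
rs.finish_replacement()
```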

Novel broad sense parallel connection platform structure

The invention relates to a novel broad-sense parallel mechanism. Compared with the traditional parallel mechanism, it has advantages such as a large workspace and strong practicability. The traditional parallel mechanism suffers from defects such as a single form and a narrow application range; after about half a century of development, the parallel-mechanism field therefore faces a major research bottleneck. The invention grasps the essence of the parallel mechanism and, by changing the structural characteristics of its movable platforms, generates a new class of parallel forms called the broad-sense parallel mechanism. The mechanism type is based on the tetrahedron, the simplest and most stable spatial structure, and generates novel parallel mechanisms through serial or parallel connection of multiple tetrahedra. On the one hand, the mechanism widens the application fields of parallel mechanisms and makes them promising for new fields such as space manipulator arms and mobile robots; on the other hand, it greatly enriches the types of parallel mechanisms, broadens their construction principles, and has high value.
Owner:GAO JINLEI

Cloud service method for taxation cloud computing network billing IM (Instant Messaging) online customer system

The invention provides a cloud service method for a taxation cloud-computing network-billing IM (instant messaging) online customer system, which uses cloud computing technology to structure the service as a whole. Through role positioning of the virtual computing nodes of the tax-industry cloud platform, resources are effectively divided, breaking through the limits of the original single-service structure of industry IM software: nodes can be configured freely and flexibly and used immediately once configured, without program changes, which solves the problems that the service structure could not be extended and an effective service load could not be formed. Because the functional modules are developed in a distributed computing language, they can form code-segment mirrors in the cloud computing platform, which can dynamically allocate resources to them, solving the problems of wasted and unevenly distributed system resources. With an independently developed database reverse message proxy module, a message mechanism replaces the original polling mechanism for state management information, greatly improving system efficiency and breaking through the bottleneck restricting system performance.
Owner:RIZHAO INSPUR CLOUD COMPUTING CO LTD

Performance evaluation method and system for GPU applications in CPU-GPU heterogeneous environment

Active · CN107908536A · Intuitive reflection · Intuitive reflection of performance bottlenecks · Hardware monitoring · Occupancy rate · Algorithm
The invention discloses a performance evaluation method and system for GPU applications in a CPU-GPU heterogeneous environment and belongs to the field of GPU performance evaluation. Specifically, the method comprises the steps of: learning the performance behavior of various applications running on a GPU architecture with a decision-tree algorithm from machine learning and establishing a decision-tree model; acquiring, during decision-tree matching, the monitored features with the greatest influence on application execution time, in order, i.e., ranking the features by importance; and mapping the screened feature sets, in order, onto four common application problems: compute-related, memory-related, occupancy-related, and synchronization-related, thereby obtaining a preliminary indication of where the performance bottlenecks of the application under analysis lie. By combining the decision-tree model with analytical modeling, the method and system provide a universal, relatively accurate, fast, simple, and easy-to-use way to evaluate the performance of resources and applications on a GPU.
Owner:HUAZHONG UNIV OF SCI & TECH
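The matching step above can be sketched as follows: rank monitored GPU metrics by a feature-importance score and map the top features to the four problem categories. In the patent the scores come from a trained decision tree; here they are supplied directly, and all metric names, scores, and category labels are illustrative assumptions.

```python
# Hedged sketch: from feature-importance scores to likely bottleneck classes.
CATEGORY_OF = {
    "sm_occupancy":    "occupancy-related",
    "dram_throughput": "memory-related",
    "flop_rate":       "compute-related",
    "sync_stalls":     "synchronization-related",
}

def likely_bottlenecks(importance, top_k=2):
    """importance: {metric_name: importance score}; returns top categories."""
    ranked = sorted(importance, key=importance.get, reverse=True)
    return [CATEGORY_OF[metric] for metric in ranked[:top_k]]

scores = {"sm_occupancy": 0.10, "dram_throughput": 0.50,
          "flop_rate": 0.15, "sync_stalls": 0.25}
top = likely_bottlenecks(scores)
# -> ["memory-related", "synchronization-related"]
```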

Information interaction system for industrial interconnection

The invention belongs to the technical field of information and provides an information interaction system for industrial interconnection. The system uses a message queue, a message-flow engine, an information management shell, a protocol wrapper, and service-oriented architecture technology to design an integration interconnection engine. The engine connects the various IT systems of the management field, such as SRM (supplier relationship management), CRM (customer relationship management), MES (manufacturing execution system), PLM (product lifecycle management), and ERP (enterprise resource planning), through the enterprise's internal network, and connects the various OT systems and physical devices of the operation execution field, such as SCADA (supervisory control and data acquisition), DCS (distributed control system), MOM (manufacturing operations management), sensors, and robots, through the industrial Internet of Things. It thereby overcomes the huge bottleneck that data are difficult to integrate when an enterprise pursues interconnection, intercommunication, and interoperation across all industrial elements, the whole value chain, and the whole industry chain.
Owner:CHONGQING SIOU INTELLIGENT TECH RES INST CO LTD