63 results about "Thread (computing)" patented technology

In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. The implementation of threads and processes differs between operating systems, but in most cases a thread is a component of a process. Multiple threads can exist within one process, executing concurrently and sharing resources such as memory, while different processes do not share these resources. In particular, the threads of a process share its executable code and the values of its dynamically allocated variables and non-thread-local global variables at any given time.
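
To make the definition concrete, here is a minimal Python sketch (illustrative only, not drawn from any patent below) of two threads in one process mutating the same memory, with a lock serializing the updates:

```python
import threading

counter = 0              # process-wide state shared by all threads
lock = threading.Lock()  # serializes updates to the shared counter

def worker(increments: int) -> None:
    """Increment the shared counter; both threads see the same variable."""
    global counter
    for _ in range(increments):
        with lock:       # without this, concurrent += would race
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)           # 200000: both threads updated the same memory
```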

Request processing method and device, electronic equipment and computer readable storage medium

The invention provides a request processing method and device, electronic equipment and a computer-readable storage medium, and relates to the technical field of data processing. The method comprises the steps of: obtaining n requests, wherein each request corresponds to a coroutine on the thread, and n is an integer greater than 0 and less than or equal to a preset threshold; determining one dominant coroutine among the n coroutines according to a preset competition rule, and taking the other n-1 coroutines as following coroutines; merging the n requests through the dominant coroutine to obtain merged request data; and submitting the merged request data to the computing device. According to the embodiment of the invention, when a plurality of requests input by a user are processed, coroutines are introduced as the execution units, the obtained n requests correspond to coroutines respectively, and the dominant coroutine determined by the preset competition rule merges the n requests; server resource consumption caused by cross-thread competition is reduced, so the computing performance of the server can be brought into full play.
Owner:北京小桔科技有限公司
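
The dominant/following merging the abstract describes can be pictured with Python's asyncio. This is a hedged reconstruction under my own assumptions: the RequestMerger class, its submit method, and the "first to enqueue leads" competition rule are invented for illustration, not taken from the patent.

```python
import asyncio

class RequestMerger:
    """The first coroutine to enqueue becomes the dominant coroutine and
    submits the whole batch; the n-1 followers just await their results."""

    def __init__(self, max_batch: int):
        self.max_batch = max_batch   # preset threshold on n (assumed not exceeded here)
        self.pending: list[tuple[str, asyncio.Future]] = []
        self.lock = asyncio.Lock()

    async def submit(self, request: str) -> str:
        fut = asyncio.get_running_loop().create_future()
        async with self.lock:
            self.pending.append((request, fut))
            is_leader = len(self.pending) == 1   # competition rule: first in leads
        if is_leader:
            await asyncio.sleep(0)               # yield so followers can enqueue
            async with self.lock:
                batch, self.pending = self.pending, []
            merged = [req for req, _ in batch]   # merge the n requests
            results = await self.compute(merged) # one submission for the batch
            for (_, f), res in zip(batch, results):
                f.set_result(res)
        return await fut

    async def compute(self, merged: list[str]) -> list[str]:
        return [f"done:{r}" for r in merged]     # stand-in for the computing device

async def main():
    merger = RequestMerger(max_batch=8)
    print(await asyncio.gather(*(merger.submit(f"req{i}") for i in range(4))))

asyncio.run(main())
```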

Compiler implementation method and system supporting heterogeneous computing core architecture

The invention discloses a compiler implementation method supporting a heterogeneous computing core architecture. The method comprises the steps of: converting a high-level language program into intermediate representation code; converting the intermediate representation code into machine code instructions; and, according to the types of the machine code instructions, mapping machine code instructions of different types to the corresponding computing cores of the heterogeneous computing core architecture for execution. The machine code instructions comprise universal instructions, cluster instructions and thread instructions; cluster instructions are converted with corresponding self-defined built-in functions, while universal instructions and thread instructions are converted with the existing built-in functions or instructions of an open-source compiler. With this method, multiple types of high-level language programs can be processed automatically and converted in turn into intermediate representation code and finally executable machine code instructions, which are distributed to different computing cores according to their attribute types; data transmission over the system bus is avoided, and instruction execution performance is improved.
Owner:上海芷锐电子科技有限公司
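
A toy dispatch stage in this spirit might classify lowered instructions by kind and route each one to a core queue. The following Python sketch is an assumption-laden illustration; InstrKind, lower, dispatch and the core names are all invented here, not taken from the patent:

```python
from enum import Enum, auto
from collections import defaultdict

class InstrKind(Enum):
    UNIVERSAL = auto()   # converted via existing open-source builtins
    CLUSTER = auto()     # converted via self-defined builtins
    THREAD = auto()      # converted via existing open-source builtins

# Assumed mapping from instruction kind to a compute core in the
# heterogeneous architecture.
CORE_FOR_KIND = {
    InstrKind.UNIVERSAL: "general_core",
    InstrKind.CLUSTER: "cluster_core",
    InstrKind.THREAD: "thread_core",
}

def lower(ir_op: str) -> tuple[InstrKind, str]:
    """Toy lowering: choose a conversion path by instruction kind."""
    if ir_op.startswith("cluster."):
        return InstrKind.CLUSTER, f"custom_builtin({ir_op})"
    if ir_op.startswith("thread."):
        return InstrKind.THREAD, f"oss_builtin({ir_op})"
    return InstrKind.UNIVERSAL, f"oss_builtin({ir_op})"

def dispatch(ir_program: list[str]) -> dict[str, list[str]]:
    """Append each lowered instruction to its target core's queue."""
    queues: dict[str, list[str]] = defaultdict(list)
    for op in ir_program:
        kind, machine_code = lower(op)
        queues[CORE_FOR_KIND[kind]].append(machine_code)
    return dict(queues)

print(dispatch(["add", "cluster.reduce", "thread.spawn"]))
```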

Instruction scheduling method, instruction scheduling device, processor and storage medium

An instruction scheduling method, an instruction scheduling device, a processor and a storage medium. The instruction scheduling method comprises: selecting a first instruction fetch request, initiated by a first thread bundle for a first instruction address, and performing the instruction fetch operation for the first instruction address; receiving first instruction data which is returned from the first instruction address and corresponds to the first instruction fetch request; and, in response to a second instruction fetch request initiated by a second thread bundle for the first instruction address, broadcasting the first instruction data in the first clock period to the write address of the instruction data access area of the first thread bundle and to the write address of the instruction data access area of the second thread bundle. The method reduces the accesses made by the computing unit, due to instruction fetch operations, to the instruction cache or to other cache systems such as the multi-level cache, and so reduces the access bandwidth of the instruction cache or those other cache systems; in consequence, the access bandwidth of the cache systems, such as the data cache or other multi-level caches, holding the data required to execute the instructions can also be reduced.
Owner:HYGON INFORMATION TECH CO LTD
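
One way to picture the broadcast is an arbiter that coalesces outstanding fetch requests for the same address and, when the data returns, writes it to every requester at once. This Python model is a loose, hypothetical analogy (FetchArbiter, request and cycle are my names; real hardware operates per clock period, which a method call only imitates):

```python
from collections import defaultdict

class FetchArbiter:
    """Coalesces instruction fetch requests by address and broadcasts
    the returned data to every waiting thread bundle in one step."""

    def __init__(self, icache: dict[int, str]):
        self.icache = icache
        self.waiting: dict[int, list[int]] = defaultdict(list)  # addr -> bundle ids

    def request(self, bundle_id: int, addr: int) -> None:
        # A later request for an in-flight address piggybacks on the
        # first one instead of touching the instruction cache again.
        self.waiting[addr].append(bundle_id)

    def cycle(self) -> dict[int, str]:
        """Service one address per cycle; broadcast to all its requesters."""
        writes: dict[int, str] = {}
        if self.waiting:
            addr, bundles = self.waiting.popitem()
            data = self.icache[addr]          # a single cache access
            for b in bundles:                 # one broadcast, many write addresses
                writes[b] = data
        return writes

arb = FetchArbiter(icache={0x40: "ld r1, [r2]"})
arb.request(bundle_id=0, addr=0x40)   # first thread bundle's fetch
arb.request(bundle_id=1, addr=0x40)   # second bundle hits the same address
print(arb.cycle())                    # {0: 'ld r1, [r2]', 1: 'ld r1, [r2]'}
```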

Dynamic resource scheduling method based on analysis technology and data flow information on heterogeneous many-cores

The invention discloses a dynamic resource scheduling method based on an analysis technology and data flow information on heterogeneous many-cores, and relates to the field of heterogeneous many-core systems. The method comprises the following steps: analyzing a program and determining the execution count of each parallelizable loop body; determining a data flow diagram of the loop body; setting a threshold value and calculating the number of GPUs required; dividing GPU task sizes according to data dependence; distributing the tasks to different GPUs according to the data flows of the tasks; and checking whether the platform is load-balanced. The invention mainly aims to provide a resource scheduling method based on program-analysis information and data flow information, addressing the current situation in which programmers on a heterogeneous platform must set the number of GPUs and partition tasks manually. The execution counts and data dependences of loop statements obtained through analysis are combined with the threshold value to determine the number of GPUs, and the thread granularity of data-dependent task division is increased; tasks are then distributed to each GPU using the data flow information, and a work-stealing algorithm with active detection resolves platform load imbalance. The result is a dynamic resource scheduling method that sets the number of GPUs automatically, improves the utilization of GPU computing resources, and actively detects and restores load balance.
Owner:SOUTHWEST UNIV OF SCI & TECH
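
The GPU-count decision and the stealing-based rebalancing can be sketched as below; the ceiling rule in gpus_needed and the deque-based stealing loop are assumptions layered on the abstract, not the patent's actual formulas:

```python
from collections import deque

def gpus_needed(loop_trip_count: int, threshold: int, max_gpus: int) -> int:
    """Assumed rule: one GPU per `threshold` loop iterations, capped at max_gpus."""
    return max(1, min(max_gpus, -(-loop_trip_count // threshold)))  # ceiling division

def run_with_stealing(partitions: list[list[str]]) -> dict[int, list[str]]:
    """Each GPU starts with its own (possibly uneven) partition;
    an idle GPU steals from the tail of the busiest queue."""
    queues = [deque(p) for p in partitions]
    done: dict[int, list[str]] = {g: [] for g in range(len(queues))}
    while any(queues):
        for g, q in enumerate(queues):
            if q:
                done[g].append(q.popleft())      # run local work first
            else:
                victim = max(queues, key=len)    # active detection of imbalance
                if victim:
                    done[g].append(victim.pop()) # steal from the busiest tail
    return done

n = gpus_needed(loop_trip_count=10_000, threshold=4_000, max_gpus=8)  # -> 3
parts = [["t0", "t1", "t2", "t3"], ["t4"], []]  # skewed dependence-based split
print(n, run_with_stealing(parts))
```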