47 results about "How to reduce access conflicts" patented technology

Shared data caching device for a plurality of coarse-grained dynamic reconfigurable arrays and control method

The invention discloses a shared data caching device for multiple coarse-grained dynamic reconfigurable arrays, together with a control method for the device. The device comprises a reconfigurable-array data cache control unit, a reconfigurable-array data cache unit, an external-memory data prefetch cache unit, and a data memory-access reconfiguration unit. The cache control unit manages data exchange between the reconfigurable arrays and the data cache unit, and between the data cache unit and an external memory; the data cache unit stores data fetched from the external memory; the prefetch cache unit prefetches data that is about to be accessed into the data cache unit; and the memory-access reconfiguration unit supplies the address and stride information that the data cache unit needs. The control method implements data sharing among the coarse-grained dynamic reconfigurable arrays of a reconfigurable system. Together, the device and method reduce access conflicts, shorten the system's data-processing time, and improve the computing performance of large-scale coarse-grained reconfigurable arrays.
Owner:SOUTHEAST UNIV
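The prefetch idea in the abstract above can be sketched as follows. This is a hypothetical illustration, not code from the patent: the class name, block size, and the strided-address model are all assumptions. The cache is configured with the base address and stride supplied by a memory-access reconfiguration unit, and it fetches the next block of strided words ahead of the array's reads so that accesses are served from the buffer instead of external memory.

```python
# Illustrative sketch (names and sizes are assumptions, not from the patent):
# a cache that prefetches the next block of strided external-memory words
# while the reconfigurable array consumes the current block.

class PrefetchCache:
    def __init__(self, memory, block_size=4):
        self.memory = memory          # external memory, modeled as a list
        self.block_size = block_size
        self.buffer = {}              # address -> cached data

    def configure(self, base, stride, count):
        """Address/stride info as sent by a memory-access reconfiguration unit."""
        self.base, self.stride, self.count = base, stride, count
        self.next = 0
        self._prefetch()

    def _prefetch(self):
        # Fetch the next block of strided addresses ahead of their use.
        end = min(self.next + self.block_size, self.count)
        for i in range(self.next, end):
            addr = self.base + i * self.stride
            self.buffer[addr] = self.memory[addr]
        self.next = end

    def read(self, addr):
        if addr not in self.buffer:   # miss: fetch on demand
            self.buffer[addr] = self.memory[addr]
        if self.next < self.count:
            self._prefetch()          # keep prefetching ahead of the array
        return self.buffer[addr]

mem = list(range(100, 200))
cache = PrefetchCache(mem)
cache.configure(base=0, stride=2, count=8)
values = [cache.read(i * 2) for i in range(8)]
print(values)  # → [100, 102, 104, 106, 108, 110, 112, 114]
```

Because the buffer always runs one block ahead of the consumer, every strided read after the first configuration hits the local buffer rather than external memory.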

Controller for realizing configuration information cache update in reconfigurable system

The invention discloses a controller for updating the configuration-information cache in a reconfigurable system. The controller comprises a configuration-information cache unit, an off-chip memory interface module, and a cache control unit. The cache unit holds configuration information that one or more reconfigurable arrays may use within a period of time; the off-chip memory interface module reads configuration information from an external memory and delivers it to the cache unit; and the cache control unit governs the reconfiguration process of the arrays. Reconfiguration proceeds by mapping the subtasks of an algorithm application onto a reconfigurable array, setting a priority policy for the cache unit, and replacing cached configuration information according to an LRU_FRQ replacement strategy. The invention further provides a corresponding method for updating the configuration-information cache. Updating the cache with the LRU_FRQ strategy replaces the traditional update scheme and improves the dynamic reconfiguration efficiency of the system.
Owner:SOUTHEAST UNIV
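The abstract names an LRU_FRQ replacement strategy but does not define it; a plausible reading is a policy that combines access frequency with recency. The sketch below is an assumption in that spirit, not the patented policy: it evicts the entry with the lowest access count, breaking ties by least-recent use.

```python
# Hypothetical sketch of an LRU_FRQ-style policy (the exact rule in the
# patent is not specified here): evict the cached configuration with the
# lowest access frequency, breaking ties by least-recent access.

class LruFrqCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}    # key -> configuration data
        self.freq = {}    # key -> access count
        self.last = {}    # key -> last-access timestamp
        self.clock = 0

    def get(self, key):
        if key not in self.data:
            return None
        self.clock += 1
        self.freq[key] += 1
        self.last[key] = self.clock
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Victim: lowest frequency first, then oldest access.
            victim = min(self.data, key=lambda k: (self.freq[k], self.last[k]))
            for d in (self.data, self.freq, self.last):
                del d[victim]
        self.clock += 1
        self.data[key] = value
        self.freq[key] = self.freq.get(key, 0) + 1
        self.last[key] = self.clock

cache = LruFrqCache(capacity=2)
cache.put("cfgA", 1)
cache.put("cfgB", 2)
cache.get("cfgA"); cache.get("cfgA")   # cfgA becomes the "hot" configuration
cache.put("cfgC", 3)                   # evicts cfgB: lower access frequency
print(sorted(cache.data))  # → ['cfgA', 'cfgC']
```

Frequently reused configurations thus survive eviction even when they were not touched most recently, which is the usual motivation for mixing frequency into an LRU policy.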

High-efficiency passing method for road intersection

The invention discloses a high-efficiency passing method for a road intersection that adopts a non-circular roundabout. A left-turn passage connecting the two-way lanes of a major trunk road is built in the middle section of the median strip separating those lanes, forming an approximately quincuncial (plum-blossom-shaped) or oval traffic island at the intersection. This makes full use of the mid-section road space to raise the intersection's capacity and puts otherwise wasted road resources to work. Without building an overpass at a suburban intersection or T-junction, and without demolishing roadside buildings, left-turning and through traffic on the major trunk road can pass the intersection quickly without stopping; and without disturbing that traffic, left-turning and through traffic on the secondary trunk road can also clear the intersection, so the traffic streams at the intersection remain simple and orderly. Traffic-signal phase changes are cut by 50%, greatly improving vehicle throughput and achieving rapid passage.
Owner:江西省中业景观工程安装有限公司

Data scheduling method and system thereof, and correlated equipment

The embodiment of the invention discloses a data scheduling method, a data scheduling system, and related equipment that effectively improve the multiplexing of a packet radio channel and the success rate of packet radio access. The method comprises the following steps: a terminal obtains the time characteristic of a downlink radio block and the uplink state flag (USF) allocated to the terminal, where the time characteristic indicates on which downlink radio blocks the terminal must check the USF; the terminal receives downlink radio blocks sent by network-side equipment on a downlink packet data channel (PDCH); when a downlink radio block matching the time characteristic is received, the terminal determines whether the USF contained in that block is identical to the USF allocated to the terminal itself; and if so, the terminal sends uplink data or signalling on the uplink PDCH corresponding to the downlink PDCH. The embodiment further provides a corresponding data scheduling system and related equipment.
Owner:HUAWEI TECH CO LTD
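The terminal-side decision described above reduces to two checks per downlink block. The sketch below is illustrative only: the field names and the modular timing rule are assumptions, not taken from the patent or from the 3GPP specifications.

```python
# Illustrative sketch of the terminal-side USF check (field names and the
# timing rule are assumptions): the terminal may transmit on the paired
# uplink PDCH only when a downlink block matches its signalled time
# characteristic AND carries the USF allocated to this terminal.

def may_send_uplink(block, my_usf, time_char):
    """block: dict with 'frame' (timing) and 'usf' fields."""
    matches_time = block["frame"] % time_char == 0   # hypothetical timing rule
    return matches_time and block["usf"] == my_usf

blocks = [
    {"frame": 4, "usf": 3},   # right time, right USF -> may transmit
    {"frame": 5, "usf": 3},   # does not match the time characteristic
    {"frame": 8, "usf": 1},   # right time, USF belongs to another terminal
]
decisions = [may_send_uplink(b, my_usf=3, time_char=4) for b in blocks]
print(decisions)  # → [True, False, False]
```

The point of the time characteristic is that a terminal only decodes the USF on the blocks it was told to watch, so several terminals can share one uplink PDCH without polling every downlink block.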

Cache structure and management method for use in implementing reconfigurable system configuration information storage

Disclosed is a cache structure for storing reconfigurable-system configuration information, comprising: layered configuration-information cache units, which cache configuration information that one or several reconfigurable arrays may use within a period of time; an off-chip memory interface module, which establishes communication with external memory; and a configuration management unit, which manages the reconfiguration process of the arrays by mapping each subtask of an algorithm application to a reconfigurable array, so that the array loads the configuration information corresponding to its mapped subtask and completes its functional reconfiguration. This increases the utilization efficiency of the configuration-information caches. Also provided is a method for managing these caches that employs a mixed-priority cache-update scheme, changing how configuration-information caches are managed in a conventional reconfigurable system and thereby increasing dynamic reconfiguration efficiency in complex reconfigurable systems.
Owner:SOUTHEAST UNIV
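A layered configuration cache of the kind described can be pictured as a two-level lookup. The sketch below is a hypothetical illustration: the layer names, the promotion rule, and the statistics are assumptions, since the abstract does not define how its layers interact.

```python
# Hypothetical sketch of a layered configuration lookup (layer roles and
# the promotion rule are assumptions): a small per-array layer is searched
# first, then a shared layer; only on a double miss is the configuration
# fetched over the off-chip memory interface.

def load_config(cfg_id, per_array, shared, off_chip, stats):
    if cfg_id in per_array:
        stats["l1_hits"] += 1
        return per_array[cfg_id]
    if cfg_id in shared:
        stats["l2_hits"] += 1
        per_array[cfg_id] = shared[cfg_id]   # promote into the fast layer
        return shared[cfg_id]
    stats["misses"] += 1
    cfg = off_chip[cfg_id]                   # off-chip memory interface read
    shared[cfg_id] = cfg
    per_array[cfg_id] = cfg
    return cfg

off_chip = {"fir": "cfg-fir", "fft": "cfg-fft"}   # external memory contents
per_array, shared = {}, {}
stats = {"l1_hits": 0, "l2_hits": 0, "misses": 0}
for cfg_id in ["fir", "fft", "fir", "fft"]:       # subtasks mapped in order
    load_config(cfg_id, per_array, shared, off_chip, stats)
print(stats)  # → {'l1_hits': 2, 'l2_hits': 0, 'misses': 2}
```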

A TDMA-based adaptive time slot allocation method for vehicular ad hoc networks

The invention relates to a TDMA-based (time division multiple access) adaptive time slot allocation method for vehicular ad hoc networks. The method divides each time frame into a left slot set and a right slot set, divides nodes into left and right node sets according to their direction of travel, and has the nodes in each set contend for slots in the corresponding slot set according to their current geographic position and certain rules. This greatly reduces the rate of access collisions and merging collisions between nodes. Based on the node density each node senses, the frame length is adjusted dynamically, satisfying the need for nodes to access the channel quickly; both theoretical analysis and simulation experiments confirm the method's low delay. Compared with the prior art, the method yields fewer colliding nodes, higher channel utilization, and good scalability.
Owner:HENAN UNIVERSITY OF TECHNOLOGY +1
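The direction-split slot choice can be sketched as follows. This is an assumption-laden illustration, not the patented rule: the frame length, the position-to-slot mapping, and the linear probing for a free slot are all invented for the example.

```python
# Illustrative sketch (selection rule is an assumption): the frame is split
# into "left" and "right" slot sets; a node contends only in the set that
# matches its driving direction, starting from a position-derived slot and
# probing forward until a free slot is found.

def choose_slot(direction, position, occupied, frame_len=10):
    half = frame_len // 2
    # Left-moving nodes use slots [0, half); right-moving use [half, frame_len).
    base = 0 if direction == "left" else half
    start = base + position % half     # hypothetical position-to-slot mapping
    for offset in range(half):
        slot = base + (start - base + offset) % half
        if slot not in occupied:
            return slot
    return None  # every slot in this direction's set is taken

occupied = {5, 6}                                      # slots already claimed
print(choose_slot("right", position=0, occupied=occupied))  # → 7
print(choose_slot("left", position=3, occupied=occupied))   # → 3
```

Splitting the sets by direction means two vehicles moving toward each other can never pick the same slot, which is why the abstract reports fewer access and merging collisions.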

Hybrid branch prediction device and method for out-of-order high-performance cores

The invention discloses a hybrid branch prediction device and method for out-of-order high-performance cores, and relates to the field of computer branch prediction. The device can evaluate performance at the micro-architecture level of the processor and reduce the renaming stalls that branch mispredictions and instruction misses cause in out-of-order high-performance processors. It provides a hybrid branch predictor that is high in precision and flexibly configurable through parameterization, composed of a global-history TAGE branch predictor, a statistical corrector predictor, and a loop predictor. The TAGE predictor uses parameterized tagged components and a split-read improvement strategy to achieve high-precision branch prediction while reducing access conflicts; the statistical corrector confirms or reverses the TAGE prediction according to the TAGE result and its confidence; and the loop predictor predicts regular loops with long bodies using a replacement strategy and a loop-branch conversion technique. Limited hardware storage is fully utilized, access conflicts are greatly reduced, and the overall performance of the processor improves along with branch prediction accuracy.
Owner:核芯互联科技(青岛)有限公司
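The core TAGE mechanism the abstract builds on can be shown in a highly simplified form. The sketch below is illustrative, not the patented design: real TAGE uses folded-history hashing, useful bits, and allocation policies that are omitted here, and the table sizes are arbitrary.

```python
# Highly simplified, illustrative TAGE-style lookup (real TAGE is far more
# elaborate): several tagged tables are indexed with increasingly long
# global-history hashes; the hit with the longest history provides the
# prediction, and a tagless base predictor supplies the fallback.

HIST_LENGTHS = [4, 8, 16]    # history lengths, one per tagged table
TABLE_BITS = 6               # 64 entries per tagged table

def index_and_tag(pc, history, length):
    h = hash((pc, tuple(history[-length:])))
    return h % (1 << TABLE_BITS), (h >> TABLE_BITS) & 0xFF

def predict(pc, history, tables, base):
    # Search from longest to shortest history; the first tag hit wins.
    for length, table in sorted(zip(HIST_LENGTHS, tables), reverse=True):
        idx, tag = index_and_tag(pc, history, length)
        entry = table.get(idx)
        if entry and entry["tag"] == tag:
            return entry["ctr"] >= 0           # signed counter: taken if >= 0
    return base.get(pc % (1 << TABLE_BITS), True)   # tagless fallback

tables = [dict(), dict(), dict()]
history = [1, 0, 1, 1] * 4
idx, tag = index_and_tag(0x400, history, 16)
tables[2][idx] = {"tag": tag, "ctr": 2}        # train the longest table "taken"
print(predict(0x400, history, tables, base={}))  # → True (longest-history hit)
```

The "longest matching history wins" rule is what lets TAGE capture both short, simple correlations and long, rare ones with the same storage budget.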

Storage component and artificial intelligence processor

The invention relates to a storage component and an artificial intelligence processor. The storage component belongs to a computing core of the processor; the processor comprises a plurality of computing cores, each containing a processing component and a storage component. The storage component comprises a first, a second, and a third storage unit, and the processing component includes an axon unit, a cell unit, and a routing unit. Because both components sit inside the computing core, the storage component serves the processing component's read and write accesses directly, with no need to read or write storage outside the core. The distributed architecture of multiple storage units lets different kinds of data be stored separately, so the processing component can access the units conveniently; this suits the processing components of a many-core architecture. As a result, the size and power consumption of the artificial intelligence processor can be reduced and its processing efficiency improved.
Owner:TSINGHUA UNIV
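The in-core, distributed-storage arrangement can be pictured with a small model. Everything below beyond the abstract is an assumption: the unit names ("weights", "membrane", "routing") are invented to suggest the axon/cell/routing roles, and the inter-core transfer is purely illustrative.

```python
# Illustrative model (unit names and transfer scheme are assumptions): each
# computing core holds three local storage units, and processing units
# read/write them directly instead of going to off-core memory; a routing
# entry directs results to another core.

class Core:
    def __init__(self):
        # Three in-core storage units holding different kinds of data.
        self.store = {"weights": {}, "membrane": {}, "routing": {}}

    def write(self, unit, key, value):
        self.store[unit][key] = value     # direct in-core access

    def read(self, unit, key):
        return self.store[unit][key]

cores = [Core() for _ in range(4)]        # a small many-core array
cores[0].write("weights", "w0", [0.1, 0.2])
cores[0].write("routing", "dest", 3)      # route this core's output to core 3
target = cores[0].read("routing", "dest")
cores[target].write("membrane", "n5", 1.0)  # transfer via the routing unit
print(cores[3].read("membrane", "n5"))    # → 1.0
```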