170 results about "Cache optimization" patented technology

Software performance optimization method based on central processing unit (CPU) multi-core platform

The invention provides a software performance optimization method based on a CPU multi-core platform. The method comprises software characteristic analysis, formulation of a parallel optimization scheme, and implementation of the scheme with iterative tuning. Specifically, it covers application software characteristic analysis, serial algorithm analysis, CPU multi-process/multi-thread parallel algorithm design, multi-buffer design, design of inter-thread communication modes, memory access optimization, cache optimization, processor vectorization optimization, mathematical function library optimization, and the like. The method is widely applicable wherever multi-thread parallel processing is required; it guides software developers to perform multi-thread parallel optimization of existing software rapidly and efficiently, with short development cycles and low development cost. It optimizes the software's use of system resources, overlaps data reading, computation, and write-back so that they mask one another, minimizes software running time, markedly improves hardware resource utilization, and enhances computing efficiency and overall software performance.
Owner:LANGCHAO ELECTRONIC INFORMATION IND CO LTD
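The multi-buffer design and the "mutual masking" of data reading and computing described above amount to overlapping I/O with computation. A minimal double-buffering sketch (the chunk data and the sum-of-squares kernel are illustrative stand-ins, not from the patent):

```python
import threading
import queue

def produce(chunks, buf_queue):
    # "read" stage: fill buffers while the consumer computes, so reading
    # and computing mask one another
    for chunk in chunks:
        buf_queue.put(chunk)        # blocks when both buffers are in flight
    buf_queue.put(None)             # sentinel: no more data

def consume(buf_queue, results):
    while True:
        chunk = buf_queue.get()
        if chunk is None:
            break
        results.append(sum(x * x for x in chunk))  # stand-in compute kernel

chunks = [[1, 2], [3, 4]]
buf_queue = queue.Queue(maxsize=2)  # two in-flight buffers: double buffering
results = []
t = threading.Thread(target=produce, args=(chunks, buf_queue))
c = threading.Thread(target=consume, args=(buf_queue, results))
t.start(); c.start(); t.join(); c.join()
```

The bounded queue is what enforces the multi-buffer discipline: the producer can run at most `maxsize` buffers ahead of the consumer.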

Data flow compilation optimization method oriented to multi-core cluster

Active · CN103970580A
The invention discloses a data flow compilation optimization method oriented to a multi-core cluster system. The method comprises the following steps: task partitioning and scheduling, which determine the mapping from computation tasks to processing cores; construction, from the partitioning and scheduling results, of hierarchical pipeline schedules covering both inter-node and intra-node (inter-core) pipelining; and cache-aware optimization based on the structural characteristics of the multi-core processor, the communication patterns among cluster nodes, and the execution behavior of the data flow program on the multi-core processor. The method combines the data flow program with architecture-specific optimization techniques, fully exploits the load balance and parallelism of mixed synchronous/asynchronous pipelined code on a multi-core cluster, and tailors cache accesses and communication transfers to the cluster's cache and communication characteristics, thereby improving the program's execution performance and shortening its execution time.
Owner:HUAZHONG UNIV OF SCI & TECH
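The hierarchical mapping above (tasks to cluster nodes, then to cores within each node) can be sketched as a two-level round-robin partition; the round-robin policy is an illustrative assumption, since the patent does not specify its partitioning heuristic:

```python
def partition(tasks, n_nodes, cores_per_node):
    # level 1: distribute tasks across cluster nodes round-robin
    nodes = [[] for _ in range(n_nodes)]
    for i, task in enumerate(tasks):
        nodes[i % n_nodes].append(task)
    # level 2: distribute each node's tasks across its cores round-robin
    schedule = []
    for node_tasks in nodes:
        cores = [[] for _ in range(cores_per_node)]
        for j, task in enumerate(node_tasks):
            cores[j % cores_per_node].append(task)
        schedule.append(cores)
    return schedule  # schedule[node][core] -> list of tasks
```

A real implementation would weight the partition by per-task cost and inter-node communication volume rather than task count alone.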

Software managed cache optimization system and method for multi-processing systems

The present invention provides a method of computer program code optimization for a software-managed cache in either a uni-processor or a multi-processor system. A single source file comprising a plurality of array references is received. The array references are analyzed to identify predictable accesses, and analyzed again to identify secondary predictable accesses. One or more of the array references are aggregated based on the identified predictable and secondary predictable accesses to generate aggregated references. The source file is restructured based on the aggregated references, and prefetch code, software cache update code, and explicit cache lookup code (for the remaining unpredictable accesses) are inserted into the restructured code. Calls to a miss handler are inserted for misses in the explicit cache lookup code, and the miss handler itself is included in the generated code. In the miss handler, a line to evict is chosen based on recent usage and predictability, and appropriate DMA commands are issued for the evicted line and the missing line.
Owner:INT BUSINESS MASCH CORP
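The lookup / miss-handler / eviction structure described above can be sketched as a small software-managed cache. This is a simplified model under stated assumptions: eviction is plain LRU (the patent also weighs predictability), and the dictionary read from `backing` stands in for the DMA transfer; all names are hypothetical:

```python
from collections import OrderedDict

class SoftwareCache:
    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing      # stand-in for main memory
        self.lines = OrderedDict()  # addr -> value, maintained in LRU order
        self.misses = 0

    def lookup(self, addr):
        # explicit cache lookup code, as inserted for unpredictable accesses
        if addr in self.lines:
            self.lines.move_to_end(addr)   # mark as most recently used
            return self.lines[addr]
        return self.miss_handler(addr)

    def miss_handler(self, addr):
        self.misses += 1
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict the least recently used line
        self.lines[addr] = self.backing[addr]  # stand-in for the DMA fill
        return self.lines[addr]
```

The predictable accesses handled by the inserted prefetch code would bypass `lookup` entirely; only the leftover unpredictable accesses pay for the lookup and the possible miss.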

Social insurance big data distributed preprocessing method and system

Inactive · CN106126601A
The invention discloses a social insurance big data distributed preprocessing method and system. The main technical scheme comprises: defining a data preprocessing process as a preprocessing job containing a plurality of preprocessing operation nodes, and executing those nodes concurrently in independent threads; allocating multiple executive threads to any data operation node of high complexity, and executing the preprocessing job concurrently on a distributed cloud-server cluster; and loading and writing the system's data into a distributed file system by column, using NoSQL to cache-optimize the write operations. The method and system fully exploit the processing performance of the preprocessing cloud servers, overcome the performance bottleneck of a single server, avoid redundant data transmission between the servers and the data nodes of the HDFS (Hadoop Distributed File System), and improve the efficiency of loading data into the HDFS, thereby enhancing overall big data preprocessing efficiency.
Owner:SOUTH CHINA UNIV OF TECH
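One piece of the scheme above, allocating extra executive threads to a high-complexity operation node, can be sketched with a thread pool. The node names, the toy cleaning/normalizing functions, and the strided record split are illustrative assumptions, not the patent's design:

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(nodes, records, threads_per_node):
    # nodes: ordered list of (name, fn) preprocessing operation nodes;
    # threads_per_node: extra workers granted to high-complexity nodes
    out = records
    for name, fn in nodes:
        workers = threads_per_node.get(name, 1)
        # strided split so each worker thread gets a share of the records
        chunks = [out[i::workers] for i in range(workers)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            parts = list(pool.map(lambda c: [fn(x) for x in c], chunks))
        out = [x for part in parts for x in part]
    return out
```

In the full system each node would also run concurrently with the others and the output would be buffered through NoSQL before the columnar HDFS write; this sketch shows only the per-node thread allocation.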

Differential upgrade patch manufacturing method and device

The invention discloses a differential upgrade patch manufacturing method and device. The method comprises the following steps: binary data of a source version are obtained, and mathematical transformation and sorting are carried out on the data to obtain a metadata set, which comprises set attribute data at the minimum base unit; the data in the metadata set are serialized into an intermediate file; and when a differential upgrade patch needs to be manufactured, differential analysis between the source version and a target version is carried out against the intermediate file to build the patch. By reusing the intermediate file, the method improves patch manufacturing efficiency and shortens manufacturing time. In addition, an introduced cache optimization mechanism shortens the most time-consuming stage of running the differential patch tool, further improving working efficiency, saving labor cost, and promoting wider application of the differential upgrade technology.
Owner:ZTE CORP
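The core idea above, precomputing an index of the source version once and reusing it for differential analysis against any target, can be sketched with a naive block-match differ. The fixed block size and the copy/add opcode format are illustrative assumptions; production tools (e.g. bsdiff-style differs) use suffix sorting and compressed patch formats:

```python
def make_patch(source, target, block=4):
    # index plays the role of the reusable "intermediate file": built from
    # the source version once, then consulted for each target block
    index = {source[i:i + block]: i for i in range(len(source) - block + 1)}
    patch, i = [], 0
    while i < len(target):
        blk = target[i:i + block]
        if len(blk) == block and blk in index:
            patch.append(("copy", index[blk]))   # block exists in the source
            i += block
        else:
            patch.append(("add", target[i:i + 1]))  # literal byte
            i += 1
    return patch

def apply_patch(source, patch, block=4):
    out = bytearray()
    for op, arg in patch:
        out += source[arg:arg + block] if op == "copy" else arg
    return bytes(out)
```

Because the index depends only on the source version, it can be serialized and cached, so building patches against many targets skips the rescanning of the source that the method identifies as the most time-consuming stage.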