284 results for patented technology on how to "Improve execution performance"

Apparatus and method for performing convolutional neural network training

The present invention provides an apparatus and a method for performing convolutional neural network backward training. The apparatus comprises an instruction storage unit, a controller unit, a data access unit, an interconnection module, a main computing module, and a plurality of slave computing modules. The method comprises: for each layer, selecting data from the input neuron vector according to the convolution window; taking the selected data from the previous layer and the data gradient from the subsequent layer as the inputs of the computing units of the apparatus; computing and updating the convolution kernel; and computing, from the convolution kernel, the data gradient, and the derivative of the activation function, the data gradient output by the apparatus, then storing it in memory so that it can be passed to the previous layer for backward-propagation computation. With the apparatus and method provided by the present invention, the data and weight parameters involved in the computation are temporarily stored in a high-speed cache memory, so that convolutional neural network backward training can be supported more flexibly and effectively, and the execution performance of applications involving a large number of memory accesses is improved.
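As a rough illustration of the backward pass described above, here is a minimal NumPy sketch that computes the convolution-kernel gradient and the input-data gradient for a single-channel, stride-1, valid convolution. The shapes, learning rate, and the assumption that the activation-function derivative has already been applied to the incoming gradient are illustrative choices, not details taken from the patent.

```python
import numpy as np

def conv_backward(x, kernel, grad_out, lr=0.01):
    """Backward pass of a single-channel, stride-1 'valid' cross-correlation.

    x:        input data selected by the convolution window, shape (H, W)
    kernel:   convolution kernel, shape (kH, kW)
    grad_out: data gradient from the subsequent layer (activation derivative
              assumed already applied), shape (H-kH+1, W-kW+1)
    Returns the updated kernel and the data gradient for the previous layer.
    """
    kH, kW = kernel.shape
    oH, oW = grad_out.shape

    # Kernel gradient: correlate the selected input data with the output gradient.
    grad_kernel = np.zeros_like(kernel)
    for i in range(kH):
        for j in range(kW):
            grad_kernel[i, j] = np.sum(x[i:i + oH, j:j + oW] * grad_out)

    # Input data gradient: "full" convolution of the output gradient with the kernel.
    grad_x = np.zeros_like(x)
    for p in range(oH):
        for q in range(oW):
            grad_x[p:p + kH, q:q + kW] += grad_out[p, q] * kernel

    kernel = kernel - lr * grad_kernel      # update the convolution kernel
    return kernel, grad_x

x = np.random.randn(6, 6)
kernel = np.random.randn(3, 3)
grad_out = np.random.randn(4, 4)            # gradient arriving from the next layer
kernel, grad_x = conv_backward(x, kernel, grad_out)
```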
Owner:CAMBRICON TECH CO LTD

Method for in-memory on-line analytical processing (OLAP) query optimization based on a field programmable gate array (FPGA)

Active · CN105868388A · Reduce memory storage costs · Reduce computational cost and power consumption · Multi-dimensional databases · Special data processing applications · Query optimization · Storage model
The invention relates to a method for in-memory on-line analytical processing (OLAP) query optimization based on a field programmable gate array (FPGA). The method comprises the steps of constructing a heterogeneous storage model for an in-memory data warehouse; performing query optimization oriented to a central processing unit (CPU)-FPGA heterogeneous processor based on the heterogeneous storage model: generating a grouping projection vector through a subquery; performing dictionary-table compression on the grouping projection vector; updating the grouping projection to a grouping projection vector encoded by the dictionary table according to the projection dictionary table; performing a join operation between the grouping projection vector and the fact-table foreign key, and generating a measure vector through aggregation computation over the measure column; performing index aggregation computation based on the measure vector; and performing query optimization oriented to a CPU-FPGA heterogeneous computing platform based on the heterogeneous storage model: letting the FPGA and the CPU share access to the same memory address space; when the FPGA is configured as a PCI-E accelerator card, using the FPGA to accelerate join performance and to directly access the flash card through the PCI-E channel for data processing; and when the FPGA is integrated into the flash memory, using the FPGA to accelerate the data access and aggregation computation performance of the flash card.
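To make the vector-processing pipeline above concrete, the NumPy sketch below (CPU-only; the FPGA offload is outside its scope) walks through the grouping projection vector, its dictionary-table compression, the positional "join" with the fact-table foreign key, and the per-group measure aggregation. The table contents are made-up illustrative data.

```python
import numpy as np

# Hypothetical dimension-table column and subquery predicate (illustrative data).
region = np.array(["ASIA", "EUROPE", "ASIA", "AMERICA", "EUROPE"])
passes_filter = np.array([True, True, False, True, True])

# Grouping projection vector: one entry per dimension row, -1 where the subquery
# filtered the row out; surviving entries are dictionary-table codes.
dictionary, codes = np.unique(region[passes_filter], return_inverse=True)
group_proj = np.full(region.shape[0], -1, dtype=np.int64)
group_proj[passes_filter] = codes

# Fact table: foreign key into the dimension table plus a measure column.
fact_fk = np.array([0, 1, 2, 4, 3, 1, 0])
measure = np.array([10.0, 5.0, 7.0, 3.0, 8.0, 2.0, 6.0])

# The join is a positional lookup of the grouping projection vector by foreign key.
fact_groups = group_proj[fact_fk]
hit = fact_groups >= 0

# Measure vector: aggregate the measure column per dictionary code.
measure_vec = np.zeros(dictionary.shape[0])
np.add.at(measure_vec, fact_groups[hit], measure[hit])

for name, total in zip(dictionary, measure_vec):
    print(name, total)
```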
Owner:RENMIN UNIVERSITY OF CHINA

Natural language processing model training method, task execution method, equipment and system

Active · CN111079406A · Solve deployment difficulties · Enhanced natural language processing capabilities · Semantic analysis · Machine learning · Data set · Original data
The invention discloses a natural language processing model training method, a natural language processing method, and corresponding equipment and system, which belong to the field of natural language processing. The method comprises the following steps: training a teacher model with a labeled original data set; enhancing the text sentences in the original data set to obtain enhanced text sentences, and labeling the enhanced text sentences with the trained teacher model to obtain a labeled enhanced data set; and taking the original data set and the enhanced data set together as the training data set, training a student model, and taking the trained student model as the natural language processing model. The teacher model and the student model are both deep learning models and perform the same natural language processing task, with the teacher model being more complex and larger in scale. According to the invention, the data set of a natural language processing task can be effectively enhanced in a knowledge distillation scenario, and the capability of the natural language processing model is improved, so that the execution effect of the natural language processing task is improved.
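A minimal PyTorch sketch of the distillation setup described above; the model architectures, input encoding, loss weighting and hyper-parameters are placeholders rather than the patent's choices. A trained teacher labels the enhanced sentences with soft targets, and the student is trained on the original gold labels plus those soft targets.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, CLASSES = 64, 3                      # placeholder feature size / label set

# Teacher: larger model, assumed already trained on the labeled original data.
teacher = nn.Sequential(nn.Linear(DIM, 256), nn.ReLU(), nn.Linear(256, CLASSES))
# Student: smaller model performing the same task.
student = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, CLASSES))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

# Stand-ins for encoded text: labeled original set and unlabeled enhanced set.
x_orig, y_orig = torch.randn(128, DIM), torch.randint(0, CLASSES, (128,))
x_aug = torch.randn(512, DIM)

# The teacher labels the enhanced sentences with soft (probability) targets.
with torch.no_grad():
    soft_targets = F.softmax(teacher(x_aug), dim=-1)

for epoch in range(5):
    optimizer.zero_grad()
    hard_loss = F.cross_entropy(student(x_orig), y_orig)          # original data set
    soft_loss = F.kl_div(F.log_softmax(student(x_aug), dim=-1),
                         soft_targets, reduction="batchmean")     # enhanced data set
    (hard_loss + soft_loss).backward()
    optimizer.step()
```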
Owner:HUAZHONG UNIV OF SCI & TECH

Method for testing transaction performance of terminal

Active · CN102053872A · Improve execution performance · Avoid the disadvantages of losing original transaction information · Finance · Error detection/correction · Communication link · Computer science
The invention discloses a method for testing the transaction performance of a terminal. The test tool comprises a client, a database and a server; a user creates a transaction template and test cases at the client and stores them in the database. During testing, the server receives a test command and processes it as follows: a, loading the transaction template and the test cases from the database into a memory pool; b, initializing an extraction algorithm for extracting transactions from the memory pool; c, establishing communication connections among the server, the acquiring platform and the encryptor; d, setting up communication links as required; e, processing transactions while acting as the terminal; f, checking whether the interval between the current time and the last statistics time is greater than or equal to the statistical period, and outputting transaction statistics for that period when the condition is met; and g, returning to step d and cyclically processing the transactions exchanged over the communication links. With this test method, multi-level associated transactions can be supported, real transaction scenarios can be faithfully simulated, and test continuity is ensured.
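The statistics logic of steps e and f can be pictured with the short Python sketch below; the transaction source, success criterion and statistical period are invented stand-ins, not the tool's actual interfaces.

```python
import random
import time

STAT_PERIOD = 5.0          # hypothetical statistical period, in seconds

def process_transaction(case):
    """Stand-in for step e: act as the terminal and report success or failure."""
    time.sleep(0.01)       # simulated round trip to the acquiring platform
    return random.random() > 0.05

def run_test(cases):
    last_stat_time = time.monotonic()
    stats = {"total": 0, "ok": 0, "failed": 0}
    for case in cases:                                   # transactions drawn from the pool
        stats["total"] += 1
        stats["ok" if process_transaction(case) else "failed"] += 1
        now = time.monotonic()
        if now - last_stat_time >= STAT_PERIOD:          # step f: period elapsed?
            print("period statistics:", stats)
            last_stat_time = now
    print("final statistics:", stats)

if __name__ == "__main__":
    run_test(range(1000))
```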
Owner:CHINA UNIONPAY

FFT accelerator based on DSP chip

The invention discloses an FFT accelerator based on a DSP chip. The accelerator comprises a mode configuration module, an FFT computation control module, a data access control module and an FFT computation module. The mode configuration module receives configuration data specifying the data address, the computation scale and the number of computations. When the computation scale is less than the maximum scale that can be directly supported, the FFT computation control module controls the FFT computation module to carry out one-dimensional FFT computation; when the computation scale is greater than the maximum directly supported scale, it controls the FFT computation module to carry out two-dimensional FFT computation. The data access control module controls reading the computation data from memory in DMA mode and writing the computation results back to memory. The FFT computation module carries out the FFT computation according to the control signals output by the FFT computation control module. The accelerator supports various configurations of computation scale, number of computations and data format, realizes FFT computation from small scales to large scales, offers high execution efficiency, and achieves a high utilization ratio of hardware resources.
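The two-dimensional path used when the requested scale exceeds the directly supported maximum can be illustrated with the standard four-step decomposition; the NumPy sketch below (lengths and factors chosen arbitrarily) computes a length n1*n2 FFT from n1-point and n2-point FFTs plus a twiddle-factor multiplication, which is the kind of split a hardware unit with a bounded directly supported size can apply.

```python
import numpy as np

def fft_2d_decomposed(x, n1, n2):
    """Compute a length n1*n2 FFT via the two-dimensional (four-step) decomposition."""
    assert x.size == n1 * n2
    a = x.reshape(n1, n2)                     # a[i, j] = x[n2*i + j]
    b = np.fft.fft(a, axis=0)                 # n1-point FFTs down the columns
    twiddle = np.exp(-2j * np.pi *
                     np.outer(np.arange(n1), np.arange(n2)) / (n1 * n2))
    b = b * twiddle                           # twiddle-factor multiplication
    c = np.fft.fft(b, axis=1)                 # n2-point FFTs along the rows
    return c.reshape(-1, order="F")           # reorder so X[k1 + n1*k2] = c[k1, k2]

x = np.random.randn(4096) + 1j * np.random.randn(4096)
assert np.allclose(fft_2d_decomposed(x, 64, 64), np.fft.fft(x))
```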
Owner:NAT UNIV OF DEFENSE TECH