8,566 results about "Physical realisation" patented technology

Artificial neural network calculating device and method for sparse connection

Active · CN105512723A · Benefits: solves the problem of insufficient computing performance and high front-end decoding overhead; adds support · Classifications: Memory architecture accessing/allocation; Digital data processing details · Keywords: Activation function; Memory bandwidth
An artificial neural network computing device for sparse connections comprises a mapping unit that converts input data into a storage format in which input neurons and weights correspond one to one, a storage unit that stores data and instructions, and an operation unit that executes the corresponding operations on the data according to the instructions. The operation unit performs three steps: first, the input neurons are multiplied by the weight data; second, an adder-tree operation is executed, in which the weighted output neurons from the first step are summed level by level through an adder tree, or a bias is added to the output neurons to obtain biased output neurons; third, an activation function is applied to obtain the final output neurons. The device addresses the insufficient computing performance of CPUs and GPUs and the high overhead of front-end decoding, effectively improves support for multi-layer artificial neural network algorithms, and removes the bottleneck that memory bandwidth imposes on the performance of multi-layer artificial neural network operations and their training algorithms.
Owner:CAMBRICON TECH CO LTD
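
For intuition, here is a minimal software sketch of the three-step dataflow the abstract describes: multiply inputs by their sparsely connected weights, reduce the products level by level as an adder tree would, add the bias, and apply the activation. The function names and the index/weight layout are illustrative assumptions, not the patented hardware.

import numpy as np

def adder_tree(values):
    """Pairwise, level-by-level reduction, mirroring a hardware adder tree."""
    vals = list(values)
    while len(vals) > 1:
        nxt = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # odd element passes through to the next level
            nxt.append(vals[-1])
        vals = nxt
    return vals[0] if vals else 0.0

def sparse_layer(inputs, conn_indices, conn_weights, bias, act=np.tanh):
    """conn_indices[j] / conn_weights[j] hold the input positions and weights
    feeding output neuron j -- the one-to-one pairing the mapping unit builds."""
    outputs = []
    for idx, w in zip(conn_indices, conn_weights):
        products = inputs[idx] * w              # step 1: multiply
        summed = adder_tree(products) + bias    # step 2: adder tree, then bias
        outputs.append(act(summed))             # step 3: activation function
    return np.array(outputs)

x = np.array([0.5, -1.0, 2.0, 0.25])
idx = [np.array([0, 2]), np.array([1, 2, 3])]   # sparse connections per output neuron
w = [np.array([0.3, -0.7]), np.array([1.1, 0.2, -0.4])]
print(sparse_layer(x, idx, w, bias=0.1))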

Design of a computer-based risk and safety management system for complex production and multifunctional process facilities: application to FPSOs

Inactive · US20120317058A1 · Benefits: strong robust attributes · Classifications: Digital computer details; Fuzzy logic based systems · Keywords: Process systems; Neural network
A method for predicting risk and designing safety management systems for complex production and process systems, applied to an FPSO system operating in deep waters. The design methods derive from including a weight index in a fuzzy-class belief variable of the risk model to assign the relative numerical value, or importance, that a safety device or system has in containing a risk hazard within the barrier. The weight index distributes the relative importance of risk events, in series or in parallel, across several interacting risk and safety device systems. The fault tree, the FMECA and the Bow Tie thus contain weights in the fuzzy belief class for implementing safety management programs critical to the process systems. The technique uses the results of neural networks derived from fuzzy belief systems of the weight index to implement the safety design systems, thereby limiting reliance on experience-based procedures and benchmarks. The weight index incorporates Safety Factor sets SFri {0, 0.1, 0.2 . . . 1} and a Markov chain network to allow the impact of different risks, or the reliability of multifunctional systems, to be evaluated in transient-state processes. The application of this technique and simulation results for typical FPSO/riser systems are discussed in this invention.
Owner:ABHULIMEN KINGSLEY E
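
The sketch below illustrates only the weight-index idea from the abstract: importance weights drawn from the discrete safety-factor set {0, 0.1, . . . , 1} scale barrier failure probabilities combined in series or in parallel. The fuzzy-belief, neural network and Markov chain machinery of the patent is not reproduced, and all probabilities and weights are hypothetical.

SAFETY_FACTORS = [round(0.1 * k, 1) for k in range(11)]   # SF in {0, 0.1, ..., 1}

def series_risk(event_probs, weights):
    """Series barriers: any weighted event can propagate the hazard."""
    survive = 1.0
    for p, w in zip(event_probs, weights):
        survive *= 1.0 - w * p
    return 1.0 - survive

def parallel_risk(event_probs, weights):
    """Parallel (redundant) barriers: all weighted events must occur together."""
    risk = 1.0
    for p, w in zip(event_probs, weights):
        risk *= w * p
    return risk

probs = [0.05, 0.02, 0.10]   # per-barrier failure probabilities (illustrative)
weights = [SAFETY_FACTORS[8], SAFETY_FACTORS[10], SAFETY_FACTORS[5]]  # 0.8, 1.0, 0.5
print(series_risk(probs, weights), parallel_risk(probs, weights))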

Hardware neural network conversion method, computing device, compiling method and neural network software and hardware collaboration system

The invention provides a hardware neural network conversion method that converts a neural network application into a hardware neural network satisfying hardware constraints, together with a computing device, a compiling method, and a neural network software/hardware collaboration system. The method comprises the following steps: the neural network connection graph corresponding to the neural network application is acquired; the connection graph is split into neural network basic units; each basic unit is converted into a functionally equivalent network formed by connecting virtual bodies of the basic modules of the neural network hardware; and the resulting basic-unit hardware networks are connected in the splitting order to generate the parameter file of the hardware neural network. A brand-new neural network and brain-inspired computing software/hardware system is thus provided: an intermediate compiling layer is added between the neural network application and the neural network chip, which solves the adaptation problem between the application and the chip and also decouples the development of applications from the development of chips.
Owner:TSINGHUA UNIV
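
A minimal sketch of that compile flow, assuming a single hardware constraint (a maximum fan-in per basic module): split each layer of the connection graph into basic units that fit the limit, map each unit onto a virtual body of the basic module, and stitch the pieces back together in splitting order. Class names, the MAX_FANIN constraint, and the output structure are hypothetical stand-ins for the patent's parameter file.

from dataclasses import dataclass, field

@dataclass
class BasicUnit:
    name: str
    in_size: int
    out_size: int

@dataclass
class HardwareNetwork:
    units: list = field(default_factory=list)   # mapped basic-unit hardware networks

MAX_FANIN = 256   # example hardware constraint on one basic module

def split_graph(layers):
    """Split each layer whose fan-in exceeds the hardware limit into basic units."""
    units = []
    for i, (n_in, n_out) in enumerate(layers):
        for j in range(0, n_in, MAX_FANIN):
            units.append(BasicUnit(f"layer{i}_part{j // MAX_FANIN}",
                                   min(MAX_FANIN, n_in - j), n_out))
    return units

def compile_to_hardware(layers):
    hw = HardwareNetwork()
    for unit in split_graph(layers):   # each unit maps onto one module virtual body
        hw.units.append({"unit": unit.name,
                         "fanin": unit.in_size, "fanout": unit.out_size})
    return hw                          # stands in for the hardware parameter file

print(compile_to_hardware([(784, 300), (300, 10)]))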

Calculation apparatus and method for accelerator chip accelerating deep neural network algorithm

The invention provides a calculation apparatus and method for an accelerator chip accelerating a deep neural network algorithm. The apparatus comprises a vector addition processor module, a vector function value calculator module and a vector multiplier-adder module. The vector addition processor module performs vector addition or subtraction and/or the vectorized operations of the pooling-layer algorithm in the deep neural network algorithm; the vector function value calculator module performs the vectorized operations of the nonlinear values in the deep neural network algorithm; and the vector multiplier-adder module performs vector multiplication and addition. The three modules execute programmable instructions and interact to compute the neuron values and network output of the neural network, as well as the synaptic weight variations representing the strength of the effect of input-layer neurons on output-layer neurons. Each of the three modules contains an intermediate-value storage region and performs read and write operations on the main memory. This reduces how often intermediate values are read from and written to the main memory, lowers the energy consumption of the accelerator chip, and avoids the problems of data missing and replacement during data processing.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
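
As an illustration of how the three modules cooperate, the sketch below stages one neuron-value computation across a multiply-add module, an addition/pooling module and a function-value module, with each module holding intermediates in a local scratchpad instead of main memory. The class and method names are assumptions for exposition, not the patent's instruction set.

import numpy as np

class Module:
    def __init__(self):
        self.scratch = {}            # intermediate-value storage region

class VectorAdder(Module):
    def pool(self, x, width):        # vectorized pooling-style reduction
        return x.reshape(-1, width).sum(axis=1)

class VectorFunction(Module):
    def apply(self, x):              # vectorized nonlinear value (sigmoid here)
        return 1.0 / (1.0 + np.exp(-x))

class VectorMultiplyAdd(Module):
    def fma(self, w, x, acc):        # vector multiply, then add into accumulator
        return w * x + acc

mac, add, fun = VectorMultiplyAdd(), VectorAdder(), VectorFunction()
x = np.array([0.5, -0.2, 0.8, 0.1])
w = np.array([0.3, 0.9, -0.4, 0.7])
partial = mac.fma(w, x, np.zeros_like(x))    # products stay local, not in main memory
mac.scratch["partial"] = partial
summed = add.pool(mac.scratch["partial"], width=4)
print(fun.apply(summed))                     # neuron value for this output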

Apparatus and method for realizing accelerator of sparse convolutional neural network

The invention provides an apparatus and method for realizing an accelerator for a sparse convolutional neural network. The apparatus comprises a convolution-and-pooling unit, a fully connected unit and a control unit. The method includes the following steps: on the basis of control information, reading convolution parameter information, input data and intermediate computation data, and reading fully-connected-layer weight-matrix position information; performing convolution and pooling on the input data for a first number of iterations according to the convolution parameter information; and then performing fully connected computation for a second number of iterations on the basis of the weight-matrix position information. Each input datum is divided into a plurality of sub-blocks, and the convolution-and-pooling unit and the fully connected unit operate on the sub-blocks in parallel. The apparatus uses dedicated circuitry to support sparse fully connected layers, and its parallel ping-pong buffer design and pipeline design effectively balance I/O bandwidth against computing efficiency, yielding a better performance-to-power ratio.
Owner:XILINX INC
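
A hedged sketch of the sparse fully connected step: the weight matrix is stored as nonzero values plus position information (a CSR-like layout is assumed here), and two buffers alternate in ping-pong fashion so loading sub-block k+1 can overlap computation on sub-block k. The storage layout and buffer scheme are illustrative assumptions, not the patent's exact circuit.

import numpy as np

def sparse_fc(x, values, col_idx, row_ptr):
    """y[r] = sum of W[r, c] * x[c] over the nonzeros recorded for row r."""
    y = np.zeros(len(row_ptr) - 1)
    for r in range(len(y)):
        for k in range(row_ptr[r], row_ptr[r + 1]):
            y[r] += values[k] * x[col_idx[k]]
    return y

def run_blocks(blocks, values, col_idx, row_ptr):
    """Process input sub-blocks with two alternating (ping-pong) buffers."""
    buffers = [None, None]
    buffers[0] = blocks[0]                        # preload the first sub-block
    outputs = []
    for i in range(len(blocks)):
        if i + 1 < len(blocks):
            buffers[(i + 1) % 2] = blocks[i + 1]  # "load" next block into idle buffer
        outputs.append(sparse_fc(buffers[i % 2], values, col_idx, row_ptr))
    return outputs

# 2x4 weight matrix with 3 nonzeros: W[0,1]=0.5, W[0,3]=-1.0, W[1,2]=2.0
vals = np.array([0.5, -1.0, 2.0])
cols = np.array([1, 3, 2])
rows = np.array([0, 2, 3])
print(run_blocks([np.ones(4), np.arange(4.0)], vals, cols, rows))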