
8,651 results about "Physical realisation" patented technology

Massively parallel, smart memory based accelerator

Systems and methods for massively parallel processing on an accelerator that includes a plurality of processing cores. Each processing core includes multiple processing chains configured to perform parallel computations, each chain comprising a plurality of interconnected processing elements. The cores further include multiple smart memory blocks configured to store and process data, each memory block accepting the output of one of the processing chains. The cores communicate with at least one off-chip memory bank.
Owner:NEC CORP
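
The chain-and-block dataflow described above is easy to picture in code. Below is a minimal, illustrative Python sketch (class names and operations are invented for exposition, not taken from the patent) in which each processing chain pipes data through its processing elements and hands the result to its own smart memory block:

```python
# Minimal sketch of the described dataflow: each processing chain is a
# pipeline of processing elements (PEs); each smart memory block stores
# and post-processes the output of exactly one chain.

class ProcessingChain:
    def __init__(self, pe_ops):
        self.pe_ops = pe_ops  # one function per processing element

    def run(self, x):
        for op in self.pe_ops:  # data flows through interconnected PEs
            x = op(x)
        return x

class SmartMemoryBlock:
    """Stores a chain's output and applies a local reduction in place."""
    def __init__(self):
        self.data = []

    def accept(self, value):
        self.data.append(value)       # store
        return sum(self.data)         # process (illustrative reduction)

# One core: chains compute in parallel (simulated sequentially here),
# each feeding its own smart memory block.
chains = [ProcessingChain([lambda v: v * 2, lambda v: v + 1]) for _ in range(4)]
blocks = [SmartMemoryBlock() for _ in range(4)]
inputs = [1, 2, 3, 4]
outputs = [b.accept(c.run(x)) for c, b, x in zip(chains, blocks, inputs)]
print(outputs)  # [3, 5, 7, 9]
```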

Apparatus and method for detecting or recognizing pattern by employing a plurality of feature detecting elements

A pattern detecting apparatus has a plurality of hierarchized neuron elements to detect a predetermined pattern included in input patterns. Pulse signals output from the plurality of neuron elements are given specific delays by synapse circuits associated with the individual elements. This makes it possible to transmit the pulse signals to the neuron elements of the succeeding layer through a common bus line so that they can be identified on a time base. The neuron elements of the succeeding layer output pulse signals at output levels based on the arrival-time pattern of the plurality of pulse signals received from the neuron elements of the preceding layer within a predetermined time window. Thus, the reliability of pattern detection can be improved, and the number of wires interconnecting the elements can be reduced by the use of the common bus line, leading to a smaller circuit scale and reduced power consumption.
Owner:CANON KK
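
A rough behavioral sketch of the timing scheme may help: pulses share one bus but stay identifiable because each synapse adds a known delay, and a succeeding-layer neuron scores how well arrival times match a learned pattern inside its window. The numbers and the Gaussian scoring function below are illustrative assumptions, not the patent's circuit:

```python
# Sketch: pulses from preceding-layer neurons share one bus but remain
# distinguishable because each synapse adds a known, element-specific delay.
# A succeeding-layer neuron scores the arrival-time pattern inside a window.

import math

def bus_events(fire_times, synapse_delays):
    """Merge pulses onto a common bus as (arrival_time, source) events."""
    events = [(t + synapse_delays[i], i) for i, t in enumerate(fire_times)]
    return sorted(events)

def neuron_output(events, expected, window=5.0, sigma=0.5):
    """Output level grows as arrivals match the expected timing pattern."""
    score = 0.0
    for t, src in events:
        if t <= window:
            score += math.exp(-((t - expected[src]) ** 2) / (2 * sigma**2))
    return score

fire_times = [0.0, 0.2, 0.1]          # when each input neuron fires
delays = [1.0, 0.8, 0.9]              # synapse-specific delays (illustrative)
expected = {0: 1.0, 1: 1.0, 2: 1.0}   # learned arrival pattern per source
events = bus_events(fire_times, delays)
print(round(neuron_output(events, expected), 3))  # near 3.0 for a good match
```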

Artificial neural network calculating device and method for sparse connection

Active · CN105512723A · Solves insufficient computing performance and high front-end decoding overhead · Memory architecture accessing/allocation · Digital data processing details · Activation function · Memory bandwidth
An artificial neural network computing device for sparse connections comprises a mapping unit that converts input data into a storage format in which input neurons and weight values correspond one to one, a storage unit that stores data and instructions, and an operation unit that executes the corresponding operations on the data according to the instructions. The operation unit mainly executes three steps: in the first step, the input neurons are multiplied by the weight data; in the second step, an adder-tree operation is executed, in which the weighted output neurons from the first step are summed level by level through an adder tree, or a bias is added to the output neurons to obtain biased output neurons; in the third step, an activation function is applied to obtain the final output neurons. The device solves the problems of insufficient CPU and GPU computing performance and high front-end decoding overhead, effectively improves support for multi-layer artificial neural network algorithms, and relieves the memory-bandwidth bottleneck that limits the performance of multi-layer artificial neural network operations and their training algorithms.
Owner:CAMBRICON TECH CO LTD
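
The three-step pipeline maps naturally to a short sketch. The following Python fragment walks through multiply, adder-tree-plus-bias, and activation; the adder tree and one-to-one sparse mapping follow the abstract, while the tanh activation and all values are assumed examples:

```python
# Sketch of the three-step pipeline: (1) multiply each input neuron by its
# one-to-one mapped weight, (2) reduce with an adder tree and add the bias,
# (3) apply the activation function.

import numpy as np

def adder_tree(values):
    """Pairwise, level-by-level reduction, as an addition tree would do it."""
    values = list(values)
    while len(values) > 1:
        paired = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:
            paired.append(values[-1])
        values = paired
    return values[0]

# Sparse connection: only stored (input_index, weight) pairs take part,
# which is the "input neurons and weights correspond one to one" mapping.
inputs = np.array([0.5, -1.0, 2.0, 0.0, 3.0])
sparse_synapses = [(0, 0.2), (2, -0.5), (4, 1.0)]  # illustrative values

products = [inputs[i] * w for i, w in sparse_synapses]   # step 1
summed = adder_tree(products) + 0.1                      # step 2 (+ bias)
output = np.tanh(summed)                                 # step 3 (activation)
print(round(float(output), 4))
```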

Design of computer-based risk and safety management system of complex production and multifunctional process facilities - application to FPSOs

Inactive · US20120317058A1 · Strong robust attributes · Digital computer details · Fuzzy logic based systems · Process systems · Neural network
A method for predicting risk and designing safety management systems of complex production and process systems, which has been applied to an FPSO system operating in deep waters. The design methods were derived from the inclusion of a weight index in a fuzzy class belief variable in the risk model, assigning the relative numerical value or importance a safety device or system has in containing risk hazards within the barrier. The weight index distributes the relative importance of risk events in series or parallel across several interacting risk and safety device systems. The fault tree, the FMECA and the Bow Tie now contain weights in the fuzzy belief class for implementing safety management programs critical to the process systems. The technique uses the results of neural networks derived from fuzzy belief systems of the weight index to implement the safety design systems, thereby limiting reliance on experience-based procedures and benchmarks. The weight index incorporates safety factor sets SFri = {0, 0.1, 0.2, ..., 1} and a Markov chain network to allow the impact of different risks, or the reliability of multifunctional systems, to be evaluated in transient-state processes. The application of this technique and simulation results for typical FPSO/riser systems are discussed in this invention.
Owner:ABHULIMEN KINGSLEY E
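
As a toy illustration of a weight index distributing importance across safety devices in series or in parallel, the sketch below scales raw failure probabilities by safety factors drawn from the {0, 0.1, ..., 1} set. The probability model and all numbers are assumptions for exposition, not the patent's risk model:

```python
# Toy sketch of a weight index distributing importance across safety devices
# in series or parallel. Safety factors come from the set {0, 0.1, ..., 1.0};
# all numbers here are illustrative, not from the patent.

def series_risk(event_probs):
    """Series barrier: every device must fail for the hazard to pass."""
    p = 1.0
    for prob in event_probs:
        p *= prob
    return p

def parallel_risk(event_probs):
    """Parallel paths: the hazard passes if any single path fails."""
    p_none = 1.0
    for prob in event_probs:
        p_none *= (1.0 - prob)
    return 1.0 - p_none

def weighted(probs, safety_factors):
    """Scale each raw failure probability by its weight-index safety factor."""
    return [p * (1.0 - sf) for p, sf in zip(probs, safety_factors)]

raw = [0.2, 0.1, 0.05]          # raw failure probabilities (illustrative)
sf = [0.5, 0.9, 0.0]            # safety factors from {0, 0.1, ..., 1}
print(series_risk(weighted(raw, sf)))    # barrier with devices in series
print(parallel_risk(weighted(raw, sf)))  # same devices in parallel
```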

Hardware neural network conversion method, computing device, compiling method and neural network software and hardware collaboration system

The invention provides a hardware neural network conversion method that converts a neural network application into a hardware neural network satisfying the hardware constraints, together with a computing device, a compiling method, and a neural network software/hardware collaboration system. The method comprises the following steps: the neural network connection graph corresponding to the neural network application is acquired; the connection graph is split into neural network basic units; each basic unit is converted into a functionally equivalent network formed by connecting virtual instances of the basic modules of the neural network hardware; and the resulting basic-unit hardware networks are connected in the splitting order to generate the parameter file of the hardware neural network. A brand-new neural network and brain-inspired computing software/hardware system is provided: an intermediate compiling layer is added between the neural network application and the neural network chip, which solves the adaptation problem between the application and the chip and decouples the development of applications from the development of chips.
Owner:TSINGHUA UNIV
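
The splitting step can be sketched as a tiny compiler pass. The fan-in limit, unit names and splitting rule below are hypothetical stand-ins for real hardware constraints; the point is only the split-then-reconnect flow the abstract describes:

```python
# Sketch of the compile flow: split a neural network connection graph into
# basic units that respect a hardware fan-in limit, then reconnect the
# resulting unit networks in splitting order.

MAX_FANIN = 4  # hypothetical limit of one basic hardware module

def split_layer(fan_in, max_fanin=MAX_FANIN):
    """Split one layer's fan-in into basic units plus a combining unit."""
    groups = [min(max_fanin, fan_in - i) for i in range(0, fan_in, max_fanin)]
    units = [("partial_sum_unit", g) for g in groups]
    if len(units) > 1:
        units.append(("combine_unit", len(units)))  # re-sum partial results
    return units

def compile_network(layer_fanins):
    """Produce an ordered parameter plan, one entry per hardware unit."""
    plan = []
    for layer, fan_in in enumerate(layer_fanins):
        for unit in split_layer(fan_in):
            plan.append((layer, *unit))
    return plan  # connected in splitting order by the hardware mapper

for entry in compile_network([10, 3, 7]):
    print(entry)
```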

Calculation apparatus and method for accelerator chip accelerating deep neural network algorithm

The invention provides a calculation apparatus and method for an accelerator chip that accelerates a deep neural network algorithm. The apparatus comprises a vector addition processor module, a vector function value calculator module and a vector multiplier-adder module. The vector addition processor module performs vector addition or subtraction and/or the vectorized operation of a pooling-layer algorithm in the deep neural network algorithm; the vector function value calculator module performs the vectorized operation of the nonlinear functions in the deep neural network algorithm; and the vector multiplier-adder module performs vector multiply-add operations. The three modules execute programmable instructions and interact to calculate the neuron values and network output of a neural network, as well as the synaptic weight changes representing the strength of the influence of input-layer neurons on output-layer neurons. An intermediate-value storage region is arranged in each of the three modules, which also read from and write to a main memory. The frequency of intermediate-value reads and writes to the main memory can thereby be reduced, the energy consumption of the accelerator chip lowered, and the problems of data loss and replacement during data processing avoided.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
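
A behavioral sketch of the three cooperating modules, each keeping intermediate values in a local scratchpad so that only inputs and the final result touch main memory, may clarify the point. Module names and the access counter are illustrative, not the patent's design:

```python
# Sketch of the three cooperating modules, each with a local intermediate-
# value store so partial results do not round-trip through main memory.

import numpy as np

class Module:
    def __init__(self):
        self.scratch = {}  # intermediate-value storage region

class VectorAdder(Module):
    def add(self, a, b):
        self.scratch["sum"] = a + b               # stays on-chip
        return self.scratch["sum"]

class VectorFunction(Module):
    def apply(self, x):
        self.scratch["act"] = np.maximum(x, 0.0)  # nonlinear op (ReLU here)
        return self.scratch["act"]

class VectorMultiplierAdder(Module):
    def fma(self, w, x, acc):
        self.scratch["acc"] = w * x + acc
        return self.scratch["acc"]

main_memory_accesses = 0  # only inputs/outputs touch main memory

w = np.array([0.5, -0.2]); x = np.array([1.0, 2.0]); b = np.array([0.1, 0.1])
main_memory_accesses += 3                     # read w, x, b once
mac = VectorMultiplierAdder().fma(w, x, np.zeros(2))
pre = VectorAdder().add(mac, b)               # intermediate stays in scratch
out = VectorFunction().apply(pre)
main_memory_accesses += 1                     # write final result once
print(out, main_memory_accesses)
```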

FPGA-based deep convolutional neural network implementation method

The invention belongs to the technical field of digital image processing and pattern recognition, and specifically relates to an FPGA-based method for implementing a deep convolutional neural network. The hardware platform for realizing the method is a Xilinx ZYNQ-7030 all-programmable SoC, which integrates an FPGA and an ARM Cortex-A9 processor. Trained network model parameters are loaded to the FPGA side; input data is preprocessed on the ARM side and the result is transmitted to the FPGA side. The convolution and down-sampling of the deep convolutional neural network are realized on the FPGA side to form data feature vectors, which are transmitted back to the ARM side, where the feature classification is computed. The FPGA's fast parallel processing and very low-power, high-performance computing characteristics are used to realize the convolution computation, which has the highest complexity in a deep convolutional neural network model. Algorithm efficiency is greatly improved and power consumption is reduced while the algorithm's accuracy is maintained.
Owner:FUDAN UNIV
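
The ARM/FPGA partition can be modeled entirely in software. In the sketch below, the FPGA stage is a plain Python stand-in for the bitstream; the preprocess/convolve-pool/classify split follows the abstract, while the sizes, kernel and classifier weights are invented:

```python
# Software model of the described ARM/FPGA partition: the ARM side
# preprocesses input and classifies feature vectors; the FPGA side (here a
# plain Python stand-in) runs the convolution and down-sampling stages.

import numpy as np

def arm_preprocess(img):
    return (img - img.mean()) / (img.std() + 1e-8)  # normalize on ARM side

def fpga_conv_pool(img, kernel):
    """Stand-in for the FPGA bitstream: valid 2D convolution + 2x2 max pool."""
    kh, kw = kernel.shape
    H, W = img.shape
    conv = np.array([[np.sum(img[i:i+kh, j:j+kw] * kernel)
                      for j in range(W - kw + 1)]
                     for i in range(H - kh + 1)])
    h2, w2 = conv.shape[0] // 2 * 2, conv.shape[1] // 2 * 2
    pooled = conv[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).max(axis=(1, 3))
    return pooled.ravel()  # feature vector sent back to the ARM side

def arm_classify(features, weights):
    return int(np.argmax(weights @ features))  # final classification on ARM

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
kernel = rng.normal(size=(3, 3))     # "trained" parameters, illustrative
feats = fpga_conv_pool(arm_preprocess(img), kernel)
weights = rng.normal(size=(10, feats.size))
print(arm_classify(feats, weights))
```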

Adaptive neural network utilizing nanotechnology-based components

Methods and systems for modifying at least one synapse of a physical/electromechanical neural network. A physical/electromechanical neural network implemented as an adaptive neural network can be provided, which includes one or more neurons and one or more synapses thereof, wherein the neurons and synapses are formed from a plurality of nanoparticles disposed within a dielectric solution in association with one or more pre-synaptic electrodes and one or more post-synaptic electrodes and an applied electric field. At least one pulse can be generated from one or more of the neurons to one or more of the pre-synaptic electrodes of a succeeding neuron and one or more post-synaptic electrodes of one or more of the neurons of the physical/electromechanical neural network, thereby strengthening at least one nanoparticle of a plurality of nanoparticles disposed within the dielectric solution and at least one synapse thereof.
Owner:KNOWM TECH
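
A heavily simplified behavioral sketch of the strengthening mechanism: each pre-/post-electrode pulse pairing nudges the synapse strength toward saturation. The bounded-update equation is an illustrative model only; the patent describes a physical dielectric-solution mechanism, not this formula:

```python
# Toy bounded-update model: a pulse applied across a pre-/post-synaptic
# electrode pair nudges nanoparticles to strengthen that synapse.
# Illustrative only; not the patent's physical mechanism.

def apply_pulse(weight, field=0.1, w_max=1.0):
    """Each pulse moves the connection strength toward saturation."""
    return weight + field * (w_max - weight)

w = 0.2
for _ in range(5):          # five pre/post pulse pairings
    w = apply_pulse(w)
print(round(w, 4))          # strength grows, bounded by w_max
```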

Producing spike-timing dependent plasticity in a neuromorphic network utilizing phase change synaptic devices

Embodiments of the invention relate to a neuromorphic network for producing spike-timing dependent plasticity. The neuromorphic network includes a plurality of electronic neurons and an interconnect circuit coupled for interconnecting the plurality of electronic neurons. The interconnect circuit includes plural synaptic devices for interconnecting the electronic neurons via axon paths, dendrite paths and membrane paths. Each synaptic device includes a variable state resistor and a transistor device with a gate terminal, a source terminal and a drain terminal, wherein the drain terminal is connected in series with a first terminal of the variable state resistor. The source terminal of the transistor device is connected to an axon path, the gate terminal of the transistor device is connected to a membrane path and a second terminal of the variable state resistor is connected to a dendrite path, such that each synaptic device is coupled between a first axon path and a first dendrite path, and between a first membrane path and said first dendrite path.
Owner:IBM CORP
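
The plasticity rule such an array implements is the classic STDP window: a pre-before-post spike pair potentiates the variable state resistor, post-before-pre depresses it. A short sketch with illustrative amplitudes and time constant (the patent specifies the circuit, not these numbers):

```python
# Sketch of spike-timing dependent plasticity: pre-before-post potentiates,
# post-before-pre depresses, with exponentially decaying magnitude.

import math

def stdp_delta(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Conductance change from one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired first: potentiate
        return a_plus * math.exp(-dt / tau)
    else:        # post fired first (or simultaneously): depress
        return -a_minus * math.exp(dt / tau)

g = 0.5  # normalized conductance of the variable state resistor
for t_pre, t_post in [(0.0, 5.0), (30.0, 28.0), (60.0, 61.0)]:
    g = min(1.0, max(0.0, g + stdp_delta(t_pre, t_post)))
    print(round(g, 4))
```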

Apparatus and method for realizing accelerator of sparse convolutional neural network

The invention provides an apparatus and method for realizing an accelerator for a sparse convolutional neural network. The apparatus includes a convolution-and-pooling unit, a fully connected unit and a control unit. The method includes the following steps: on the basis of control information, reading convolution parameter information, input data and intermediate computation data, and reading the position information of the fully-connected-layer weight matrix; in accordance with the convolution parameter information, performing convolution and pooling on the input data for a first number of iterations; and then, on the basis of the fully-connected-layer weight matrix position information, performing fully connected computation for a second number of iterations. Each input datum is divided into a plurality of sub-blocks, and the convolution-and-pooling unit and the fully connected unit operate on the sub-blocks in parallel. The apparatus uses a dedicated circuit, supports convolutional neural networks with sparse fully connected layers, and uses a parallel ping-pong buffer design and a pipeline design, effectively balancing I/O bandwidth and computing efficiency and achieving a better performance-to-power ratio.
Owner:XILINX INC
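
The ping-pong buffering can be sketched as a prefetch loop: while one buffer's sub-block is being computed, the next sub-block is loaded into the other buffer. The simulation below runs sequentially; real hardware overlaps the two steps (all data and stand-in functions are illustrative):

```python
# Sketch of ping-pong (double) buffering: overlap loading the next sub-block
# with computing the current one.

def load(sub_block):
    return list(sub_block)            # stand-in for a DMA transfer

def compute(data):
    return sum(x * x for x in data)   # stand-in for conv/pool or FC work

sub_blocks = [[1, 2], [3, 4], [5, 6]]  # input split into sub-blocks
buffers = [None, None]                 # the "ping" and "pong" buffers
results = []

buffers[0] = load(sub_blocks[0])       # prime the first buffer
for i in range(len(sub_blocks)):
    nxt = (i + 1) % 2
    if i + 1 < len(sub_blocks):
        buffers[nxt] = load(sub_blocks[i + 1])  # prefetch next sub-block
    results.append(compute(buffers[i % 2]))     # compute current sub-block
print(results)  # [5, 25, 61]
```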

Model-free adaptive process control

A model-free adaptive controller is disclosed, which uses a dynamic artificial neural network with a learning algorithm to control any single-variable or multivariable open-loop stable, controllable, and consistently direct-acting or reverse-acting industrial process without requiring any manual tuning, quantitative knowledge of the process, or process identifiers. The need for process knowledge is avoided by substituting 1 for the actual sensitivity function ∂y(t)/∂u(t) of the process.
Owner:GEN CYBERNATION GROUP
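
The key trick in the last sentence, replacing the unknown process sensitivity ∂y(t)/∂u(t) with 1 in the learning rule, can be shown on a toy loop. The first-order plant and one-weight "controller" below are illustrative assumptions, not the patented network:

```python
# Sketch of the key idea: the controller's learning rule needs the process
# sensitivity dy/du, which is unknown; per the abstract, 1 is substituted
# for it (reasonable for a direct-acting process).

def plant(y, u):
    return 0.9 * y + 0.2 * u      # first-order process, unknown to controller

setpoint, y, w = 1.0, 0.0, 0.5    # w: adaptive controller weight
lr = 0.4                          # learning rate (illustrative)

for step in range(50):
    e = setpoint - y              # tracking error
    u = w * e                     # simple proportional neural "layer"
    y = plant(y, u)
    # Gradient of e^2/2 w.r.t. w would need dy/du; set dy/du := 1.
    w += lr * e * e               # model-free update, no process identifier
print(round(y, 3), round(w, 3))   # y approaches the setpoint
```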