76 results about "Neural network hardware" patented technology

Hardware neural network conversion method, computing device, compiling method and neural network software and hardware collaboration system

The invention provides a hardware neural network conversion method that converts a neural network application into a hardware neural network satisfying the hardware's constraint conditions, together with a computing device, a compiling method, and a neural network software/hardware collaboration system. The method comprises the following steps: a neural network connection graph corresponding to the neural network application is acquired; the connection graph is split into basic neural network units; each basic unit is converted into a functionally equivalent network formed by connecting virtual instances of the basic modules of the neural network hardware; and the resulting basic-unit hardware networks are connected in the splitting order to generate the parameter file of the hardware neural network. A brand-new neural network and brain-inspired computing software/hardware system is thus provided: an intermediate compiling layer is inserted between the neural network application and the neural network chip, which solves the adaptation problem between application and chip and decouples application development from chip development.
Owner:TSINGHUA UNIV
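
A minimal Python sketch of the compile flow described above, assuming a toy graph representation: split the connection graph into basic units, map each onto virtual instances of the chip's basic modules under a fan-in constraint, then stitch the mapped units into a parameter file. All function names, data shapes, and the fan-in constraint are illustrative assumptions, not the patent's actual interfaces.

```python
# Sketch of the compile flow: graph -> basic units -> virtual hardware
# modules -> parameter file. Names and shapes are assumptions.

def split_into_basic_units(connection_graph):
    """Partition the graph into units small enough to map onto one module."""
    return [connection_graph[i:i + 1] for i in range(len(connection_graph))]

def map_unit_to_virtual_modules(unit, hw_constraints):
    """Replace a basic unit with a functionally equivalent network of
    virtual hardware modules (e.g. fixed fan-in crossbar cores)."""
    layer = unit[0]
    fan_in = hw_constraints["max_fan_in"]
    n_modules = -(-layer["inputs"] // fan_in)  # ceiling division
    return [{"module": "crossbar", "slice": k, "outputs": layer["outputs"]}
            for k in range(n_modules)]

def compile_to_hardware(connection_graph, hw_constraints):
    units = split_into_basic_units(connection_graph)
    mapped = [map_unit_to_virtual_modules(u, hw_constraints) for u in units]
    # Reconnect the per-unit hardware networks in splitting order.
    return {"units": mapped, "order": list(range(len(mapped)))}

graph = [{"inputs": 784, "outputs": 256}, {"inputs": 256, "outputs": 10}]
param_file = compile_to_hardware(graph, {"max_fan_in": 256})
print(param_file["order"], len(param_file["units"][0]))  # [0, 1] 4
```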

Convolutional neural network hardware acceleration device, convolution calculation method, and storage medium

The invention relates to a convolutional neural network hardware acceleration device, a convolution calculation method, and a storage medium. The device comprises an instruction processing unit, a hardware acceleration module, and an external data memory unit. The instruction processing unit decodes an instruction set and executes the corresponding operations to control the hardware acceleration module. The hardware acceleration module comprises an input caching unit, a data operation unit, and an output caching unit: the input caching unit executes the memory-access operations of the instruction processing unit and stores data read from the external data memory unit; the data operation unit executes the compute operations of the instruction processing unit, processes the data operations of the convolutional neural network, and controls the data operation process and data flow direction according to the operation instruction set; and the output caching unit executes the memory-access operations of the instruction processing unit and stores the calculation results output by the data operation unit that need to be written back to the external data memory unit. The external data memory unit stores the calculation results output by the output caching unit and transmits data to the input caching unit on the input caching unit's read requests.
Owner:NATIONZ TECH INC
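
A toy model of the instruction-driven dataflow described above: decoded instructions move data from external memory into the input buffer, trigger the datapath, and write results back through the output buffer. The opcode names and the 1-D convolution stand-in are assumptions for illustration, not the patent's instruction set.

```python
# Instruction-driven dataflow: LOAD moves external memory -> input buffer,
# CONV runs the datapath into the output buffer, STORE writes back.

external_mem = {"ifmap": [1, 2, 3, 4, 5], "kernel": [1, 0, -1], "ofmap": None}
input_buf, output_buf = {}, {}

def conv1d(x, k):
    n = len(x) - len(k) + 1
    return [sum(x[i + j] * k[j] for j in range(len(k))) for i in range(n)]

def execute(program):
    for op, *args in program:
        if op == "LOAD":      # external memory -> input buffer
            key, = args
            input_buf[key] = external_mem[key]
        elif op == "CONV":    # datapath reads input buffer, fills output buffer
            output_buf["ofmap"] = conv1d(input_buf["ifmap"], input_buf["kernel"])
        elif op == "STORE":   # output buffer -> external memory
            external_mem["ofmap"] = output_buf["ofmap"]

execute([("LOAD", "ifmap"), ("LOAD", "kernel"), ("CONV",), ("STORE",)])
print(external_mem["ofmap"])  # [-2, -2, -2]
```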

Design method of a hardware accelerator based on the LSTM recurrent neural network algorithm on an FPGA platform

The invention discloses a method for accelerating the LSTM neural network algorithm on an FPGA (field-programmable gate array) platform comprising a general-purpose processor, a field-programmable gate array body, and a storage module. The method comprises the following steps: an LSTM neural network is constructed with TensorFlow and the parameters of the neural network are trained; the parameters of the LSTM network are compressed to overcome the limited storage resources of the FPGA; according to the prediction process of the compressed LSTM network, the calculation parts suitable for running on the FPGA are determined; from the determined calculation parts, a software/hardware collaborative calculation scheme is derived; and according to the FPGA's logic resources and bandwidth, the number and type of IP-core firmware are determined and acceleration is carried out on the FPGA using the hardware operation units. A hardware processing unit for LSTM acceleration can thus be designed quickly for the available hardware resources, with higher performance and lower power consumption than a general-purpose processor.
Owner:SUZHOU INST FOR ADVANCED STUDY USTC
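
A sketch of the two decisions in this flow — compressing trained weights to fit on-chip storage, and partitioning work between the host processor and FPGA IP cores. Magnitude pruning stands in for the unspecified compression means; the thresholds and resource numbers are invented for illustration.

```python
import numpy as np

def compress(weights, keep_ratio=0.25):
    """Magnitude pruning: keep only the largest-|w| fraction of weights."""
    flat = np.abs(weights).ravel()
    cutoff = np.quantile(flat, 1.0 - keep_ratio)
    return np.where(np.abs(weights) >= cutoff, weights, 0.0)

def partition(ops, fpga_dsp_budget):
    """Send dense matrix-vector ops to the FPGA while they fit the DSP
    budget; keep the rest (gating nonlinearities here) on the CPU."""
    fpga, cpu, used = [], [], 0
    for name, dsp_cost in ops:
        if name.startswith("matvec") and used + dsp_cost <= fpga_dsp_budget:
            fpga.append(name); used += dsp_cost
        else:
            cpu.append(name)
    return fpga, cpu

W = np.random.randn(4, 4)
print(np.count_nonzero(compress(W)), "of", W.size, "weights kept")
ops = [("matvec_input", 64), ("matvec_hidden", 64), ("sigmoid_gates", 8)]
print(partition(ops, fpga_dsp_budget=128))
```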

Neural network hardware accelerator

The invention discloses a neural network hardware accelerator. The pipeline architecture of the accelerator comprises an instruction-fetch module for acquiring instructions; an instruction-decode module for decoding them; a half-precision floating-point operation module for one-dimensional vector operations; an activation function calculation module that evaluates activation functions by table lookup; a floating-point post-processing unit that performs floating-point operations on the activation results; and a caching module that buffers intermediate data produced while executing the neural network algorithm. Register files distributed along the pipeline, at the same stage as the instruction-decode module, temporarily store the instructions, data, and addresses in flight. The accelerator greatly improves hardware resource utilization when running RNN algorithms, and thereby improves the energy efficiency (computation per unit time per unit power) of RNN execution on the accelerator.
Owner:SHENZHEN DAPU MICROELECTRONICS CO LTD
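
The table-lookup activation unit is the most self-contained piece to sketch: the activation function (sigmoid here) is precomputed into a small ROM and evaluated by indexing, so the hardware never computes exp() per element. Table size, input range, and quantization step are assumptions.

```python
import math

TABLE_BITS = 8                       # 256-entry lookup ROM
LO, HI = -8.0, 8.0                   # input range covered by the table
STEP = (HI - LO) / (1 << TABLE_BITS)
SIGMOID_LUT = [1.0 / (1.0 + math.exp(-(LO + i * STEP)))
               for i in range(1 << TABLE_BITS)]

def sigmoid_lut(x):
    """Clamp, quantize to a table index, and read the ROM."""
    x = min(max(x, LO), HI - STEP)
    return SIGMOID_LUT[int((x - LO) / STEP)]

for x in (-4.0, 0.0, 4.0):
    exact = 1.0 / (1.0 + math.exp(-x))
    print(f"x={x:+.1f}  lut={sigmoid_lut(x):.4f}  exact={exact:.4f}")
```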

Multi-focus image fusion method based on memristor pulse coupling neural network

The invention discloses a multi-focus image fusion method based on a memristive pulse-coupled neural network. In the existing pulse-coupled neural network (PCNN), the adaptive variation of the linking coefficient is realized entirely in computer simulation, so the PCNN model may run slowly in operation; moreover, the adaptive variation equation of the parameters (linking coefficients) is set entirely by hand, so the feasibility of adaptive parameter variation in actual operation cannot be guaranteed. The method comprises the following steps: designing an adaptive memristive PCNN model based on a compact memristor crossbar-array circuit structure; designing a flexible and general mapping function; and applying the adaptive memristive PCNN model to multi-focus image fusion, obtaining better fusion results by further improving the network structure of the fusion model (from single channel to multiple channels). The method not only provides a brand-new solution to the inherent parameter-estimation problem in the many parameter-controlled neural network models, but also facilitates hardware implementation of the neural network.
Owner:STATE GRID BEIJING ELECTRIC POWER +2
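
The central idea — deriving the PCNN linking coefficient from a memristor state through a mapping function instead of setting it by hand — can be sketched as follows. The memristor model, the mapping function, and every constant here are illustrative assumptions, not the patent's circuit.

```python
import numpy as np

def memristor_state(flux, k=0.5):
    """Toy flux-controlled memristance, normalized to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-k * flux))

def mapping_function(m, beta_min=0.1, beta_max=1.0):
    """Map normalized memristance onto a usable linking-coefficient range."""
    return beta_min + (beta_max - beta_min) * m

def pcnn_step(S, F, L, U, Y, theta, flux):
    beta = mapping_function(memristor_state(flux))   # adaptive, not hand-set
    F = 0.7 * F + S                                  # feeding input
    L = 0.6 * L + np.convolve(Y, [0.5, 0, 0.5], mode="same")  # linking input
    U = F * (1.0 + beta * L)                         # internal activity
    Y = (U > theta).astype(float)                    # pulse output
    theta = 0.8 * theta + 20.0 * Y                   # dynamic threshold
    return F, L, U, Y, theta

S = np.array([0.2, 0.9, 0.4])                        # toy stimulus (image row)
F = np.zeros_like(S); L = np.zeros_like(S)
U = np.zeros_like(S); Y = np.zeros_like(S)
theta = np.ones_like(S)
flux = 0.0
for _ in range(5):
    F, L, U, Y, theta = pcnn_step(S, F, L, U, Y, theta, flux)
    flux += Y.sum()                                  # firing activity drives the memristor
print(Y)
```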

Lightweight neural network hardware accelerator based on depthwise separable convolution

Active CN113033794A · Benefits: reduces off-chip memory accesses; suited to power-constrained applications · Classifications: neural architectures; physical realisation · Concepts: activation function; multiplexer
The invention discloses a lightweight neural network hardware accelerator based on depthwise separable convolution. The accelerator comprises an A-way parallel array of K×K channel (depthwise) convolution processing units, an A-way parallel array of 1×1 pointwise convolution processing units, and an on-chip memory for buffering the convolutional neural network and the input and output feature maps. The convolutional neural network is a lightweight network obtained by compressing MobileNet with quantization-aware training. The A-way K×K depthwise array and the A-way 1×1 pointwise array are deployed in a pixel-level pipeline. Each K×K depthwise processing unit comprises a multiplier, an adder, and an activation function calculation unit; each 1×1 pointwise processing unit comprises a multiplexer, a two-stage adder tree, and an accumulator. The invention eliminates the high-energy off-chip memory accesses incurred during inference by prior-art accelerators, saving resources and improving processing performance.
Owner:CHONGQING UNIV
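
A sketch of the dataflow this accelerator parallelizes, assuming a K×K depthwise stage feeding a 1×1 pointwise stage pixel by pixel so the intermediate feature map never leaves the chip; plain Python loops stand in for the A parallel processing-element arrays.

```python
import numpy as np

def depthwise_pixel(ifmap, dw_k, y, x):
    """K x K convolution at one output pixel, one result per channel."""
    K = dw_k.shape[1]
    patch = ifmap[:, y:y + K, x:x + K]            # C x K x K window
    return np.einsum("ckl,ckl->c", patch, dw_k)   # per-channel MAC

def pointwise_pixel(dw_out, pw_k):
    """1 x 1 convolution: adder-tree style dot product per output channel."""
    return pw_k @ dw_out                          # (M x C) @ (C,) -> (M,)

C, M, K, H, W = 3, 8, 3, 6, 6
ifmap = np.random.rand(C, H, W)
dw_k = np.random.rand(C, K, K)
pw_k = np.random.rand(M, C)

ofmap = np.zeros((M, H - K + 1, W - K + 1))
for y in range(H - K + 1):          # pixel-level pipeline: each depthwise
    for x in range(W - K + 1):      # result is consumed immediately
        ofmap[:, y, x] = pointwise_pixel(depthwise_pixel(ifmap, dw_k, y, x), pw_k)
print(ofmap.shape)                  # (8, 4, 4)
```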

Hardware Trojan horse detection method and device

The invention provides a hardware Trojan horse detection method and device. The method comprises the following steps: sampling path-delay information of positive and negative sample chips and constructing a path-delay dataset for them; sampling path-delay information of the chip under test and constructing its path-delay dataset; feeding the positive/negative sample dataset into a neural network for training, yielding a neural-network hardware Trojan detector; feeding the dataset of the chip under test into the detector and extracting spatial-structure features of its path-delay data; treating those spatial features as a time sequence, feeding the sequence into a neural network, and extracting temporal features of the path-delay data; and feeding the temporal features into a classifier network, which judges whether the chip under test is infected with a hardware Trojan and, if so, which one.
Owner:XIDIAN UNIV
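
A structural sketch of that pipeline: path-delay samples, a spatial feature extractor, a sequence model over those features, and a classifier deciding clean vs. which Trojan family. The 1-D convolution stand-in, layer sizes, and random weights are placeholders, not the trained detector.

```python
import numpy as np
rng = np.random.default_rng(0)

def spatial_features(delays, filters):
    """1-D convolutions over per-path delay measurements."""
    K = filters.shape[1]
    T = len(delays) - K + 1
    windows = np.stack([delays[t:t + K] for t in range(T)])  # T x K
    return np.tanh(windows @ filters.T)                      # T x F

def rnn_last_state(seq, Wx, Wh):
    """Plain tanh RNN; the final hidden state is the temporal feature."""
    h = np.zeros(Wh.shape[0])
    for x_t in seq:
        h = np.tanh(Wx @ x_t + Wh @ h)
    return h

def classify(h, Wc):
    return int(np.argmax(Wc @ h))   # 0 = clean, 1..N = Trojan family

delays = rng.normal(10.0, 0.3, size=64)          # one chip's path delays
F, Hdim, classes = 4, 8, 3
feats = spatial_features(delays, rng.normal(size=(F, 5)))
h = rnn_last_state(feats, rng.normal(size=(Hdim, F)), rng.normal(size=(Hdim, Hdim)))
print("predicted class:", classify(h, rng.normal(size=(classes, Hdim))))
```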

Power-of-2 deep neural network quantization method based on knowledge distillation training

The invention relates to the technical field of neural networks, and discloses a power-of-2 deep neural network quantization method based on knowledge distillation training. The method uses a teacher model and a student model whose weights are quantized to powers of 2: the teacher network is a model with more parameters and higher precision, while the student model generally has fewer parameters and lower precision than the teacher. By quantizing each neural network weight to a power of 2, the error relative to the full-precision weight is kept small, and the accuracy loss of the trained network relative to the unquantized network is effectively reduced. Moreover, multiplication by a power-of-2 weight can be completed with a bit shift, which offers clear computational advantages for deployment on hardware and improves the computational efficiency of neural network hardware; training the quantized network with the knowledge distillation algorithm further improves the accuracy of the quantized network.
Owner:HEFEI UNIV OF TECH
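
The key mechanism — quantizing each weight to a signed power of 2 so that multiplication becomes a bit shift — can be sketched as follows. The rounding rule and exponent range are assumptions, and the distillation training loop itself is omitted.

```python
import math

def quantize_pow2(w, e_min=-6, e_max=0):
    """Round |w| to the nearest power of two (in log domain), keep the sign."""
    if w == 0.0:
        return 0.0, None
    e = int(round(math.log2(abs(w))))
    e = max(e_min, min(e_max, e))
    return math.copysign(2.0 ** e, w), e

def shift_multiply(x_int, e, sign):
    """x * (sign * 2^e) as a pure shift on an integer activation."""
    return sign * (x_int << e if e >= 0 else x_int >> -e)

w = 0.3                                   # full-precision weight
q, e = quantize_pow2(w)                   # -> 0.25, e = -2
print(q, e, "error:", abs(w - q))
print(shift_multiply(64, e, +1))          # 64 * 0.25 = 16 via right shift
```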