40 results about How to "Reduce data bandwidth" patented technology

Neural network processor based on efficient multiplex data stream, and design method

Active · CN107085562A · Reduced on-chip data bandwidth · Improve data sharing rate · Energy efficient computing · Architecture with single central processing unit · Hardware acceleration · Data sharing
The invention proposes a neural network processor based on an efficient multiplexed data stream, and a corresponding design method, in the technical field of hardware acceleration of neural network model computation. The processor comprises at least one storage unit, at least one calculation unit and a control unit. The at least one storage unit stores operation instructions and arithmetic data; the at least one calculation unit executes the neural network calculations; and the control unit, connected with the storage and calculation units, obtains the operation instructions stored in the at least one storage unit and parses them to control the at least one calculation unit. The arithmetic data takes the form of the efficient multiplexed data stream. Because the processor uses this multiplexed data stream, weights and data only need to be loaded into one row of the calculation unit array at a time, which lowers the on-chip data bandwidth, raises the data sharing rate and improves energy efficiency.
Owner:INST OF COMPUTING TECHNOLOGY - CHINESE ACAD OF SCI
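To make the bandwidth claim concrete, below is a minimal, hypothetical sketch; the matrix sizes and function names are assumptions for illustration, not the patented design. It only counts on-chip buffer reads for a matrix multiply, comparing a naive scheme in which every multiply fetches its own operands against a row-wise reuse scheme in which each weight row and each activation column is loaded once and shared across the calculation unit array.

```python
# Hypothetical bandwidth sketch, not the patented architecture: count how many
# words must be read from the on-chip buffer for an (M x K) by (K x N) matrix multiply.
def reads_without_reuse(M, K, N):
    # every multiply-accumulate fetches its own weight word and its own activation word
    return 2 * M * K * N

def reads_with_row_reuse(M, K, N):
    # each weight word is loaded into one row of the array once and reused across
    # all N output columns; each activation word is loaded once and reused down
    # all M output rows
    return M * K + K * N

if __name__ == "__main__":
    M, K, N = 64, 64, 64
    print("reads without reuse:", reads_without_reuse(M, K, N))    # 524288
    print("reads with row reuse:", reads_with_row_reuse(M, K, N))  # 8192
```

Under these assumptions the buffer traffic drops from O(M·K·N) to O(M·K + K·N), which is the kind of on-chip bandwidth reduction the abstract attributes to the multiplexed data stream.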

Nonvolatile storage equipment and method of carrying out data manipulation therethrough

The invention provides nonvolatile storage equipment and a method of performing data manipulation through it. The method comprises the following steps: receiving, from a host, specific operating commands, i.e. data manipulation commands other than read/write commands; converting the specific operating commands into read/write commands for the nonvolatile storage device, and reading data from or writing data to the device accordingly; and performing the operations corresponding to the specific commands on the read or written data stream, then returning the result data to the host. Because operations other than plain reads and writes are carried out by the storage equipment itself, the data processing method and flow are changed: the processing capability of the controller chip inside the equipment completes the command on the data content, which effectively saves data bandwidth, raises the utilization of the port bandwidth, and reduces the load on the host CPU. The specification takes flash memory as the embodiment of the nonvolatile storage device, but the scope of the invention is not limited to flash.
Owner:RAMAXEL TECH SHENZHEN
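As an illustration of the idea, i.e. the controller runs a non-read/write command locally and returns only the result, here is a schematic Python sketch; the class, command names and block layout are invented for this example and do not come from the patent.

```python
# Schematic sketch of in-storage command processing; all names are hypothetical.
import hashlib

class InStorageController:
    def __init__(self, flash):
        self.flash = flash                       # block number -> block contents (bytes)

    def _read_blocks(self, blocks):
        # internal read path reused by the "special" commands
        for b in blocks:
            yield self.flash[b]

    def handle(self, command, blocks):
        if command == "READ":
            # ordinary read: the full data crosses the host interface
            return b"".join(self._read_blocks(blocks))
        if command == "CHECKSUM":                # a host-defined special command
            h = hashlib.sha256()
            for chunk in self._read_blocks(blocks):
                h.update(chunk)
            return h.digest()                    # 32 bytes instead of the full data
        raise ValueError(f"unsupported command {command}")

# usage: checksumming three 4 KiB blocks returns 32 bytes to the host instead of 12 KiB
ctrl = InStorageController({i: bytes(4096) for i in range(3)})
digest = ctrl.handle("CHECKSUM", [0, 1, 2])
print(len(digest))
```

In this toy case the host receives a 32-byte digest instead of 12 KiB of raw data, which is the data bandwidth saving the abstract describes.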

Sensor sensing image recognition method, system and device and storage medium

The invention discloses a sensor sensing image recognition method, system, device and storage medium. The method comprises the steps of: obtaining first array data of a sensor sensing image, and adding Gaussian noise to the first array data to obtain second array data; applying a functional conversion to the second array data to obtain a pixel capacitor array, and inputting the pixel capacitor array into a pre-trained convolutional neural network; performing convolution on the pixel capacitor array through a convolution layer of the network to obtain a sensor charge quantity array; scaling the sensor charge quantity array to obtain third array data, and extracting data feature information from the third array data through an activation function of the network; and inputting the data feature information into a fully connected layer of the network, computing an output vector, and determining the recognition result of the sensor sensing image from the output vector. The method improves the recognition efficiency of sensor sensing images and can be widely applied in the field of artificial intelligence.
Owner:SUN YAT SEN UNIV
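The processing chain in the abstract (noise injection, conversion to a pixel capacitor array, convolution to a charge array, scaling, activation, fully connected layer) can be sketched end to end. The numpy version below is an assumed illustration only: the conversion function, layer sizes and weights are placeholders, not values from the patent.

```python
# Rough end-to-end sketch of the described pipeline; all constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def recognize(sensor_image, conv_kernel, fc_weights, noise_std=0.01):
    # 1) first array data + Gaussian noise -> second array data
    noisy = sensor_image + rng.normal(0.0, noise_std, sensor_image.shape)
    # 2) functional conversion to a "pixel capacitor" array (placeholder mapping)
    capacitance = 1.0 / (1.0 + np.exp(-noisy))
    # 3) convolution layer -> "sensor charge quantity" array (valid, single channel)
    kh, kw = conv_kernel.shape
    H, W = capacitance.shape
    charge = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(charge.shape[0]):
        for j in range(charge.shape[1]):
            charge[i, j] = np.sum(capacitance[i:i + kh, j:j + kw] * conv_kernel)
    # 4) scaling + activation (ReLU here) -> data feature information
    features = np.maximum(charge / charge.max(), 0.0).ravel()
    # 5) fully connected layer -> output vector; argmax gives the recognition result
    logits = fc_weights @ features
    return int(np.argmax(logits))

# usage with random weights (a real deployment would use the pre-trained network)
img = rng.random((8, 8))
kernel = rng.random((3, 3))
fc = rng.random((10, 6 * 6))
print(recognize(img, kernel, fc))
```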

Data processing device and related product

The invention relates to a data processing device and a related product. The device comprises a master processing unit and at least one slave processing unit group, and each slave processing unit group comprises a shared slave processing unit and at least one parallel slave processing unit. The master processing unit sends shared data to the shared slave processing unit and sends parallel computing data to the parallel slave processing units. The shared slave processing unit transmits the shared data to each parallel slave processing unit, while the parallel slave processing units each receive their parallel computing data within two clock periods and forward it step by step to the other parallel slave processing units. By splitting the machine learning data into shared data and parallel computing data, and completing the data interaction between the master processing unit and the slave processing units within two clock periods, the device reduces the data bandwidth occupied by that interaction and further reduces the hardware overhead a machine learning chip spends on transmission.
Owner:SHANGHAI CAMBRICON INFORMATION TECH CO LTD
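A minimal sketch of the claimed saving, under assumed block sizes and group size (none of which come from the patent): it compares the number of words the master must drive onto the master-slave link when the machine learning data is split into one shared block plus per-unit parallel blocks, versus sending a full copy of everything to every parallel unit.

```python
# Hypothetical link-traffic sketch; unit count and block sizes are made up.
def link_words_with_split(shared_len, private_len, num_units):
    # the shared block is sent once to the shared slave unit, which rebroadcasts it
    # on-chip; each parallel block is sent once and then forwarded between units
    return shared_len + num_units * private_len

def link_words_without_split(shared_len, private_len, num_units):
    # baseline: every unit receives its own full copy of shared + parallel data
    return num_units * (shared_len + private_len)

if __name__ == "__main__":
    shared, private, units = 1024, 256, 8
    print("with split:   ", link_words_with_split(shared, private, units))     # 3072
    print("without split:", link_words_without_split(shared, private, units))  # 10240
```

The shared block crosses the master-slave link once instead of once per unit, which is where the reduced interaction bandwidth in the abstract comes from.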

Three-dimensional model loading method and device and electronic equipment

The invention discloses a three-dimensional model loading method. The method comprises the following steps: using the identity recognition information of each part, establishing a correspondence between each part in a generated three-dimensional model part package and the position information of that part in a three-dimensional model description file; and restoring each part in the part package to its corresponding position in the three-dimensional model to obtain a displayable original three-dimensional model. The generated part package contains the different parts disassembled from the three-dimensional model together with the identity recognition information set for each part, while the description file contains the identity recognition information of each part and the position information of the corresponding part in the model. The disclosed loading method reduces the size of the three-dimensional model part package and improves model loading efficiency. The invention further discloses a three-dimensional model loading device, a three-dimensional model splitting method and device, a three-dimensional model splitting and loading method and system, and electronic equipment.
Owner:CHANGJIANG ENGINEERING SUPERVISION CONSULTING CO LTD (HUBEI)
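The loading step reduces to joining two small structures: the part package (identity → geometry) and the description file (identity → position). The sketch below shows that join; the field names and dictionary layout are assumptions for illustration, not the format defined in the patent.

```python
# Hypothetical sketch of reassembling a model from a part package and a description file.
from dataclasses import dataclass

@dataclass
class PlacedPart:
    part_id: str
    geometry: bytes      # e.g. a mesh blob taken from the part package
    position: tuple      # (x, y, z) taken from the description file

def load_model(part_package, description):
    """part_package: {part_id: geometry}; description: [{"id": ..., "position": ...}]."""
    model = []
    for entry in description:
        pid = entry["id"]
        if pid not in part_package:
            raise KeyError(f"part {pid} listed in the description file is missing")
        # place the disassembled part back at its recorded position in the model
        model.append(PlacedPart(pid, part_package[pid], tuple(entry["position"])))
    return model

# usage with two toy parts
package = {"wheel": b"<mesh>", "door": b"<mesh>"}
desc = [{"id": "wheel", "position": (0.0, 0.0, 0.0)},
        {"id": "door",  "position": (1.2, 0.0, 0.5)}]
print(load_model(package, desc))
```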
