84 results about "Web accelerator" patented technology

A web accelerator is a proxy server that reduces website access time. It can be a self-contained hardware appliance or installable software. Web accelerators may be installed on the client computer or mobile device, on ISP servers, on the server computer or network, or on a combination of these. Accelerating delivery through compression requires some type of host-based server to collect, compress, and then deliver content to a client computer.
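
As a software illustration of acceleration through compression, the sketch below shows a toy proxy that fetches a resource on the client's behalf and gzip-compresses it before delivery. This is a minimal sketch only; the ORIGIN address, the port, and the handler name are illustrative assumptions, not any particular product's design.

```python
# Minimal sketch of a compressing web-accelerator proxy; ORIGIN and
# the port are hypothetical, not taken from any product above.
import gzip
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ORIGIN = "http://example.com"  # hypothetical origin server

class CompressingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the resource from the origin on the client's behalf.
        with urllib.request.urlopen(ORIGIN + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        # Compress before delivery if the client accepts gzip.
        if "gzip" in self.headers.get("Accept-Encoding", ""):
            body = gzip.compress(body)
            self.send_header("Content-Encoding", "gzip")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CompressingProxy).serve_forever()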

Media Data Processing Using Distinct Elements for Streaming and Control Processes

A hardware-accelerated streaming arrangement, especially for RTP (Real-time Transport Protocol) streaming, directs data packets for one or more streams between sources and destinations, using addressing and handling criteria that are determined in part from control packets and are used to alter or supplement headers associated with the stream content packets. A programmed control processor responds to control packets in RTCP or RTSP format, whereby the handling or direction of RTP packets can be changed. The control processor stores data for the new addressing and handling criteria in a memory accessible to a hardware accelerator, which is arranged to store the criteria for multiple ongoing streams at the same time. When a content packet is received, its addressing and handling criteria are found in the memory and applied, by action of the network accelerator, without the need for computation by the control processor. The network accelerator operates repetitively to continue applying the criteria to the packets of a given stream as the stream continues, and can operate as a high data rate pipeline. The processor can be programmed to revise the criteria in a versatile manner, including using extensive computation if necessary, because it is relieved of the repetitive processing duties accomplished by the network accelerator.
Owner:AGERE SYST INC
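
The split between a slow control path and a fast data path can be pictured in software as a flow table: control logic writes per-stream handling entries, and the fast path applies them to each arriving packet by lookup alone. The sketch below assumes a stream key, Packet fields, and rewrite rules that are purely illustrative, not the patent's actual structures.

```python
# Sketch of a control-path/fast-path split, assuming a per-stream flow
# table keyed by (src, dst, port); all field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Packet:
    stream_key: tuple        # e.g. (src_ip, dst_ip, dst_port)
    header: dict = field(default_factory=dict)
    payload: bytes = b""

flow_table = {}  # stream_key -> handling criteria, written by control path

def control_path(ctrl_msg: dict) -> None:
    # Slow path: parse an RTCP/RTSP-style control message and install
    # (or revise) the addressing/handling criteria for that stream.
    flow_table[ctrl_msg["stream_key"]] = {
        "new_dst": ctrl_msg["redirect_to"],
        "dscp": ctrl_msg.get("dscp", 0),
    }

def fast_path(pkt: Packet) -> Packet:
    # Fast path: a pure table lookup per packet, no control computation,
    # mirroring what the hardware accelerator repeats for every packet.
    rule = flow_table.get(pkt.stream_key)
    if rule:
        pkt.header["dst"] = rule["new_dst"]   # rewrite destination
        pkt.header["dscp"] = rule["dscp"]     # supplement QoS marking
    return pkt

control_path({"stream_key": ("10.0.0.1", "10.0.0.2", 5004),
              "redirect_to": "10.0.0.9", "dscp": 46})
print(fast_path(Packet(("10.0.0.1", "10.0.0.2", 5004))).header)
```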

Neural network accelerator suitable for edge equipment and neural network acceleration calculation method

The invention discloses a neural network accelerator suitable for edge equipment and a neural network acceleration calculation method, and relates to the technical field of neural networks. The accelerator comprises a configuration unit, a data buffer unit, a processing matrix component (PMs), and a post-processing unit. A main controller writes the feature parameters of different types of network layers into a register of the configuration unit to control the mapping of different network-layer operation logic onto the processing matrix hardware, thereby multiplexing the processing matrix component: the operation acceleration of different types of network layers in the neural network is realized with a single hardware circuit, without additional hardware resources. The different types of network layers include a standard convolution layer and a pooling layer. The multiplexing accelerator provided by the invention not only realizes the same function but also offers lower hardware resource consumption, a higher hardware multiplexing rate, lower power consumption, high concurrency, and strong structural expansibility.
Owner:SHANGHAI STARFIVE TECH CO LTD
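
The core idea, one datapath reused for both convolution and pooling by writing a per-layer descriptor into a configuration register, can be sketched as follows. The descriptor fields and the reduction choices are illustrative assumptions, not the patent's register layout.

```python
# Sketch of a processing matrix reused for convolution and pooling,
# selected by a layer descriptor; field names are assumptions.
import numpy as np

def processing_matrix(tile: np.ndarray, cfg: dict) -> float:
    """Apply one window's worth of work, as selected by cfg."""
    if cfg["layer_type"] == "conv":
        # Multiply-accumulate against the configured kernel weights.
        return float(np.sum(tile * cfg["weights"]))
    if cfg["layer_type"] == "pool":
        # Same window walk, but a max reduction instead of a MAC.
        return float(np.max(tile))
    raise ValueError(cfg["layer_type"])

# The "main controller" writes one descriptor per layer type:
conv_cfg = {"layer_type": "conv", "weights": np.ones((3, 3)) / 9}
pool_cfg = {"layer_type": "pool"}

window = np.arange(9, dtype=float).reshape(3, 3)
print(processing_matrix(window, conv_cfg))  # convolution result
print(processing_matrix(window, pool_cfg))  # pooling result
```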

Neural network accelerator

Active CN111931918A · Benefits: improved ratio of computing power to power consumption; data reuse · Classifications: Neural architectures; Physical realisation · Concepts: massively parallel; external data
The invention relates to the technical field of artificial intelligence and provides a neural network accelerator. The accelerator adopts a simplified design that takes a layer as its basic unit: all modules can run in parallel, so that all layers of the neural network run simultaneously on hardware, greatly improving processing speed. A main control module partitions and distributes the operation tasks and corresponding data of all layers, and each module then executes the operation tasks of its corresponding layer. Moreover, a three-dimensional multiply-accumulate array is realized in the accelerator architecture, so multiply-accumulate operations run in parallel on a large scale and convolution efficiency is effectively improved. Data multiplexing and efficient circulation of convolution operation parameters and data among the modules are realized through a first cache region and a second cache region; data multiplexing between layers of the neural network reduces external data access, so power consumption falls while data processing efficiency rises, improving the accelerator's ratio of computing power to power consumption.
Owner:SHENZHEN MINIEYE INNOVATION TECH CO LTD
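
The two ideas carrying this abstract, a three-dimensional multiply-accumulate array and a pair of cache regions that hand data between layer modules, are sketched below. The array shapes and the two-buffer handoff are illustrative assumptions, not the patented architecture.

```python
# Sketch: one 3-D multiply-accumulate step plus a two-buffer handoff
# between layer modules; shapes and names are assumptions.
import numpy as np

def mac3d(act: np.ndarray, w: np.ndarray) -> float:
    # One output point: a 3-D window (H x W x C) multiplied against a
    # 3-D kernel and reduced to a single sum, which is the work a 3-D
    # MAC array performs in parallel in hardware.
    return float(np.sum(act * w))

act = np.random.rand(3, 3, 8)  # activation window: 3x3 spatial, 8 channels
w = np.random.rand(3, 3, 8)    # one 3-D convolution kernel
print(mac3d(act, w))

# Two cache regions used ping-pong style: while one layer module reads
# buffers[ping], the previous module writes into buffers[1 - ping].
buffers = [np.zeros(16), np.zeros(16)]
ping = 0
for step in range(4):
    buffers[1 - ping][:] = step        # producer fills the idle buffer
    consumer_in = buffers[ping]        # consumer drains the other one
    ping = 1 - ping                    # swap roles for the next step
```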

Method for quickly deploying convolutional neural network on FPGA (Field Programmable Gate Array) based on Pytorch framework

Active CN111104124A · Benefits: makes up for model files not including network topology information; versatility · Classifications: Neural architectures; Physical realisation · Concepts: theoretical computer science; reconfigurable computing
The invention discloses a method for quickly deploying a convolutional neural network on an FPGA (Field Programmable Gate Array) based on the Pytorch framework. The method comprises establishing a fast model-mapping mechanism, constructing a reconfigurable computing unit, and carrying out an adaptive processing flow based on rule mapping. When a convolutional neural network is defined under the Pytorch framework, the fast model-mapping mechanism is established through the construction of naming rules; an optimization strategy is computed under hardware resource constraints, a template library based on the hardware optimization strategy is established, and a reconfigurable computing unit is created at the FPGA end. Finally, the complex network model file is decomposed at the FPGA end in the adaptive rule-mapping processing flow, the network is abstracted into a directed acyclic graph, and a neural network accelerator is generated, realizing an integrated flow from the Pytorch model file to FPGA deployment. The directed acyclic graph of the network can be established through the fast model-mapping mechanism, the deployment process requires only hardware design variables as input, and the method is simple and highly universal.
Owner:BEIHANG UNIV
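
One way to read the step from named Pytorch modules to a directed acyclic graph is to walk the model's leaf modules and record producer-consumer edges with forward hooks. The sketch below is a minimal illustration under that assumption, not the patent's actual naming-rule mechanism.

```python
# Sketch: recover a DAG of layers from a PyTorch model by tracing
# tensor flow with forward hooks; an illustrative reading only.
import torch
import torch.nn as nn

model = nn.Sequential(            # toy CNN standing in for the model file
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1),
)

edges, last_producer = [], {}

def make_hook(name):
    def hook(module, inputs, output):
        # Each input tensor's recorded producer becomes a predecessor.
        for t in inputs:
            src = last_producer.get(id(t))
            if src is not None:
                edges.append((src, name))
        last_producer[id(output)] = name
    return hook

for name, module in model.named_modules():
    if len(list(module.children())) == 0:  # leaf layers only
        module.register_forward_hook(make_hook(name))

model(torch.randn(1, 3, 16, 16))
print(edges)  # e.g. [('0', '1'), ('1', '2'), ('2', '3')]
```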

Accelerator of pulmonary-nodule detection neural-network and control method thereof

The invention provides an accelerator for a pulmonary-nodule detection neural network and a control method thereof. Input data enter a FIFO module through a control module, then enter a convolution module to complete the multiply-accumulate operations of the convolutions, and then enter an accumulation module where intermediate values are accumulated. The accumulated intermediate values enter an activation function module for activation function operations, then a down-sampling module for mean pooling, and then a rasterization module for rasterization; the output is converted to a one-dimensional vector and returned to the control module. The control module calls and configures the FIFO, convolution, accumulation, activation function, down-sampling, and rasterization modules to control iteration, and transmits the iteration results to a fully connected layer for multiply-accumulate operations and probability comparison. By optimizing the iteration control logic for the pulmonary-nodule detection network in the control module, the accelerator reduces resource consumption and increases the data throughput rate.
Owner:SHANGHAI JIAO TONG UNIV
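
The dataflow reads as a fixed chain of stages driven by a control module. A minimal software sketch of that chain follows; the stage bodies (ReLU, 3x3 kernel, stride-2 pooling) are illustrative stand-ins, not the patent's parameters.

```python
# Sketch of the fixed chain: conv -> activate -> mean-pool -> rasterize,
# driven by a control function; stage bodies are illustrative.
import numpy as np

def convolve(x, k):
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)  # MAC per window
    return out

def activate(x):
    return np.maximum(x, 0.0)        # ReLU as the activation stand-in

def mean_pool(x, s=2):
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h*s, :w*s].reshape(h, s, w, s).mean(axis=(1, 3))

def rasterize(x):
    return x.ravel()                 # flatten to a one-dimensional vector

def control_module(image, kernel):
    # Drives the chain in order, mirroring the accelerator's dataflow.
    fm = convolve(image, kernel)     # convolution module (MAC)
    fm = activate(fm)                # activation function module
    fm = mean_pool(fm)               # down-sampling module (mean pooling)
    return rasterize(fm)             # rasterization module -> 1-D vector

vec = control_module(np.random.rand(8, 8), np.ones((3, 3)) / 9)
print(vec.shape)  # (9,) for an 8x8 input, 3x3 kernel, pool stride 2
```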

Convolutional network accelerator, configuration method and computer readable storage medium

The invention belongs to the technical field of hardware acceleration of convolutional networks, and discloses a convolutional network accelerator, a configuration method, and a computer-readable storage medium. The method comprises: judging, by means of a mark, which layer of the whole network model the currently executed forward network layer belongs to; obtaining the configuration parameters of that layer and loading its feature map and weight parameters from DDR memory according to those parameters; meanwhile, the acceleration kernel of the convolution layer configures its degree of parallelism according to the obtained configuration parameters. Because the network layer structure is changed through configuration parameters, only one layer structure is needed when the network is deployed on an FPGA, achieving flexible configurability while saving and fully utilizing the FPGA's on-chip resources. A method of splicing multiple RAMs into an overall cache region improves the data input/output bandwidth, and ping-pong operation is adopted so that feature map and weight parameter loading is pipelined with accelerator operation.
Owner:HUAZHONG UNIV OF SCI & TECH
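
The control scheme can be sketched as a single convolution engine re-run once per layer from a table of configuration parameters, with ping-pong buffers prefetching the next layer's parameters while the current layer computes. The field names and the layer table below are illustrative assumptions, not the patented configuration format.

```python
# Sketch: one engine reconfigured per layer from a parameter table,
# with ping-pong buffers overlapping loads and compute; names assumed.

LAYER_CONFIGS = [
    {"in_ch": 3,  "out_ch": 16, "kernel": 3, "parallelism": 4},
    {"in_ch": 16, "out_ch": 32, "kernel": 3, "parallelism": 8},
    {"in_ch": 32, "out_ch": 64, "kernel": 3, "parallelism": 16},
]

def load_from_ddr(cfg):
    # Stand-in for a DDR burst read of this layer's weights/feature map.
    return {"weights": f"w[{cfg['in_ch']}x{cfg['out_ch']}]"}

def run_engine(cfg, data):
    # Stand-in for the acceleration kernel, parallelism set per layer.
    print(f"mark={cfg} data={data}")

buffers = [load_from_ddr(LAYER_CONFIGS[0]), None]  # ping-pong regions
for mark, cfg in enumerate(LAYER_CONFIGS):
    ping = mark % 2
    if mark + 1 < len(LAYER_CONFIGS):
        # Prefetch the next layer's parameters into the idle buffer
        # while the engine works on the current layer.
        buffers[1 - ping] = load_from_ddr(LAYER_CONFIGS[mark + 1])
    run_engine(cfg, buffers[ping])
```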