36 results about How to "Reduce cache requirements" patented technology

Hardware structure for realizing forward calculation of convolutional neural network

The present application discloses a hardware structure for realizing forward calculation of a convolutional neural network. The hardware structure comprises: a data off-chip caching module, used for caching the parameter data of each externally input to-be-processed picture, where the data waits to be read by a multi-level pipeline acceleration module; the multi-level pipeline acceleration module, connected to the data off-chip caching module and used for reading parameters from it so as to realize the core calculation of the convolutional neural network; a parameter reading arbitration module, connected to the multi-level pipeline acceleration module and used for processing the multiple parameter reading requests issued by that module, so that it can obtain the parameters it requires; and a parameter off-chip caching module, connected to the parameter reading arbitration module and used for storing the parameters required for forward calculation of the convolutional neural network. By implementing the algorithm with a hardware architecture organized as a parallel pipeline, the present application achieves higher resource utilization and higher performance.
Owner:智擎信息系统(上海)有限公司
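
A minimal behavioral sketch of the four modules named in the abstract, not the patented RTL: an off-chip data cache, a multi-stage pipeline, a parameter-read arbiter, and an off-chip parameter cache. All class, method, and key names below are hypothetical.

```python
from collections import deque


class OffChipCache:
    """Simple key/value store standing in for an off-chip cache module."""

    def __init__(self, contents=None):
        self.contents = dict(contents or {})

    def read(self, key):
        return self.contents[key]


class ParameterReadArbiter:
    """Serializes parameter-read requests issued by several pipeline stages."""

    def __init__(self, param_cache):
        self.param_cache = param_cache
        self.requests = deque()

    def request(self, stage_id, key):
        self.requests.append((stage_id, key))

    def serve(self):
        # Service one outstanding request at a time, in arrival order.
        responses = {}
        while self.requests:
            stage_id, key = self.requests.popleft()
            responses.setdefault(stage_id, []).append(self.param_cache.read(key))
        return responses


class MultiStagePipeline:
    """Each stage asks the arbiter for its weights, then processes its data."""

    def __init__(self, num_stages, arbiter, data_cache):
        self.num_stages = num_stages
        self.arbiter = arbiter
        self.data_cache = data_cache

    def run(self, picture_id):
        tile = self.data_cache.read(picture_id)
        for stage in range(self.num_stages):
            self.arbiter.request(stage, f"conv{stage}.weights")
        weights = self.arbiter.serve()
        # Placeholder "core calculation": scale the tile by each stage's weight.
        for stage in range(self.num_stages):
            tile = [x * weights[stage][0] for x in tile]
        return tile


if __name__ == "__main__":
    data_cache = OffChipCache({"img0": [1.0, 2.0, 3.0]})
    param_cache = OffChipCache({f"conv{i}.weights": 0.5 for i in range(3)})
    pipeline = MultiStagePipeline(3, ParameterReadArbiter(param_cache), data_cache)
    print(pipeline.run("img0"))
```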

Method and device for configuring machine-type communication terminal capability

The invention discloses a method and a device for configuring the capability of a machine-type communication terminal, which are used to promote the evolution of machine-to-machine (M2M) services from the Global System for Mobile Communications (GSM) to the Long Term Evolution (LTE) system. Capability parameters of machine type communication (MTC) terminal equipment are configured according to the bandwidth and the highest modulation scheme supported by the MTC terminal equipment, and the buffer capacity of the MTC terminal equipment is configured according to a total buffered channel bit number parameter. By limiting the receiving bandwidth and transmitting bandwidth of the MTC terminal equipment, its maximum modulation order and the maximum number of hybrid automatic repeat request (HARQ) processes it supports are limited, which greatly lowers the total number of buffered channel bits and therefore the buffering requirement of the MTC terminal equipment. The cost of the MTC terminal equipment is thereby greatly lowered, promoting the progress of MTC services from GSM to the LTE system.
Owner:ZTE CORP
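
Illustrative arithmetic only: the abstract gives no concrete parameter values, so the resource-block, modulation and HARQ numbers below are placeholder assumptions showing how capping them shrinks the soft-buffer requirement.

```python
def mtc_capability(bandwidth_prbs, max_modulation_order, max_harq_processes):
    """Derive a rough soft-buffer requirement for an MTC terminal.

    bandwidth_prbs        -- supported bandwidth in resource blocks (assumed)
    max_modulation_order  -- bits per symbol, e.g. 2 for QPSK (assumed)
    max_harq_processes    -- capped HARQ process count (assumed)
    """
    bits_per_prb_pair = 12 * 14 * max_modulation_order   # subcarriers x symbols x bits/symbol
    peak_tb_bits = bandwidth_prbs * bits_per_prb_pair    # rough per-TTI ceiling
    total_soft_buffer_bits = peak_tb_bits * max_harq_processes
    return {
        "peak_tb_bits": peak_tb_bits,
        "total_soft_buffer_bits": total_soft_buffer_bits,
    }


# Limiting bandwidth, modulation order and HARQ processes shrinks the buffer:
print(mtc_capability(bandwidth_prbs=6, max_modulation_order=2, max_harq_processes=4))
print(mtc_capability(bandwidth_prbs=100, max_modulation_order=6, max_harq_processes=8))
```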

Neural network acceleration device and method and communication equipment

The invention provides a neural network acceleration device and method and communication equipment, and belongs to the field of data processing. In the method, a main memory receives and stores the feature map data and weight data of a to-be-processed image; a main controller generates configuration information and an operation instruction according to the structure parameters of the neural network; a data caching module comprises a feature data caching unit for caching feature line data extracted from the feature map data and a convolution kernel caching unit for caching convolution kernel data extracted from the weight data; a data controller adjusts the data path according to the configuration information and the instruction information and controls the data stream extracted by a data extractor to be loaded into the corresponding neural network calculation unit; the neural network calculation unit completes the convolution of at least one convolution kernel with the feature map data and the accumulation of multiple convolution results in at least one cycle, thereby realizing circuit reconfiguration and data multiplexing; and an accumulator accumulates the convolution results and outputs the output feature map data corresponding to each convolution kernel.
Owner:绍兴埃瓦科技有限公司 +1
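
A software analogue of the dataflow above, under assumed shapes and names: feature data and kernels are staged in small caches, routed to a compute unit, and the per-kernel results accumulated. It is a sketch of the idea, not the patented circuit.

```python
import numpy as np


def conv_unit(feature_rows, kernel):
    """One 'neural network calculation unit': valid 2-D convolution of a row band."""
    kh, kw = kernel.shape
    h, w = feature_rows.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(feature_rows[i:i + kh, j:j + kw] * kernel)
    return out


def accelerate(feature_map, kernels):
    """Data controller: feed each cached kernel to the compute unit, then accumulate."""
    feature_cache = feature_map          # stands in for the feature-line caching unit
    kernel_cache = list(kernels)         # stands in for the convolution-kernel caching unit
    acc = None                           # accumulator over per-kernel results
    for k in kernel_cache:
        partial = conv_unit(feature_cache, k)
        acc = partial if acc is None else acc + partial
    return acc


if __name__ == "__main__":
    fmap = np.arange(25, dtype=float).reshape(5, 5)
    ks = [np.ones((3, 3)), np.eye(3)]
    print(accelerate(fmap, ks))
```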

System and method for non-stop routing during switching of main and standby disks of control plane

CN110958176A (Active); classifications: Data switching networks, Computer hardware, Engineering
The invention discloses a system and method for non-stop routing during the switching of the main and standby disks of a control plane, and relates to the technical field of communication. The method comprises the following steps: when operating a socket, the main disk stores the related socket information in a first socket proxy module and then synchronizes the stored socket information to a second socket proxy module; after receiving the socket information, the second socket proxy module stores it and calls the TCP/IP module interface of the standby disk to obtain the corresponding socket; when the main disk and the standby disk are switched, the second socket proxy module traverses the locally stored socket information and, in place of the standby disk protocol module, sends keep-alive messages for the protocols corresponding to the sockets; and after the switching between the main disk and the standby disk is completed, the socket proxy module stops sending the keep-alive messages on behalf of the standby disk protocol module. With this method, the connection state between the standby disk and the protocol modules of neighboring equipment nodes can be kept uninterrupted while the standby disk is being promoted, and non-stop routing is realized.
Owner:FENGHUO COMM SCI & TECH CO LTD
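
A toy model of the socket-proxy synchronization described above, with all names and message formats hypothetical: the active side mirrors socket state to the standby side, which emits keep-alives on the mirrored sockets during a switchover.

```python
class SocketProxy:
    def __init__(self, name):
        self.name = name
        self.sockets = {}          # socket id -> (peer, protocol)

    def register(self, sock_id, peer, protocol):
        self.sockets[sock_id] = (peer, protocol)

    def sync_to(self, other):
        # Active -> standby replication of the stored socket information.
        other.sockets = dict(self.sockets)

    def send_keepalives(self):
        # Standby stands in for the protocol module until switchover completes.
        for sock_id, (peer, protocol) in self.sockets.items():
            print(f"[{self.name}] keep-alive on {sock_id} ({protocol}) -> {peer}")


active_proxy = SocketProxy("main-disk")
standby_proxy = SocketProxy("standby-disk")

active_proxy.register("sock-1", "neighbor-A", "BGP")
active_proxy.register("sock-2", "neighbor-B", "LDP")
active_proxy.sync_to(standby_proxy)

# During the main/standby switchover the standby proxy keeps sessions alive...
standby_proxy.send_keepalives()
# ...and stops once the protocol module on the new main disk has taken over.
```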

Goods source display method and device, electronic equipment and storage medium

The invention provides a goods source display method and device, electronic equipment and a storage medium. The method comprises the steps of: obtaining the goods source information of a to-be-displayed goods source; determining a trained transaction prediction model according to the region cluster to which the region of the goods source information belongs; extracting first-level feature data from the to-be-displayed goods source information; inputting the first-level feature data into a trained second-level feature prediction model, the second-level feature prediction model being obtained by fusing a plurality of prediction models; obtaining the second-level feature data predicted by the second-level feature prediction model; inputting the first-level feature data and the second-level feature data of at least part of the to-be-displayed goods source information into the transaction prediction model; sorting the goods source information according to the prediction result of the transaction prediction model; and displaying the sorted goods source information. The goods source browsing time of the user is shortened and the order receiving efficiency is improved, thereby improving the overall freight transport efficiency of the platform.
Owner:江苏运满满信息科技有限公司
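
A schematic ranking flow for the abstract above: the fused second-level model is reduced to a simple average of stand-in predictors, and the transaction model is a toy scoring function; none of these are the trained models the patent describes.

```python
def second_level_features(first_level, ensemble):
    """Fuse several predictors into one second-level feature value."""
    preds = [model(first_level) for model in ensemble]
    return sum(preds) / len(preds)


def rank_goods_sources(sources, ensemble, transaction_model):
    scored = []
    for src in sources:
        f1 = src["first_level"]                       # extracted first-level features
        f2 = second_level_features(f1, ensemble)      # predicted second-level feature
        score = transaction_model(f1, f2)             # transaction likelihood
        scored.append((score, src["id"]))
    return [sid for _, sid in sorted(scored, reverse=True)]


# Hypothetical models and data:
ensemble = [lambda f: 0.8 * f["distance_km"], lambda f: 1.2 * f["distance_km"]]
transaction_model = lambda f1, f2: 1.0 / (1.0 + f1["distance_km"] + 0.1 * f2)
sources = [
    {"id": "cargo-1", "first_level": {"distance_km": 12.0}},
    {"id": "cargo-2", "first_level": {"distance_km": 3.0}},
]
print(rank_goods_sources(sources, ensemble, transaction_model))  # nearer cargo ranks first
```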

AI-based intelligent image preprocessing method and system

The invention discloses an AI-based intelligent image preprocessing method and system. The method comprises the steps of: obtaining sensing signals from an image sensor in batches, carrying out artificial intelligence processing, outputting the processing result to the image sensor for selective optimization, and enabling the image sensor to output the optimized sensing signal to an image signal processor for generating a digital image. With the invention, the AI processing mode and the final imaging strategy can be flexibly adjusted according to different applications and requirements; in addition, the cache requirement on the image sensor is low, the whole picture can be processed with only a small amount of cache, and hardware cost and power consumption are remarkably reduced. When AI processing is carried out on the input image, more pixel information in the vertical direction can be included where the scenario calls for it, so that the algorithm performance of the intelligent camera can be improved in application fields sensitive to vertical context information without increasing hardware cost. The method can be used for, but is not limited to, face detection, pedestrian detection, vehicle identification and the like.
Owner:淄博凝眸智能科技有限公司
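
A sketch of the batched, low-cache loop described above: the sensor output is read in small row bands, an AI stage produces feedback for the sensor, and the optimized signal is handed to the ISP. The band height and the feedback fields are assumptions for illustration.

```python
import numpy as np


def ai_stage(band):
    """Stand-in for the AI analysis; returns per-band exposure feedback."""
    mean = float(band.mean())
    return {"gain": 1.5 if mean < 64 else 1.0}


def preprocess(sensor_frame, band_rows=16):
    """Only `band_rows` lines are cached at a time instead of the whole frame."""
    optimized_bands = []
    for top in range(0, sensor_frame.shape[0], band_rows):
        band = sensor_frame[top:top + band_rows]          # small line buffer
        feedback = ai_stage(band)                         # AI result fed back to the sensor
        optimized_bands.append(band * feedback["gain"])   # sensor-side selective optimization
    return np.vstack(optimized_bands)                     # handed to the image signal processor


frame = np.random.randint(0, 255, size=(64, 64)).astype(float)
isp_input = preprocess(frame, band_rows=16)
print(isp_input.shape)
```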

A collaborative transmission method realized in a heterogeneous wireless network with cooperating relay nodes

The invention discloses a method for implementing heterogeneous, collaborative transmission in a heterogeneous wireless network by using cooperative relay nodes. One or more cooperative relay nodes are arranged, each of which is provided with two sets of antenna modules. A base station performs an optimized selection of a suitable set of cooperative relay nodes for each destination user terminal that needs relaying, and selects and determines an appropriate number of antennas for the uplink and the downlink of each terminal, making it convenient to obtain adaptive diversity and multiplexing gain. The transmit power of each antenna is set reasonably by comprehensively considering the transmission characteristics of the uplink and the downlink, thus ensuring a balance between the transmission performance of the first-hop and second-hop links and optimizing the overall performance of the system. Based on full consideration of existing wireless network architectures, the invention creatively uses the two sets of antennas of the relay nodes for the two hops of the link transmission to implement heterogeneous transmission, and therefore has the advantages of simple and convenient operation, flexibility, practicability, low investment, good effectiveness and good prospects for wide application.
Owner:BEIJING UNIV OF POSTS & TELECOMM
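
Simplified selection logic in the spirit of the abstract: pick a relay set per destination by channel quality, choose an antenna count per hop, and split transmit power so the two hops stay balanced. The thresholds and the power-split rule are assumptions, not the patented optimization.

```python
def select_relays(relays, destination, max_relays=2):
    """Rank candidate relays by the weaker of their two hop gains."""
    ranked = sorted(
        relays,
        key=lambda r: min(r["gain_to_bs"], r["gain_to"][destination]),
        reverse=True,
    )
    return ranked[:max_relays]


def configure_link(relay, destination, total_power=1.0):
    g1 = relay["gain_to_bs"]               # first-hop (base station -> relay) gain
    g2 = relay["gain_to"][destination]     # second-hop (relay -> terminal) gain
    p1 = total_power * g2 / (g1 + g2)      # weaker hop gets the larger power share
    antennas = 2 if min(g1, g2) > 0.5 else 1   # crude diversity/multiplexing choice
    return {"relay": relay["id"], "p_hop1": p1, "p_hop2": total_power - p1,
            "antennas_per_hop": antennas}


relays = [
    {"id": "rn-1", "gain_to_bs": 0.9, "gain_to": {"ue-7": 0.4}},
    {"id": "rn-2", "gain_to_bs": 0.6, "gain_to": {"ue-7": 0.7}},
]
for relay in select_relays(relays, "ue-7"):
    print(configure_link(relay, "ue-7"))
```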

Method, device, electronic device and storage medium for displaying a source of goods

The present invention provides a method, device, electronic device and storage medium for displaying a source of goods. The method includes: acquiring the source information of the goods source to be displayed; determining a trained transaction prediction model according to the region cluster to which the region of the goods source information belongs; extracting first-level feature data from the goods source information to be displayed; inputting the first-level feature data into a trained second-level feature prediction model, the second-level feature prediction model being obtained by fusing multiple prediction models; obtaining the second-level feature data predicted by the second-level feature prediction model; inputting the first-level feature data and second-level feature data of at least part of the goods source information to be displayed into the transaction prediction model; sorting the goods source information to be displayed according to the prediction result of the transaction prediction model; and displaying the sorted goods source information. The present invention reduces the user's goods source browsing time and improves the order receiving efficiency, further improving the overall freight efficiency of the platform.
Owner:江苏运满满信息科技有限公司
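
The one step of this listing not covered by the sketch under the earlier entry for the same invention is the model-selection step: a trained transaction model is chosen per region cluster. The cluster map and models below are hypothetical placeholders.

```python
REGION_TO_CLUSTER = {"Jiangsu": "east", "Zhejiang": "east", "Sichuan": "west"}

CLUSTER_MODELS = {
    "east": lambda features: 0.9 - 0.05 * features["distance_km"],
    "west": lambda features: 0.8 - 0.03 * features["distance_km"],
}


def transaction_model_for(region):
    """Pick the trained model for the cluster that the source's region belongs to."""
    cluster = REGION_TO_CLUSTER.get(region, "east")   # fall back to a default cluster
    return CLUSTER_MODELS[cluster]


model = transaction_model_for("Sichuan")
print(model({"distance_km": 5.0}))   # 0.8 - 0.15 = 0.65
```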