178 results about "Network processing unit" patented technology

Network processors are typically software-programmable devices with generic characteristics similar to the general-purpose central processing units commonly used in many different types of equipment and products.

Method and system for network processor scheduling based on service levels

Status: Inactive | Publication: US20020023168A1 | Classifications: Error prevention; Transmission systems | Concepts: Maximum burst size; Coming out
A system and method for moving information units from an output flow control toward a data transmission network in a prioritized sequence that accommodates several different levels of service. The invention includes a method and system for scheduling the egress of processed information units (frames) from a network processing unit based on a weighted fair queue, where a flow's position in the queue is adjusted after each service according to a weight factor and the frame length. Interaction between different calendar types provides minimum bandwidth, best effort bandwidth, weighted fair queuing service, best effort peak bandwidth, and maximum burst size specifications. Different combinations of these services can be used to create different QoS specifications. The "base" services offered to a customer in the example described in this patent application are minimum bandwidth, best effort, peak, and maximum burst size (MBS), which may be combined as desired. For example, a user could specify minimum bandwidth plus best-effort additional bandwidth, and the system would provide this by placing the flow queue in both the NLS and WFQ calendars. When a flow queue is in multiple calendars, the system includes tests to determine when it must come out.
Owner:IBM CORP
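The weighted-fair-queue step of the abstract above (queue position advanced after each service by frame length divided by weight) can be sketched roughly as follows. This is a minimal illustration with hypothetical names, not the patent's implementation; the calendar types (NLS, WFQ) and QoS combinations are omitted.

```python
import heapq

class WeightedFairQueue:
    """Minimal WFQ sketch: a flow's next-service position advances by
    frame_length / weight after each service, so higher-weight flows
    are serviced proportionally more often."""

    def __init__(self):
        self._heap = []           # (virtual_finish_time, seq, flow, frame_length)
        self._seq = 0             # tie-breaker for equal finish times
        self._virtual_time = 0.0  # virtual clock of the scheduler

    def enqueue(self, flow, frame_length, weight):
        # A newly queued frame finishes no earlier than now plus its
        # weighted service cost.
        finish = self._virtual_time + frame_length / weight
        heapq.heappush(self._heap, (finish, self._seq, flow, frame_length))
        self._seq += 1

    def dequeue(self):
        # Serve the frame with the smallest virtual finish time and
        # advance the virtual clock to that point.
        finish, _, flow, frame_length = heapq.heappop(self._heap)
        self._virtual_time = finish
        return flow, frame_length

# Example: with equal frame lengths, the weight-2 flow is served first
# because its virtual finish time (100 / 2 = 50) is earlier than 100 / 1.
wfq = WeightedFairQueue()
wfq.enqueue("A", 100, weight=2)
wfq.enqueue("B", 100, weight=1)
```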

Power consumption information acquisition system safety isolation gateway and application method thereof

Status: Inactive | Publication: CN106941494A | Effects: Guaranteed uptime; Effective safety protection measures | Classifications: Transmission | Concepts: Network processing unit; Computer terminal
The invention relates to a safety isolation gateway for a power consumption information acquisition system and a method of applying it. The safety isolation gateway comprises the following units: an internal network processing unit, which receives messages sent by the acquisition server, sends packaged pure application data to an isolation exchange unit, and receives data transmitted by an external network processing unit from the isolation exchange unit; the external network processing unit, which receives messages sent by the acquisition terminal, sends packaged pure application data to the isolation exchange unit, and receives data transmitted by the internal network processing unit from the isolation exchange unit; the isolation exchange unit, arranged between the internal and external network processing units, which stores the pure application data transmitted by both units, thereby realizing controlled exchange of pure application data between them; and a code processing unit, which performs code protocol inspection on the data processed by the isolation exchange unit in a flow-through mode and provides code examination and decryption services.
Owner:CHINA ELECTRIC POWER RES INST +1
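The store-and-forward role of the isolation exchange unit described above can be sketched as below. All class and method names are hypothetical; the sketch only illustrates the key constraint that nothing but stripped application payloads may cross between the internal and external sides.

```python
from queue import Queue

class IsolationExchangeUnit:
    """Hypothetical sketch of the isolation exchange unit: it stores and
    forwards only 'pure application data' (payload bytes with network
    protocol headers already stripped by the inner/outer units)."""

    def __init__(self):
        self._inner_to_outer = Queue()
        self._outer_to_inner = Queue()

    def push(self, payload, from_internal):
        # Refuse anything that is not a raw application payload,
        # modeling the controlled-exchange constraint.
        if not isinstance(payload, bytes):
            raise TypeError("only stripped application payloads may cross")
        q = self._inner_to_outer if from_internal else self._outer_to_inner
        q.put(payload)

    def pull(self, for_internal):
        # Each side can only read the data staged for it.
        q = self._outer_to_inner if for_internal else self._inner_to_outer
        return q.get_nowait()

# Example: the internal unit stages data; the external unit collects it.
iso = IsolationExchangeUnit()
iso.push(b"meter-reading", from_internal=True)
```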

Efficient conversion method and device for deep learning model

Status: Active | Publication: CN107480789A | Effects: Decreased structural correlation; Achieve early optimization | Classifications: Fuzzy logic based systems | Concepts: Algorithm; Network processing unit
An efficient conversion method for a deep learning model, provided by an embodiment of the invention, solves the technical problem of low development and operation efficiency for deep learning models. The method includes the following steps: building a data standardization framework corresponding to an NPU (Neural-network Processing Unit) model according to a general deep learning framework; using the data standardization framework to convert the parameters of a deep learning model into the standard parameters of the data standardization framework; and converting the standard parameters into the parameters of the NPU model. A unified data standardization framework is built for a specific processor according to the parameter structures of general deep learning frameworks, so that standard data can be formed from the parameters of a deep learning model produced by any general framework. As a result, the processor's data analysis depends far less on the structure of the deep learning model, and development of the processor's processing pipeline can proceed separately from development of the deep learning model. A corresponding efficient conversion device is also provided.
Owner:VIMICRO CORP
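The two-step conversion described above (framework parameters to a standard intermediate form, then to NPU parameters) might look roughly like this. The alias table, key names, and `npu.` prefix are all illustrative assumptions, not the patent's actual schema.

```python
def to_standard(framework_params):
    """Normalize framework-specific parameter names into an assumed
    standard schema (the alias table here is purely illustrative)."""
    aliases = {"kernel": "weight", "gamma": "scale"}
    return {aliases.get(name, name): value
            for name, value in framework_params.items()}

def to_npu(standard_params):
    """Map standard parameters onto a hypothetical NPU model layout;
    the 'npu.' prefix stands in for the real target naming."""
    return {f"npu.{name}": value
            for name, value in standard_params.items()}

# Example: a framework-specific parameter dict passes through the
# standard intermediate form before reaching the NPU layout.
npu_model = to_npu(to_standard({"kernel": [1.0, 2.0], "bias": [0.5]}))
```

Decoupling the two steps is what lets the processor side evolve independently: only `to_standard` needs to change when a new deep learning framework is added.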
