
38 results about "Performance per watt" patented technology

In computing, performance per watt is a measure of the energy efficiency of a particular computer architecture or computer hardware: it measures the rate of computation that a computer delivers for every watt of power consumed. This rate is typically measured by performance on the LINPACK benchmark when comparing computing systems.
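The definition above is just a ratio; a minimal sketch in Python (with illustrative, not measured, numbers) shows how two systems would be compared on it:

```python
# Minimal sketch: computing performance per watt from benchmark figures.
# The GFLOPS and wattage values below are illustrative, not measured.

def performance_per_watt(gflops: float, watts: float) -> float:
    """Rate of computation delivered per watt of power consumed."""
    if watts <= 0:
        raise ValueError("power draw must be positive")
    return gflops / watts

# Hypothetical LINPACK-style comparison of two systems.
system_a = performance_per_watt(gflops=500.0, watts=250.0)   # 2.0 GFLOPS/W
system_b = performance_per_watt(gflops=800.0, watts=500.0)   # 1.6 GFLOPS/W
print(system_a > system_b)  # True: A is the more energy-efficient system
```

Note that the faster system in absolute terms (B, at 800 GFLOPS) is the less efficient one here, which is exactly the distinction the metric exists to capture.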

Apparatus and method for realizing accelerator of sparse convolutional neural network

The invention provides an apparatus and method for realizing an accelerator for a sparse convolutional neural network. The apparatus comprises a convolution-and-pooling unit, a fully connected unit and a control unit. The method comprises the following steps: on the basis of control information, reading convolution parameter information, input data and intermediate computing data, and reading fully-connected-layer weight matrix position information; in accordance with the convolution parameter information, performing convolution and pooling on the input data for a first number of iterations; and then, on the basis of the fully-connected-layer weight matrix position information, performing fully connected computation for a second number of iterations. Each input datum is divided into a plurality of sub-blocks, and the convolution-and-pooling unit and the fully connected unit each operate on the sub-blocks in parallel. The apparatus uses a dedicated circuit, supports a convolutional neural network with sparse fully connected layers, and employs a parallel ping-pong buffer design and a pipeline design, effectively balancing I/O bandwidth against computing efficiency and achieving a better performance-to-power ratio.
Owner:XILINX INC
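The ping-pong (double) buffering the abstract mentions overlaps loading of the next sub-block with computation on the current one. A hedged software sketch of the idea, with hypothetical `load`/`compute` callables standing in for the hardware units:

```python
# Illustrative sketch of ping-pong buffering: while one buffer is being
# computed on, the other is filled with the next sub-block, overlapping
# I/O with computation. Function names here are assumptions, not the
# patent's circuit interfaces.

def ping_pong_process(sub_blocks, load, compute):
    """Process sub-blocks using two alternating buffers."""
    results = []
    buffers = [None, None]
    buffers[0] = load(sub_blocks[0])                 # prime the first buffer
    for i in range(len(sub_blocks)):
        cur, nxt = i % 2, (i + 1) % 2
        if i + 1 < len(sub_blocks):
            buffers[nxt] = load(sub_blocks[i + 1])   # prefetch next block
        results.append(compute(buffers[cur]))        # compute current block
    return results

# Toy usage: "load" copies a sub-block, "compute" sums it.
blocks = [[1, 2], [3, 4], [5, 6]]
print(ping_pong_process(blocks, load=list, compute=sum))  # [3, 7, 11]
```

In hardware the load and compute happen concurrently; the sequential sketch only shows which buffer plays which role on each step.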

Heterogeneous computing system and method based on CPU+GPU+FPGA architecture

Inactive · CN107273331A · Give full play to the advantages of management and control · Take full advantage of parallel processing · Architecture with single central processing unit · Energy efficient computing · FPGA architecture · Resource management
The invention provides a heterogeneous computing system based on a CPU+GPU+FPGA architecture. The system comprises a CPU host unit, one or more GPU heterogeneous acceleration units and one or more FPGA heterogeneous acceleration units, the CPU host unit being in communication connection with both. The CPU host unit manages resources and allocates processing tasks to the GPU and / or FPGA heterogeneous acceleration units; the GPU units carry out parallel processing of tasks from the CPU host unit; and the FPGA units carry out serial or parallel processing of such tasks. The system fully exploits the control advantages of the CPU, the parallel-processing advantages of the GPU, and the performance-to-power and flexible-configuration advantages of the FPGA, so it can adapt to different application scenarios and satisfy different kinds of task demands. The invention also provides a heterogeneous computing method based on the CPU+GPU+FPGA architecture.
Owner:SHANDONG CHAOYUE DATA CONTROL ELECTRONICS CO LTD
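The abstract implies a dispatch policy in which the CPU host routes each task to the accelerator class that suits it. A minimal sketch of such a policy, where the task attribute `parallel` and the routing rule are assumptions (the patent does not specify the allocation criterion):

```python
# Hedged sketch of host-side task dispatch in a CPU+GPU+FPGA system:
# the CPU host unit inspects a task and routes massively parallel work
# to a GPU unit and serial/reconfigurable work to an FPGA unit. The
# task schema and the routing rule are illustrative assumptions.

def dispatch(task: dict) -> str:
    """Return which accelerator class the host assigns a task to."""
    if task.get("parallel", False):
        return "GPU"   # data-parallel workloads exploit GPU throughput
    return "FPGA"      # serial or pipeline workloads exploit FPGA flexibility

tasks = [
    {"name": "matmul", "parallel": True},
    {"name": "protocol_parse", "parallel": False},
]
print([dispatch(t) for t in tasks])  # ['GPU', 'FPGA']
```

A production scheduler would also weigh queue depth, data-transfer cost and per-unit performance per watt, which this two-way rule omits.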

Device and method for neural network operation supporting floating point number with few digits

Active · CN107340993A · Small area overhead · Reduce area overhead and optimize hardware area power consumption · Digital data processing details · Physical realisation · Data operations · Computer module
The invention provides a device and method for executing artificial neural network forward operations. The device comprises a floating-point data statistics module, a floating-point data conversion unit and a floating-point data operation module. The statistics module performs statistical analysis on all the required types of data to obtain an exponent offset and an exponent bit length EL; the conversion unit converts the long-bit-width floating-point data type into a short-bit-width floating-point data type according to the exponent offset and the exponent length EL; and the operation module performs the artificial neural network forward operation on the short-bit-width floating-point data. By representing the data in multi-layer artificial neural network forward operations with short-bit-width floating-point numbers and using a matching floating-point operation module, forward operation of the artificial neural network with short floating-point numbers is realized, and the performance-to-power ratio of the hardware is greatly improved.
Owner:CAMBRICON TECH CO LTD
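The two-stage flow in the abstract (a statistics pass that derives the exponent offset and length EL, then a conversion to a narrow format) can be sketched in Python. The field widths, rounding and clamping choices below are illustrative assumptions, not the patent's encoding:

```python
import math

# Hedged sketch of long-to-short float conversion: a statistics pass
# finds the exponent offset and exponent bit length EL that cover the
# data, then each value is re-encoded with a narrow exponent and
# mantissa. Mantissa width and rounding are illustrative choices.

def exponent_stats(values):
    """Statistics pass: exponent offset and bit length EL covering the data."""
    exps = [math.frexp(v)[1] for v in values if v != 0.0]
    offset = min(exps)
    span = max(exps) - offset + 1
    el = max(1, span.bit_length())
    return offset, el

def to_short_float(v, offset, el, mantissa_bits=8):
    """Quantize v into narrow (exponent, mantissa) fields; return decoded value."""
    if v == 0.0:
        return 0.0
    m, e = math.frexp(v)  # v = m * 2**e with 0.5 <= |m| < 1
    e = min(max(e - offset, 0), (1 << el) - 1) + offset   # clamp exponent field
    q = round(abs(m) * (1 << mantissa_bits)) / (1 << mantissa_bits)
    return math.copysign(q * 2.0 ** e, v)

data = [0.15, -2.75, 0.03125]
off, el = exponent_stats(data)
approx = [to_short_float(v, off, el) for v in data]
print(all(abs(a - v) / abs(v) < 0.01 for a, v in zip(approx, data)))  # True
```

The point of the statistics pass is that neural-network activations cluster in a narrow exponent range, so a data-derived offset lets a few exponent bits cover the values that actually occur.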

Method and system for reducing soft error rate of processor

Active · CN103365731A · Reduce soft error rate · Maintain or improve performance per watt · Error detection/correction · Program segment · Learning methods
The invention discloses a method and system for reducing the soft error rate of a processor. The method comprises the following steps: constructing a prediction model, using a machine learning method, to predict at low cost the processor configuration that minimises the soft error rate; recognising program segments, by dividing a running program into a plurality of consecutive segments; obtaining statistical characteristics of each segment during a short period at the start of its execution; predicting the optimal configuration, by feeding the obtained statistics into the prediction model to obtain the predicted optimal processor configuration for that segment; and adjusting the processor component configuration according to the prediction, so that the soft error rate is reduced while performance per watt is maintained or improved. The method and system thus reduce the processor's soft error rate at low cost by dynamically adjusting the processor component configuration.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
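The predict-then-adjust loop above can be sketched end to end. The 1-nearest-neighbour "model", the feature names (IPC, cache miss rate) and the configuration knobs are stand-ins for the machine-learning model and components the patent leaves unspecified:

```python
# Hedged sketch of the flow: gather per-segment statistics early in the
# segment, feed them to a trained predictor, and apply the predicted
# processor configuration. The 1-NN predictor, features and config
# knobs below are illustrative assumptions.

TRAINING = [  # (features: ipc, cache_miss_rate) -> best configuration
    ((2.0, 0.01), {"rob_size": 64,  "issue_width": 2}),
    ((0.8, 0.20), {"rob_size": 128, "issue_width": 4}),
]

def predict_config(stats):
    """1-nearest-neighbour prediction of the configuration for a segment."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, config = min(TRAINING, key=lambda t: dist(t[0], stats))
    return config

segment_stats = (1.9, 0.02)            # statistics observed early in the segment
print(predict_config(segment_stats))   # {'rob_size': 64, 'issue_width': 2}
```

Any classifier could replace the nearest-neighbour lookup; the structural point is that the model is queried once per segment, so prediction cost stays far below the cost of exploring configurations online.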

Energy consumption-oriented cloud workflow scheduling optimization method

Inactive · CN105260005A · Implement energy consumption calculation method · Keep execution time efficient · Resource allocation · Power supply for data processing · Cloud workflow · Workflow scheduling
The invention discloses an energy-consumption-oriented cloud workflow scheduling optimization method. The method comprises the following steps: (1) establishing an energy-consumption-oriented cloud workflow process model and resource model; (2) calculating task priorities; (3) taking the task t with the highest priority out of the task set T, finding the virtual machine set VMt capable of executing t, and calculating, for each virtual machine in VMt, the energy consumed in assigning t to it and completing all assigned tasks; (4) finding the vm with minimal energy consumption: if only one vm has the minimal energy consumption, assigning t to that vm; if several vms tie, assigning t to the vm among them whose host has the highest performance per watt; then deleting t from T, and going to step (3) if T is not empty, otherwise to step (5); (5) outputting the workflow scheduling scheme. Because the scheduling optimization method takes the energy consumption factor into account, the energy consumed by hosts in processing tasks is effectively reduced while workflow execution-time efficiency is maintained.
Owner:NANJING XIYAN NETWORK INFORMATION TECH CO LTD
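Steps (3) and (4) of the method can be sketched as a single selection function: minimise energy first, then break ties by the host's performance per watt. The data structures and the energy/perf-per-watt callables below are assumptions standing in for the patent's models:

```python
# Hedged sketch of steps (3)-(4): estimate the energy of running task t
# on each eligible VM, then choose the minimum-energy VM, breaking ties
# by the hosting machine's performance per watt. The toy energy and
# perf/W tables are illustrative.

def pick_vm(task, vms, energy_of, perf_per_watt_of):
    """Choose the VM with minimal energy; ties go to the best perf/W host."""
    best_energy = min(energy_of(task, vm) for vm in vms)
    candidates = [vm for vm in vms if energy_of(task, vm) == best_energy]
    return max(candidates, key=perf_per_watt_of)

# Toy scenario: vm_a and vm_b tie on energy; vm_b sits on the better host.
energy = {("t1", "vm_a"): 5.0, ("t1", "vm_b"): 5.0, ("t1", "vm_c"): 7.0}
ppw = {"vm_a": 1.2, "vm_b": 2.0, "vm_c": 3.0}
chosen = pick_vm("t1", ["vm_a", "vm_b", "vm_c"],
                 energy_of=lambda t, v: energy[(t, v)],
                 perf_per_watt_of=lambda v: ppw[v])
print(chosen)  # vm_b
```

Note that vm_c's host has the best performance per watt overall, yet it is never considered: the tie-break only applies within the minimum-energy set, which is the ordering the abstract specifies.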

Multi-spectral skin color calibration and power consumption optimization device and working method of PPG technology

Active · CN108606801A · Improve battery life · Overcoming the problem of not being able to collect smoothly · Diagnostic recording/measuring · Sensors · Band-pass filter · Skin color
The invention discloses a multi-spectral skin-color calibration and power-consumption optimization device based on PPG technology. The device comprises a detection device body containing an LED, a photodiode (PD) and a processor; the LED and the PD are disposed on the same side of the device body, facing the target to be detected; the processor controls the LED through an LED timing controller and an LED driver, and the PD communicates with the processor through a transconductance amplifier, a band-pass filter and an analog-to-digital converter. Compared with traditional methods, the working method of the device achieves two main aims: first, two-color light is used to calibrate for skin color, with longer-wavelength light such as infrared overcoming the problem that signals from dark skin cannot be collected reliably, so the device adapts to more user groups and the user experience is significantly improved; second, a method of dynamically adjusting the PPG detection parameters is proposed, so that the working response state is entered as quickly as possible and data is collected at the best performance-to-power ratio, significantly prolonging the runtime of the detection device.
Owner:FUJIAN SHOUZHONGAN INTELLIGENT TECH CO LTD
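The "dynamically adjusting PPG detection parameters" idea amounts to spending only as much LED power as the signal requires. A hedged sketch, where the signal model, thresholds and current steps are illustrative rather than the patent's values:

```python
# Hedged sketch of dynamic PPG parameter adjustment: step the LED drive
# current up only until the photodiode signal amplitude is usable, so
# data is collected at the best performance-to-power ratio. The target
# amplitude, step size and current limits are illustrative assumptions.

def calibrate_led(read_signal, target=0.6, start_ma=5, step_ma=5, max_ma=50):
    """Return the lowest LED current (mA) that yields a usable PPG signal."""
    current = start_ma
    while current <= max_ma:
        if read_signal(current) >= target:
            return current          # lowest current meeting the target
        current += step_ma
    return max_ma                   # fall back to maximum drive

# Toy sensor model: signal amplitude grows linearly with drive current.
print(calibrate_led(lambda ma: ma / 40.0))  # 25
```

For darker skin the same loop simply converges at a higher current (or the firmware switches to the longer-wavelength LED first), which is how one mechanism serves both of the abstract's aims.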

Multi-instruction out-of-order transmitting method based on instruction withering and processor

Active · CN111538534A · Solve the problem of not being able to increase the number of entries in the launch queue · Solve the problem of increasing latency · Concurrent instruction execution · Energy efficient computing · Engineering · Low delay
The invention discloses a multi-instruction out-of-order issue method based on instruction withering, and a processor, belonging to the field of processor design. The redundant arbitration structure of a traditional issue architecture is abandoned; an instruction withering circuit is added, and an instruction age array is adopted to record how long each instruction has been stored in the CPU. In addition, a wake-up state bit is added, and instructions exceeding the withering threshold are stored in a settling pond from which the CPU can issue them directly; circuit structures such as the instruction request circuit, the instruction distribution circuit and the wake-up circuit are improved, effectively improving the timing of the critical path for multi-instruction issue in the processor. When instructions are woken up, delayed wake-up is applied to instructions with a short execution period and early wake-up to instructions with a long execution period, ensuring that instructions can execute back to back. The method meets the requirements of modern superscalar out-of-order processors for a high performance-to-power ratio, low latency and high IPC, and solves the prior-art problem that the number of entries in a processor's issue queue cannot keep growing without latency also growing.
Owner:JIANGNAN UNIV
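The age-array-and-settling-pond mechanism can be sketched as a per-cycle update: age every queue entry, then migrate entries past the withering threshold into the pond from which issue happens without a full arbitration pass. The data structures and threshold value are illustrative assumptions:

```python
# Hedged sketch of "instruction withering": each issue-queue entry
# carries an age counter; entries whose age reaches the withering
# threshold move to a settling pond and can be issued directly, without
# an age-arbitration pass over the whole queue. Structures are assumed.

WITHER_THRESHOLD = 3  # illustrative value, not the patent's

def tick(issue_queue, settling_pond):
    """One cycle: age all entries and migrate withered ones to the pond."""
    for entry in issue_queue:
        entry["age"] += 1
    withered = [e for e in issue_queue if e["age"] >= WITHER_THRESHOLD]
    issue_queue[:] = [e for e in issue_queue if e["age"] < WITHER_THRESHOLD]
    settling_pond.extend(withered)   # pond entries are issue-ready, oldest first

queue = [{"op": "add", "age": 2}, {"op": "mul", "age": 0}]
pond = []
tick(queue, pond)
print([e["op"] for e in pond], [e["op"] for e in queue])  # ['add'] ['mul']
```

The design intuition is that oldest-first priority only matters for long-resident instructions; handling those in a small pond keeps the wide selection logic off the critical path, which is what lets the queue grow without growing latency.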

An Energy-Consumption-Oriented Cloud Workflow Scheduling Optimization Method

Inactive · CN105260005B · Implement energy consumption calculation method · Keep execution time efficient · Resource allocation · Power supply for data processing · Parallel computing · Cloud workflow
The invention discloses an energy-consumption-oriented cloud workflow scheduling optimization method. The method comprises the following steps: (1) establishing an energy-consumption-oriented cloud workflow process model and resource model; (2) calculating task priorities; (3) taking the task t with the highest priority out of the task set T, finding the virtual machine set VMt capable of executing t, and calculating, for each virtual machine in VMt, the energy consumed in assigning t to it and completing all assigned tasks; (4) finding the vm with minimal energy consumption: if only one vm has the minimal energy consumption, assigning t to that vm; if several vms tie, assigning t to the vm among them whose host has the highest performance per watt; then deleting t from T, and going to step (3) if T is not empty, otherwise to step (5); (5) outputting the workflow scheduling scheme. Because the scheduling optimization method takes the energy consumption factor into account, the energy consumed by hosts in processing tasks is effectively reduced while workflow execution-time efficiency is maintained.
Owner:NANJING XIYAN NETWORK INFORMATION TECH CO LTD