315 results for "Resource constraints" patented technology

The resource constraint definition refers to the limitations on the inputs available to complete a particular job: primarily people's time, equipment, and supplies. Every project you accept will require some combination of time and resources.

Iterative repair optimization with particular application to scheduling for integrated capacity and inventory planning

A schedule for a complex activity is obtained by a scheduling system using a method of constraint-based iterative repair. A predetermined initial schedule is iteratively repaired, with repairs being made during each iteration only to portions of the schedule that produce a constraint violation, until an acceptable schedule is obtained. Since repairs are made only to the portions of the schedule that violate constraints, rather than to the entire schedule, schedule perturbations are minimized, thereby reducing problems with the dynamic performance of the scheduling system and minimizing disruption to the smooth operation of the activity. All constraints on the scheduling activity can be evaluated simultaneously to produce a solution that is near optimal with respect to all constraints. In particular, consumable resource constraints can be evaluated simultaneously with other constraints such as, for example, reusable resource constraints, temporal constraints, state constraints, milestone constraints and preemptive constraints. The scheduling system of the invention is much quicker than previous scheduling systems that use, for example, constructive scheduling methods. The system of the invention can also be easily modified to add, delete or modify constraints. Because of the minimization of schedule perturbation, coupling of all constraints, speed of operation, and ease of modification, the scheduling system of the invention is particularly useful for scheduling applications that require frequent and rapid rescheduling.
Owner:ORACLE INT CORP
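
To make the repair loop concrete, here is a minimal Python sketch of constraint-based iterative repair over a single consumable resource. The task representation, the per-time-step capacity constraint and the "shift the smallest contributor" repair move are all illustrative assumptions rather than the patent's actual data structures; the point it shows is that each iteration touches only the portion of the schedule that violates a constraint, leaving the rest untouched.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    start: int          # scheduled start time step
    duration: int       # number of time steps the task occupies
    resource_use: int   # units of the shared resource it consumes

@dataclass
class Schedule:
    tasks: list
    capacity: int       # resource units available per time step (assumed)

def violated_constraints(sched):
    """Return (time, overload) pairs where resource demand exceeds capacity."""
    horizon = max((t.start + t.duration for t in sched.tasks), default=0)
    violations = []
    for time in range(horizon):
        load = sum(t.resource_use for t in sched.tasks
                   if t.start <= time < t.start + t.duration)
        if load > sched.capacity:
            violations.append((time, load - sched.capacity))
    return violations

def repair(sched, violation):
    """Repair only the violated portion: shift one contributing task later."""
    time, _ = violation
    offenders = [t for t in sched.tasks
                 if t.start <= time < t.start + t.duration]
    # Move the smallest contributor just past the conflicting time step.
    mover = min(offenders, key=lambda t: t.resource_use)
    mover.start = time + 1

def iterative_repair(sched, max_iters=100):
    """Repeatedly repair only violated portions until the schedule is clean."""
    for _ in range(max_iters):
        violations = violated_constraints(sched)
        if not violations:
            break
        # Only the violating portion is touched; the rest of the schedule
        # is left as-is, which keeps perturbation small.
        repair(sched, violations[0])
    return sched

# Example: three tasks compete for a capacity-10 resource.
sched = Schedule(tasks=[Task("a", 0, 3, 6), Task("b", 0, 2, 6), Task("c", 1, 2, 3)],
                 capacity=10)
iterative_repair(sched)
print([(t.name, t.start) for t in sched.tasks])
```

Because only violating tasks are moved, a schedule that was mostly feasible stays mostly unchanged, which is the perturbation-minimizing behaviour the abstract emphasizes.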

Automated design method, device and optimization method for a neural network processor

The invention discloses an automated design method, device and optimization method for a neural network processor. The method comprises the following steps: neural network model topology configuration files and hardware resource constraint files are obtained, wherein the hardware resource constraint files specify target circuit area consumption, target circuit power consumption and target circuit working frequency; a neural network processor hardware architecture is generated according to the neural network model topology configuration files and the hardware resource constraint files, and hardware architecture description files are generated; according to the neural network model topology, the hardware resource constraint files and the hardware architecture description files, the modes of data scheduling, storage and computation are optimized, and corresponding control description files are generated; according to the hardware architecture description files and the control description files, cell libraries that meet the design requirements are found in the constructed reusable neural network cell libraries, the corresponding control logic and hardware circuit description language are generated, and the hardware circuit description language is transformed into a hardware circuit.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
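
A rough sketch of how such a constraint-driven design flow could be staged is given below. Only the sequence of artifacts (architecture description, control description, cell-library mapping, hardware description) follows the abstract; every field name, sizing heuristic and the cell-library format is invented for illustration.

```python
def generate_architecture(topology, constraints):
    """Derive a hardware architecture description within the resource budget."""
    # Assumed heuristic: size the processing-element array to the power budget.
    pe_count = max(1, int(constraints["power_mw"] // 10))
    return {"pe_count": pe_count, "buffer_kb": 64}

def generate_control(topology, architecture):
    """Pick data scheduling / storage / computation modes and emit control info."""
    tile = 8 if architecture["pe_count"] >= 8 else 4
    return {"tiling": tile, "dataflow": "weight_stationary"}

def map_to_cells(architecture, control, cell_library):
    """Select reusable cells that satisfy the design and emit HDL-like text."""
    wanted = {"mac", "buffer", "ctrl"}
    cells = [c for c in cell_library if c["type"] in wanted]
    return "\n".join(
        f"// instantiate {c['name']} x{architecture['pe_count']}"
        if c["type"] == "mac" else f"// instantiate {c['name']}"
        for c in cells)

def design_flow(topology, constraints, cell_library):
    """Chain the stages named in the abstract: architecture -> control -> cells."""
    architecture = generate_architecture(topology, constraints)
    control = generate_control(topology, architecture)
    return map_to_cells(architecture, control, cell_library)

# Example inputs (all values invented for illustration).
topology = [{"layer": "conv", "out_channels": 32}, {"layer": "fc", "out": 10}]
constraints = {"area_mm2": 2.0, "power_mw": 120, "freq_mhz": 200}
cell_library = [{"name": "mac16", "type": "mac"},
                {"name": "sram_bank", "type": "buffer"},
                {"name": "fsm_ctrl", "type": "ctrl"}]
print(design_flow(topology, constraints, cell_library))
```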

Blocked convolution optimization method and device for a convolutional neural network

The invention relates to the field of deep neural networks and provides a blocked convolution optimization method and device for a convolutional neural network, so as to address the bottleneck of convolution operations in neural network hardware processing systems. The optimization method comprises the following steps: a convolution layer to be blocked is selected, and the upper limit of the block size is determined; according to this upper limit, the block number and block size of the input feature map are determined; based on the block number, the block size, the convolution kernel size, the input feature map size and the boundary filling size of the input feature map, the block boundary filling size of each block feature map is calculated; and based on the block number, the block size and the block boundary filling size, a convolution based on block boundary filling is built to replace the original convolution. The resource constraints of running the convolutional neural network on an embedded hardware platform are greatly alleviated, the burst length of memory reads and writes is maximized, throughput is improved, latency is reduced, and efficiency is improved.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
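
The block-count and boundary-filling calculation can be sketched as follows for a square, stride-1 layer. The ceil-division tiling, the kernel // 2 halo and the example sizes are assumptions; the abstract only specifies which quantities (block number, block size, kernel size, feature-map size, boundary filling) enter the calculation.

```python
import math

def plan_blocked_conv(in_size, kernel, pad, max_block):
    """Split a square input feature map into row blocks for tiled convolution.

    in_size:   input feature map height/width
    kernel:    convolution kernel height/width
    pad:       boundary filling size of the original convolution
    max_block: upper limit on the block size (e.g. from on-chip buffer size)
    """
    # Number of blocks along one dimension, and the resulting block size.
    n_blocks = math.ceil(in_size / max_block)
    block = math.ceil(in_size / n_blocks)

    halo = kernel // 2  # overlap each interior block edge needs (stride 1 assumed)
    plans = []
    for i in range(n_blocks):
        start = i * block
        end = min(start + block, in_size)
        # Edges on the feature-map boundary keep the original filling;
        # interior edges use halo rows from the neighbouring block.
        first_pad = pad if i == 0 else halo
        last_pad = pad if i == n_blocks - 1 else halo
        plans.append({"rows": (start, end), "pad": (first_pad, last_pad)})
    return plans

# Example: 224x224 input, 3x3 kernel, filling 1, blocks of at most 64 rows.
for plan in plan_blocked_conv(224, 3, 1, 64):
    print(plan)
```

Interior block edges borrow halo rows from neighbouring blocks, while edges on the feature-map boundary keep the original filling, so stitching the per-block outputs reproduces the result of the original convolution.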