
270 results for "Vector processor" patented technology

In computing, a vector processor or array processor is a central processing unit (CPU) whose instruction set contains instructions that operate on one-dimensional arrays of data called vectors, in contrast to a scalar processor, whose instructions operate on single data items. Vector processors can greatly improve performance on certain workloads, notably numerical simulation and similar tasks. Vector machines appeared in the early 1970s and dominated supercomputer design from the 1970s into the 1990s, notably in the various Cray platforms. The rapid fall in the price-to-performance ratio of conventional microprocessor designs led to the vector supercomputer's demise in the late 1990s.
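Purely as an illustration of that definition (not taken from any of the patents listed below), the contrast between scalar-style and vector-style execution can be sketched in NumPy, where a single operation is applied to whole one-dimensional arrays:

```python
import numpy as np

# Scalar-style processing: each loop iteration handles a single data item.
def scalar_add(a, b):
    out = [0.0] * len(a)
    for i in range(len(a)):
        out[i] = a[i] + b[i]
    return out

# Vector-style processing: one operation applied to whole one-dimensional arrays,
# which NumPy dispatches to optimized (often SIMD/vector) machine code.
def vector_add(a, b):
    return np.asarray(a) + np.asarray(b)

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
print(scalar_add(a, b))   # [11.0, 22.0, 33.0, 44.0]
print(vector_add(a, b))   # [11. 22. 33. 44.]
```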

Digital camera system containing a VLIW vector processor

A digital camera has a sensor for sensing an image, a processor for modifying the sensed image in accordance with instructions input into the camera, and an output for outputting the modified image. The processor includes a series of processing elements arranged around a central crossbar switch. Each processing element includes an Arithmetic Logic Unit (ALU) acting under the control of a writeable microcode store, together with internal input and output FIFOs for storing the pixel data it processes; the processor as a whole is connected to read and write FIFOs for moving image pixel data in and out. The processing elements can be arranged in a ring, with each element also connected separately to its nearest neighbours. The ALU receives a series of inputs interconnected via an internal crossbar switch to a series of core processing units within the ALU, and includes a number of internal registers for the storage of temporary data. The core processing units can include at least one of a multiplier, an adder and a barrel shifter. The processing elements are further connected to a common data bus for transferring pixel data to them, and the data bus is connected to a data cache that acts as an intermediate cache between the processing elements and a memory store holding the images.
Owner:GOOGLE LLC
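The entry above describes a hardware architecture rather than an algorithm, but a toy software model may help visualize the claimed dataflow. The sketch below is only an illustration: the ProcessingElement class, the "+16" stand-in for a microcode operation, the ring size and the pixel values are all hypothetical and are not taken from the patent.

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Callable, Deque

# Toy model of one processing element: an input FIFO feeding an ALU whose
# behaviour is selected by a stand-in for the writeable microcode store.
@dataclass
class ProcessingElement:
    microcode: Callable[[int], int]
    in_fifo: Deque[int] = field(default_factory=deque)
    out_fifo: Deque[int] = field(default_factory=deque)

    def step(self) -> None:
        if self.in_fifo:                          # process one pixel per step
            pixel = self.in_fifo.popleft()
            self.out_fifo.append(self.microcode(pixel))

# Four elements arranged in a ring; each element forwards its result to its neighbour.
ring = [ProcessingElement(lambda p: min(255, p + 16)) for _ in range(4)]
ring[0].in_fifo.extend([10, 200, 250])            # pixel data arriving from the shared data bus

for _ in range(3):
    for i, pe in enumerate(ring):
        pe.step()
        if pe.out_fifo:                           # nearest-neighbour forwarding around the ring
            ring[(i + 1) % len(ring)].in_fifo.append(pe.out_fifo.popleft())

print(list(ring[0].in_fifo))                      # pixel values after circulating the ring
```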

Vector processor-oriented vectorization realization method for two-dimensional matrix convolution

The invention discloses a vector processor-oriented vectorization realization method for two-dimensional matrix convolution. The method comprises the steps of: S1, moving the convolution matrix A and the convolution kernel matrix B to a vector storage unit and a scalar storage unit, respectively, through a DMA controller; S2, multiplying a row of elements of matrix A, element by element, by the corresponding elements obtained by broadcasting one element of matrix B, and accumulating the results; S3, extracting the first K-1 elements of that row of matrix A into the vector processing unit through a shuffle instruction, multiplying them element by element by the second element of the convolution kernel matrix B, currently extracted and broadcast to the vector processing unit, and accumulating the results; S4, judging whether the calculation for that row of elements is finished; and S5, pointing the data addresses of the two matrices to the next data row, finishing the calculation of the first row of elements of the result matrix C, and completing the calculation of the whole matrix C by looping. The method has the advantages that the principle is simple, the operation is convenient, the parallelism of the algorithm can be greatly improved, and the calculation efficiency is improved.
Owner:NAT UNIV OF DEFENSE TECH
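A rough software analogue of the row-wise broadcast-and-accumulate scheme in this abstract can be sketched in NumPy. This is only an interpretation: the shuffle instruction is emulated with array slicing, the convolution is written as a valid-mode 2-D correlation, and the matrix sizes are made up for the example.

```python
import numpy as np

def conv2d_rowwise(A, B):
    """Valid-mode 2-D correlation of image A with kernel B, organised the way the
    abstract describes: one image row at a time, one broadcast kernel element at a
    time, with shifted ("shuffled") slices multiplied and accumulated."""
    H, W = A.shape
    K, L = B.shape                                   # kernel height and width
    out = np.zeros((H - K + 1, W - L + 1))
    for i in range(out.shape[0]):                    # each output row of C
        acc = np.zeros(out.shape[1])
        for r in range(K):                           # advance the image-row pointer (cf. step S5)
            row = A[i + r]
            for c in range(L):                       # broadcast one kernel element (cf. steps S2/S3)
                acc += B[r, c] * row[c:c + out.shape[1]]   # slicing emulates the shuffle
        out[i] = acc
    return out

A = np.arange(25, dtype=float).reshape(5, 5)
B = np.array([[1.0, 0.0, -1.0],
              [2.0, 0.0, -2.0],
              [1.0, 0.0, -1.0]])
print(conv2d_rowwise(A, B))
# Matches a direct reference such as scipy.signal.correlate2d(A, B, mode="valid")
```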

Open-circuit fault diagnosis method for drive system of double-winding fault-tolerant permanent-magnet motor

The invention discloses an open-circuit fault diagnosis method for the drive system of a double-winding fault-tolerant permanent-magnet motor. In the method, the collected phase currents of the motor are passed in turn through a low-pass filter, a Clarke transformer, a Park vector processor, a normalization processor, an average-value calculator, an absolute-value calculator and an absolute-value averaging processor, so as to obtain the average value of the normalized phase current of the motor and the average value of the absolute value of the normalized phase current. On the basis of these two averages, fault diagnostic variables for the system are constructed, so that an open-circuit fault in the drive system of the double-winding fault-tolerant permanent-magnet motor can be detected and located in real time. The method requires no extra current sensors, so it is simple, feasible and highly reliable. Because the average value of the normalized current and the average value of its absolute value are used together, the misdiagnosis caused by sudden load changes and similar disturbances in conventional diagnosis is avoided, and the diagnosis time is greatly reduced. The open-circuit fault of the drive system can therefore be effectively detected and located in real time.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
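A minimal sketch of the diagnostic variables described above, assuming a three-phase current set, an amplitude-invariant Clarke transform and the Park current-vector modulus as the normalization quantity; the low-pass filtering, the fault-location thresholds and the sample values below are omitted or invented purely for illustration.

```python
import numpy as np

def diagnostic_variables(i_a, i_b, i_c):
    """Per-phase diagnostic variables built from normalised currents.

    Flow sketched here: Clarke transform -> Park current vector -> modulus used to
    normalise each phase current -> average value and average absolute value over
    the observation window."""
    # Amplitude-invariant Clarke (alpha-beta) transform of the three phase currents
    i_alpha = (2.0 * i_a - i_b - i_c) / 3.0
    i_beta = (i_b - i_c) / np.sqrt(3.0)
    modulus = np.sqrt(i_alpha**2 + i_beta**2) + 1e-12   # Park vector modulus (avoid divide-by-zero)

    diagnostics = {}
    for name, phase in (("a", i_a), ("b", i_b), ("c", i_c)):
        normalised = phase / modulus                     # normalised phase current
        diagnostics[name] = (normalised.mean(),          # average value
                             np.abs(normalised).mean())  # average of the absolute value
    return diagnostics

# Example: one 50 Hz electrical period sampled at 1 kHz, with phase "a" open-circuited
t = np.linspace(0.0, 0.02, 20, endpoint=False)
i_b = np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3)
i_c = np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3)
i_a = np.zeros_like(t)                                   # the open phase carries no current
print(diagnostic_variables(i_a, i_b, i_c))
```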

Triangular matrix multiplication vectorization method of vector processor

The invention discloses a triangular matrix multiplication vectorization method for a vector processor. The method comprises the steps that: (1) the elements of the multiplicand triangular matrix T are stored contiguously by row; (2) the multiplier matrix B is divided by row into a plurality of sub-matrices Bi according to the number of vector processing units of the vector processor and the number of MAC units in each vector processing unit; (3) the sub-matrix Bi is multiplied by the multiplicand triangular matrix T and the result is stored back in the storage location of the original sub-matrix Bi; (4) the sub-matrices Bi of the multiplier matrix are traversed to judge whether any sub-matrix Bi has not yet been multiplied by the multiplicand triangular matrix; if so, i is updated according to the formula i = i + 1 and the procedure repeats from step (3); if not, step (5) is executed; (5) the triangular matrix multiplication is complete. The method has the advantages that the principle is simple, the operation is easy and convenient, and the computational efficiency of the vector processor can be fully exploited.
Owner:NAT UNIV OF DEFENSE TECH
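The blocked flow of steps (1)-(5) can be sketched in NumPy as follows, assuming (as one reading of the abstract) a lower-triangular multiplicand T, row blocks Bi of the multiplier matrix B, and a block height standing in for "number of vector units x MAC units per unit"; none of these concrete choices come from the patent text itself.

```python
import numpy as np

def triangular_matmul_blocked(B, T, block_rows):
    """Blocked product B @ T with a triangular multiplicand T, stored back in place.

    The multiplier matrix B is split into row blocks Bi (block_rows stands in for
    the vector-unit x MAC-unit count), each block is multiplied by T in turn, and
    the result overwrites the block's original storage, as in steps (3) and (4)."""
    L = np.tril(T)                                   # only the lower-triangular part of T is used
    m = B.shape[0]
    for start in range(0, m, block_rows):            # traverse the sub-matrices Bi
        Bi = B[start:start + block_rows, :]
        B[start:start + block_rows, :] = Bi @ L      # multiply and store in place
    return B

T = np.tril(np.arange(1.0, 17.0).reshape(4, 4))      # 4x4 lower-triangular multiplicand
B = np.ones((6, 4))                                  # 6x4 multiplier matrix
print(triangular_matmul_blocked(B.copy(), T, block_rows=2))
```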

Vector access and storage device supporting SIMT in vector processor and control method

The invention discloses a vector access and storage device supporting SIMT in a vector processor, and a control method. The device comprises a base-address vector register unit, an offset vector register unit and a vector address calculation unit. The base-address vector register unit and the offset vector register unit each comprise several groups of vector registers, and each group consists of several vector registers. The vector address calculation unit comprises several address calculation subunits, each of which is connected one-to-one with a memory bank in the vector processor. The base address and the offset address of each thread are obtained from one group of vector registers and output to the address calculation subunits for calculation, and the resulting access address of each thread is output to the corresponding memory bank. The control method is the control method of this vector access and storage device. The device has the advantages of flexible vector memory access, high parallel access efficiency and low power consumption, and can support SIMT thread-level parallelism.
Owner:NAT UNIV OF DEFENSE TECH
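The per-thread address generation described above can be sketched as follows; the lane count, the element size and the positional lane-to-bank routing are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

NUM_LANES = 8   # address-calculation subunits / memory banks (illustrative count)

def generate_addresses(base_regs, offset_regs, element_size=4):
    """Per-thread address generation from a base vector register and an offset vector register.

    Subunit i computes the address for thread i and delivers it to memory bank i,
    mirroring the one-to-one subunit-to-bank coupling described in the abstract."""
    base = np.asarray(base_regs, dtype=np.uint64)     # one base address per thread
    offset = np.asarray(offset_regs, dtype=np.uint64) # one offset per thread
    addresses = base + offset * element_size          # per-thread access address
    return {f"bank{i}": hex(int(a)) for i, a in enumerate(addresses)}

bases = [0x1000] * NUM_LANES               # every thread addresses the same buffer here
offsets = [3, 1, 4, 1, 5, 9, 2, 6]         # independent per-thread indices (SIMT-style gather)
print(generate_addresses(bases, offsets))
```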