556 results for "Inverse quantization" patented technology

Efficient de-quantization in a digital video decoding process using a dynamic quantization matrix for parallel computations

An efficient digital video (DV) decoder process that utilizes a specially constructed quantization matrix allowing an inverse quantization subprocess to perform parallel computations, e.g., using SIMD processing, to efficiently produce a matrix of DCT coefficients. The present invention utilizes a first look-up table (for 8x8 DCT) which produces a 15-valued quantization scale based on class number information and a QNO number for an 8x8 data block ("data matrix") from an input encoded digital bit stream to be decoded. The 8x8 data block is produced from a deframing and variable length decoding subprocess. An individual 8-valued segment of the 15-value output array is multiplied by an individual 8-valued segment, e.g., "a row," of the 8x8 data matrix to produce an individual row of the 8x8 matrix of DCT coefficients ("DCT matrix"). The above eight multiplications can be performed in parallel using a SIMD architecture to simultaneously generate a row of eight DCT coefficients. In this way, eight passes through the 8x8 block are used to produce the entire 8x8 DCT matrix, in one embodiment consuming only 33 instructions per 8x8 block. After each pass, the 15-valued output array is shifted by one value position for proper alignment with its associated row of the data matrix. The DCT matrix is then processed by an inverse discrete cosine transform subprocess that generates decoded display data. A second lookup table can be used for 2x4x8 DCT processing.
Owner:SONY ELECTRONICS INC +1
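
As an illustration of the row-wise parallel de-quantization described above, the following minimal NumPy sketch replaces the SIMD multiplies with a vectorized row multiply; the lookup-table values and the names dequantize_block and quant_scale_15 are assumptions made for illustration, not taken from the patent.

```python
import numpy as np

def dequantize_block(data_block: np.ndarray, quant_scale_15: np.ndarray) -> np.ndarray:
    """Row-wise de-quantization of an 8x8 data block (illustrative sketch).

    data_block     : 8x8 quantized coefficients from deframing / VLD
    quant_scale_15 : 15-value array looked up from the class number and QNO
    """
    dct = np.empty((8, 8), dtype=np.int32)
    for row in range(8):
        # 8-value segment of the 15-value array; the window slides by one
        # position per pass, mirroring the shift described in the abstract.
        segment = quant_scale_15[row:row + 8]
        # One vectorized multiply yields a full row of DCT coefficients,
        # the role played by a single SIMD instruction in the patent.
        dct[row] = data_block[row] * segment
    return dct

# Hypothetical example values (not from the patent's lookup tables).
block = np.arange(64, dtype=np.int32).reshape(8, 8)
scale = np.full(15, 2, dtype=np.int32)
print(dequantize_block(block, scale))
```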

Quantization loop with heuristic approach

A quantizer finds a quantization threshold using a quantization loop with a heuristic approach. Following the heuristic approach reduces the number of iterations in the quantization loop required to find an acceptable quantization threshold, which improves the performance of an encoder system by eliminating costly compression operations. A heuristic model relates the actual bit-rate of output following compression to the quantization threshold for a block of a particular type of data. The quantizer determines an initial approximation for the quantization threshold based upon the heuristic model. The quantizer evaluates the actual bit-rate following compression of output quantized by the initial approximation. If the actual bit-rate satisfies a criterion such as proximity to a target bit-rate, the quantizer accepts the initial approximation as the quantization threshold. Otherwise, the quantizer adjusts the heuristic model and repeats the process with a new approximation of the quantization threshold. In an illustrative example, a quantizer finds a uniform, scalar quantization threshold using a quantization loop with a heuristic model adapted to spectral audio data. During decoding, a dequantizer applies the quantization threshold to decompressed output in an inverse quantization operation.
Owner:MICROSOFT TECH LICENSING LLC
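
The loop below is a rough sketch of the heuristic approach described above, assuming a simple reciprocal heuristic model (threshold = model_coeff / target_bits) and a caller-supplied compress function that returns the actual bit count; the names and the adjustment rule are illustrative, not taken from the patent.

```python
def find_quantization_threshold(block, compress, target_bits,
                                model_coeff=1000.0, tolerance=0.05,
                                max_iters=10):
    """Quantization loop driven by a heuristic model (illustrative sketch).

    The assumed model is threshold = model_coeff / target_bits: fewer
    available bits call for a coarser threshold.  `compress` quantizes the
    block at the given threshold, runs the rest of the compression path,
    and returns the actual number of bits produced.
    """
    threshold = model_coeff / target_bits           # initial approximation
    for _ in range(max_iters):
        actual_bits = compress(block, threshold)

        # Accept the threshold if the actual bit-rate is near the target.
        if abs(actual_bits - target_bits) <= tolerance * target_bits:
            break

        # Otherwise adjust the model: too many bits -> coarser threshold.
        model_coeff *= actual_bits / target_bits
        threshold = model_coeff / target_bits       # new approximation
    return threshold

# Toy usage with a fake compressor whose bit cost shrinks as the threshold grows.
fake_compress = lambda block, t: int(5000 / (1 + t))
print(find_quantization_threshold(None, fake_compress, target_bits=200))
```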

Deep neural network based vector quantization system and method

Active · CN106203624A · Effective dimensionality reduction · Small data error · Neural learning methods · Code module · Pattern recognition
The invention provides a deep neural network based vector quantization system and method, comprising: a normalization preprocessing module for normalizing the original data using normalization data and outputting the preprocessed, normalized data; a vector quantization and coding module for receiving the preprocessed data and a codebook, vector-quantization-coding the preprocessed data with the codebook, and outputting the coded data; a neural network inverse quantization module for decoding the coded data by inverse quantization through a deep neural network and outputting the decoded data; an inverse normalization processing module for applying inverse normalization to the decoded data using the normalization data and outputting the restored original data; and a neural network training module for training the neural network on the normalized, preprocessed training data and the corresponding coded training data, and supplying the trained network to the neural network inverse quantization module. The system and method effectively address the problem of large quantization error in vector quantization of high-dimensional signals.
Owner:SHANGHAI JIAO TONG UNIV
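
A toy Python sketch of the pipeline described above (normalize, vector-quantize against a codebook, decode with a network-like dequantizer, then inverse-normalize); the codebook, the stand-in dequantizer, and all names here are illustrative assumptions, since the abstract does not specify the network architecture or training details.

```python
import numpy as np

def normalize(x, mean, std):
    # Normalization preprocessing; mean/std are kept for the inverse step.
    return (x - mean) / std

def vq_encode(x, codebook):
    # Vector quantization coding: index of the nearest codeword.
    dists = np.linalg.norm(codebook - x, axis=1)
    return int(np.argmin(dists))

def decode(index, nn_dequantizer, mean, std):
    # Neural-network inverse quantization followed by inverse normalization.
    x_hat = nn_dequantizer(index)      # stand-in for the trained deep network
    return x_hat * std + mean

# Toy usage; the codebook and the "network" (a table lookup) are stand-ins.
codebook = np.random.randn(16, 4)
mean, std = 0.0, 1.0
x = np.random.randn(4)
idx = vq_encode(normalize(x, mean, std), codebook)
x_rec = decode(idx, lambda i: codebook[i], mean, std)
print(idx, x_rec)
```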

Video sequence coding method for HEVC (High Efficiency Video Coding)

The invention relates to a video sequence coding method for HEVC (High Efficiency Video Coding) that achieves good compression, has low complexity, and is suitable for practical application. The method comprises the following steps: (1) obtaining a prediction block from the luma component of a PU (Prediction Unit), subtracting it from the original values to obtain a residual, and partitioning the residual based on rate-distortion cost to obtain TUs (Transform Units); (2) determining whether the current TU is transformed according to the minimum rate-distortion cost criterion; (3) quantizing with the corresponding method and transmitting the quantized coefficients and a transform / no-transform flag to the decoding end; (4) at the decoding end, each TU reads the flag and judges whether inverse transformation is to be executed; (5) each TU executes the corresponding inverse quantization according to the flag read; (6) deciding whether the inverse transformation is executed according to the flag and executing the corresponding operations; and (7) combining the TUs into the PU and adding the motion-compensated prediction of the current PU to obtain the reconstructed PU, thereby reconstructing the current CU (Coding Unit).
Owner:BEIJING UNIV OF TECH
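
A decoder-side sketch of steps (4)-(7) above, under the assumption of placeholder helpers for inverse quantization and the inverse transform (HEVC's actual integer transforms are not reproduced here); the per-TU flag controls whether the inverse transform is applied before the TUs are recombined and added to the motion-compensated prediction.

```python
import numpy as np
from scipy.fftpack import idct

def inverse_quantize(levels, qstep=8):
    # Placeholder inverse quantization: rescale the quantized levels.
    return levels.astype(np.float64) * qstep

def inverse_transform(block):
    # Stand-in 2-D inverse DCT (HEVC uses its own integer transforms).
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def reconstruct_pu(tus, prediction):
    """Reconstruct a PU from its TUs, steps (4)-(7) of the method above."""
    residual = np.zeros_like(prediction, dtype=np.float64)
    for tu in tus:
        r = inverse_quantize(tu["coeffs"])     # (5) inverse quantization
        if tu["transform_flag"]:               # (4)/(6) read flag, decide
            r = inverse_transform(r)           #         on inverse transform
        y, x = tu["offset"]
        h, w = r.shape
        residual[y:y + h, x:x + w] = r         # (7) combine TUs into the PU
    return residual + prediction               # (7) add MC prediction

# Toy usage: one 4x4 TU covering a 4x4 PU (all values are arbitrary).
tu = {"coeffs": np.ones((4, 4), dtype=np.int32),
      "transform_flag": True, "offset": (0, 0)}
print(reconstruct_pu([tu], np.zeros((4, 4))))
```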