373 results about "Huffman coding" patented technology

In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. The process of finding or using such a code proceeds by means of Huffman coding, an algorithm developed by David A. Huffman while he was a Sc.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes".
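
The idea can be made concrete with a short sketch. The Python snippet below is an illustration written for this page, not taken from any of the patents listed here: it builds a Huffman tree from symbol frequencies with a min-heap and reads the prefix codes off the tree, so that frequent symbols receive shorter codewords.

    import heapq
    from collections import Counter

    def huffman_codes(text):
        # Count symbol frequencies and seed the heap with one leaf per symbol.
        freq = Counter(text)
        heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        count = len(heap)
        if count == 1:  # degenerate case: a single distinct symbol
            return {heap[0][2]: "0"}
        # Repeatedly merge the two least frequent subtrees.
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, count, (left, right)))
            count += 1
        # Walk the finished tree: left edges emit "0", right edges emit "1".
        codes = {}
        def walk(node, prefix):
            if isinstance(node, tuple):
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:
                codes[node] = prefix
        walk(heap[0][2], "")
        return codes

    print(huffman_codes("abracadabra"))  # frequent symbols get shorter codes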

Using gain-adaptive quantization and non-uniform symbol lengths for improved audio coding

Techniques like Huffman coding, which use non-uniform length symbols, can represent digital audio signal components more efficiently than coding techniques that use uniform length symbols. Unfortunately, the coding efficiency achievable with Huffman coding depends on the probability density function of the information to be coded, and the Huffman coding process itself requires considerable processing and memory resources. A coding process that uses gain-adaptive quantization according to the present invention can realize the advantage of non-uniform length symbols while overcoming these shortcomings of Huffman coding. In gain-adaptive quantization, the magnitudes of the signal components to be encoded are compared to one or more thresholds and placed into classes according to the results of the comparison. The magnitudes of the components placed into one of the classes are modified according to a gain factor related to the threshold used to classify them. Preferably, the gain factor is expressed as a function of the threshold value only. Gain-adaptive quantization may be used to encode frequency subband signals in split-band audio coding systems. Additional features, including cascaded gain-adaptive quantization, intra-frame coding, and split-interval and non-overloading quantizers, are also disclosed.
Owner:DOLBY LAB LICENSING CORP
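
As a rough illustration of the classification-and-gain idea described in the abstract, the sketch below compares each component magnitude to a single threshold and scales the components of the "large" class by a gain derived only from that threshold before uniform quantization. The single threshold, the particular gain rule, and the step size are assumptions made for the sketch; they are not specified by the abstract.

    def gain_adaptive_quantize(components, threshold=0.5, step=0.05):
        encoded = []
        for x in components:
            if abs(x) >= threshold:
                # "Large" class: scale by a gain that is a function of the threshold only
                gain = threshold                  # illustrative choice of gain
                encoded.append(("large", round(x * gain / step)))
            else:
                # "Small" class: quantize directly
                encoded.append(("small", round(x / step)))
        return encoded

    print(gain_adaptive_quantize([0.02, 0.9, -1.4, 0.1]))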

Object and fractal-based binocular three-dimensional video compression coding and decoding method

The invention provides an object- and fractal-based method for compressing and decompressing binocular three-dimensional video. In the coding scheme, the left channel serves as a base layer, the right channel serves as an enhancement layer, and the left channel is encoded with an independent motion compensation prediction (MCP) mode. The method comprises the following steps: first, a video object segmentation plane (an Alpha plane) is obtained by a video segmentation method; the initial frame of the left eye is encoded by block discrete cosine transform (DCT), and block motion estimation / compensation coding is applied to the non-I frames of the left eye. Second, the region attribute of each image block is determined from the Alpha plane: a block entirely outside the video object region currently being coded (an external block) is not processed; for a block entirely inside that region (an internal block), the most similar matching block is found by full search within a search window of the reference frame, i.e. the previous frame of the left-eye video; and a block whose pixels lie only partly inside the region (a boundary block) is processed separately. Finally, the coefficients of the iterated function system are compressed with Huffman coding. The right channel is encoded with both an MCP mode and a disparity compensation prediction (DCP) mode; the MCP processing is similar to that of the left eye, and the block with the minimum error is taken as the prediction result. In the DCP coding mode, the polarization and directionality of the parallel stereo camera structure are fully exploited.
Owner:BEIHANG UNIV
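
The Alpha-plane block classification described above can be sketched as follows; the block size, mask layout, and function name are illustrative assumptions rather than details from the patent. Each block is labelled external (skipped by the coder), internal (full-search motion estimation in the reference frame), or boundary (handled by a separate path).

    import numpy as np

    def classify_block(alpha_plane, row, col, block=16):
        # Return 'external', 'internal', or 'boundary' for one block of the frame.
        region = alpha_plane[row:row + block, col:col + block]
        inside = np.count_nonzero(region)
        if inside == 0:
            return "external"      # entirely outside the coded video object
        if inside == region.size:
            return "internal"      # entirely inside: candidate for full-search matching
        return "boundary"          # partly inside: handled separately

    alpha = np.zeros((64, 64), dtype=np.uint8)
    alpha[8:40, 8:40] = 1          # a toy rectangular video object
    print(classify_block(alpha, 0, 0),    # boundary
          classify_block(alpha, 16, 16),  # internal
          classify_block(alpha, 0, 48))   # external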

Intra-class coefficient scrambling-based JPEG image encryption method

The invention relates to a JPEG image encryption method based on intra-class coefficient scrambling. The method comprises the following steps: first, a JPEG image file is read in, the Huffman coding table and the JPEG-compressed image data are obtained, and the data are decoded to recover the values and positions of all non-zero quantized DCT coefficients, which are then classified; next, a password is selected and used to drive a chaotic iteration that generates a chaotic sequence, and the non-zero coefficients and the 8*8 blocks of each class are scrambled with this sequence; finally, entropy coding is applied to the scrambled quantized DCT coefficient matrix and the coded data are written back into the JPEG image file, completing the intra-class coefficient scrambling-based encryption. Because the different classes of quantized DCT coefficients are scrambled with a chaotic sequence, and the quantized DC coefficients and non-zero AC coefficients are handled with a single encryption scheme, the method balances security and efficiency; the encrypted images are close in size to the plaintext image files and retain a high compression ratio.
Owner:CHANGAN UNIV
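
The key-driven scrambling step can be illustrated with a small sketch. A logistic map is used here as one common choice of chaotic map (the abstract does not name the map), and the password seeds its initial state; sorting the chaotic values yields a permutation that reorders the non-zero coefficients of one class. All constants and names are assumptions made for the illustration.

    import hashlib

    def chaotic_permutation(password, n, r=3.99):
        # Derive an initial state in (0, 1) from the password.
        digest = hashlib.sha256(password.encode()).digest()
        x = (int.from_bytes(digest[:8], "big") % 10**8 + 1) / (10**8 + 2)
        seq = []
        for _ in range(n):
            x = r * x * (1.0 - x)   # logistic map iteration
            seq.append(x)
        # Sorting indices by the chaotic values gives a key-dependent permutation.
        return sorted(range(n), key=lambda i: seq[i])

    def scramble(coefficients, password):
        perm = chaotic_permutation(password, len(coefficients))
        return [coefficients[i] for i in perm]

    coeffs = [12, -3, 7, 1, -9, 4]   # toy non-zero quantized DCT values of one class
    print(scramble(coeffs, "secret-key"))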

Method and system for compressing reduced instruction set computer (RISC) executable code

A method and system are disclosed for compressing program executables that run on a reduced instruction set computer (RISC) architecture such as the PowerPC. Initially, the RISC instruction set is expanded to produce code that facilitates the removal of redundant fields, and the program is rewritten using this expanded instruction set. Next, a filter is applied to remove redundant fields from the expanded instructions. The expanded instructions are then clustered into groups such that instructions belonging to the same cluster show similar bit patterns. Within each cluster, scopes are created such that register usage patterns within each scope are similar, and further scopes are created such that the literals within each instruction scope are drawn from the same range of integers. A conventional compression technique such as Huffman encoding is then applied to each instruction scope within each cluster, and dynamic programming techniques are used to produce the best combination of encodings across all scopes in all clusters. Where applicable, instruction scopes that use the same encoding scheme are combined to reduce the size of the resulting dictionary, and instruction clusters that use the same encoding scheme are likewise combined.
Owner:IBM CORP
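
A toy sketch of the clustering step is shown below: expanded instructions are grouped by opcode as a stand-in for "similar bit patterns", and a per-cluster frequency table of register fields is collected, from which a separate Huffman codebook would be built in a real encoder. The instruction tuples and field names are invented for illustration and do not reflect the patent's actual expanded instruction format.

    from collections import Counter, defaultdict

    # Hypothetical expanded instructions: (opcode, field1, field2, field3)
    instructions = [
        ("add", "r1", "r2", "r3"),
        ("add", "r1", "r1", "r4"),
        ("lwz", "r5", "r1", "0x10"),
        ("lwz", "r6", "r1", "0x14"),
        ("bne", "cr0", "loop", ""),
    ]

    clusters = defaultdict(list)
    for instr in instructions:
        clusters[instr[0]].append(instr)      # cluster key: opcode

    for opcode, group in clusters.items():
        # Per-cluster register-usage statistics; a separate Huffman code would be
        # derived from each of these frequency tables.
        regs = Counter(field for instr in group for field in instr[1:] if field.startswith("r"))
        print(opcode, dict(regs))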