80 results about "Residual frame" patented technology

In video compression algorithms, a residual frame is formed by subtracting a reference frame from the desired frame. This difference is known as the error or residual frame. Because nearby video frames are usually similar, the residual frame normally has lower information entropy and therefore requires fewer bits to compress.
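
A minimal sketch of this definition, assuming 8-bit grayscale frames stored as NumPy arrays (the function names and the entropy estimate are illustrative, not taken from any particular codec):

    import numpy as np

    def residual_frame(desired, reference):
        # Subtract the reference frame from the desired frame.
        # A signed dtype is used so negative differences are preserved.
        return desired.astype(np.int16) - reference.astype(np.int16)

    def empirical_entropy(frame):
        # Shannon entropy (bits per sample) of the value histogram,
        # a rough proxy for how compressible the frame is.
        values, counts = np.unique(frame, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

For neighbouring frames that are similar, empirical_entropy(residual_frame(cur, prev)) is typically much lower than empirical_entropy(cur), which is why the residual needs fewer bits.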

Overcomplete basis transform-based motion residual frame coding method and apparatus for video compression

The present invention provides a method to compress digital moving pictures or video signals based on an overcomplete basis transform using a modified matching pursuit algorithm. More particularly, the invention focuses on efficient coding of the motion residual image generated by motion estimation and compensation. A residual energy segmentation algorithm (RESA) can be used to obtain an initial estimate of the shape and position of high-energy regions in the residual image, and a progressive elimination algorithm (PEA) can be used to reduce the number of matching evaluations in the matching pursuit process. Together, RESA and PEA can speed up the encoder's search for matched bases in the pre-specified overcomplete dictionary many times over. Three parameters of the matched pattern form an atom: the index into the dictionary, the position of the selected basis, and the inner product between the chosen basis pattern and the residual signal. The invention also provides a new atom position coding method using quadtree-like techniques and a new atom modulus quantization scheme, together with a simple and efficient adaptive mechanism for the quantization and position coding so that a system according to the invention operates properly at low, medium and high bit rates. These components can result in a faster encoding process and improved compression performance over previous matching-pursuit-based video coders.
Owner: ETIIP HLDG
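
The atom-selection step that the abstract above builds on can be sketched as a plain greedy matching pursuit over a 1-D signal, without the RESA/PEA accelerations; every name and shape here is an assumption for illustration, not the patented implementation:

    import numpy as np

    def matching_pursuit(residual, dictionary, n_atoms):
        # residual: 1-D signal; dictionary: list of unit-norm basis patterns,
        # each assumed shorter than the signal.  Each selected atom is
        # (dictionary index, position, inner product), mirroring the three
        # parameters described in the abstract.
        r = residual.astype(float).copy()
        atoms = []
        for _ in range(n_atoms):
            best = (0.0, 0, 0)  # (inner product, dictionary index, position)
            for k, g in enumerate(dictionary):
                corr = np.correlate(r, g, mode='valid')  # all positions at once
                pos = int(np.argmax(np.abs(corr)))
                if abs(corr[pos]) > abs(best[0]):
                    best = (float(corr[pos]), k, pos)
            c, k, pos = best
            atoms.append((k, pos, c))
            # Remove the selected atom's contribution from the residual.
            r[pos:pos + len(dictionary[k])] -= c * np.asarray(dictionary[k])
        return atoms, r

In a real coder the atom modulus c would then be quantized and the (index, position) pair coded, which is where the patent's quadtree-like position coding and adaptive modulus quantization come in.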

Distributed video coding and decoding methods based on classification of key frames of correlation noise model (CNM)

The invention discloses distributed video coding and decoding methods based on classification of key frames using a correlation noise model (CNM). The coding method comprises the following steps: (1) computing a residual frame; (2) calculating Laplacian parameter values of the transform coefficients at the frame, block and frequency-band levels, and establishing CNM parameter tables for the different frequency bands from the band-level Laplacian parameter values; and (3) according to the residual frame and the CNM, dividing the coding sequence into high-speed, medium-speed and low-speed motion sequence blocks, which are coded with an intra-frame mode, an inverse motion vector estimation mode and a frame-skipping mode respectively. The decoding method comprises an adaptive three-dimensional recursive search method and an adaptive overlapped block motion compensation method based on the classification of the key frames under the CNM. The methods can effectively improve the quality of side information in distributed video coding and, without increasing the computational complexity of the encoder, mitigate incorrect motion vector estimation while obtaining more accurate motion vectors.
Owner: 松日数码发展(深圳)有限公司 +1
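
Step (2) above fits Laplacian correlation-noise parameters to residual transform statistics. The sketch below uses the textbook variance-based estimate of the Laplacian parameter plus a toy energy-based motion classification; the thresholds and mode names are illustrative, not taken from the patent:

    import numpy as np

    def laplacian_alpha(residual_coeffs):
        # For a zero-mean Laplacian distribution, variance = 2 / alpha**2,
        # so alpha = sqrt(2 / variance).  The epsilon guards against
        # division by zero for all-zero residuals.
        var = float(np.var(residual_coeffs.astype(float)))
        return float(np.sqrt(2.0 / max(var, 1e-9)))

    def classify_block(residual_block, low_thr=4.0, high_thr=64.0):
        # Toy classification by mean residual energy (thresholds are
        # placeholders): low motion -> frame skipping, medium motion ->
        # inverse motion vector estimation, high motion -> intra coding.
        energy = float(np.mean(residual_block.astype(float) ** 2))
        if energy < low_thr:
            return 'frame_skipping'
        if energy < high_thr:
            return 'inverse_mv_estimation'
        return 'intra'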

Depth map sequence fractal coding method based on motion vectors of color video

The invention provides a depth map sequence fractal coding method based on the motion vectors of a color video. First, the color video is coded with a fractal video compression method. Second, the color video is decoded with a fractal video decompression method to obtain the motion vectors of all macroblocks and all sub-blocks of the color video. Third, for coding of I frames in the depth map sequence, a smooth block is defined based on the H.264 intra-frame prediction coding method; for smooth blocks, the values of adjacent reference pixels are copied directly and the various prediction directions do not need to be traversed. Fourth, for coding of P frames in the depth map sequence, block motion estimation/compensation fractal coding is carried out: the motion vectors of the macroblocks of the depth map frames are predicted from their correlation with the corresponding macroblock motion vectors of the color video, an enhanced non-uniform multilevel hexagon search template is designed to replace the original non-uniform cross multilevel hexagon search template in UMHexagonS, the most similar matching blocks are searched for with the improved UMHexagonS, and the fractal parameters are recorded. Fifth, the residual frames of the I frames and the P frames and the fractal parameters of the P frames are compressed with CABAC entropy coding.
Owner: 江苏华普泰克石油装备有限公司
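
The fourth step reuses the colour video's motion vectors to steer the depth-map motion search. A rough sketch of that idea, using the co-located colour motion vector as the search centre and a small square refinement window standing in for the enhanced hexagon template (the SAD cost and all names are assumptions):

    import numpy as np

    def sad(a, b):
        # Sum of absolute differences between two equally sized blocks.
        return int(np.abs(a.astype(np.int16) - b.astype(np.int16)).sum())

    def depth_mv_from_color(depth_cur, depth_ref, block_xy, size, color_mv, radius=2):
        # Predict the depth block's motion vector from the co-located colour
        # macroblock's vector, then refine within a +/- radius window.
        x, y = block_xy
        block = depth_cur[y:y + size, x:x + size]
        h, w = depth_ref.shape
        best_mv, best_cost = color_mv, None
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                mx, my = x + color_mv[0] + dx, y + color_mv[1] + dy
                if 0 <= mx <= w - size and 0 <= my <= h - size:
                    cost = sad(block, depth_ref[my:my + size, mx:mx + size])
                    if best_cost is None or cost < best_cost:
                        best_cost = cost
                        best_mv = (color_mv[0] + dx, color_mv[1] + dy)
        return best_mv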

Hybrid distributed video encoding method based on wavelet-domain intra-frame mode decision

A hybrid distributed video encoding method based on wavelet-domain intra-frame mode decision, capable of improving rate-distortion performance, is characterized by the following steps: (1) low-complexity encoding, including: coding key frames with a conventional intra-frame encoder; generating the reference frame of the Wyner-Ziv frame by weighted average interpolation; generating a residual frame by subtraction; performing a discrete wavelet transform (DWT) on the residual frame to generate wavelet blocks; intra-frame mode decision for the wavelet blocks; entropy coding of the mode information; and inter-frame SW-SPIHT coding or intra-frame SPIHT coding of the wavelet blocks; (2) high-complexity decoding, including: decoding key frames with a conventional intra-frame decoding algorithm; producing the side information frame of the Wyner-Ziv frame by motion-estimation interpolation; generating the decoder-side reference frame by weighted average interpolation; generating the decoder-side residual frame by subtraction; performing the DWT on the residual frame; entropy decoding of the mode information; adopting LBS to perform motion estimation to generate more accurate side information; fine reconstruction of the wavelet coefficients; and recovering the original pixels by inverse discrete wavelet transform (IDWT) and addition.
Owner: TAIYUAN UNIVERSITY OF SCIENCE AND TECHNOLOGY
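
The residual-generation part of the low-complexity encoder above (weighted average interpolation of the key frames, subtraction, then a DWT of the residual) might look roughly like this; PyWavelets is assumed for the transform and the Haar wavelet is an arbitrary choice, not the patent's:

    import numpy as np
    import pywt  # PyWavelets, assumed available for the wavelet step

    def wz_residual(key_prev, key_next, wz_frame, weight=0.5):
        # Reference frame for the Wyner-Ziv frame by weighted average
        # interpolation of the two neighbouring key frames.
        reference = weight * key_prev.astype(float) + (1.0 - weight) * key_next.astype(float)
        # Residual frame by subtraction, then a one-level 2-D DWT whose
        # subbands would feed the wavelet-block mode decision.
        residual = wz_frame.astype(float) - reference
        cA, (cH, cV, cD) = pywt.dwt2(residual, 'haar')
        return residual, (cA, cH, cV, cD)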

Video compression and decompression method based on fractal and H.264

The invention provides a video compression and decompression method based on fractal coding and H.264. For an I frame, H.264 intra prediction is used to obtain a prediction frame; for a P frame, fractal-based block motion estimation/compensation coding is used to obtain the prediction frame. The difference between each original frame and its prediction frame is a residual frame. After the residual frames are transformed with the discrete cosine transform (DCT) and quantized, they are, on one hand, written into the bitstream and, on the other hand, inverse quantized and inverse DCT transformed and then added to the prediction frames to obtain reconstructed frames (used as reference frames). Fractal parameters are generated when a P frame is predictively encoded; the residual frames of the I and P frames are compressed with CAVLC entropy coding, and the fractal parameters are compressed with signed Exponential-Golomb coding. In the corresponding decompression process, a prediction frame is obtained from intra prediction for an I frame and from the previous frame for a P frame, and the residual information in the bitstream is inverse quantized and inverse DCT transformed to obtain the residual frames of the I and P frames respectively.
Owner: 海宁经开产业园区开发建设有限公司
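
The residual coding loop described above (residual, DCT, quantization into the bitstream, then inverse quantization and inverse DCT added back to the prediction to rebuild the reference frame) can be sketched as follows; the uniform quantizer step and SciPy's separable DCT are stand-ins, not the patent's exact transform or quantization design:

    import numpy as np
    from scipy.fft import dctn, idctn  # SciPy assumed available

    def code_residual(original, prediction, q_step=16.0):
        # Forward path: residual -> DCT -> quantize (these levels are what
        # would be entropy coded and written to the bitstream).
        residual = original.astype(float) - prediction.astype(float)
        levels = np.round(dctn(residual, norm='ortho') / q_step)
        # Reconstruction path: inverse quantize -> inverse DCT -> add the
        # prediction back to obtain the reconstructed (reference) frame.
        recon_residual = idctn(levels * q_step, norm='ortho')
        reconstruction = np.clip(prediction + recon_residual, 0, 255)
        return levels, reconstruction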

Method of spatial and SNR fine granular scalable video encoding and transmission

The invention relates to a method of coding video data available in the form of a first input stream of video frames, and to a corresponding coding device. This method, implemented for instance in three successive stages (101, 102, 103), comprises the steps of (a) encoding said first input stream to produce a first coded base layer stream (BL1) suitable for transmission at a first base layer bitrate; (b) based on said first input stream and a decoded version of said encoded first base layer stream, generating a first set of residual frames in the form of a first enhancement layer stream and encoding that stream to produce a first coded enhancement layer stream (EL1); and (c) repeating a similar process at least once in order to produce further coded base layer streams (BL2, BL3, . . . ) and further coded enhancement layer streams (EL2, EL3, . . . ). To obtain a required spatial resolution, the first input stream is thus compressed by encoding the base layers up to said spatial resolution at a lower bitrate and allocating a higher bitrate to the last base layer and/or to the enhancement layer which corresponds to said required spatial resolution. A corresponding transmission method is also proposed.
Owner: KONINKLIJKE PHILIPS ELECTRONICS NV
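
Step (b), generating an enhancement layer as the set of residual frames between the input stream and the decoded base layer, can be sketched as follows; encode and decode are placeholders for a base-layer codec operating at the chosen base-layer bitrate, not the codec the patent assumes:

    import numpy as np

    def enhancement_layer(input_frames, encode, decode):
        # encode(frame) -> base-layer bits; decode(bits) -> decoded frame.
        # The residual between each input frame and its decoded base-layer
        # version forms the enhancement layer stream, which is then coded
        # separately (and the process can be repeated for further layers).
        base_streams = [encode(f) for f in input_frames]
        decoded_base = [np.asarray(decode(b), dtype=float) for b in base_streams]
        residual_frames = [np.asarray(f, dtype=float) - d
                           for f, d in zip(input_frames, decoded_base)]
        return base_streams, residual_frames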

Multi-view stereoscopic video compression and decompression method based on fractal and H.264

The invention provides a multi-view stereoscopic video compression and decompression method based on fractal coding and H.264. The middle view is used as the base layer and encoded with motion-compensated prediction (MCP); the other views are encoded with MCP plus disparity-compensated prediction (DCP). Taking three views as an example: the I frame of the middle view is coded with H.264 intra prediction and filtered to obtain a reconstructed frame (used as a reference frame); the P frame of the middle view is encoded with block MCP using the previously coded reconstructed frame as the reference, the best-matching fractal parameters of each block are recorded, the fractal parameters are substituted into the iterated function system to obtain the prediction frame of the P frame, and the reconstructed frame is obtained after filtering. The left and right views are coded with MCP + DCP, where the reference frame can be a previously coded frame of the same view or a coded frame of another view. CAVLC entropy coding is used to compress the residual frame of each frame, and signed Exponential-Golomb coding is used to compress the fractal parameters.
Owner: 海宁经开产业园区开发建设有限公司
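
The MCP + DCP coding of the side views amounts to choosing, block by block, between a temporal reference from the same view and an inter-view reference from another view. A toy version of that decision using SAD as the matching cost (the cost function and names are assumptions, not the patent's criterion):

    import numpy as np

    def sad(a, b):
        # Sum of absolute differences between two equally sized blocks.
        return int(np.abs(a.astype(np.int16) - b.astype(np.int16)).sum())

    def choose_prediction(block, temporal_candidate, interview_candidate):
        # MCP candidate comes from a previously coded frame of the same view;
        # DCP candidate comes from a coded frame of another view.
        cost_mcp = sad(block, temporal_candidate)
        cost_dcp = sad(block, interview_candidate)
        return ('MCP', cost_mcp) if cost_mcp <= cost_dcp else ('DCP', cost_dcp)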

Method and device for identifying video monitoring scenes

The invention discloses a method and device for identifying video monitoring scenes. The method comprises the steps of: (1) obtaining the previous video frame and the current video frame captured by a front-end video monitoring device; (2) computing the difference between the previous frame and the current frame to obtain an image residual frame; (3) determining the brightness value of each pixel of the residual frame; (4) from those brightness values, determining the ratio of the number of pixels with non-zero brightness to the total number of pixels in the residual frame; (5) determining that the current monitoring scene of the front-end device is a moving scene when this ratio is larger than a set threshold; and (6) determining that it is a static scene when the ratio is smaller than or equal to the threshold. The method and device can thus distinguish different monitoring scenes.
Owner: CHINA MOBILE COMM GRP CO LTD
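
Steps (2) through (6) reduce to a residual frame, a non-zero-pixel ratio and a threshold test. A minimal sketch, assuming single-channel luminance frames as NumPy arrays and an illustrative threshold value:

    import numpy as np

    def classify_scene(prev_frame, cur_frame, ratio_threshold=0.05):
        # Residual frame between the previous and current frames.
        residual = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
        # Ratio of pixels with non-zero brightness in the residual frame.
        nonzero_ratio = np.count_nonzero(residual) / residual.size
        # Above the threshold -> moving scene, otherwise -> static scene.
        return 'moving' if nonzero_ratio > ratio_threshold else 'static'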