
Image encoding method and image encoding device

An image coding and decoding technology in the field of predictive coding, addressing problems such as the inability to directly use the scaled motion vector calculation and mismatches in motion vector precision.

Inactive Publication Date: 2007-07-04
PANASONIC INTELLECTUAL PROPERTY CORP OF AMERICA

AI Technical Summary

Problems solved by technology

[0016] However, temporal prediction in the direct mode has the following problem: when motion compensation is performed by the direct mode on a block that has been inter-picture predictively coded, the block whose motion vector is referred to may belong to a B-picture (B6 in the figure). In that case the block has a plurality of motion vectors, so the motion vector calculation by scaling according to Equation 1 cannot be used directly.
In addition, since the division is performed after the motion vector has been calculated, the precision of the resulting motion vector value (for example, 1/2-pixel or 1/4-pixel precision) may not match the predetermined precision.
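
As context for the two problems above, the following is a minimal sketch of the conventional scaling referred to as Equation 1, assuming the usual form in which a single co-located motion vector is scaled by the ratio of two temporal distances; the names mv_col, trb and trd are illustrative, not the patent's notation. The explicit rounding shows where the predetermined sub-pel precision has to be enforced, and the sketch no longer applies once the co-located block carries more than one motion vector.

```python
# Illustrative sketch, not the patent's method: conventional temporal
# scaling of a single co-located motion vector ("Equation 1" style).
def scale_single_mv(mv_col, trb, trd):
    """mv_col: co-located block's motion vector in sub-pel units (e.g. quarter-pel).
    trb: temporal distance from the current picture to its reference picture.
    trd: temporal distance spanned by mv_col.
    Rounding keeps the scaled vector on the same sub-pel grid."""
    return tuple(round(c * trb / trd) for c in mv_col)

# A quarter-pel vector (8, -6) scaled by 1/2 stays on the quarter-pel grid:
print(scale_single_mv((8, -6), trb=1, trd=2))  # -> (4, -3)
```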

Method used



Examples


Embodiment 1

[0097] Using the block diagram shown in FIG. 6, the video coding method according to Embodiment 1 of the present invention will be described.

[0098] The moving pictures to be encoded are input to the frame memory 101 picture by picture in display order and rearranged into coding order. Each picture is divided into groups called blocks, for example of 16 horizontal × 16 vertical pixels, and the following processing is performed in units of blocks.

[0099] The blocks read from the frame memory 101 are input to the motion vector detection unit 106 . Here, a decoded picture stored in the frame memory 105 is used as a reference picture to detect the motion vector of the block to be encoded. At this time, the mode selection unit 107 determines the optimum prediction method by referring to the motion vector obtained by the motion vector detection unit 106 and to the motion vectors of already coded pictures stored in the motion vector storage unit 108. The prediction metho...
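
As a rough illustration of this per-block flow, the sketch below splits a picture into 16×16 blocks and runs a brute-force SAD search against a reference picture. The function names and the full-search strategy are stand-ins for the numbered units in FIG. 6 (frame memory 101/105, motion vector detection unit 106, mode selection unit 107), not their actual implementation.

```python
import numpy as np

BLOCK = 16  # 16 x 16 pixel blocks, as in paragraph [0098]

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def detect_motion_vector(block, ref, x, y, search=4):
    """Brute-force full search in a small window around (x, y) of the
    reference picture; an illustrative stand-in for the motion vector
    detection step, not the patent's actual search method."""
    h, w = ref.shape
    best_mv = (0, 0)
    best_cost = sad(block, ref[y:y + BLOCK, x:x + BLOCK])
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + BLOCK <= h and 0 <= xx and xx + BLOCK <= w:
                cost = sad(block, ref[yy:yy + BLOCK, xx:xx + BLOCK])
                if cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost

def encode_picture(cur, ref):
    """Yield ((x, y), motion_vector, cost) for every block of the picture,
    mirroring the per-block processing order described above."""
    h, w = cur.shape
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            mv, cost = detect_motion_vector(cur[y:y + BLOCK, x:x + BLOCK], ref, x, y)
            yield (x, y), mv, cost

# Tiny example: the "current" picture is the reference shifted right by 2 px,
# so blocks away from the left edge come back with a motion vector of (-2, 0).
ref = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(ref, 2, axis=1)
for pos, mv, cost in encode_picture(cur, ref):
    print(pos, mv, cost)
```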

Embodiment 2

[0122] The outline of the encoding process shown in FIG. 6 is exactly the same as in Embodiment 1. Here, the bidirectional prediction operation in the direct mode will be described in detail using FIG. 9 .

[0123] FIG. 9 shows the operation when the block referred to for determining the motion vector in the direct mode has two motion vectors that refer to two pictures later in display order. The picture P43 is the picture currently being encoded, and bidirectional prediction is performed using the picture P42 and the picture P44 as reference pictures. If the block to be coded is block MB41, the two motion vectors required here are determined using the motion vector of the block MB42 that lies at the same position in the already coded backward reference picture P44, the second reference picture specified by the second reference index. Since this block MB42 has two motion vectors MV45 and MV46...
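
A hedged sketch of the derivation described here and in the abstract, assuming the two strategies are component-wise averaging of the co-located block's two motion vectors or selection of one of them, followed by scaling with temporal distances. Variable names such as mv_a, mv_b and td_col, and the sign convention for the backward vector, are assumptions of this sketch rather than the patent's notation.

```python
def average_mv(mv_a, mv_b):
    """Component-wise average of the co-located block's two motion vectors,
    rounded back onto the sub-pel grid."""
    return tuple(round((a + b) / 2) for a, b in zip(mv_a, mv_b))

def direct_mode_mvs(mv_a, mv_b, td_fwd, td_bwd, td_col, use_average=True):
    """Derive the (forward, backward) motion vectors of the current block.

    td_fwd / td_bwd: temporal distances from the current picture to its
    forward / backward reference pictures; td_col: temporal distance spanned
    by the co-located block's vector that is being scaled.
    """
    base = average_mv(mv_a, mv_b) if use_average else mv_a  # or select mv_b
    forward = tuple(round(c * td_fwd / td_col) for c in base)
    # The minus sign for the backward vector is an assumption of this sketch.
    backward = tuple(round(-c * td_bwd / td_col) for c in base)
    return forward, backward

# Example with quarter-pel vectors (8, -4) and (8, -8):
print(direct_mode_mvs((8, -4), (8, -8), td_fwd=1, td_bwd=1, td_col=2))
# -> ((4, -3), (-4, 3)) with the averaging option
```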

Embodiment 3

[0144] Using the block diagram shown in FIG. 11, the video decoding method according to Embodiment 3 of the present invention will be described. Here, the coded sequence generated by the video coding method according to Embodiment 1 is input.

[0145] First, from the input coded sequence, the coded sequence analyzer 601 extracts various information such as a prediction method, motion vector information, and prediction residual coded data.

[0146] The prediction method and motion vector information are output to the prediction method / motion vector decoding unit 608 , and the prediction residual coded data is output to the prediction residual decoding unit 602 . The prediction method / motion vector decoding unit 608 decodes the prediction method and the motion vector used by that prediction method. When decoding the motion vector, the decoded motion vectors stored in the motion vector storage unit 605 are used. The decoded prediction method and motion ve...
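
The following sketch loosely mirrors this decoding flow: a block's coded data is split into prediction method, motion vector information and residual data, and the motion vector is rebuilt with the help of previously decoded vectors. The dict-based "bitstream" and the use of the previous block's vector as predictor are purely illustrative stand-ins for the units in FIG. 11, not the patent's actual scheme.

```python
def decode_block(coded_block, mv_storage):
    # Coded-sequence analysis: split the block's data into its parts.
    mode = coded_block["mode"]          # prediction method
    mv_diff = coded_block["mv_diff"]    # coded motion vector information
    residual = coded_block["residual"]  # prediction residual coded data

    # Prediction method / motion vector decoding: rebuild the actual vector
    # from the coded difference and an already decoded predictor (here simply
    # the previous block's vector, purely for illustration).
    predictor = mv_storage[-1] if mv_storage else (0, 0)
    mv = (predictor[0] + mv_diff[0], predictor[1] + mv_diff[1])
    mv_storage.append(mv)               # keep for decoding later blocks

    # Prediction residual decoding would reconstruct pixel data here;
    # this sketch only returns the decoded side information.
    return mode, mv, residual

# Example: two blocks, the second one coded relative to the first one's vector.
storage = []
print(decode_block({"mode": "inter", "mv_diff": (2, -1), "residual": b""}, storage))
print(decode_block({"mode": "direct", "mv_diff": (0, 0), "residual": b""}, storage))
```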



Abstract

When the block (MB22) whose motion vector is referred to in the direct mode has a plurality of motion vectors, the two motion vectors MV23 and MV24 used for inter-picture prediction of the current picture to be coded (P23) are determined by scaling either a value obtained by averaging the plurality of motion vectors or one motion vector selected from among them.

Description

[0001] This application is a divisional application of Chinese patent application No. 03800471.2.

Technical field

[0002] The present invention relates to a coding method and a decoding method for moving pictures, and in particular to a method of performing predictive coding with reference to a plurality of pictures that precede the current picture in display order, a plurality of pictures that follow it in display order, or a plurality of pictures on both sides.

Background art

[0003] Generally, in the encoding of moving pictures, the amount of information is compressed by reducing redundancy in the temporal and spatial directions. In inter-picture predictive coding, which aims at reducing temporal redundancy, motion detection and motion compensation are performed in units of blocks with reference to fron...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC (8): H04N7/26, H04N7/36, H04N7/46, H04N7/50, H04N19/51, G06T9/00, H04N7/12, H04N19/103, H04N19/105, H04N19/109, H04N19/127, H04N19/137, H04N19/176, H04N19/503, H04N19/61, H04N19/70
CPC: H04N19/00781, H04N19/00266, H04N19/00545, H04N19/00145, H04N7/26031, H04N19/00224, H04N7/361, H04N7/26218, H04N19/00715, H04N7/26015, H04N19/00484, H04N19/00018, H04N7/26122, H04N19/00721, H04N7/26872, H04N7/366, H04N19/00139, H04N7/26707, H04N19/00884, H04N7/26244, H04N7/26132, H04N19/0003, H04N19/00024, H04N7/367, H04N7/2609, H04N7/26037, H04N7/26021, H04N19/00587, H04N19/00696, H04N7/462, H04N19/00278, H04N7/26313, H04N19/00727, H04N19/00103, H04N7/50, H04N19/00036, H04N7/26271, H04N7/26946, H04N19/105, H04N19/52, H04N19/176, H04N19/70, H04N19/172, H04N19/46, H04N19/51, H04N19/61, H04N19/103, H04N19/107, H04N19/109, H04N19/127, H04N19/136, H04N19/137, H04N19/16, H04N19/423, H04N19/573, H04N19/58, H04N19/577
Inventors: Satoshi Kondo, Shinya Kadono, Makoto Hagai, Kiyofumi Abe
Owner: PANASONIC INTELLECTUAL PROPERTY CORP OF AMERICA