
82 results about "Depth map coding" patented technology

Depth map coding method and device

The invention provides a depth map coding method and device. The method includes: establishing a plurality of partition lines for wedge-shaped partitioning of a depth macroblock, and forming the partition lines into a partition set; coding the depth macroblock in an intra coding mode to obtain a first rate-distortion cost value; judging whether an inter coding mode should be used and, if so, coding the depth macroblock in the inter mode to obtain a second rate-distortion cost value; further judging whether the depth macroblock contains a discontinuous motion vector field and, if so, coding it in a geometric partitioning mode, in which an optimal partition line is selected to split the macroblock into a first depth subdomain and a second depth subdomain, and the two subdomains are predictively coded to obtain a third rate-distortion cost value; and comparing the rate-distortion cost values of the different coding modes and selecting the mode with the minimum cost value to code the depth macroblock. The method improves depth map compression efficiency and reduces coding complexity.
Owner:TSINGHUA UNIV +1
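
As an illustration, the final mode-selection step can be sketched as follows. The mode names, the Lagrangian multiplier, and the trial costs are illustrative assumptions, not the patented method itself:

```python
# Hedged sketch of rate-distortion-based mode selection for a depth
# macroblock. Mode names and lambda are illustrative assumptions.

def rd_cost(distortion, rate, lam=10.0):
    """Classic Lagrangian rate-distortion cost: J = D + lambda * R."""
    return distortion + lam * rate

def select_mode(candidates):
    """Pick the coding mode with the minimum RD cost.

    `candidates` maps a mode name to a (distortion, rate) pair
    obtained by trial-encoding the macroblock in that mode.
    """
    return min(candidates, key=lambda m: rd_cost(*candidates[m]))

# Example: intra, inter, and geometric-partitioning trial results.
costs = {
    "intra": (120.0, 8.0),       # higher distortion, cheap rate
    "inter": (60.0, 12.0),       # better prediction, more bits
    "geometric": (40.0, 13.0),   # wedge partition fits the edge best
}
best = select_mode(costs)
```

With the example numbers above, the geometric partitioning mode yields the lowest Lagrangian cost and would be chosen.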

Allocation method for optimal code rates of texture video and depth map based on models

The invention discloses a model-based method for allocating optimal code rates between a texture video and a depth map, mainly solving the rate-allocation problem between the texture video and the depth map in three-dimensional video encoding. The proposal is as follows: determine the relationship between virtual-view distortion and the quantization steps of the texture video and the depth map; calculate the optimal quantization steps of the texture video and the depth map using the relationships between the encoding rate and the quantization step of each; and encode the texture video and the depth map with those optimal quantization steps, thereby allocating the optimal coding rates between them. The method has low complexity and attains the optimal code rates of the texture video and the depth map. It can be used for rate allocation between the texture video and the depth map in three-dimensional video coding.
Owner:XIDIAN UNIV
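
The allocation idea can be sketched with simple stand-in models. The quadratic distortion model, the inverse-proportional rate models, and the grid search below are common simplifications chosen for illustration; the patent derives its own model relationships rather than using these:

```python
# Illustrative sketch of model-based rate allocation between texture
# and depth. All model forms and constants are assumptions.

def virtual_view_distortion(q_t, q_d, a=1.0, b=4.0):
    """Assumed model: distortion grows with both quantization steps."""
    return a * q_t ** 2 + b * q_d ** 2

def rate(q, k):
    """Assumed rate model: bits fall as the quantization step grows."""
    return k / q

def allocate(r_total, a=1.0, b=4.0, k_t=100.0, k_d=25.0):
    """Grid-search the (q_t, q_d) pair that meets the total rate
    budget with minimum modelled virtual-view distortion."""
    best = None
    for q_t in [s / 2 for s in range(2, 101)]:
        for q_d in [s / 2 for s in range(2, 101)]:
            if rate(q_t, k_t) + rate(q_d, k_d) <= r_total:
                d = virtual_view_distortion(q_t, q_d, a, b)
                if best is None or d < best[0]:
                    best = (d, q_t, q_d)
    return best
```

A closed-form Lagrangian solution would replace the grid scan in a real low-complexity implementation.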

Free viewpoint video depth map coding method and distortion predicting method thereof

The invention provides a free viewpoint video depth map coding method and a distortion prediction method thereof. The distortion prediction method comprises the following steps: A1, acquiring stereoscopic video texture maps and depth maps of more than two viewpoints; A2, using a view synthesis algorithm to synthesize an intermediate viewpoint between the current viewpoint to be coded and an adjacent viewpoint; A3, recording the synthesis characteristics of all pixels in the depth map of the current viewpoint to be coded and generating corresponding distortion prediction weights according to the synthesis results of step A2; A4, summing the distortion of all pixels in the coding block of the current depth map with a distortion prediction model, according to the synthesis characteristics of the pixels and the corresponding distortion prediction weights, to obtain the total distortion. The invention improves the accuracy of depth map distortion prediction during free viewpoint video depth map coding while greatly lowering the computational complexity of the distortion prediction algorithm.
Owner:SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV

Free viewpoint video depth map distortion prediction method and free viewpoint video depth map coding method

The invention provides a free viewpoint video depth map distortion prediction method and a free viewpoint video depth map coding method. The distortion prediction method includes the following steps: for a map block to be coded of a given viewpoint in a given frame of a multi-view three-dimensional video sequence, used for hole-filling synthesis, input the coded texture map block, the depth map block trial-coded in a preselected coding mode, the corresponding original texture map block, and the original depth map block; input a combined weight matrix of the map block to be coded, which marks the combining weights used when the synthetic-viewpoint texture is obtained from the left-viewpoint and right-viewpoint texture maps; calculate the distortion of the synthetic texture obtained after mapping and hole-filling synthesis using the pixel points in the depth map, and use it as a prediction value of the synthetic-viewpoint distortion; and sum the distortion prediction values of all pixels in the map block to be coded to obtain a prediction of the synthetic-viewpoint distortion caused by coding the block. The method predicts the coding distortion of the depth map accurately while avoiding repeated execution of the view synthesis algorithm, greatly lowering the computational complexity.
Owner:SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV
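
The shared idea of both distortion prediction abstracts above is to approximate each pixel's synthesis distortion by a precomputed weight times its depth coding error, so the view synthesis algorithm never has to be re-run per coding mode. A minimal sketch, with illustrative weights and a squared-error measure that are assumptions rather than the patents' exact model:

```python
# Hedged sketch of per-pixel weighted distortion prediction for a
# depth block. Weights would be derived once from a trial synthesis;
# here they are just example inputs.

def predicted_view_distortion(orig_depth, coded_depth, weights):
    """Sum w[i] * (d_orig[i] - d_coded[i])**2 over the block."""
    return sum(
        w * (o - c) ** 2
        for o, c, w in zip(orig_depth, coded_depth, weights)
    )
```

During mode decision, this sum would stand in for the true synthesized-view distortion in the rate-distortion cost.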

Rapid depth image intra-frame mode decision method for the 3D-HEVC (Three Dimensional High Efficiency Video Coding) standard

The invention provides a rapid depth image intra-frame mode decision method for the 3D-HEVC (Three Dimensional High Efficiency Video Coding) standard. The method comprises the following steps: selecting a coding unit in a depth image; performing the intra-frame prediction mode decision of the conventional HEVC standard and carrying out rough prediction; judging whether the optimal mode of the coding unit (CU) is the Planar mode or the DC mode; performing a rapid DMM1 search in the 3D-HEVC coder to obtain a decision formula; after deciding the DMM1 search according to the formula, performing a DMM4 search in the 3D-HEVC coder and calculating the minimum rate-distortion cost; and deciding the optimal intra-frame prediction mode and processing the next coding unit in the depth image. The current depth image intra-frame prediction mode is preliminarily decided by exploiting the characteristics of the depth image, and intra modes that rarely occur when coding certain depth images are skipped. The method has the prominent advantages of small calculation amount, low coding complexity, short coding time, and compression performance consistent with that of the original 3D-HEVC.
Owner:HENAN UNIVERSITY OF TECHNOLOGY
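
The skip decision can be sketched as follows. The interpretation that the depth modelling modes are searched only when the best conventional mode is not Planar or DC follows the usual logic of such fast methods (smooth blocks need no wedge partition); the function names and cost model are illustrative assumptions:

```python
# Hypothetical sketch of the fast intra decision: run the conventional
# HEVC rough mode decision first, and only search DMM1/DMM4 when the
# best conventional mode suggests the block is not smooth.

def best_conventional_mode(costs):
    """costs: mode name -> rough RD cost from the HEVC candidate list."""
    return min(costs, key=costs.get)

def should_search_dmm(costs):
    """Skip the expensive wedge/contour search for smooth blocks,
    whose best conventional mode is typically Planar or DC."""
    return best_conventional_mode(costs) not in ("Planar", "DC")
```

Skipping the DMM search for smooth blocks is where the reported coding-time saving would come from.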

Depth image coding method based on edge lossless compression

The invention provides a depth image coding method based on edge lossless compression, belonging to the field of 3D video coding. The method comprises the following steps: edge detection based on threshold values is conducted; chain codes are used to code the foreground edges and the background edges respectively; forward differential predictive coding is conducted to obtain edge pixel values; downsampling is conducted; forward differential predictive coding is conducted to obtain seed images; arithmetic coding is conducted to obtain residual sequences and chain code sequences; binary code streams are transmitted; arithmetic decoding is conducted; chain-code decoding is conducted on the foreground edges and the background edges; forward differential predictive decoding is conducted; the seed images are restored and represented sparsely; and reconstruction is conducted by a method based on partial differential equations together with a natural neighbor interpolation method to obtain the restored images. The method effectively exploits the characteristic that the smooth regions of depth images are separated by sharp edges, significantly improves depth image coding performance, and also improves the rendering quality of virtual viewpoints.
Owner:TAIYUAN UNIVERSITY OF SCIENCE AND TECHNOLOGY
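
One ingredient of the scheme, chain coding of edge contours, can be sketched with the common 8-direction Freeman convention (0 = east, counter-clockwise). This is a generic sketch, not the patent's exact chain-code alphabet:

```python
# Illustrative 8-direction (Freeman) chain coding of an edge contour.
# (dx, dy) offsets for directions 0..7.
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_code(points):
    """Encode a connected pixel path as a list of direction symbols."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS.index((x1 - x0, y1 - y0)))
    return codes

def decode_chain(start, codes):
    """Reconstruct the path from the start pixel and the chain codes."""
    pts = [start]
    for c in codes:
        dx, dy = DIRS[c]
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts
```

Because only the start pixel and a stream of 3-bit symbols are stored, the edge geometry is preserved losslessly at low cost.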

Depth map encoding method and apparatus thereof, and depth map decoding method and apparatus thereof

Disclosed is a depth map frame decoding method including reconstructing a color frame obtained from a bitstream based on encoding information of the color frame; splitting a largest coding unit of a depth map frame obtained from the bitstream into one or more coding units based on split information of the depth map frame; splitting the one or more coding units into one or more prediction units for prediction decoding; determining whether to split a current prediction unit into at least one partition and decode the current prediction unit by obtaining information indicating whether to split the current prediction unit into the at least one or more partitions from the bitstream; if it is determined that the current prediction unit is to be decoded by being split into the at least one or more partitions, obtaining prediction information of the one or more prediction units from the bitstream and determining whether to decode the current prediction unit by using differential information indicating a difference between a depth value of the at least one or more partitions corresponding to an original depth map frame and a depth value of the at least one or more partitions predicted from neighboring blocks of the current prediction unit; and decoding the current prediction unit by using the differential information based on whether to split the current prediction unit into the at least one or more partitions and whether to use the differential information.
Owner:SAMSUNG ELECTRONICS CO LTD
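
The "differential information" idea in the decoding method above can be sketched as follows: each partition of a prediction unit is reconstructed as a neighbour-predicted depth value plus a signalled delta. The prediction rule used here (mean of neighbouring depths) is an illustrative stand-in for the codec's actual predictor:

```python
# Hedged sketch: reconstruct a constant-depth partition from its
# neighbour prediction and the transmitted difference value.

def reconstruct_partition(neighbor_depths, delta):
    """Predict a constant depth from neighbouring blocks, then apply
    the transmitted difference."""
    predicted = sum(neighbor_depths) // len(neighbor_depths)
    return predicted + delta
```

Signalling one delta per partition instead of per-pixel residuals is what makes this style of depth coding cheap for piecewise-constant blocks.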

DCT-based 3D-HEVC fast intra-frame prediction decision-making method

The invention discloses a DCT-based 3D-HEVC fast intra-frame prediction decision-making method. The method comprises the following steps: firstly, calculating the DCT matrix of the current prediction block using the DCT formula; then, judging whether the upper-left corner coefficients of the current coefficient block indicate an edge, and further judging whether the lower-right corner coefficients indicate an edge; and finally, deciding whether the DMMs need to be added into the intra-frame prediction mode candidate list according to whether edges are present. In 3D-HEVC, a depth map is introduced to achieve better view synthesis, and for depth map intra-frame prediction coding the joint collaborative team on 3D video coding extension development proposed four new kinds of intra-frame prediction modes (DMMs) for the depth map. Since the DCT has the characteristic of energy aggregation, whether a coding block contains an edge can be clearly distinguished during 3D-HEVC depth map coding. The method has the advantages of low computational complexity, short coding time, and good video reconstruction quality.
Owner:HANGZHOU DIANZI UNIV
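
The energy-aggregation test can be sketched as: take a 2-D DCT of the block and check how much energy lies away from the top-left (DC) corner. The naive DCT and the threshold below are illustrative assumptions, not the patent's decision formula:

```python
# Sketch of a DCT-based edge test for a depth block.
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block (O(n^4), fine for a sketch)."""
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

def has_edge(block, thresh=10.0):
    """Flag an edge when energy outside the DC coefficient is large:
    a flat block concentrates all its energy at (0, 0)."""
    coef = dct2(block)
    n = len(coef)
    hf = sum(abs(coef[u][v]) for u in range(n) for v in range(n)
             if u + v > 0)
    return hf > thresh
```

Only blocks flagged by such a test would pay the cost of evaluating the DMMs.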

Method for coding and decoding a three-dimensional video depth map based on segmentation into irregular homogeneous blocks

The invention relates to a method for coding and decoding a three-dimensional video depth map based on segmentation into irregular homogeneous blocks. The method comprises the following steps: (1) inputting a frame of depth map and the corresponding texture image; (2) coding the texture video with a standard coding method; (3) carrying out superpixel segmentation on the reconstructed texture video; (4) dividing the depth map into irregular homogeneous blocks; (5) calculating, for each irregular homogeneous block, the depth pixel value with minimum distortion in the synthesis region, which represents the whole block; (6) losslessly coding the depth map represented by the irregular homogeneous blocks; (7) receiving and decoding a frame of three-dimensional video image code stream; (8) carrying out superpixel segmentation on the decoded video image; (9) reconstructing the decoded depth map; and (10) enhancing the quality of the reconstructed depth map with a false-edge filtering method. By fully considering the piecewise-smooth internal characteristic of the depth map and the synthesis distortion of the virtual viewpoint, the method improves depth map coding efficiency, decreases the computational complexity of depth map coding, and is compatible with any standard depth map coding method.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
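
Step (5) can be sketched minimally: every pixel of an irregular homogeneous block is replaced by one representative depth value. Using the value minimising squared error within the block (the rounded mean) is an illustrative choice; the patent instead minimises synthesis distortion in the rendered view:

```python
# Hedged sketch: represent a homogeneous block by one depth value.

def representative_depth(depths):
    """The best constant under squared error is the (rounded) mean."""
    return round(sum(depths) / len(depths))

def flatten_block(depths):
    """Replace every pixel of the block by the representative value."""
    rep = representative_depth(depths)
    return [rep] * len(depths)
```

A flattened block is then trivially cheap to code losslessly, since only the segmentation and one value per block must be transmitted.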

Three-dimensional video depth map coding method based on just distinguishable parallax error estimation

The invention discloses a three-dimensional video depth map coding method based on just-distinguishable parallax error estimation. The method comprises the following steps: (1) inputting a frame of three-dimensional video depth map and the corresponding texture image; (2) synthesizing the texture image of a virtual viewpoint; (3) calculating the just-distinguishable error map of the virtual-viewpoint texture image; (4) calculating the just-distinguishable parallax error range of the three-dimensional video depth map; (5) performing intra-frame and inter-frame prediction on the depth map and selecting the prediction mode with minimum prediction residual energy; (6) adjusting the prediction residual of the depth map to obtain the prediction residual block with minimum variance; and (7) encoding the three-dimensional video depth map of the current frame. The method greatly reduces the code rate of depth map coding while keeping the PSNR of the virtual synthesized video image unchanged, and also substantially improves the subjective quality of the virtual synthesized viewpoint.
Owner:ZHEJIANG UNIV
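
The core of steps (4)-(6) is that a depth residual may be shrunk toward zero as long as the change stays within the just-distinguishable parallax error range, lowering residual energy (and hence rate) without visibly altering the synthesized view. The clipping rule below is an illustrative stand-in for the patent's adjustment procedure:

```python
# Hedged sketch: suppress residuals below the per-pixel
# just-distinguishable error bound.

def adjust_residual(residual, jnd_range):
    """Zero out residuals whose magnitude is within the per-pixel
    just-distinguishable bound; keep the rest unchanged."""
    return [0 if abs(r) <= t else r for r, t in zip(residual, jnd_range)]
```

Zeroed residuals cost almost no bits after transform and entropy coding, which is where the rate saving comes from.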

Method of reducing stereo video depth map coding complexity

The present invention discloses a method of reducing stereo video depth map coding complexity, mainly the complexity of coding depth map edges in 3D-HEVC. The method comprises the steps of: using a K-means clustering method to divide the pixels of an input depth map PU block into two clearly distinct categories and generating a K-means clustering template; calculating the similarity matching degree between the K-means clustering template and the wedge segmentation templates generated at coding initialization, and recording the optimal similarity matching degree and the index value of the corresponding wedge segmentation template; and, according to the optimal similarity matching degree, determining a search radius for the optimal wedge segmentation template, calculating the rate-distortion of all wedge segmentation templates within the search radius, and finding the optimal wedge segmentation template with the smallest rate-distortion. The method abandons the search mode that requires storing the wedge nodes at the coding/decoding end in advance, saves system cache, reduces the computational complexity of the DMM1 mode, and saves 7.1% of the total coding time on average while guaranteeing coding quality.
Owner:HUAZHONG UNIV OF SCI & TECH
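
The first two steps can be sketched as: split the PU's depth pixels into two clusters with 1-D 2-means, yielding a binary template directly comparable with the wedgelet templates. Initialisation and iteration count here are simple illustrative choices:

```python
# Hedged sketch of the clustering-template step of the method above.

def kmeans2(pixels, iters=10):
    """Two-class 1-D k-means; returns a 0/1 label per pixel."""
    c0, c1 = min(pixels), max(pixels)
    labels = [0] * len(pixels)
    for _ in range(iters):
        labels = [0 if abs(p - c0) <= abs(p - c1) else 1 for p in pixels]
        lo = [p for p, l in zip(pixels, labels) if l == 0]
        hi = [p for p, l in zip(pixels, labels) if l == 1]
        if lo:
            c0 = sum(lo) / len(lo)
        if hi:
            c1 = sum(hi) / len(hi)
    return labels

def template_similarity(a, b):
    """Fraction of positions where two binary templates agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)
```

Wedge templates would then be ranked by `template_similarity` against the clustering template, and only the nearest ones evaluated in full rate-distortion terms.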

K-means clustering based depth image encoding method

The invention relates to a K-means clustering based depth image encoding method, belonging to the field of depth image encoding and decoding in 3D video. The method comprises the steps of: segmenting a depth image into n classes by K-means clustering; extracting the boundary of the new depth image formed by each class after segmentation, entropy-coding the boundaries, and transmitting them to the decoding terminal; downsampling the non-boundary region pixel values and entropy-coding the downsampled values; transmitting the encoded bit stream to the decoding terminal; recovering each class of data with a partial differential equation (PDE) method to acquire n reconstructed depth images at the decoding terminal; overlaying the n reconstructed depth images to form a complete depth image; and synthesizing the required virtual viewpoint image with depth-image-based view synthesis technologies. The quality of a virtual viewpoint synthesized under the guidance of a depth image compressed with this scheme is higher than that achieved with the JPEG and JPEG2000 compression standards.
Owner:TAIYUAN UNIVERSITY OF SCIENCE AND TECHNOLOGY