8739 results about "Image coding" patented technology

Apparatus and method for optimized compression of interlaced motion images

An interlaced image processing module and corresponding method facilitate improved processing of interlaced motion images. In one embodiment, the interlaced image processing module receives image data frames having interlaced first and second fields and produces a reference field and an error field. The reference field corresponds to the still image content of the interlaced frame, whereas the error field corresponds to the motion content of the interlaced frame, particularly the motion between fields. Motion between fields is thus represented in the error field without redundant representation of the still image content already provided by the first field. Where there is little motion between fields, the error terms are small, so the predictor preserves the coding efficiency provided by any auto-correlation in the image. Further, the interlaced image processing method does not rely upon pixel group classification, and thus avoids classification errors and the loss of coding efficiency caused by still image content in motion-classified blocks. Finally, problems presented by relative motion between fields are avoided, as are local artifacts. Another embodiment transforms the interlaced fields into frame data having a high-frequency field and a low-frequency field.
Owner:QUVIS +1
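
The field-splitting idea described above can be illustrated with a short sketch. This is a minimal illustration, not the patented implementation: the field ordering (even rows as the first field), the identity predictor used to form the error field, and the averaging transform for the low-/high-frequency embodiment are all assumptions introduced here.

```python
import numpy as np

def split_interlaced_frame(frame):
    """Split an interlaced frame into its two fields.
    Assumes even rows belong to the first field and odd rows to the second."""
    first_field = frame[0::2, :].astype(np.float64)
    second_field = frame[1::2, :].astype(np.float64)
    return first_field, second_field

def reference_and_error_fields(frame):
    """First embodiment (sketch): the first field serves as the reference
    (still) field; the error field is the difference between the second
    field and a prediction from the reference (here, the co-located sample)."""
    first_field, second_field = split_interlaced_frame(frame)
    reference = first_field
    error = second_field - first_field  # small when there is little inter-field motion
    return reference, error

def low_high_frequency_fields(frame):
    """Second embodiment (sketch): transform the two fields into a
    low-frequency (average) field and a high-frequency (difference) field."""
    first_field, second_field = split_interlaced_frame(frame)
    low = (first_field + second_field) / 2.0
    high = (first_field - second_field) / 2.0
    return low, high

if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(480, 720))
    ref, err = reference_and_error_fields(frame)
    low, high = low_high_frequency_fields(frame)
    print(ref.shape, err.shape, low.shape, high.shape)
```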

Method for representing real-time motion

A system 100 for tracking the movement of multiple objects within a predefined area using a set of overhead X-Y tracking cameras 24 with attached frequency-selective filters 24f. Perspective Z filming camera sets 30 are also employed. Objects to be tracked, such as player 17, have been marked with some form of frequency-selective reflective material such as an ink. Typical markers include patches 7r and 7l, sticker 9 and tape 4a, as well as additional body-joint markers 17af through 17l. System 100 radiates selected energy 23a throughout the predefined tracking area; this energy is specifically chosen to reflect off the reflective materials used to mark, for instance, player 17. The reflected energy is then received by tracking cameras 24, while all other ambient light is blocked by filter 24f. Local computer system 60 continuously captures images from the tracking cameras 24, which contain only the minimal information created by the reflected energy. System 60 efficiently locates the markings on the multiple objects and uses this location information to determine, for each marking, its angle of rotation, angle of azimuth and distance from a designated origin 17o local to player 17. Local origin 17o is then expressed as a three-dimensional coordinate with respect to the origin of the playing venue 2a. The continuous stream of tracked three-dimensional coordinates, defining the body joints on players such as player 17, is then transmitted to a remote computer where it can be used to drive a graphic re-animation of the object movement. Along with this re-animation, additional performance measurements may be derived from the continuous stream and automatically made available in real time.
Owner:MAXX HLDG
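
A sketch of the geometric bookkeeping described above (expressing each marker relative to a local origin on the player, and that local origin relative to the venue origin) might look like the following. The coordinate conventions, units and function names are hypothetical; the patent's actual camera-to-3-D computation is not reproduced here.

```python
import math

def marker_polar_from_local_origin(marker_xyz, local_origin_xyz):
    """Express a marker position relative to a designated local origin
    (e.g. a point on the tracked player) as azimuth, elevation and distance.
    Axis conventions are illustrative assumptions."""
    dx = marker_xyz[0] - local_origin_xyz[0]
    dy = marker_xyz[1] - local_origin_xyz[1]
    dz = marker_xyz[2] - local_origin_xyz[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(dy, dx))                  # rotation about the vertical axis
    elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return azimuth, elevation, distance

def local_origin_in_venue_coordinates(local_origin_xyz, venue_origin_xyz):
    """Re-express the local origin as a 3-D coordinate relative to the
    playing-venue origin, ready to be streamed to a remote computer."""
    return tuple(p - v for p, v in zip(local_origin_xyz, venue_origin_xyz))

if __name__ == "__main__":
    player_origin = (12.0, 3.5, 0.9)   # metres, hypothetical values
    venue_origin = (0.0, 0.0, 0.0)
    wrist_marker = (12.4, 3.1, 1.3)
    print(marker_polar_from_local_origin(wrist_marker, player_origin))
    print(local_origin_in_venue_coordinates(player_origin, venue_origin))
```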

Video, audio and graphics decode, composite and display system

A video, audio and graphics system uses multiple transport processors to receive in-band and out-of-band MPEG transport streams, to perform PID and section filtering as well as DVB and DES decryption, and to de-multiplex them. The system processes the PES into digital audio, MPEG video and message data, and is capable of decoding multiple MPEG slices concurrently. Graphics windows are blended in parallel and then blended with video using alpha blending. During graphics processing, a single-port SRAM is used equivalently to a dual-port SRAM. The video may include both analog video, e.g., NTSC/PAL/SECAM/S-video, and digital video, e.g., MPEG-2 video in SDTV or HDTV format. The system has a reduced-memory mode in which video images are reduced by half in the horizontal direction only during decoding. The system is capable of receiving and processing digital audio signals such as MPEG Layer 1 and Layer 2 audio and Dolby AC-3 audio, as well as PCM audio signals. The system includes a memory controller, and a system bridge controller to interface a CPU with devices internal to the system as well as peripheral devices, including PCI devices and I/O devices such as RAM, ROM and flash memory devices. The system is capable of displaying video and graphics in both the high-definition (HD) mode and the standard-definition (SD) mode. The system may output an HDTV video while converting the HDTV video and providing it as another output having an SDTV format or another HDTV format.
Owner:AVAGO TECH INT SALES PTE LTD
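
Among the many functions listed, the graphics/video compositing step is straightforward to illustrate. The sketch below applies the standard alpha "over" operator in floating point; the actual system blends multiple graphics windows in parallel in hardware, which this simplified model (with assumed RGBA/RGB array shapes) does not capture.

```python
import numpy as np

def alpha_blend(graphics_rgba, video_rgb):
    """Blend an RGBA graphics window over a decoded video frame using the
    standard 'over' operator. A simplified floating-point model of what a
    hardware compositor would do per pixel."""
    alpha = graphics_rgba[..., 3:4].astype(np.float64) / 255.0
    graphics_rgb = graphics_rgba[..., :3].astype(np.float64)
    video = video_rgb.astype(np.float64)
    blended = alpha * graphics_rgb + (1.0 - alpha) * video
    return np.clip(blended, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    video = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
    graphics = np.random.randint(0, 256, size=(1080, 1920, 4), dtype=np.uint8)
    out = alpha_blend(graphics, video)
    print(out.shape, out.dtype)
```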

Real-time video coding/decoding

A video codec for real-time encoding/decoding of digitized video data with high compression efficiency, comprising a frame encoder receiving input frame pixels; a codec setting unit for setting and storing coding setting parameters; a CPU load controller for controlling the desired frame-encoding time and CPU loading; a rate controller for controlling frame size; a coding statistics memory for storing frequency tables for arithmetic coding of bitstream parameters; and a reference frame buffer for storing reference frames. The frame encoder comprises a motion estimation unit, a frame header coding unit, a coded frame reconstruction and storage unit, and a macroblock encoding unit. The macroblock encoding unit provides calculation of the texture prediction and prediction error, transformation of the texture prediction error and quantization of the transform coefficients, calculation of the motion vector prediction and prediction error, and arithmetic context modeling for motion vectors, header parameters and transform coefficients. The codec also includes a deblocking unit, which may be part of the encoder or decoder, for processing video data to eliminate blocking effects from restored data encoded at a high distortion level; an internal resize unit providing matching downscaling of a frame before encoding and upscaling of the decoded frame according to the coding setting parameters; and a noise suppression unit.
Owner:BEAMR IMAGING LTD
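
The macroblock pipeline described above (prediction error, transform, quantization) can be sketched generically. The code below uses a 2-D DCT and uniform scalar quantization purely for illustration; the codec's actual transform, quantizer design and arithmetic coder are not specified in the abstract and are not reproduced here.

```python
import numpy as np
from scipy.fftpack import dct, idct

def encode_block(block, prediction, qstep):
    """Illustrative macroblock-encoding steps: compute the texture
    prediction error, transform it with a 2-D DCT and quantize the
    coefficients with a uniform scalar quantizer."""
    residual = block.astype(np.float64) - prediction.astype(np.float64)
    coeffs = dct(dct(residual, axis=0, norm='ortho'), axis=1, norm='ortho')
    quantized = np.round(coeffs / qstep).astype(np.int32)
    return quantized

def decode_block(quantized, prediction, qstep):
    """Inverse of encode_block: dequantize, inverse-transform and add the
    prediction back to reconstruct the block."""
    coeffs = quantized.astype(np.float64) * qstep
    residual = idct(idct(coeffs, axis=1, norm='ortho'), axis=0, norm='ortho')
    return residual + prediction

if __name__ == "__main__":
    block = np.random.randint(0, 256, size=(16, 16))
    prediction = np.full((16, 16), block.mean())
    q = encode_block(block, prediction, qstep=8.0)
    rec = decode_block(q, prediction, qstep=8.0)
    print("mean reconstruction error:", np.abs(rec - block).mean())
```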

Reducing blocking and ringing artifacts in low-bit-rate coding

A technique to reduce blocking and ringing artifacts in low-bit-rate, block-based video coding is applied to each reconstructed frame output from the decoder. For each pixel block of a reconstructed frame, its DC value and the DC values of the surrounding eight neighbor blocks are exploited to predict AC coefficients that might have been lost in the quantization stage of the encoding process. The predicted AC coefficients are used to classify each reconstructed block as either a low-activity or a high-activity block. Low-pass filtering is then adaptively applied according to the classification of the block. Strong low-pass filtering is applied in low-activity blocks, where blocking artifacts are most noticeable, whereas weak low-pass filtering is applied in high-activity blocks, where ringing noise as well as blocking artifacts may exist. The adaptive filtering reduces ringing noise as well as blocking artifacts without introducing undesired blur. In low-activity blocks, the blocking artifacts are reduced by one-dimensional horizontal and vertical low-pass filters, which are selectively applied in the horizontal and/or vertical direction depending on the locations and absolute values of the predicted AC coefficients. In high-activity blocks, de-blocking and de-ringing are conducted by a single filter, applied horizontally and/or vertically, which keeps the architecture simple.
Owner:SEIKO EPSON CORP
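
The classification step can be sketched as follows: predict a few low-frequency AC coefficients of a block from the 3x3 grid of DC values (the block and its eight neighbors), then label the block low- or high-activity from their magnitudes. The predictor weights, the threshold and the set of predicted coefficients here are illustrative assumptions, not the values used in the patented method.

```python
import numpy as np

# Hypothetical scaling constant; the actual predictor weights are not reproduced here.
K = 1.0

def predict_ac_from_dc(dc3x3):
    """Predict a few low-frequency AC coefficients of the centre block from
    the 3x3 grid of DC values (centre block plus its eight neighbours),
    using simple gradient-style differences for illustration."""
    up, down = dc3x3[0, 1], dc3x3[2, 1]
    left, right = dc3x3[1, 0], dc3x3[1, 2]
    ac01 = K * (left - right)   # horizontal low-frequency component
    ac10 = K * (up - down)      # vertical low-frequency component
    ac11 = K * ((dc3x3[0, 0] + dc3x3[2, 2]) - (dc3x3[0, 2] + dc3x3[2, 0]))
    return np.array([ac01, ac10, ac11])

def classify_block(dc3x3, threshold=10.0):
    """Classify the centre block as low- or high-activity from the magnitude
    of the predicted AC coefficients; the caller would then choose a strong
    or weak low-pass filter accordingly."""
    predicted = predict_ac_from_dc(dc3x3)
    return "high-activity" if np.max(np.abs(predicted)) > threshold else "low-activity"

if __name__ == "__main__":
    flat_region = np.full((3, 3), 128.0)
    edge_region = np.array([[40, 40, 200], [40, 40, 200], [40, 40, 200]], dtype=float)
    print(classify_block(flat_region), classify_block(edge_region))
```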