
An audio and video synchronization output method for multi-process control

An audio and video synchronized-output technology applied in the field of digital audio and video. It addresses the problems of heavy storage-resource consumption, loss of audio/video synchronization, and difficulty in controlling audio and video decoding and output, and achieves the effects of saving storage resources and low implementation complexity.

Inactive Publication Date: 2008-05-28
CENT ACADEME OF SVA GROUP

AI Technical Summary

Problems solved by technology

[0002] A very important technology in the field of digital TV is the synchronized output of audio and video. The popular video coding standards all adopt a hybrid coding/decoding approach that exploits the temporal and spatial redundancy of video frames: through prediction, transform, quantization and entropy coding, video frames are encoded into different frame types, namely intra-coded frames (I frames), unidirectionally predicted frames (P frames) and bidirectionally predicted frames (B frames). As a result, the transmission and decoding of the encoded video data are not aligned in time. Moreover, audio and video are encoded and transmitted separately, yet they must be output and played back synchronously. Without a good control method, the audio and video outputs easily drift out of synchronization.
To solve this problem, the usual approach is to use a large buffer to hold the encoded data and the decoded image and audio frames, relying on the fluctuation of the data queues in the buffer to absorb the timing differences. This, however, consumes a great deal of the system's storage resources, and it is difficult to control the decoding and output of audio and video independently: once the two paths are separated, video and audio can each be output normally on their own, but it is hard to keep them effectively synchronized with each other.



Embodiment Construction

[0020] The multi-process-controlled audio and video synchronized output method of the present invention is described in further detail below.

[0021] The multi-process-controlled audio and video synchronized output method of the present invention is implemented on the Analog Devices Blackfin BF533 DSP platform running an embedded operating system. The concrete implementation steps are as follows:

[0022] Step 1: Five processes are set up under the operating system: a system-layer demultiplexing process, a video decoding process, an audio decoding process, a video synchronized-output process and an audio synchronized-output process;
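
A minimal sketch of this process layout, assuming POSIX threads as a stand-in for the tasks of the embedded operating system on the BF533; the entry-function names are illustrative only and are not taken from the patent text.

/* Sketch only: pthreads stand in for the embedded-OS tasks on the BF533. */
#include <pthread.h>

static void *demux_task(void *arg)        { (void)arg; return 0; }  /* system-layer demultiplexing */
static void *video_decode_task(void *arg) { (void)arg; return 0; }  /* video ES decoding           */
static void *audio_decode_task(void *arg) { (void)arg; return 0; }  /* audio ES decoding           */
static void *video_output_task(void *arg) { (void)arg; return 0; }  /* video synchronized output   */
static void *audio_output_task(void *arg) { (void)arg; return 0; }  /* audio synchronized output   */

int main(void)
{
    void *(*entries[5])(void *) = {
        demux_task, video_decode_task, audio_decode_task,
        video_output_task, audio_output_task
    };
    pthread_t tid[5];

    for (int i = 0; i < 5; i++)                 /* start the five processes of Step 1 */
        if (pthread_create(&tid[i], 0, entries[i], 0) != 0)
            return 1;
    for (int i = 0; i < 5; i++)                 /* wait for them to finish            */
        pthread_join(tid[i], 0);
    return 0;
}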

[0023] Step 2: The system-layer demultiplexing process demultiplexes the audio/video transport stream, and the transport-stream data is unpacked into two parts: (1) the video ES (Elementary Stream) and the audio ES, and (2) the time information: Video ES strea...
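
A minimal sketch of how the time information can be read from a demultiplexed PES packet, assuming standard MPEG-2 PES header syntax; the function name is hypothetical, and the (truncated) patent text above does not give this parsing code.

#include <stdint.h>
#include <stddef.h>

/* Return 1 and write the 33-bit PTS (in 90 kHz ticks) if the PES header carries one. */
static int pes_extract_pts(const uint8_t *pes, size_t len, uint64_t *pts)
{
    if (len < 14 || pes[0] != 0x00 || pes[1] != 0x00 || pes[2] != 0x01)
        return 0;                                     /* not a valid PES start code    */
    if ((pes[7] & 0x80) == 0)
        return 0;                                     /* PTS_DTS_flags: no PTS present */
    const uint8_t *p = pes + 9;                       /* first byte of the PTS field   */
    *pts = ((uint64_t)((p[0] >> 1) & 0x07) << 30) |   /* PTS[32..30] */
           ((uint64_t) p[1]                << 22) |   /* PTS[29..22] */
           ((uint64_t)(p[2] >> 1)          << 15) |   /* PTS[21..15] */
           ((uint64_t) p[3]                <<  7) |   /* PTS[14..7]  */
           ((uint64_t)(p[4] >> 1));                   /* PTS[6..0]   */
    return 1;
}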



Abstract

The invention provides an audio/video synchronized output method under multi-process control. Five processes are established: system-layer demultiplexing, video decoding, audio decoding, video synchronized output and audio synchronized output. In the system-layer demultiplexing process the transport stream is demultiplexed and unpacked into the audio and video elementary streams and the time information; the video elementary stream and the audio elementary stream are delivered to the video decoding process and the audio decoding process respectively for decoding, and the time information is used to update the local system clock. The time information and the storage-space information of the decoded data in the video and audio decoding processes are passed together to the video synchronized-output process and the audio synchronized-output process. In those output processes, the time information of the decoded data is compared with the local system clock, and the data whose output time has arrived is selected for output. The method does not require a large buffer to hold the encoded data and the decoded image and audio frames, so it saves the system's storage resources; it is developed on an embedded operating system and has low implementation complexity.
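
A minimal sketch of the output-side comparison described in the abstract, assuming a 90 kHz local system clock and a small tolerance window; the type, constant and function names are hypothetical and are not taken from the patent.

#include <stdint.h>

typedef struct {
    uint64_t pts;    /* presentation time of the decoded data, 90 kHz ticks */
    void    *data;   /* decoded video frame or audio block                  */
} decoded_unit_t;

#define SYNC_WINDOW 1800u   /* ~20 ms at 90 kHz */

typedef enum { UNIT_WAIT, UNIT_OUTPUT, UNIT_DROP } sync_action_t;

/* What the synchronized-output process does with the unit at the head of its
 * queue, given the current value of the local system clock (stc).           */
static sync_action_t sync_decide(const decoded_unit_t *u, uint64_t stc)
{
    if (u->pts > stc + SYNC_WINDOW)
        return UNIT_WAIT;    /* too early: keep it for a later tick */
    if (u->pts + SYNC_WINDOW < stc)
        return UNIT_DROP;    /* too late: discard to catch up       */
    return UNIT_OUTPUT;      /* output time has arrived             */
}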

Description

Technical field

[0001] The invention belongs to the technical field of digital audio and video, and in particular relates to a multi-process-controlled audio and video synchronized output method.

Background technique

[0002] A very important technology in the field of digital TV is the synchronized output of audio and video. The popular video coding standards all adopt a hybrid coding/decoding approach that exploits the temporal and spatial redundancy of video frames: through prediction, transform, quantization and entropy coding, video frames are encoded into different frame types, namely intra-coded frames (I frames), unidirectionally predicted frames (P frames) and bidirectionally predicted frames (B frames). As a result, the transmission and decoding of the encoded video data are not aligned in time. Moreover, audio and video are encoded and transmitted separately, yet they must be output and played back synchronously during playbac...


Application Information

Patent Type & Authority: Application (China)
IPC (8): H04N7/54, H04N7/56
Inventor: 张钰, 于玥
Owner: CENT ACADEME OF SVA GROUP