
A video segmentation method and system for translation

A video segmentation method and system, applied in the field of video segmentation for video translation, which addresses problems such as wasted translation time and the failure of existing methods to meet translation needs.

Active Publication Date: 2020-12-22
IOL WUHAN INFORMATION TECH CO LTD


Problems solved by technology

However, this brings up another problem. Videos often contain many dialogue-free scenes. These scenes do not need to be translated, but the translator must wait until the sound stream or dialogue resumes. This waiting is unavoidable (the translator cannot predict when the next sound stream will appear, and so cannot fast-forward or skip ahead), which wastes translation time.
[0004] Although a variety of video segmentation algorithms exist in the prior art, none of them can meet the above translation needs.




Embodiment Construction

[0032] Various algorithms exist in the prior art for segmenting video content. However, most of these methods segment based on attributes of the video itself, such as picture recognition, scene recognition, or person recognition, and the result is typically a segmentation of the continuous pictures of a given scene, regardless of whether a sound stream is present in that scene. Such segmentation is not suitable for translation: within a scene composed of continuous pictures, some parts may contain dialogue and some may not, and for the parts without dialogue the translator can only wait.

[0033] The method shown in Figure 1 avoids this phenomenon.

[0034] In Figure 1, for the text video (1), the sound stream (2) in it is identified, and detection begins for the initial start point (20), middle pause point (21), middle start ...
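The patent describes detecting an initial start point, middle pause points, and middle start points in the sound stream, but does not specify how. The sketch below is a hypothetical illustration of one common way to obtain such time nodes: a frame-energy threshold over the audio samples, where each run of loud frames yields a (start, end) pair and the gaps between runs are the pause points. The function name, parameters, and threshold values are assumptions, not taken from the patent.

```python
import numpy as np

def find_sound_nodes(samples, rate, frame_ms=20, threshold=0.01, min_gap_s=1.0):
    """Return (start, end) time pairs for regions that contain sound.

    Hypothetical implementation of the time-node detection the patent
    describes: frames whose mean energy exceeds `threshold` are treated
    as part of the sound stream; everything else is a pause.
    """
    frame = int(rate * frame_ms / 1000)
    n = len(samples) // frame
    energy = np.array([np.mean(samples[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    loud = energy > threshold

    nodes, start = [], None
    for i, is_loud in enumerate(loud):
        t = i * frame / rate
        if is_loud and start is None:
            start = t                      # initial or middle start point
        elif not is_loud and start is not None:
            nodes.append((start, t))       # middle pause point closes a segment
            start = None
    if start is not None:
        nodes.append((start, n * frame / rate))

    # Merge segments separated by pauses shorter than min_gap_s, so very
    # brief silences inside one dialogue do not split it.
    merged = [nodes[0]] if nodes else []
    for s, e in nodes[1:]:
        if s - merged[-1][1] < min_gap_s:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return merged

# Demo on a synthetic signal at 8 kHz: 1 s silence, 2 s tone, 2 s silence, 1 s tone.
sig = np.concatenate([
    np.zeros(8000),
    0.5 * np.sin(2 * np.pi * 440 * np.arange(16000) / 8000),
    np.zeros(16000),
    0.5 * np.sin(2 * np.pi * 440 * np.arange(8000) / 8000),
])
segs = find_sound_nodes(sig, 8000)
```

On this synthetic input the detector yields two sub-segments, roughly (1.0, 3.0) and (5.0, 6.0) seconds, which correspond to the parts a translator would actually need to hear.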



Abstract

The invention first discloses a video partition method for translation. The method partitions a video into segments that require translation and segments that do not. Unlike existing video partition algorithms, this method obtains multiple time nodes by detecting the sound stream in the video and, on the basis of those time nodes, automatically partitions the video file into multiple sub-segments. The invention further discloses a video translation method that uses the above partition method. With this translation method, the work of converting video and audio files into text files is eliminated, the video parts that do not require translation can be skipped, the workload of video translation is greatly reduced, and translation quality is maintained.
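The abstract states that the video file is automatically partitioned at the detected time nodes but does not say how the cuts are made. A minimal sketch of one plausible realization is to emit one stream-copy ffmpeg command per sub-segment; the helper name, file names, and the choice of ffmpeg are illustrative assumptions, not details from the patent.

```python
def build_cut_commands(nodes, src="input.mp4"):
    """Build one ffmpeg stream-copy command per sub-segment to translate.

    Hypothetical helper: `nodes` is a list of (start, end) times in
    seconds, such as those produced by sound-stream detection. Using
    -c copy avoids re-encoding, so each cut is fast and lossless.
    """
    cmds = []
    for i, (start, end) in enumerate(nodes):
        cmds.append(
            f"ffmpeg -i {src} -ss {start:.2f} -to {end:.2f} "
            f"-c copy part_{i:03d}.mp4"
        )
    return cmds

# Two dialogue segments detected at 1.0-3.0 s and 5.0-6.0 s:
cmds = build_cut_commands([(1.0, 3.0), (5.0, 6.0)], src="film.mp4")
```

Each resulting `part_NNN.mp4` would contain only material with a sound stream, so the translator never waits through dialogue-free footage.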

Description

Technical field

[0001] The invention belongs to the field of video segmentation, and in particular relates to a video segmentation method for video translation.

Background technique

[0002] To enable audiences who speak different languages to enjoy movies and TV series from other countries, the video language of those movies and TV series must be translated. The process mainly comprises: first converting the sound files in the movies and TV series into text (speech recognition plus manual proofreading, or purely manual transcription), then handing the text to translators for translation, and finally, after proofreading, embedding the translation as subtitles in the original movie or TV series.

[0003] However, in the above process, converting the sound file into a text file involves a huge workload. To avoid this step, translators can instead translate while watching the video. However, this brings up another pr...

Claims


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): G06F40/58H04N21/234H04N21/44
Inventor 郑丽华
Owner IOL WUHAN INFORMATION TECH CO LTD