Method for detecting and positioning text area in video

A technology for detecting text areas in video, applied in the fields of instruments, character and pattern recognition, computer components, etc., to achieve fast detection and positioning of text.

Active Publication Date: 2012-07-04
北京中科阅深科技有限公司

AI Technical Summary

Problems solved by technology

However, with the rapid development of video content today, these methods can no longer meet people's needs.

Detailed Description of the Embodiments

[0017] In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail below with reference to specific embodiments and accompanying drawings.

[0018] The principle of the method for detecting and locating text in a video according to the present invention is mainly as follows: the input video is sampled; edge detection is performed on the sampled video images; a text confidence map is generated from the detected edge image; text candidate regions are extracted from the generated text confidence map; the text candidate regions of multiple frames whose candidate regions are approximately the same are fused to obtain the final text regions; and the text regions are divided into lines according to their horizontal and vertical projections.
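
The individual steps are not given as code in this summary; the following is a minimal sketch of the sampling, edge-detection, confidence-map and candidate-extraction steps, assuming OpenCV and NumPy, with Canny edges and a local edge-density score standing in for the unspecified confidence measure. Function names and thresholds are illustrative, not the patent's own.

```python
import cv2
import numpy as np

def sample_frames(video_path, interval_sec=1.0):
    """Sample the input video at equal time intervals."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(1, int(round(fps * interval_sec)))
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def text_confidence_map(frame, win=16):
    """Edge detection followed by a local edge-density map as a
    simple stand-in for the text confidence map."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    kernel = np.ones((win, win), np.float32) / (win * win)
    return cv2.filter2D(edges.astype(np.float32) / 255.0, -1, kernel)

def extract_candidates(conf_map, thresh=0.25):
    """Threshold the confidence map and return (x, y, w, h) boxes of
    connected components as text candidate regions."""
    mask = (conf_map >= thresh).astype(np.uint8)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    return [tuple(stats[i, :4]) for i in range(1, n)]
```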

[0019] Figure 1 is a flow chart of the method for detecting and locating t...


Abstract

The invention relates to a method for detecting and positioning a text area in a video. The method is characterized by comprising the following steps: inputting the video and sampling it at equal time intervals; carrying out edge detection on the sampled images; generating a text confidence map from the detected edge images; extracting text candidate areas according to the generated text confidence map; fusing the text candidate areas of multiple frames whose candidate areas are approximately the same; and analyzing the fused text areas line by line. With this method, multi-language text appearing in a video can be accurately positioned in real time. The method is applicable to functions such as video content editing, indexing and retrieval.
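
The abstract does not define "approximately the same" or the line analysis in detail; one plausible reading, sketched below under the assumption that candidates are (x, y, w, h) boxes per sampled frame, matches boxes across frames by intersection-over-union and splits the fused region into lines by horizontal (or vertical) projection. All thresholds and names are illustrative.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def fuse_candidates(per_frame_boxes, min_frames=3, iou_thresh=0.8):
    """Keep candidate boxes that recur almost unchanged in at least
    `min_frames` sampled frames, and average each group into one box."""
    tracks = []  # each track collects matching boxes across frames
    for boxes in per_frame_boxes:
        for box in boxes:
            for track in tracks:
                if iou(track[-1], box) >= iou_thresh:
                    track.append(box)
                    break
            else:
                tracks.append([box])
    return [tuple(np.mean(t, axis=0).astype(int))
            for t in tracks if len(t) >= min_frames]

def split_lines(binary_region, axis=1):
    """Split a binary text region into lines via its horizontal
    projection (axis=1); use axis=0 for the vertical projection."""
    profile = binary_region.sum(axis=axis)
    spans, start = [], None
    for i, v in enumerate(profile):
        if v > 0 and start is None:
            start = i
        elif v == 0 and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(profile)))
    return spans
```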

Description

Technical field

[0001] The invention belongs to the field of pattern recognition and computer vision, and in particular relates to a method for detecting and locating text areas in videos.

Background technique

[0002] Today, video, as one of the most popular forms of media, is widely disseminated through TV stations and the Internet. In order to make it easier and faster for users to find interesting video content, video retrieval and classification have gradually become a focus of research in the field of pattern recognition and computer vision. Among them, the text information in the video, especially the subtitle information, has the most significant effect on the retrieval and classification of the video. This is because: (1) the text information in the video is closely related to the current content of the video; (2) the text in the video has very obvious visual features, which are easy to extract; (3) character recognition (OCR) technology is better than the current...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/32
Inventor 刘成林白博殷飞
Owner 北京中科阅深科技有限公司