Morphological filtering enhancement-based maximally stable extremal region (MSER) video text detection method

A technology for morphological filtering and text detection, applied in image enhancement, image data processing, instruments, etc., which addresses problems such as blurred text boundaries and low contrast between the video background and the video text.

Active Publication Date: 2012-10-24
DALIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0007] The technical problem to be solved by the present invention is: how to detect video text reliably when lossy video compression has blurred the text boundaries, when the contrast between the video background and the video text is relatively low, and when the video background is complex.


Embodiment Construction

[0060] Step 1: In the framework of the present invention, since subtitles persist over time, there is no need to process every frame; one frame out of every five is taken for processing. The present invention works pixel by pixel, so if the picture is too large it takes a long time to process a single frame and real-time performance suffers. Therefore, before processing, the video image is first resized to 448×336 by linear interpolation.
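A minimal sketch of this preprocessing step, assuming an OpenCV-based implementation (the patent does not name a library); the frame step of five and the 448×336 target size come from paragraph [0060], while the function and variable names are illustrative:

    import cv2

    FRAME_STEP = 5            # process one frame out of every five (paragraph [0060])
    TARGET_SIZE = (448, 336)  # (width, height) used before further processing

    def sampled_frames(video_path):
        """Yield frames resized by linear interpolation, one every FRAME_STEP frames."""
        cap = cv2.VideoCapture(video_path)
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % FRAME_STEP == 0:
                yield cv2.resize(frame, TARGET_SIZE, interpolation=cv2.INTER_LINEAR)
            index += 1
        cap.release()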

[0061] The Euclidean distance between two points in the HSI color space is approximately proportional to the difference perceived by the human eye, and the space has an important property: the brightness component and the chrominance components are separated, and the brightness component I is independent of the color information of the image. That is, in the HSI color space the chromaticity and the brightness of the image are relatively independent, so in the...
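A minimal sketch of the RGB-to-HSI conversion implied here; since the paragraph is truncated, the assumption that the method goes on to work on the intensity (I) plane is mine, and the formulas are the standard HSI conversion rather than anything specific to the patent:

    import numpy as np

    def rgb_to_hsi(rgb):
        """Convert an RGB image (H x W x 3) to H, S, I planes, each scaled to [0, 1]."""
        rgb = rgb.astype(np.float64) / 255.0
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        i = (r + g + b) / 3.0                                            # intensity, independent of hue
        s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-8)  # saturation
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
        theta = np.arccos(np.clip(num / den, -1.0, 1.0))
        h = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)  # hue
        return h, s, i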


Abstract

The invention belongs to the technical field of video retrieval, relates to related image-processing knowledge, and in particular relates to a video text detection method. The method extracts video subtitles from the video to be detected so that they can be used for character recognition and video retrieval. The video text detection method comprises the steps of: firstly, enhancing the text boundaries of the input image by means of a gradient amplitude map (GAM); secondly, filtering out part of the background interference with morphological filtering in two directions and enhancing the contrast between text and background; thirdly, detecting a saliency map of the video text with a maximally stable extremal region (MSER) detector and obtaining the optimal segmentation of the text with Graph Cuts; and finally, grouping the detected text into text lines according to the geometrical distribution of the characters, and removing non-text regions by multi-frame verification and certain heuristic rules. The benefits of the detection method are that technical difficulties in text detection such as blurred text boundaries, low contrast and complicated backgrounds are overcome, and the detection results can be used directly for character recognition.
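A minimal sketch of the first two enhancement stages and the MSER detection named above, assuming an OpenCV implementation; the structuring-element sizes, the way the two directional filtering results are combined, and the MSER parameters are illustrative assumptions, not the patented settings:

    import cv2
    import numpy as np

    def detect_candidate_text_regions(gray):
        # 1. Gradient amplitude map (GAM): Sobel gradients, magnitude scaled to 8 bit.
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        gam = cv2.magnitude(gx, gy)
        gam = cv2.normalize(gam, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

        # 2. Morphological filtering in two directions: closing with a horizontal and
        #    a vertical structuring element, combined by a pixel-wise minimum to
        #    suppress elongated background structures.
        kern_h = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1))
        kern_v = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 9))
        enhanced = cv2.min(cv2.morphologyEx(gam, cv2.MORPH_CLOSE, kern_h),
                           cv2.morphologyEx(gam, cv2.MORPH_CLOSE, kern_v))

        # 3. MSER detection on the enhanced map (delta, min_area, max_area are
        #    illustrative parameters).
        mser = cv2.MSER_create(5, 30, 5000)
        regions, _ = mser.detectRegions(enhanced)
        return enhanced, regions

The Graph Cuts segmentation, text-line grouping and multi-frame verification stages mentioned in the abstract are omitted from this sketch.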

Description

technical field

[0001] The invention belongs to the field of video retrieval, relates to relevant knowledge of image processing, and in particular relates to a video text detection method.

Background technique

[0002] Since the 1990s, video retrieval technology based on video subtitle information has attracted much attention from researchers, and many excellent technologies and methods have emerged. The research hotspots mainly focus on video image text detection and positioning. Representative articles and patents published since 2005 are described below.

[0003] In "A comprehensive method for multilingual video text detection, localization, and extraction. In T-CSVT, 2005", Lyu, M.R. et al. analyzed the Sobel edge density of the text to locate the position of the text. In "A New Approach for Overlay Text Detection and Extraction From Complex Video Scene. In TIP, 2009", Wonjun Kim et al. used a color transition map to locate the position...
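As an illustration only, a small sketch of the kind of Sobel edge-density measure the Lyu et al. approach relies on (my reading of the cited idea, not their code); the window size and thresholds are arbitrary:

    import cv2
    import numpy as np

    def sobel_edge_density(gray, win=15, grad_thresh=40.0, density_thresh=0.25):
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        edges = (cv2.magnitude(gx, gy) > grad_thresh).astype(np.float32)  # binary edge map
        density = cv2.boxFilter(edges, -1, (win, win))                    # local edge density
        return density > density_thresh                                   # candidate text mask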

Claims


Application Information

IPC(8): G06K9/46, G06K9/40, G06K9/00, G06T5/00
Inventor: 陈丽娇 (Chen Lijiao), 卢湖川 (Lu Huchuan)
Owner DALIAN UNIV OF TECH