
Method and device for extracting characters

A character segmentation-point technology, applied in the field of character extraction methods and devices, which addresses the problems of incomplete generation of candidate segmentation points, lack of screening of candidate segmentation points, and retention of wrong segmentation results, to achieve the effect of improving the recall rate and reducing the number of search branches.

Status: Inactive; Publication Date: 2010-07-07
北京新岸线网络技术有限公司

AI Technical Summary

Problems solved by technology

[0006] The shortcomings of the above two character segmentation methods are: when generating candidate segmentation points, not all possibilities are considered, and the candidate segmentation points are not screened, so many wrong segmentation results are retained, which affects the estimation of subsequent character features. Both of these shortcomings reduce the accuracy of character segmentation and extraction.

Method used



Examples


Detailed Description of the Embodiments

[0022] Before character segmentation and recognition, the image needs to be preprocessed. An optional preprocessing procedure is:

[0023] First, binarize the received image to obtain a binary image. In this way, it is possible to describe the brightness changes near the character strokes without retaining too much background noise.
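
A minimal sketch of this binarization step, assuming OpenCV; the text does not specify which thresholding method is used, so the choice of Otsu's global threshold here is illustrative only.

```python
# Illustrative binarization sketch (assumes OpenCV; the specific
# thresholding method is not stated in the text, Otsu is only an example).
import cv2

def binarize(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu's method selects a global threshold that separates strokes
    # from the background automatically.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```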

[0024] Then, the connected domains in the binary image are labeled. After labeling, information such as the position, size, and number of pixels of each connected region in the binary image can be obtained.
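
As an illustration of connected-domain labeling that yields the position, size, and pixel-count information mentioned above, here is a sketch using OpenCV's connectedComponentsWithStats; the use of OpenCV is an assumed implementation choice.

```python
import cv2

def label_connected_domains(binary):
    # Each row of stats is [x, y, width, height, pixel_count] for one
    # connected region; centroids gives its (cx, cy) position.
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(
        binary, connectivity=8)
    # Row 0 describes the background, so it is skipped.
    return stats[1:], centroids[1:]
```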

[0025] Next, the connected domains are merged according to their sizes and positional relationships to form complete connected domains that are close to character shapes, as shown in Figure 1.
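
The exact merging rule is not given here; the sketch below shows one illustrative heuristic that greedily merges bounding boxes (x, y, w, h) whose horizontal spans overlap or nearly touch, which is roughly what merging by size and positional relationship entails.

```python
def merge_boxes(boxes, gap=2):
    # Hypothetical merging heuristic: boxes are (x, y, w, h) tuples,
    # processed left to right and merged whenever their horizontal
    # spans overlap or lie within `gap` pixels of each other.
    merged = []
    for x, y, w, h in sorted(boxes, key=lambda b: b[0]):
        if merged and x <= merged[-1][0] + merged[-1][2] + gap:
            mx, my, mw, mh = merged[-1]
            nx, ny = min(mx, x), min(my, y)
            nw = max(mx + mw, x + w) - nx
            nh = max(my + mh, y + h) - ny
            merged[-1] = (nx, ny, nw, nh)
        else:
            merged.append((x, y, w, h))
    return merged
```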

[0026] In the binary image of the text region, the number of characters is large, so it is suitable to estimate character characteristics by statistical methods. However, each character is composed of multiple scattered strokes. If the connected domains...
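
One way to read "estimating character characteristics by statistical methods" is to take robust statistics, such as medians, over the merged connected domains; the sketch below is an assumption, not the patent's stated procedure.

```python
import numpy as np

def estimate_character_size(boxes):
    # boxes are (x, y, w, h) tuples of merged connected domains.
    widths = np.array([w for _, _, w, _ in boxes])
    heights = np.array([h for _, _, _, h in boxes])
    # Medians are robust to occasional noise components or wrongly
    # merged strokes, giving a typical character width and height.
    return float(np.median(widths)), float(np.median(heights))
```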



Abstract

The present invention relates to a method for extracting characters from images, which comprises the following steps: candidate segmentation point sets are established for all characters in the same row of a candidate text region of the image, comprising a left segmentation point set and a right segmentation point set; based on each left segmentation point, corresponding right segmentation points are searched within an estimated interval to generate candidate segmentations; character recognition is performed on the candidate segmentations; and the recognition results of the candidate segmentations are filtered by recognition cost according to the positions of the candidate segmentations. The present invention also discloses a device for extracting characters from images.
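
A rough sketch of the pairing and filtering idea described in the abstract: each left segmentation point is matched with right segmentation points whose distance falls inside an estimated character-width interval, and the resulting candidates are kept or discarded according to a recognition cost. The names `recognize`, `min_w`, `max_w`, and `max_cost` are illustrative assumptions, not terms from the patent.

```python
def candidate_segments(left_points, right_points, min_w, max_w):
    # Pair every left segmentation point with the right points lying
    # inside the estimated character-width interval [min_w, max_w].
    return [(l, r) for l in left_points for r in right_points
            if min_w <= r - l <= max_w]

def filter_by_recognition_cost(segments, recognize, max_cost):
    # `recognize(l, r)` is assumed to return (label, cost); candidates
    # whose recognition cost is too high are dropped, mirroring the
    # "filter by recognition cost" step described in the abstract.
    kept = []
    for l, r in segments:
        label, cost = recognize(l, r)
        if cost <= max_cost:
            kept.append((l, r, label, cost))
    return kept
```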

Description

technical field

[0001] The invention relates to a technology for processing text in images, and in particular to a method and device for extracting characters from images.

Background technique

[0002] In content-based video retrieval, text is a kind of information that is easy to extract and closely related to the video image content, and it provides important clues for understanding video content. The process of video text extraction is divided into three parts: localization, segmentation and recognition. Localization uses the edge, texture and other features of the text area to accurately identify its position in the video image. Segmentation accurately identifies the boundary of each single character in the candidate text area. Recognition correctly classifies the single-character images obtained by segmentation.

[0003] In a video text extraction system, if there is an error in the segmentation, the correct recognition result cannot be obtained. Moreover, since the ...
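
The three-stage process described above can be pictured as a simple pipeline; the function names below are placeholders for whatever localization, segmentation, and recognition modules are actually used, and are not taken from the patent.

```python
def extract_video_text(frame, localize, segment, recognize):
    # localize(frame)  -> list of candidate text regions
    # segment(region)  -> list of single-character images
    # recognize(char)  -> recognized character
    results = []
    for region in localize(frame):
        for char_image in segment(region):
            results.append(recognize(char_image))
    return "".join(results)
```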

Claims


Application Information

Patent Timeline
No application timeline data available
Patent Type & Authority: Application (China)
IPC(8): G06K9/34
Inventors: 周景超, 苗广艺, 徐成华, 鲍东山
Owner: 北京新岸线网络技术有限公司