
Connected domain-based natural scene text detection method

A technology relating to natural scenes and connected domains, applied in character recognition, character and pattern recognition, instruments, and the like. It addresses problems such as reduced text detection accuracy, and achieves the effect of improving accuracy while ensuring detection speed.

Publication Date: 2017-06-13 (Inactive)
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

At the same time, natural scenes contain many elements that resemble text, such as doors and windows, railings, leaf meshes, and lampposts. Because these non-text elements are very similar to text in shape and color, many of them are detected as MSER connected regions, which lowers the accuracy of text detection. This is one of the main challenges of text localization based on maximally stable extremal regions (MSER).




Embodiment Construction

[0030] Referring to figure 1, the connected domain-based natural scene text detection method of the present invention comprises the following steps:

[0031] Step 1: Acquire the grayscale image IG.

[0032] Input the original image I, perform a grayscale transformation on it, and obtain the grayscale image IG.
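As a rough illustration of step 1, the sketch below performs the grayscale transform with OpenCV. The library choice, the file name scene.jpg, and the variable names are assumptions made here for illustration; the patent does not specify an implementation.

    # Illustrative sketch of Step 1 (assumed OpenCV; not the patent's own code).
    import cv2

    I = cv2.imread("scene.jpg")                  # original image I (hypothetical file)
    I_G = cv2.cvtColor(I, cv2.COLOR_BGR2GRAY)    # grayscale image IG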

[0033] Step 2: Obtain the character candidate region image Im.

[0034] Apply the MSER connected region detection operator to the grayscale image IG to detect connected regions containing both text and non-text. Use these connected regions as character candidate regions and display them in color on the image IG, obtaining the character candidate region image Im.
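A minimal sketch of step 2, assuming OpenCV's MSER implementation with default parameters; the detector settings and the red highlight color are illustrative choices, not values given in the patent.

    # Illustrative sketch of Step 2 (assumed OpenCV MSER with default parameters).
    import cv2

    I_G = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2GRAY)  # as in Step 1

    mser = cv2.MSER_create()                     # MSER connected-region detector
    regions, bboxes = mser.detectRegions(I_G)    # candidate regions and their boxes

    # Display the candidate regions in color on IG to obtain Im.
    I_m = cv2.cvtColor(I_G, cv2.COLOR_GRAY2BGR)
    for pts in regions:
        I_m[pts[:, 1], pts[:, 0]] = (0, 0, 255)  # mark candidate pixels in red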

[0035] Step 3: Filter out some candidate regions in the character candidate region image Im that do not contain text, and obtain the preliminarily filtered character candidate region image I1...
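The filtering criteria for step 3 are cut off in this excerpt, so the sketch below only illustrates the kind of geometric filtering such a step might use, based on the bounding boxes from the Step 2 sketch. The area and aspect-ratio thresholds are assumptions, not the patent's rules.

    # Illustrative sketch of Step 3. The actual filtering rules are truncated in
    # this excerpt; the thresholds below are assumptions for illustration only.
    def keep_candidate(bbox, img_shape):
        """Keep a candidate whose bounding box has a plausible size and shape."""
        x, y, w, h = bbox
        H, W = img_shape[:2]
        if w * h < 30 or w * h > 0.5 * W * H:    # reject tiny or near-image-sized regions
            return False
        aspect = w / float(h)
        return 0.1 < aspect < 10.0               # reject extreme aspect ratios

    # regions, bboxes, and I_G come from the Step 2 sketch above.
    filtered = [(r, b) for r, b in zip(regions, bboxes) if keep_candidate(b, I_G.shape)]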



Abstract

The invention discloses a connected domain-based natural scene text detection method, which mainly solves the problem of the low accuracy of existing text detection methods. The method is implemented by the following steps: 1) perform a grayscale transform on an input original image to obtain a grayscale image IG; 2) extract character candidate regions from IG to obtain a character candidate region image Im; 3) filter out some candidate regions in Im that contain no characters to obtain a preliminarily filtered image I1; 4) filter out further candidate regions in I1 that contain no characters to obtain a final image I2; 5) combine the remaining character candidate regions in I2 into text line regions; and 6) input the text line regions in sequence to a convolutional neural network text detector and filter out the text line regions containing no text, obtaining the final text-containing text line regions. By filtering out candidate regions containing no text multiple times, the method improves text detection accuracy, and it can be used to automatically extract text from images.
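Step 5 of the abstract, merging the surviving character candidates into text-line regions, is not detailed in this excerpt. The sketch below shows one common heuristic of this kind: chaining bounding boxes that are horizontally close and vertically overlapping. The chaining rule and its thresholds are illustrative assumptions, not the patent's method.

    # Illustrative sketch of the text-line grouping in Step 5 of the abstract.
    # The chaining rule and thresholds are assumptions, not the patent's method.
    def group_into_lines(boxes, max_gap=1.0, min_overlap=0.5):
        """Greedily chain boxes (x, y, w, h) that lie on roughly the same line."""
        boxes = sorted(boxes, key=lambda b: b[0])               # left-to-right order
        lines = []
        for x, y, w, h in boxes:
            for line in lines:
                lx, ly, lw, lh = line[-1]
                gap_ok = x - (lx + lw) < max_gap * max(h, lh)   # small horizontal gap
                overlap = min(y + h, ly + lh) - max(y, ly)      # vertical overlap
                if gap_ok and overlap > min_overlap * min(h, lh):
                    line.append((x, y, w, h))
                    break
            else:
                lines.append([(x, y, w, h)])                    # start a new text line
        return lines

    # e.g. text_lines = group_into_lines([tuple(b) for _, b in filtered])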

Description

Technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a method for detecting text in natural scene images, which can be used to automatically extract text from images.

Background technique

[0002] With the rapid development of the mobile Internet and the popularity of mobile electronic devices such as smartphones, acquiring and transmitting natural scene images has become increasingly convenient. The text in natural scene images carries a wealth of information, and people expect computers to take the place of humans in automatically detecting and extracting this text information. This technology is expected to be industrialized and applied in daily production and life, for example in driverless cars, navigation for the blind, industrial automation, Internet information mining, e-commerce anti-counterfeiting, and brand exposure research.

[0003] Different from traditi...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K 9/32, G06K 9/34
CPC: G06V 20/63, G06V 10/267, G06V 30/10
Inventor: 冯冬竹, 余航, 郑毓, 杨旭坤, 何晓川, 刘清华, 许录平
Owner: XIDIAN UNIV