
Scene picture character detection method based on discrimination dictionary learning and sparse representation

A technology based on dictionary learning and sparse representation, applied to character and pattern recognition, instruments, computer components, etc., which addresses problems such as the difficulty of detecting text in scene images.

Active Publication Date: 2016-12-07
云南联合视觉科技有限公司

AI Technical Summary

Problems solved by technology

[0009] The technical problem to be solved by the present invention is to provide a method for scene image text detection based on discriminative dictionary learning and sparse representation, so as to address the difficulty of scene image text detection in the prior art. The scene image text detection method of the present invention can provide strong support for upper-level applications such as image and video understanding and retrieval in different application scenarios.



Examples


Embodiment 1

[0059] Embodiment 1: As shown in Figures 1-7, a scene image text detection method based on discriminative dictionary learning and sparse representation first uses the training data and the proposed discriminative dictionary learning method to train two dictionaries, a text dictionary and a background dictionary, and then merges the text dictionary and the background dictionary. Next, the sparse representation coefficients of the text and the background corresponding to the image to be detected are computed from the merged dictionary, the image to be detected, and the sparse representation method. Finally, the text in the image to be detected is reconstructed from the learned dictionaries and the computed sparse representation coefficients, and heuristic rules are applied to the text regions of the reconstructed text image to detect the candidate text regions in the image to be detected.
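
The following is a minimal, hedged sketch of this pipeline in Python. It uses scikit-learn's generic MiniBatchDictionaryLearning and sparse_encode as stand-ins for the patent's discriminative dictionary learning and sparse coding steps; the atom counts, sparsity level, and the energy threshold used to pick candidate text patches are illustrative assumptions, not the patent's exact algorithm.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode


def learn_dictionary(patches, n_atoms=256, sparsity=5, seed=0):
    """Learn a dictionary from an (n_samples, n*n) matrix of patch vectors."""
    learner = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=sparsity,
        random_state=seed,
    )
    learner.fit(patches)
    return learner.components_                     # shape: (n_atoms, n*n)


def detect_text_patches(test_patches, text_patches, bg_patches, sparsity=5):
    # Train a text dictionary and a background dictionary, then merge them.
    D_text = learn_dictionary(text_patches)
    D_bg = learn_dictionary(bg_patches)
    D = np.vstack([D_text, D_bg])                  # merged dictionary

    # Sparse representation coefficients of the test patches over the merged
    # dictionary (OMP is one common sparse coder; the patent may use another).
    codes = sparse_encode(test_patches, D, algorithm="omp",
                          n_nonzero_coefs=sparsity)

    # Reconstruct each patch using only the coefficients of the text
    # sub-dictionary; patches whose text-only reconstruction carries most of
    # the energy are kept as candidate text patches (assumed heuristic).
    text_part = codes[:, :D_text.shape[0]] @ D_text
    energy = np.linalg.norm(text_part, axis=1)
    return energy > np.median(energy)              # mask of candidate patches
```

Mapping the retained patches back to image coordinates and applying the heuristic rules of the final step then yields the candidate text regions.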

[0060] The specific steps are:

[0061] Step1, fir...

Embodiment 2

[0095] Embodiment 2: As shown in Figures 1-7, the text in the source image to be detected shown in Figure 2 is detected. Figure 2 is a scene image with a complex background: the whole image is severely affected by lighting, and the geometric features of the background are very similar to those of the text, so it is difficult to detect the text in this image accurately with traditional methods. The steps for detecting the text regions in Figure 2 are described below:

[0096] Step1, first construct the training samples of text and background;

[0097] Step1.1. Collect text images and background images from the Internet, wherein the text images only contain text without background texture, and the background images do not contain text.

[0098] Step1.2, collect the text image and background image data from Step1.1 with a sliding window; each n×n window is collected as a column vector of size n²×1 (hereinafter collectively referred to as atoms, where n is the size ...
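
A minimal sketch of this sampling step, assuming grayscale training images and an 8×8 window, is given below. Scikit-learn's extract_patches_2d is used here as a convenient stand-in for the sliding window, and the unit-norm normalisation and random subsampling (max_patches) are additional assumptions not stated in the excerpt.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d


def image_to_atoms(gray_image, n=8, max_patches=2000, seed=0):
    """Return an (n*n, num_windows) matrix whose columns are window vectors (atoms)."""
    patches = extract_patches_2d(gray_image, (n, n),
                                 max_patches=max_patches, random_state=seed)
    atoms = patches.reshape(patches.shape[0], -1).T.astype(np.float64)
    # Normalise each atom to unit length (a common dictionary-learning convention).
    norms = np.linalg.norm(atoms, axis=0)
    return atoms / np.maximum(norms, 1e-12)


# Usage: stack atoms from all text images (and, separately, all background
# images) into the training matrices used for dictionary learning.
# Note that scikit-learn's dictionary-learning routines expect the transpose,
# i.e. one window per row.
# X_text = np.hstack([image_to_atoms(img) for img in text_images])
# X_bg   = np.hstack([image_to_atoms(img) for img in background_images])
```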



Abstract

The invention relates to a scene picture character detection method based on discrimination dictionary learning and sparse representation, and belongs to the technical field of digital image processing. First, training data and a discrimination dictionary learning method are used to train and learn two dictionaries, a character dictionary and a background dictionary, which are then merged. Next, the sparse representation coefficients of the characters and the background corresponding to a to-be-detected image are calculated from the merged dictionary, the to-be-detected image and a sparse representation method. Finally, the characters in the to-be-detected image are rebuilt from the learned dictionaries and the calculated sparse representation coefficients corresponding to the to-be-detected image, and a heuristic rule is used to process the character areas in the rebuilt character image so as to detect the candidate character areas in the to-be-detected image. The method can greatly improve character identification accuracy.
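
The last stage, applying heuristic rules to the rebuilt character image, can be pictured with the short sketch below. It binarises the reconstructed image and filters connected components by simple geometric rules; the binarisation threshold, minimum area and aspect-ratio limit are illustrative assumptions, since the excerpt does not list the patent's exact rules.

```python
import numpy as np
from scipy import ndimage


def candidate_text_regions(reconstructed, min_area=30, max_aspect=10.0):
    """Return bounding boxes (row0, row1, col0, col1) of candidate character regions."""
    binary = reconstructed > reconstructed.mean()        # assumed binarisation
    labels, _ = ndimage.label(binary)                    # connected components
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        aspect = max(h, w) / max(min(h, w), 1)
        if h * w >= min_area and aspect <= max_aspect:   # heuristic rules
            boxes.append((sl[0].start, sl[0].stop, sl[1].start, sl[1].stop))
    return boxes
```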

Description

Technical field

[0001] The invention relates to a method for detecting text in scene pictures based on discriminative dictionary learning and sparse representation, and belongs to the technical field of digital image processing.

Background technique

[0002] Since the beginning of the 21st century the Internet industry has developed rapidly, and with the recent spread of smart phones the amount of digital information on PCs and mobile terminals has grown quickly. Digital images and videos are among the main elements of today's digital world. They often contain a large number of text regions, and this text information is an important clue for understanding the meaning of the images and videos. Extracting text information from complex natural scene images is therefore of great significance for image understanding and image retrieval, and research on text localization in scene images has attracted many scholars at home ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/20, G06K9/62
CPC: G06V10/22, G06F18/214
Inventor: 李华锋, 刘舒萍, 汤宏颖, 余正涛
Owner: 云南联合视觉科技有限公司