
Multi-specification text collaborative positioning and extracting method

A collaborative positioning and extraction method, applied in character and pattern recognition, instruments, computer parts, and related fields. It addresses the problem that existing recognition results are difficult to classify and collect and cannot directly meet the application requirements of text recognition and digital collection. The method avoids interference from text and noise information, overcomes missed and false detections, and improves accuracy and precision.

Inactive Publication Date: 2018-11-23
南通艾思达智能科技有限公司

AI Technical Summary

Problems solved by technology

[0003] The invention provides a multi-specification text collaborative positioning and extraction method to solve the problem that existing text recognition software struggles to classify and collect recognition results, and so cannot directly meet the application requirements of text recognition and digital collection where format requirements exist.

Method used



Examples


Embodiment 1

[0060] For example, on a general-purpose computer, a scanned image of a hospital outpatient/emergency bill is processed with the method of the present invention. After image normalization in step 120, the angle of the bill is corrected and its brightness and orientation are made consistent. After foreground and background separation in step 130, foreground information on the bill, such as the hospital name, outpatient number, consultation fee, and seal, is obtained. After the global collaborative search of step 140 and the local optimization of step 150, the positions of the information to be extracted on the bill are obtained, and the multi-specification text positioning and extraction results are finally output in step 160.
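The pipeline of Embodiment 1 can be sketched as a chain of the numbered steps. This is a minimal illustration, not the patented implementation: the function names are hypothetical, normalization here only rescales brightness (a full version would also deskew the bill), and steps 140-150 are collapsed into a simple row-projection block finder.

```python
import numpy as np

def normalize_image(img: np.ndarray) -> np.ndarray:
    """Step 120 (sketch): rescale intensities to [0, 1] so brightness is
    consistent across scans. Deskewing is omitted in this sketch."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9)

def separate_foreground(img: np.ndarray) -> np.ndarray:
    """Step 130 (sketch): treat dark pixels (ink) as foreground."""
    return img < 0.5

def locate_text_blocks(mask: np.ndarray) -> list:
    """Steps 140-150 (placeholder): return the row span covered by
    foreground pixels as a single coarse text block."""
    rows = np.where(mask.any(axis=1))[0]
    if rows.size == 0:
        return []
    return [(int(rows.min()), int(rows.max()))]

def extract_bill(img: np.ndarray) -> list:
    """Step 160: run the pipeline and return text block positions."""
    return locate_text_blocks(separate_foreground(normalize_image(img)))
```

On a synthetic "bill" with a dark text band on rows 3-5, `extract_bill` returns that band's row span.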

Embodiment 2

[0062] On a general-purpose computer, a scanned hospital admission bill image is processed with the method of the present invention. After image normalization in step 120, an image with consistent size, brightness, and orientation is obtained. After foreground and background separation in step 130, the key information on the inpatient bill, such as the hospital name, gender, expense details, and date, is obtained. After the global collaborative search of step 140 and the local optimization of step 150, the positions of the key information on the bill are obtained, and the multi-specification text positioning and extraction results, such as the bill title, name, gender, and diagnosis fee details, are finally output in step 160.
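The foreground/background separation of step 130 is not specified in detail here; one standard choice for separating ink from paper on a scanned bill is Otsu's thresholding. The sketch below implements Otsu's method from scratch as an assumed stand-in, not as the patent's actual separation algorithm.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Pick the gray level that maximizes between-class variance
    (Otsu's method) over an 8-bit histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    cum_w = np.cumsum(hist)                    # pixel count up to level t
    cum_m = np.cumsum(hist * np.arange(256))   # intensity sum up to level t
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = cum_w[t], total - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t] / w0                     # background class mean
        m1 = (cum_m[-1] - cum_m[t]) / w1       # foreground class mean
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def foreground_mask(gray: np.ndarray) -> np.ndarray:
    """Pixels darker than the threshold are treated as text/seal ink."""
    return gray <= otsu_threshold(gray)
```

On a bimodal image (dark ink at level 30, bright paper at level 220) the threshold falls between the two modes, so the mask selects exactly the ink pixels.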



Abstract

The invention relates to a multi-specification text collaborative positioning and extraction method. The method comprises the following steps: 110, acquiring text image data; 120, performing image normalization; 130, separating image background information from the foreground information to be collected; 140, performing a global collaborative search and extracting text block regions in a preset format; 150, performing a local optimization search, refining the position of each text region within a small range; and 160, outputting the text block positioning results and providing them to a subsequent character segmentation and recognition module. The method uses image processing, target detection, collaborative search, and local optimization techniques. It meets the requirements of formatted data collection, overcomes missed and false detections after degradation of certain text blocks, avoids unnecessary interference from text and noise information to the greatest extent, and improves the accuracy and precision of formatted text information collection.
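The "global collaborative search" of step 140 exploits the fact that all text blocks on one bill follow a preset layout, so they can be located jointly rather than independently. A minimal sketch of that idea, under the assumption (not stated in the source) that the layout differs from the template only by a global translation: hypothesize a shift from each candidate/template pairing and keep the one that best maps the whole template onto the detected candidates.

```python
import numpy as np

# Hypothetical preset format: expected top-left corners (row, col) of the
# text blocks (e.g. hospital name, patient name, fee details) on a template bill.
TEMPLATE = np.array([[10.0, 20.0], [10.0, 120.0], [60.0, 20.0], [110.0, 20.0]])

def collaborative_search(candidates: np.ndarray, template: np.ndarray = TEMPLATE):
    """Step 140 (sketch): find the single global shift that best aligns the
    template layout with the detected candidate blocks. Because all blocks
    vote together, spurious or missing candidates are tolerated."""
    best_shift, best_cost = None, np.inf
    for cand in candidates:
        for ref in template:
            shift = cand - ref                 # hypothesized global translation
            moved = template + shift
            # cost: distance from each shifted template block
            # to its nearest detected candidate
            d = np.linalg.norm(moved[:, None, :] - candidates[None, :, :], axis=2)
            cost = d.min(axis=1).sum()
            if cost < best_cost:
                best_cost, best_shift = cost, shift
    return best_shift, template + best_shift
```

Step 150 would then refine each returned block position individually within a small neighborhood. Even with a false-detection outlier among the candidates, the correct shift wins because it zeroes the cost for every genuine block.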

Description

Technical Field

[0001] The invention belongs to the technical field of image processing and target detection, and in particular relates to a multi-specification text collaborative positioning and extraction method.

Background Technique

[0002] In recent years, with the popularization of digital imaging equipment and the wide application of deep learning algorithms, text recognition software has continued to emerge (e.g. Hanwang, Tencent Cloud Recognition, Baidu Cloud Recognition), and the accuracy of text recognition has continued to improve. However, this software performs general-purpose recognition: as long as text appears in an image, it will try to recognize it. Beyond returning the recognition result and coordinate position, such software struggles to classify the recognition results, and it cannot directly meet the application requirements of text recognition and digital collection with format requirements.

Contents of the Invention

[0003] The invention provid...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/20, G06K9/00
CPC: G06V30/413, G06V10/22
Inventors: 严京旗, 张成栋, 李进文, 罗宝娟
Owner: 南通艾思达智能科技有限公司