
OCR recognition method based on deep learning model and terminal

A deep-learning-based OCR recognition method and terminal, applied in the field of data processing. It addresses the problem that the many interference factors contained in a character fragment image reduce the recognition accuracy of a deep learning model, and achieves high recognition accuracy and good anti-interference capability.

Active Publication Date: 2019-05-21
厦门商集网络科技有限责任公司

AI Technical Summary

Problems solved by technology

Existing OCR recognition methods based on deep learning models input the entire character fragment image directly into the model for recognition. Because the entire character fragment image contains many interference factors, heavy interference reduces the recognition accuracy of the deep learning model.

Method used



Examples


Embodiment 1

[0079] As shown in Figure 1, this embodiment provides an OCR recognition method based on a deep learning model, comprising:

[0080] S1. Divide a preset character segment image into multiple single-character images to obtain a single-character image set.

[0081] Specifically, this embodiment trains the open-source deep learning object detection model R-FCN to detect the position of each single character in the bill image, obtaining the coordinates of the upper-left and lower-right corners of each character's bounding rectangle on the bill image. According to the coordinate information of each character, multiple single-character images are then cropped from the original bill image.
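The cropping step described above can be sketched as follows. The detector is assumed to emit, per character, the upper-left corner (x1, y1) and lower-right corner (x2, y2) of its bounding rectangle; the box format and function name here are illustrative assumptions, not the patent's actual code.

```python
import numpy as np

def crop_single_characters(bill_image, boxes):
    """Cut single-character images out of a bill image.

    bill_image: H x W (x C) array; boxes: list of (x1, y1, x2, y2)
    with (x1, y1) the upper-left and (x2, y2) the lower-right corner.
    """
    crops = []
    for x1, y1, x2, y2 in boxes:
        crops.append(bill_image[y1:y2, x1:x2])  # rows index y, columns index x
    return crops

# Toy usage: a 10x10 "image" with two fake detections.
img = np.arange(100).reshape(10, 10)
chars = crop_single_characters(img, [(0, 0, 3, 4), (5, 2, 9, 8)])
# chars[0] is 4 rows x 3 columns, chars[1] is 6 rows x 4 columns
```

Slicing views rather than copies keep this step cheap; each crop is then fed individually to the recognition model, which is the point of the patent's segmentation-first design.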

[0082] For example, a character segment image contains the character segment "value-added tax invoice". The coordinates of each character are recognized through the target detection model, and the character segment image is divided according to the coordin...

Embodiment 2

[0127] As shown in Figure 4, this embodiment also provides an OCR recognition terminal based on a deep learning model, comprising one or more processors 1 and a memory 2. The memory 2 stores a program configured to be executed by the one or more processors 1 to perform the following steps:

[0128] S1. Divide a preset character segment image into multiple single-character images to obtain a single-character image set.

[0129] Specifically, this embodiment trains the open-source deep learning object detection model R-FCN to detect the position of each single character in the bill image, obtaining the coordinates of the upper-left and lower-right corners of each character's bounding rectangle on the bill image. According to the coordinate information of each character, multiple single-character images are then cropped from the original bill image.

[0130] For example, a character segment image contains the character segment "va...



Abstract

The invention relates to an OCR recognition method based on a deep learning model and a terminal, and belongs to the field of data processing. The method comprises the steps of: segmenting a preset character segment image into a plurality of single-character images to obtain a single-character image set; sequentially identifying the elements in the single-character image set by a preset first OCR deep learning model to obtain a first feature vector set, wherein one single-character image corresponds to one first feature vector; converting, according to a preset feature database, each first feature vector in the first feature vector set into a corresponding single character to obtain a single character set, wherein a record in the feature database stores a single character and the feature vector corresponding to that single character; and arranging the elements in the single character set to obtain a character string corresponding to the character segment image. The anti-interference capability of OCR character recognition is thereby improved.
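The lookup step in the abstract can be sketched as a nearest-neighbor search: each first feature vector is compared against the feature database, whose records pair a single character with its reference vector. The similarity metric (cosine similarity here) and all names are assumptions for illustration; the text shown does not specify them.

```python
import numpy as np

def vectors_to_characters(feature_vectors, database):
    """Map each feature vector to the character of its closest database record.

    database: list of (char, reference_vector) records.
    """
    chars, refs = zip(*database)
    # Normalize reference vectors once so the dot product is cosine similarity.
    refs = np.stack([r / np.linalg.norm(r) for r in refs])
    result = []
    for v in feature_vectors:
        sims = refs @ (v / np.linalg.norm(v))
        result.append(chars[int(np.argmax(sims))])
    # Arranging the recovered characters in order yields the string.
    return "".join(result)

# Toy 2-D "feature database" with two characters.
db = [("A", np.array([1.0, 0.0])), ("B", np.array([0.0, 1.0]))]
queries = [np.array([0.9, 0.1]), np.array([0.2, 0.8])]
# vectors_to_characters(queries, db) → "AB"
```

Keeping recognition as vector lookup rather than a fixed softmax head is what lets the feature database be extended with new characters without retraining, which matches the anti-interference and flexibility claims of the abstract.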

Description

technical field

[0001] The invention relates to an OCR recognition method based on a deep learning model and a terminal, belonging to the field of data processing.

Background technique

[0002] OCR recognition refers to the process in which an electronic device, such as a scanner or digital camera, acquires an image, and a character recognition method then detects the character regions on the image and translates them into computer text. In the field of character recognition, the descriptive features of characters largely determine the accuracy and speed of OCR recognition.

[0003] Commonly used OCR recognition methods are as follows:

[0004] First, the traditional OCR recognition method divides the character fragment image into single-character images and then recognizes each single-character image using either a binary-image or a grayscale-image recognition method. The OCR recognition method based on binary images...
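The binary-image method mentioned in the background thresholds each grayscale crop into a 0/1 image before classification. A minimal sketch, assuming a fixed threshold purely for illustration; practical systems often use adaptive methods such as Otsu's instead.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Threshold a grayscale patch: pixels >= threshold become 1, others 0."""
    return (gray >= threshold).astype(np.uint8)

# Toy 2x2 grayscale patch.
patch = np.array([[10, 200], [130, 90]], dtype=np.uint8)
out = binarize(patch)  # → [[0, 1], [1, 0]]
```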

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/34; G06K9/62
Inventors: 林玉玲, 郝占龙, 陈文传, 吴建杭, 庄国金, 方恒凯
Owner: 厦门商集网络科技有限责任公司