OCR recognition method and terminal based on deep learning model

A deep-learning recognition technology applied in the field of data processing. It addresses the problem that the many interference factors in a character fragment image degrade the recognition accuracy of a deep learning model, and achieves high recognition accuracy and good anti-interference ability.

Active Publication Date: 2021-03-12
厦门商集网络科技有限责任公司
Problems solved by technology

The existing OCR recognition method based on a deep learning model feeds the entire character fragment image directly into the deep learning model for recognition. Because the entire character fragment image contains many interference factors, heavy interference degrades the recognition accuracy of the deep learning model.



Examples


Embodiment 1

[0079] As shown in Figure 1, this embodiment provides an OCR recognition method based on a deep learning model, comprising:

[0080] S1. Divide a preset character segment image into multiple single-character images to obtain a single-character image set.

[0081] In this embodiment, the open-source deep learning object detection model RFCN is trained to detect the position of each single character in the bill image and to obtain the coordinates of the upper-left and lower-right corners of each character's bounding rectangle on the bill image. According to the coordinate information of each character, multiple single-character images are cropped from the original bill image.

[0082] For example, a character segment image contains the character segment "value-added tax invoice". The coordinates of each character are recognized through the target detection model, and the character segment image is divided according to the coordin...
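The cropping in step S1 can be sketched as follows. This is a minimal illustration that assumes the character detector (such as the RFCN model mentioned above) returns one (x1, y1, x2, y2) box per character; the detector training itself is not shown, and the function names and data format are hypothetical.

```python
# Minimal sketch of step S1: cutting single-character images out of the bill
# image using the bounding boxes returned by a character detector.
# `detections` is assumed to be a list of (x1, y1, x2, y2) tuples, one per
# character; this is an illustrative format, not the patent's exact interface.
from PIL import Image

def split_into_single_characters(bill_image_path, detections):
    """Return one cropped sub-image per detected character, in reading order."""
    bill_image = Image.open(bill_image_path)
    # Sort boxes by the x coordinate of the upper-left corner so the crops
    # follow the left-to-right order of the character fragment.
    ordered = sorted(detections, key=lambda box: box[0])
    return [bill_image.crop(box) for box in ordered]

# Hypothetical usage for a fragment such as "value-added tax invoice":
# single_character_images = split_into_single_characters("bill.png", boxes)
```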

Embodiment 2

[0127] As shown in Figure 4, this embodiment also provides an OCR recognition terminal based on a deep learning model, comprising one or more processors 1 and a memory 2. The memory 2 stores a program configured to be executed by the one or more processors 1 to perform the following steps:

[0128] S1. Divide a preset character segment image into multiple single-character images to obtain a single-character image set.

[0129] In this embodiment, the open-source deep learning object detection model RFCN is trained to detect the position of each single character in the bill image and to obtain the coordinates of the upper-left and lower-right corners of each character's bounding rectangle on the bill image. According to the coordinate information of each character, multiple single-character images are cropped from the original bill image.

[0130] For example, a character segment image contains the character segment "va...



Abstract

The invention relates to an OCR recognition method based on a deep learning model and a terminal, belonging to the field of data processing. The invention obtains a single-character image set by dividing a preset character fragment image into multiple single-character images. A preset first OCR deep learning model recognizes the elements of the single-character image set in turn to obtain a first feature vector set, with one first feature vector corresponding to each single-character image. According to a preset feature database, each first feature vector in the first feature vector set is converted into the corresponding single character to obtain a single-character set; each record in the feature database stores a single character and the feature vector corresponding to that character. The elements of the single-character set are then arranged to obtain the character string corresponding to the character fragment image. In this way, the anti-interference ability of OCR character recognition is improved.
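The feature-database lookup described in the abstract can be illustrated with a short sketch. It assumes each database record maps a single character to its reference feature vector and uses cosine similarity to pick the closest record; the similarity measure and all names here are assumptions for illustration, not details from the patent.

```python
# Sketch of converting first feature vectors into characters via a feature
# database, then arranging them into the output string. Assumes the database
# is a dict {character: reference feature vector}; cosine similarity is an
# assumed choice of distance, not specified by the patent text.
import numpy as np

def vectors_to_string(first_feature_vectors, feature_db):
    chars = list(feature_db.keys())
    refs = np.stack([feature_db[c] for c in chars]).astype(float)   # (N, D)
    refs /= np.linalg.norm(refs, axis=1, keepdims=True)
    out = []
    for vec in first_feature_vectors:                # one vector per character image
        v = np.asarray(vec, dtype=float)
        v /= np.linalg.norm(v)
        out.append(chars[int(np.argmax(refs @ v))])  # nearest reference character
    return "".join(out)
```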

Description

Technical field

[0001] The invention relates to an OCR recognition method based on a deep learning model and a terminal, belonging to the field of data processing.

Background technique

[0002] OCR recognition refers to the process in which an electronic device, such as a scanner or a digital camera, acquires an image, detects the character regions on the image with a character recognition method, and translates them into computer text. In the field of character recognition, the descriptive features of characters largely determine the accuracy and speed of OCR recognition.

[0003] Commonly used OCR recognition methods are as follows:

[0004] The first is the traditional OCR recognition method, which first divides the character fragment image into single-character images and then recognizes each single-character image with a binary-image recognition method or a grayscale-image recognition method. The OCR recognition method based on binary images...
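As a concrete illustration of the binary-image route mentioned in the background, the sketch below binarizes a single-character image with Otsu thresholding in OpenCV. This is a generic preprocessing example under assumed inputs, not the recognition method claimed by the patent.

```python
# Generic illustration of the binarization step used by traditional
# binary-image OCR on a single-character image (not the patent's method).
import cv2

def binarize_character_image(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Otsu's method chooses the threshold automatically from the histogram.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```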

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/34; G06K9/62
Inventors: 林玉玲, 郝占龙, 陈文传, 吴建杭, 庄国金, 方恒凯
Owner: 厦门商集网络科技有限责任公司