Inscription label detection and recognition system based on deep neural network

A recognition system technology based on deep neural networks, applied to biological neural network models, neural architectures, neural learning methods, etc. It addresses the lack of a mature system for inscription detection, recognition and segmentation, and has the effects of reducing time consumption, improving efficiency and increasing labeling accuracy.

Active Publication Date: 2020-10-30
天津恒达文博科技股份有限公司 +1


Problems solved by technology

[0003] In order to recognize inscriptions automatically, researchers at home and abroad have done some work on machine recognition of inscriptions and on enhancing the clarity of inscription rubbings, but there is not yet a mature system that integrates inscription detection, recognition and segmentation and builds an inscription retrieval system on that basis.



Examples


Example 1

[0056] This example shows the process of detecting and labeling, recognizing and labeling, and segmenting and labeling an inscription image.

[0057] 1. Detection and labeling. The interface is the 001 module shown in Figure 2. Since the text of an inscription image is generally dense, each character is small relative to the entire image, so to mark positions accurately the display of irrelevant areas must be minimized and attention kept on the character currently being marked. We therefore propose a two-step process for "focusing" on the current annotation text. The first step locates the text area in the picture, so that unnecessary areas do not interfere with the field of view; the second step divides each located text area into blocks, so that only the text in the block currently being marked is displayed at a time.
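The sketch below illustrates the general idea of this two-step focusing in Python: crop the pre-located text region, then split it into fixed-size blocks and display one block at a time. The region coordinates, block size and function name are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the two-step "focusing" described above:
# step 1 crops the pre-located text region, step 2 tiles it into blocks
# so only the characters currently being annotated are shown.
from PIL import Image


def focus_blocks(image_path, text_region, block_size=(256, 256)):
    """Crop the located text region and yield fixed-size blocks.

    text_region: (left, top, right, bottom) from the pre-location step.
    """
    region = Image.open(image_path).crop(text_region)
    bw, bh = block_size
    for top in range(0, region.height, bh):
        for left in range(0, region.width, bw):
            box = (left, top,
                   min(left + bw, region.width),
                   min(top + bh, region.height))
            yield box, region.crop(box)


# Usage: show one block at a time during annotation.
# for box, block in focus_blocks("stele.jpg", (120, 80, 900, 1400)):
#     block.show()  # annotate only the characters inside this block
```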

[0058] Specifically: (1) After setting the parameters on the left side, we click the "frame text area" button to start framing...

Example 2

[0068] Example 2: Detector and Recognizer Training

[0069] After labeling, we can train the detector and recognizer.

[0070] Figure 6 shows the interface of the detector training module, which mainly includes three areas: a data acquisition area (for setting the storage paths of training images and labeling results); a training image list and current image display area; and a detector parameter configuration and training start area. After the detection frames in Example 1 are labeled, the labeling result is saved as a text file with the same name as the image. During training, the system obtains the corresponding images from the source image and labeling result folders. To check the labeling result of a particular image, select the file in the training image list and press the right mouse button; the image is then displayed in the large image area on the right together with the labeled text boxes of all characters (such as Figure 5 shown in the blue frame on the r...
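A minimal sketch, assuming the same-named text-file convention described above, of how training images might be paired with their annotation files; the directory layout, file extensions and the "x1,y1,x2,y2" line format are assumptions for illustration only.

```python
# Illustrative pairing of each training image with its same-named
# annotation text file (hypothetical layout, not the patented code).
import glob
import os


def load_training_pairs(image_dir, label_dir, exts=(".jpg", ".png")):
    pairs = []
    for img_path in sorted(glob.glob(os.path.join(image_dir, "*"))):
        stem, ext = os.path.splitext(os.path.basename(img_path))
        if ext.lower() not in exts:
            continue
        label_path = os.path.join(label_dir, stem + ".txt")
        if not os.path.exists(label_path):
            continue  # skip images that have not been annotated yet
        boxes = []
        with open(label_path, encoding="utf-8") as f:
            for line in f:
                parts = line.strip().split(",")
                if len(parts) >= 4:  # assumed "x1,y1,x2,y2" per character box
                    boxes.append(tuple(int(v) for v in parts[:4]))
        pairs.append((img_path, boxes))
    return pairs


# pairs = load_training_pairs("train/images", "train/labels")
```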

Example 3

[0076] Example 3: Detection, Recognition, Segmentation and Retrieval Test Module

[0077] 1. Detection function test:

[0078] First, we click the "call east_py for text positioning" button in the test module; the system uses the trained detector to perform text detection and displays the original image with the detection frames in the large image area on the right. If the "Show EAST detection frame" check box is selected, the frames obtained by the EAST algorithm (based on the non-maximum suppression strategy) are displayed as green boxes in the figure. If the "Show my detection frame" check box is selected, the detection frames obtained by the algorithm of this patent (based on connected component analysis and an average position strategy) are shown as red boxes. From the example in Figure 9 we can see that the red box is closer to the true bounding box of the text and better avoids "truncated" text areas, which would otherwise lead to recognition...
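The patent's exact "average position strategy" is not detailed here, so the sketch below only illustrates the general idea of tightening a detection box with connected component analysis (using OpenCV); the thresholding choice, helper name and minimum-area filter are assumptions.

```python
# Hedged sketch: shrink a candidate box to the extent of the ink
# components inside it, in the spirit of the red-box refinement above.
import cv2


def refine_box(gray_image, box, min_area=10):
    """Tighten box = (x1, y1, x2, y2) around connected components inside it."""
    x1, y1, x2, y2 = box
    roi = gray_image[y1:y2, x1:x2]
    # Binarize the region (Otsu); character strokes become foreground components.
    _, binary = cv2.threshold(
        roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    xs, ys, xe, ye = [], [], [], []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue  # drop speckle noise from corrosion and rubbing
        xs.append(x); ys.append(y); xe.append(x + w); ye.append(y + h)
    if not xs:
        return box  # nothing found, keep the original frame
    return (x1 + min(xs), y1 + min(ys), x1 + max(xe), y1 + max(ye))
```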



Abstract

The invention provides an inscription labeling, detection and recognition system based on a deep neural network, with which information such as inscription positions, word meanings and fonts can be accurately, effectively and automatically extracted, providing a basis for subsequent inscription retrieval work. The overall structure of the system is divided into an annotation module group, a training module group and a test module group. The annotation module group comprises a character position annotation module based on pre-positioning, a character annotation module based on pre-recognition and a segmentation annotation module based on connected components; the training module group comprises a detector training module and a classifier training module; and the test module group is used for detecting, recognizing and segmenting an input image, and establishes the retrieval function on this basis.
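As a structural sketch only, the three module groups named in the abstract could be organized as below; the class and method names are illustrative assumptions, not the patent's actual interfaces.

```python
# Skeleton of the three module groups described in the abstract
# (annotation, training, test); method bodies intentionally left empty.
class AnnotationModules:
    def annotate_positions(self, image): ...   # character positions, based on pre-positioning
    def annotate_characters(self, image): ...  # character labels, based on pre-recognition
    def annotate_segments(self, image): ...    # segmentation labels, based on connected components


class TrainingModules:
    def train_detector(self, labeled_data): ...
    def train_classifier(self, labeled_data): ...


class TestModules:
    def detect(self, image): ...
    def recognize(self, image): ...
    def segment(self, image): ...
    def retrieve(self, query): ...
```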

Description

Technical field

[0001] The invention belongs to the technical field of text detection, recognition and segmentation, and in particular relates to an inscription label detection and recognition system based on a deep neural network.

Background technique

[0002] As carriers of our country's long history, culture and art, inscriptions are splendid treasures of Chinese civilization. Although the characters engraved on stone tablets can be preserved for a long time, they cannot escape erosion by the years, so the digital protection of inscriptions is becoming increasingly important. Inscriptions are written mainly in traditional Chinese characters, which differ from modern standard simplified characters, and the inscriptions also bear calligraphic marks and corrosion traces. How to make a machine accurately and quickly locate and translate inscriptions and their rubbings has become a meaningful and challenging subject.

[0003...


Application Information

Patent Type & Authority: Application (China)
IPC (IPC8): G06K 9/20, G06K 9/34, G06K 9/46, G06K 9/62, G06N 3/04, G06N 3/08
CPC: G06N 3/08, G06V 10/22, G06V 10/267, G06V 10/462, G06N 3/045, G06F 18/24, Y02D 10/00
Inventors: 马晋, 闫升, 贾国福, 杜鹏, 樊文博, 韩国民
Owner: 天津恒达文博科技股份有限公司