Handwritten signature recognition method and device, equipment, medium and program product
A handwritten signature recognition technology, applied in the field of image recognition, which can solve problems such as low accuracy and achieve the effect of improving accuracy
Pending Publication Date: 2022-08-05
CHINA CONSTRUCTION BANK +1
Abstract
The embodiments of the invention provide a handwritten signature recognition method and device, equipment, a medium and a program product. The method comprises: acquiring a stroke trajectory in a signature image to be recognized, the stroke trajectory comprising a plurality of points; calculating the cutting probability of each point in the stroke trajectory; determining target cutting points of the signature image to be recognized according to the cutting probabilities; cutting the signature image to be recognized according to the target cutting points to obtain a plurality of cut single-character images; inputting the plurality of cut single-character images into a trained single-character recognition model, and obtaining candidate single characters corresponding to each single-character image through the single-character recognition model; and determining the signature corresponding to the signature image to be recognized according to the candidate single characters. In this way, single-character recognition of the handwritten signature is realized, and the recognition accuracy of the handwritten signature is improved.
Application Domain
Signature reading/verifying
Technology Topic
Signature recognition, Single character, Image
Examples
- Experimental program(1)
Example Embodiment
[0054] The features and exemplary embodiments of various aspects of the present application will be described in detail below. In order to make the purpose, technical solutions and advantages of the present application more clear, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present application, but not to limit the present application. It will be apparent to those skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely to provide a better understanding of the present application by illustrating examples of the present application.
[0055] It should be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another, and do not necessarily require or imply that any such actual relationship or sequence exists between these entities or operations. Moreover, the terms "comprising", "including", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprises" does not preclude the presence of additional identical elements in the process, method, article, or device that includes the element.
[0056] The acquisition, storage, use, and processing of data in the technical solution of this application are in compliance with the relevant provisions of national laws and regulations.
[0057] As described in the background art, the prior art mostly adopts an overall recognition method for handwritten signatures: the signature image is taken as a whole, and the corresponding signature is directly recognized through a convolutional neural network model. This overall recognition method suffers from low accuracy in handwritten signature recognition scenarios.
[0058] In addition, due to the variety of Chinese characters and the difficulty of grasping the rules of Chinese names, if the overall recognition mechanism is adopted, a huge training set is required to train the handwritten signature recognition model.
[0059] After in-depth research, the inventor proposes a handwritten signature recognition method based on single-character recognition, which can effectively improve the accuracy of handwritten signature recognition.
[0060] In view of this, embodiments of the present application provide a handwritten signature recognition method, apparatus, device, computer-readable storage medium, and computer program product.
[0061] The handwritten signature recognition method provided by the embodiments of the present application will be introduced below through specific embodiments and application scenarios with reference to the accompanying drawings. In the handwritten signature recognition method provided by the present application, the execution body may be a handwritten signature recognition device, or a part of the modules in the handwritten signature recognition device. In the embodiments of the present application, the method is described in detail by taking the handwritten signature recognition device performing the method as an example.
[0062] In addition, it should be noted that, in the handwritten signature recognition method provided by the embodiments of the present application, after the single-character images are obtained by cutting the signature image to be recognized, they need to be recognized by the trained single-character recognition model. Therefore, before the single-character images are recognized, the single-character recognition model needs to be trained first. The specific implementation of the method for training the single-character recognition model used in the handwritten signature recognition method will therefore be described below with reference to FIG. 1.
[0063] FIG. 1 shows a schematic flowchart of a handwritten signature recognition method provided by an embodiment of the present application, which may specifically be a training method of the single-character recognition model adopted in the handwritten signature recognition method.
[0064] As shown in FIG. 1, the training of the single-character recognition model used in the handwritten signature recognition method provided by the embodiment of the present application may include steps S110 to S130.
[0065] S110: Acquire multiple sample single-character images and first feature vectors corresponding to the multiple sample single-character images.
[0066] The sample single-character image may be a cut single-character image obtained by cutting a handwritten signature image, or may be a directly acquired sample single-character image, for example, a directly acquired handwritten single-character image. The first feature vector may be the feature vector of the reference single-character image corresponding to the sample single-character image. In one example, the reference single-character image may be a single-character image obtained from a preset Chinese character database, for example, from a Xinhua dictionary database.
[0067] S120: Create a single-character recognition model training sample from each sample single-character image and the first feature vector corresponding to each sample single-character image, respectively.
[0068] S130, train a single-character recognition model according to a plurality of single-character recognition model training samples to obtain a trained single-character recognition model.
[0069] The single-character recognition model can be created in several ways. In one embodiment, a Siamese network architecture based on contrastive learning can be used. In one example, a TripleNet (triplet network) architecture can be adopted on the Siamese model structure. Specifically, in this network architecture, for an input image, a plurality of candidate single characters can be retrieved from a large reference single-character image repository based on a retrieval method, and the best one or more can be selected as the recognized candidate single characters. In this way, not only can the flexibility of the network architecture be improved, but the single-character images in the reference single-character image repository can also serve as the benchmark, so that a single-character recognition model with high recognition accuracy can be trained from fewer training samples, thereby improving the efficiency and accuracy of single-character recognition.
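The contrastive idea behind such a triplet architecture can be illustrated with a minimal NumPy sketch. This is an illustration, not the patent's actual network: the two-dimensional embeddings and the margin value are invented. The anchor stands for the embedding of a cut single-character image, the positive for the embedding of the matching reference single-character image, and the negative for a non-matching one.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: pull the anchor embedding toward the matching
    reference-character embedding (positive) and push it away from a
    non-matching one (negative), with a separation margin."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings: the anchor is close to the positive, far from the negative.
anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negative = np.array([0.0, 1.0])
loss = triplet_loss(anchor, positive, negative)  # already well separated
```

When the anchor is already closer to the positive than to the negative by more than the margin, the loss is zero and no parameter update is needed; swapping the positive and negative yields a positive loss.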
[0070] In the embodiments of the present application, multiple sample single-character images and their corresponding first feature vectors are acquired; each sample single-character image and its corresponding first feature vector are used to create a single-character recognition model training sample; and the single-character recognition model is trained according to the plurality of training samples to obtain a trained single-character recognition model. In this way, based on the trained single-character recognition model, the candidate single character corresponding to a single-character image can be identified accurately and efficiently.
[0071] In one embodiment, as shown in FIG. 2, training the single-character recognition model according to a plurality of single-character recognition model training samples to obtain a trained single-character recognition model may specifically include:
[0072] For each single-character recognition model training sample, the following steps S131 to S134 are performed respectively.
[0073] S131: Input the single-character recognition model training sample into a preset single-character recognition model, and obtain at least one sample candidate single character corresponding to the sample single-character image in the training sample.
[0074] There may be one or more sample candidate single characters.
[0075] S132: Based on the sample candidate single characters, obtain at least one second feature vector corresponding to the sample single-character image in the single-character recognition model training sample.
[0076] The second feature vector may be the feature vector of the reference single-character image corresponding to the sample candidate single character obtained from the sample single-character image through the single-character recognition model. In one example, after the sample candidate single character is obtained through the single-character recognition model, the single-character image corresponding to it may be obtained from a preset Chinese character database, for example, from the Xinhua dictionary database.
[0077] S133: Determine a loss function value of the single-character recognition model according to the second feature vector of each single-character recognition model training sample and the first feature vector of the sample single-character image corresponding to the single-character recognition model training sample.
[0078] In one embodiment, when there are multiple sample candidate single characters, determining the loss function value of the single-character recognition model according to the second feature vectors of each training sample and the first feature vector of the corresponding sample single-character image may include: calculating, for each second feature vector in each training sample, a loss function value against the first feature vector of the corresponding sample single-character image, and determining the minimum or maximum of the loss function values corresponding to the second feature vectors as the loss function value of that training sample. In one example, when there are multiple candidate single characters, the set of loss function values corresponding to the second feature vectors may also be used as the loss function value of the single-character recognition model.
[0079] S134: In the case that the loss function value satisfies the training stop condition, obtain the trained single-character recognition model.
[0080] The training stop condition may be a preset condition for stopping the training of the single-character recognition model. As an example, the training stop condition may be that the loss function value of the single-character recognition model is less than a certain threshold. As another example, when there are multiple candidate single characters, the training stop condition may further include that, in the set of loss function values corresponding to the second feature vectors, the distribution of the loss function values satisfies a preset condition. The specific training stop condition can be selected according to the user's needs, and is not limited here.
[0081] In this way, the preset single-character recognition model can be trained through the single-character recognition model training sample, so that the single-character image can be accurately recognized based on the trained single-character recognition model.
[0082] In one embodiment, determining the loss function value of the single-character recognition model according to the second feature vector of each single-character recognition model training sample and the first feature vector of the sample single-character image corresponding to the single-character recognition model training sample, may include:
[0083] Calculate the difference between the second feature vector and the first feature vector.
[0084] The difference is processed by the Sigmoid function to obtain the cross-entropy loss function value.
[0085] The calculation of the difference between the second feature vector and the first feature vector can be performed by a method known in the art, and details are not described here again.
[0086] In this embodiment, the difference between the second feature vector and the first feature vector is calculated and processed by the Sigmoid function to obtain the cross-entropy loss function value. In this way, the difference between the two feature vectors can be mapped into [0, 1], so that the parameters of the single-character recognition model can be adjusted according to the cross-entropy loss function, thereby improving the accuracy of the single-character recognition model.
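One illustrative way to realize this Sigmoid-plus-cross-entropy step is sketched below in NumPy. The projection vector `w` (which collapses the feature difference to a scalar) and the match label are hypothetical placeholders; the patent does not specify how the difference vector is reduced before the Sigmoid.

```python
import numpy as np

def sigmoid(x):
    """Map a real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def cross_entropy_loss(second_vec, first_vec, label, w):
    """Collapse the difference between the second and first feature vectors
    to a scalar with an (illustrative) weight vector w, squash it into
    (0, 1) with the Sigmoid, and score it with binary cross-entropy."""
    diff = second_vec - first_vec
    p = sigmoid(np.dot(w, diff))  # predicted match probability
    eps = 1e-12                   # numerical safety for log()
    return -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

first = np.array([0.2, 0.8, 0.5])
second = np.array([0.6, 0.9, 0.7])
loss = cross_entropy_loss(second, first, label=1, w=np.ones(3))
```

The Sigmoid guarantees the mapped value lies in (0, 1), so the cross-entropy is always well defined and differentiable for gradient-based parameter updates.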
[0087] The handwritten signature recognition method provided by the embodiments of the present application will be described in detail below with reference to FIG. 3.
[0088] FIG. 3 shows a schematic flowchart of a handwritten signature recognition method provided by an embodiment of the present application. As shown in FIG. 3, the method may include steps S210 to S260.
[0089] S210: Acquire a stroke trajectory in the signature image to be recognized, where the stroke trajectory includes multiple points.
[0090] In step S210, the stroke trajectory may include a trajectory formed by all strokes used to form a plurality of single characters in the signature image to be recognized. The acquisition of the stroke trajectory in the signature image to be recognized can be implemented by a method known in the art, which is not limited here. In one example, acquiring the stroke trajectory in the signature image to be recognized may include: performing grayscale processing on the signature image to be recognized, and acquiring the stroke trajectory in the grayscale processed image of the signature to be recognized.
[0091] S220: Calculate the cutting probability of each point in the stroke trajectory.
[0092] In step S220, the cutting probability may include the probability that a point is a target cutting point, and a target cutting point may be a point at the end of a stroke and at the boundary between two single characters.
[0093] S230: Determine the target cutting points of the signature image to be recognized according to the cutting probabilities.
[0094] In one embodiment, multiple candidate cutting points whose cutting probability is greater than a preset threshold may be screened out, and the target cutting points are then determined according to the distances between these candidate cutting points.
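One way to realize this screening step is sketched below. The greedy minimum-gap rule, the threshold, and the pixel values are illustrative assumptions; the patent only states that candidates are filtered by probability and then selected by mutual distance.

```python
def select_cut_points(points, probs, threshold=0.5, min_gap=20):
    """Keep points whose cutting probability exceeds `threshold`, then
    greedily drop candidates closer than `min_gap` pixels to an already
    selected point, considering higher-probability candidates first."""
    candidates = [(p, x) for x, p in zip(points, probs) if p > threshold]
    candidates.sort(reverse=True)  # highest probability first
    selected = []
    for p, x in candidates:
        if all(abs(x - s) >= min_gap for s in selected):
            selected.append(x)
    return sorted(selected)

# Toy horizontal positions and cutting probabilities along a stroke trajectory.
xs = [10, 15, 60, 62, 120]
probs = [0.9, 0.6, 0.7, 0.8, 0.95]
cuts = select_cut_points(xs, probs)  # nearby weaker candidates are dropped
```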
[0095] S240: Cut the signature image to be recognized according to the target cutting points to obtain a plurality of cut single-character images.
[0096] In one example, the cut single-character image corresponding to each single-character field may be obtained according to the position of each single character in the signature image.
[0097] S250: Input the multiple cut single-character images into the trained single-character recognition model, and obtain candidate single-characters corresponding to each single-character image through the single-character recognition model.
[0098] In one example, the candidate single characters corresponding to each single-character field may be obtained according to the single-character field corresponding to each cut single-character image, and each single-character field may correspond to one or more candidate single characters.
[0099] S260: Determine the signature corresponding to the signature image to be recognized according to the candidate single characters.
[0100] Determining the signature corresponding to the signature image to be recognized according to the candidate single characters may include: determining the signature according to candidate signatures obtained from multiple combinations of the candidate single characters. In one example, this may include: permuting and combining the candidate single characters of each single-character field to obtain multiple candidate signatures, and determining the signature corresponding to the signature image to be recognized from the multiple candidate signatures.
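The combination step can be sketched with a Cartesian product over the per-character candidate lists. The candidate characters below are invented examples standing in for the single-character model's output:

```python
from itertools import product

# Hypothetical per-character candidates from the single-character model,
# one list per cut single-character image (ordered left to right).
candidates_per_char = [["张", "章"], ["伟", "玮"]]

# Every combination is a candidate signature for the later name-matching step.
candidate_signatures = ["".join(combo) for combo in product(*candidates_per_char)]
# -> ["张伟", "张玮", "章伟", "章玮"]
```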
[0101] The handwritten signature recognition method of the embodiments of the present application calculates the cutting probability of each point in the stroke trajectory of the signature image to be recognized, determines the cutting points according to the cutting probabilities, and cuts the image into single-character images accordingly. In this way, the signature image is segmented into characters according to a quantified index, which improves the accuracy of character segmentation. The single-character images are then input into the trained single-character recognition model to obtain the candidate single characters corresponding to each single-character image, and the signature corresponding to the signature image is determined according to the candidate single characters. Thus, single-character recognition of the handwritten signature is realized, and the accuracy of handwritten signature recognition is improved.
[0102] In the present application, steps S210 to S240 may be implemented by various methods, which are not limited here. In one embodiment, steps S210 to S240 may be performed by a character cutting algorithm, whose parameters need to be determined first. The specific implementation of the parameter determination process of the character cutting algorithm used in the handwritten signature recognition method will therefore be described below with reference to FIG. 4.
[0103] As shown in FIG. 4, the parameter determination process of the character cutting algorithm used in the handwritten signature recognition method provided by the embodiment of the present application may include steps S310 to S340.
[0104] S310, acquiring multiple sample signature images.
[0105] The sample signature image may be a handwritten signature image.
[0106] S320: For each sample signature image, cut the sample signature image through a preset character cutting algorithm to obtain a plurality of cut sample single-character images.
[0107] The preset character cutting algorithm can be set in various ways. In one embodiment, the preset character cutting algorithm may include at least one of a water drop algorithm and a combination of a hidden Markov model and the Viterbi algorithm (HMM-Viterbi algorithm). The water drop algorithm is simple and efficient, but it is only suitable for cases without character adhesion. Because handwritten signatures often have stroke adhesion between different characters, the drop path of the water droplet cannot always be determined accurately, and the characters sometimes cannot be segmented correctly. The HMM-Viterbi algorithm is more complicated, but has higher accuracy.
[0108] Specifically, the water drop algorithm simulates the process of a water droplet falling from a high place to a low place to segment characters. The droplet falls from the top of the character under the action of gravity and slides down along the outline of the character; when the droplet sinks into a concave part of the outline, it penetrates into the character strokes. Finally, the probability that a point on the stroke trajectory in the signature image is a target cutting point can be determined according to the trajectory of the droplet, so as to determine the character segmentation path.
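A much-simplified sketch of the droplet simulation is shown below. Real implementations handle multiple starting columns, horizontal moves, and tie-breaking; here the drop only tries down, down-left, and down-right, and records where it has to penetrate a stroke:

```python
def water_drop_path(binary, col):
    """Simulate a droplet falling from the top of column `col` of a binary
    image (1 = stroke ink, 0 = background).  At each step the drop prefers
    straight down, then down-left, then down-right; if all three cells
    below are ink, it penetrates the stroke, and the crossing point is
    recorded as a potential cutting point."""
    h, w = len(binary), len(binary[0])
    path, crossings = [], []
    r, c = 0, col
    while r < h - 1:
        for nr, nc in ((r + 1, c), (r + 1, c - 1), (r + 1, c + 1)):
            if 0 <= nc < w and binary[nr][nc] == 0:
                r, c = nr, nc
                break
        else:
            # Blocked below on all three sides: cut through the stroke.
            crossings.append((r + 1, c))
            r += 1
        path.append((r, c))
    return path, crossings

# Toy 5x5 image: a single horizontal stroke across row 2.
grid = [[0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
path, crossings = water_drop_path(grid, col=2)
```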
[0109] When the character cutting algorithm includes the HMM-Viterbi algorithm, cutting the sample signature image through the preset character cutting algorithm to obtain a plurality of cut sample single-character images may include:
[0110] obtaining the stroke trajectory in the sample signature image to obtain multiple observation vectors;
[0111] segmenting the character stroke trajectory to obtain multiple pieces of position information where stroke adhesion may exist, the multiple pieces of position information being multiple pieces of possible cutting point information;
[0112] calculating the emission probabilities and transition probabilities of the hidden Markov model (HMM) based on the multiple observation vectors and the multiple pieces of possible cutting point information, thereby obtaining the conditional probabilities of multiple cutting schemes;
[0113] using the Viterbi algorithm to find the target cutting point information corresponding to the maximum conditional probability, and cutting the sample signature image according to the target cutting point information.
[0114] In this way, the HMM model divides the stroke trajectory of the signature image into several non-overlapping regional grid images. If each regional grid image is regarded as a node, the entire image can be regarded as multiple rows, each row consisting of multiple nodes, with nodes connected to one another, so that there can be various paths from the initial node of row 0 to the end node of the last row. Through the Viterbi algorithm, a dynamic programming method can be used to search for an optimal path, and this path is a potentially suitable segmentation result for the signature image. In this way, the accuracy of character cutting can be improved.
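A generic Viterbi search over such a lattice can be sketched as follows. This is a textbook illustration with invented toy states ("cut" vs "keep") and probabilities, not the patent's actual HMM parameters:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Classic Viterbi: dynamic programming over the lattice of
    (time step, state) nodes, keeping one best log-probability path
    into each node and back-pointers to recover it."""
    V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best_prev, best_lp = max(
                ((p, V[t - 1][p] + math.log(trans_p[p][s] * emit_p[s][obs[t]]))
                 for p in states),
                key=lambda kv: kv[1])
            V[t][s] = best_lp
            back[t][s] = best_prev
    # Trace back from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy lattice: each grid node either continues a character ("keep")
# or is a boundary between characters ("cut"); observations are whether
# the node region contains ink or a gap.
states = ["keep", "cut"]
start_p = {"keep": 0.8, "cut": 0.2}
trans_p = {"keep": {"keep": 0.6, "cut": 0.4},
           "cut":  {"keep": 0.9, "cut": 0.1}}
emit_p = {"keep": {"ink": 0.9, "gap": 0.1},
          "cut":  {"ink": 0.2, "gap": 0.8}}
best = viterbi(["ink", "gap", "ink"], states, start_p, trans_p, emit_p)
```

Working in log space keeps the products of many small probabilities from underflowing on long stroke trajectories.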
[0115] S330: Determine the cutting evaluation index of the character cutting algorithm according to the plurality of cut sample single-character images.
[0116] S340: In the case that the cutting evaluation index meets the preset condition, determine the current character cutting algorithm parameters as the final character cutting algorithm parameters.
[0117] The cutting evaluation index may include a cutting evaluation score determined based on the sample single-character images obtained by the character cutting algorithm, or whether the sample single-character images obtained by the character cutting algorithm meet a preset requirement. As an example, the cutting evaluation index may include whether the number of cut sample single-character images meets a preset requirement. For example, outside ethnic-minority areas, Chinese names basically consist of 2 to 4 characters; for each sample signature image, if the number of cut sample single-character images is 2 to 4, the number can be considered to meet the preset requirement, and accordingly the cutting evaluation index can be considered to meet the preset condition. As another example, the cutting evaluation index may include whether the recognition result of the cut sample single-character images meets a preset requirement. For example, the cut sample single-character images can be recognized by the single-character recognition model to obtain the corresponding signature; if that signature conforms to the naming conventions of Chinese names, the recognition result can be considered to meet the preset requirement, and accordingly the cutting evaluation index can be considered to meet the preset condition.
[0118] It is easy to understand that when the cutting evaluation index does not meet the preset conditions, the parameters of the character cutting algorithm can be adjusted, and then the above S310 to S340 are executed until the cutting evaluation index meets the preset conditions.
[0119] In one embodiment, the preset character cutting algorithm may include both the water drop algorithm and the HMM-Viterbi algorithm. The character cutting algorithm thus obtained can cut the handwritten signature image more accurately.
[0120] At this time, the above S310 to S340 may be executed separately for the water drop algorithm and the HMM-Viterbi algorithm, or may be executed for the two in combination. As an example of combined execution, for each sample signature image, cutting the sample signature image through the preset character cutting algorithm to obtain a plurality of cut sample single-character images may include:
[0121] For each sample signature image, obtain a plurality of cut first sample single-character images through the water drop algorithm, and obtain a plurality of cut second sample single-character images through the HMM-Viterbi algorithm;
[0122] in the case that the similarity between the first sample single-character images and the second sample single-character images is less than a preset threshold, determine the first sample single-character images or the second sample single-character images as the sample single-character images.
[0123] It is easy to understand that, when the similarity between the first sample single-character images and the second sample single-character images is greater than or equal to the preset threshold, the parameters of the water drop algorithm and the HMM-Viterbi algorithm can be adjusted until the similarity is less than the preset threshold.
[0124] In this way, the parameters of the preset character cutting algorithm can be adjusted through the sample signature image, and a character cutting algorithm capable of accurately segmenting the signature image can be obtained.
[0125] In one embodiment, calculating the cutting probability of each point in the stroke trajectory and determining the target cutting points of the signature image to be recognized according to the cutting probabilities may be performed by the character cutting algorithm. Specifically, this may include:
[0126] The first cutting probability of each point in the stroke trajectory is calculated by the water drop algorithm.
[0127] When the first cutting probability does not meet a preset condition, the second cutting probability of each point in the stroke trajectory is calculated by using the hidden Markov model, the second cutting probability including the conditional probability that the point is a target cutting point.
[0128] According to the second cutting probability, the Viterbi algorithm is used to determine the target cutting points.
[0129] The preset condition can be set according to the actual situation, and is not particularly limited here. As an example, the preset condition may be that, after cutting according to the first cutting probabilities, the number of resulting images is expected to be within a preset range, for example, 2 to 4.
[0130] Determining the target cutting points using the Viterbi algorithm according to the second cutting probabilities may include: using the Viterbi algorithm to find the set of points for which the conditional probability of serving as target cutting points is the largest, and determining the cutting points with the largest conditional probability as the cutting points of the signature image to be recognized.
[0131] It is easy to understand that, when the multiple cut images obtained by cutting the signature image to be recognized with the water drop algorithm meet the preset condition, the multiple cut images can be determined as the multiple cut single-character images.
[0132] In this way, when the water drop algorithm can accurately cut the signature image to be recognized, only the relatively simple water drop algorithm is used, which simplifies the cutting process and improves cutting efficiency; when the accuracy of the water drop algorithm is insufficient, the signature image to be recognized can be cut through the HMM-Viterbi algorithm, ensuring high cutting accuracy and thereby improving the accuracy of handwritten signature recognition.
[0133] In one embodiment, determining the signature corresponding to the signature image to be recognized according to the candidate single characters may specifically include:
[0134] obtaining multiple candidate signatures from combinations of the candidate single characters;
[0135] determining the signature corresponding to the signature image to be recognized according to the distance similarity between the candidate signatures and the names in a preset name set.
[0136] The preset name set may be a name set obtained from relevant documents, such as the "Encyclopedia of Chinese Names", or from existing Chinese name data. The distance similarity may include the minimum edit distance between a candidate signature and the names in the preset name set. In one example, one or more candidate signatures whose minimum edit distance is within a preset range may be selected according to the minimum edit distances between the multiple candidate signatures and the names in the preset name set, and the signature corresponding to the signature image may then be determined from them.
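The minimum-edit-distance matching step can be sketched as follows. The name set and candidate signatures are invented examples; the distance is the standard Levenshtein distance computed by dynamic programming:

```python
def edit_distance(a, b):
    """Levenshtein distance via a rolling-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def best_signature(candidates, name_set):
    """Pick the candidate signature closest (by minimum edit distance)
    to any name in the reference name set."""
    return min(candidates,
               key=lambda c: min(edit_distance(c, n) for n in name_set))

# Hypothetical candidates assembled from per-character recognition results.
names = {"张伟", "张玮", "王芳"}
sig = best_signature(["张伟", "章围"], names)
```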
[0137] In this way, the signature corresponding to the recognized signature image can be made more in line with the Chinese name specification and naming convention, thereby improving the accuracy of handwritten signature recognition.
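The name-matching step above can be sketched with a standard Levenshtein edit distance. This is an illustrative sketch: `best_signature`, its tie-breaking, and the `max_dist` cutoff are assumptions standing in for the "preset range" in the text, and `name_set` stands in for a name dictionary such as the one described.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]

def best_signature(candidates, name_set, max_dist=1):
    """Pick the candidate signature whose minimum edit distance to any
    known name is smallest; reject it if even that distance exceeds
    the cutoff (the 'preset range')."""
    scored = [(min(edit_distance(c, n) for n in name_set), c)
              for c in candidates]
    dist, sig = min(scored)
    return sig if dist <= max_dist else None
```

Because the distance is computed against real names, a mis-recognized character that produces an implausible name is penalized, which is how the method steers the result toward Chinese naming conventions.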
[0138] In one embodiment, acquiring the stroke trajectory in the signature image to be recognized may include:
[0139] Image preprocessing is performed on the signature image to be recognized, including image denoising, image binarization, and stroke width transformation.
[0140] The stroke trajectory is obtained from the preprocessed signature image to be recognized.
[0141] The image preprocessing can be implemented by various algorithms, which are not particularly limited here. In one example, image denoising may use a smoothing spatial-domain filter to remove noise points from the image; image binarization may use the OTSU algorithm to separate the text from the background of the signature image; and stroke width transformation may estimate the stroke width by the run-length method and then dilate or erode the strokes according to the estimate, so as to increase or decrease the stroke width. In this way, the accuracy of character cutting and single-character recognition can be improved, thereby improving the accuracy of handwritten signature recognition.
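The OTSU binarization step mentioned above can be sketched directly from its definition: choose the threshold that maximizes the between-class variance of the grayscale histogram. This is a minimal NumPy sketch; function names are illustrative, and the foreground/background polarity (dark ink on light paper) is an assumption.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class
    variance of the 256-bin grayscale histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    cum_count = np.cumsum(hist)                   # pixels at or below t
    cum_sum = np.cumsum(hist * np.arange(256))    # intensity mass
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0 = cum_count[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:                    # one class empty
            continue
        mu0 = cum_sum[t] / w0                     # mean of dark class
        mu1 = (cum_sum[-1] - cum_sum[t]) / w1     # mean of light class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(gray):
    """Separate strokes (0) from background (1), assuming dark ink."""
    t = otsu_threshold(gray)
    return (gray > t).astype(np.uint8)
```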
[0142] Based on the same inventive concept, an embodiment of the present application further provides a handwritten signature recognition device 400 .
[0143] As shown in Figure 5, the handwritten signature recognition device 400 may include an acquisition module 401, a calculation module 402, a determination module 403, a cutting module 404 and an input module 405.
[0144] The acquiring module 401 is configured to acquire the stroke trajectory in the signature image to be recognized, where the stroke trajectory includes multiple points.
[0145] The calculation module 402 is used for calculating the cutting probability of each point in the stroke track.
[0146] The determining module 403 is configured to determine the target cutting site of the signature image to be identified according to the cutting probability.
[0147] The cutting module 404 is configured to cut the signature image to be identified according to the target cutting site to obtain a plurality of cut single-character images.
[0148] The input module 405 is configured to input a plurality of cut single-character images into the trained single-character recognition model, and obtain a candidate single-character corresponding to each single-character image through the single-character recognition model.
[0149] The determining module 403 is further configured to determine the signature corresponding to the signature image to be recognized according to the candidate word.
[0150] The handwritten signature recognition device of the embodiment of the present application calculates the cutting probability of each point in the stroke trajectory of the signature image to be recognized, determines the cutting points according to these probabilities, and cuts the image into single-character images accordingly. Segmenting the signature image according to a quantified index in this way improves the accuracy of character cutting. The single-character images are then input into the trained single-character recognition model to obtain the candidate single characters corresponding to each image, and the signature corresponding to the signature image is determined from these candidates. Thus, single-character recognition of the handwritten signature is realized, and the accuracy of handwritten signature recognition is improved.
[0151] In one embodiment, the above calculation module is used to calculate the cutting probability of each point in the stroke trajectory, and the above determination module is used to determine the target cutting point of the signature image to be recognized according to the cutting probability, which may specifically include:
[0152] The above calculation module is used to calculate the first cutting probability of each point in the stroke trajectory through the water drop algorithm.
[0153] The above calculation module is also used to calculate, by means of a hidden Markov model, the second cutting probability of each point in the stroke trajectory when the first cutting probability does not meet the preset conditions, where the second cutting probability includes the conditional probability that the point is the target cutting point.
[0154] The above determination module is further configured to determine the target cutting point by the Viterbi algorithm according to the second cutting probability.
[0155] The cutting module is used to cut the signature image according to the cutting point to obtain a plurality of cut single-character images.
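A generic Viterbi decoder, as invoked above, can be sketched as follows. This is an illustrative sketch: the framing of two hidden states ("inside a character" vs. "cut here") for each point on the stroke trajectory is an assumption about how the HMM in the text would be set up, not a detail the text specifies.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely hidden-state sequence given log emission
    probabilities (T x S), log transition matrix (S x S), and
    log initial distribution (S,)."""
    T, S = log_emit.shape
    score = log_init + log_emit[0]          # best score ending in each state
    back = np.zeros((T, S), dtype=int)      # backpointers
    for t in range(1, T):
        cand = score[:, None] + log_trans   # cand[i, j]: come from i, go to j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]            # best final state
    for t in range(T - 1, 0, -1):           # trace backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With state 1 read as "cut here", the decoded path directly yields the target cutting points that maximize the joint probability, rather than thresholding each point independently.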
[0156] In one embodiment, the determining module is configured to determine the signature corresponding to the signature image to be recognized according to the candidate word, and may include:
[0157] The combination module is used to obtain multiple candidate signatures based on candidate word combinations.
[0158] The above determining module is configured to determine the signature corresponding to the signature image to be recognized according to the similarity between the candidate signature and the names in the preset name set.
[0159] In one embodiment, the above-mentioned acquisition module is used to acquire the stroke trajectory in the signature image to be recognized, which may specifically include:
[0160] The preprocessing module is used to perform image preprocessing on the signature image to be recognized, and the image preprocessing includes image denoising, image binarization processing, and stroke width transformation.
[0161] The above acquisition module is used to acquire stroke trajectories from the preprocessed signature image to be recognized.
[0162] In one embodiment, the apparatus 400 may further include:
[0163] The acquiring module is configured to acquire multiple sample single-character images and first feature vectors corresponding to the multiple sample single-character images.
[0164] The creation module is used to create a single-character recognition model training sample from each sample single-character image and the first feature vector corresponding to each sample single-character image respectively.
[0165] The training module is used to train the single-character recognition model according to the multiple single-character recognition model training samples, to obtain a trained single-character recognition model.
[0166] In one embodiment, the above-mentioned training module is used to train a single-character recognition model according to a plurality of single-character recognition model training samples, and obtain a trained single-character recognition model, which may specifically include:
[0167] For each single-character recognition model training sample, the following steps are performed:
[0168] Input the single-character recognition model training sample into the preset single-character recognition model, and obtain at least one sample candidate single word corresponding to the sample single-character image in the single-character recognition model training sample.
[0169] Based on the sample candidate single characters, at least one second feature vector corresponding to the sample single-character image in the training sample is obtained.
[0170] The loss function value of the single-character recognition model is determined according to the second feature vector of each single-character recognition model training sample and the first feature vector of the sample single-character image corresponding to the single-character recognition model training sample.
[0171] When the loss function value satisfies the second training stop condition, a trained single-character recognition model is obtained.
[0172] In one embodiment, determining the loss function value of the single-character recognition model according to the second feature vector of each single-character recognition model training sample and the first feature vector of the sample single-character image corresponding to the single-character recognition model training sample, may include:
[0173] Calculate the difference between the second eigenvector and the first eigenvector;
[0174] The difference is processed by the Sigmoid function to obtain the cross entropy loss function value.
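The loss computation described above can be sketched as follows. The text only states that the difference between the two feature vectors is passed through a sigmoid and scored with cross entropy; collapsing the difference to its norm and pairing it with a binary same/different label is an assumed concrete reading, not the patent's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def signature_loss(feat_second, feat_first, same):
    """Binary cross entropy on a sigmoid of the feature-vector
    difference. `same` = 1 if the two vectors should describe the
    same character (assumed labeling scheme)."""
    d = np.linalg.norm(feat_second - feat_first)  # difference magnitude
    p = sigmoid(-d)                               # close vectors -> p near 0.5
    eps = 1e-12                                   # numerical floor for log
    return float(-(same * np.log(p + eps)
                   + (1 - same) * np.log(1 - p + eps)))
```

Under this reading, identical feature vectors with a "same" label give the minimum achievable loss, and the loss grows as the predicted vector drifts from the target, which is the behavior the training stop condition in [0171] relies on.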
[0175] The handwritten signature recognition device provided in the embodiments of the present application can implement each process implemented by the method embodiment shown in Figure 3; to avoid repetition, details are not repeated here.
[0176] Figure 6 shows a schematic diagram of the hardware structure of the handwritten signature recognition device provided by the embodiment of the present application.
[0177] The handwritten signature recognition device may include a processor 501 and a memory 502 storing computer program instructions.
[0178] Specifically, the above processor 501 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
[0179] Memory 502 may include mass storage for data or instructions. By way of example and not limitation, memory 502 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of the above. Memory 502 may include removable or non-removable (or fixed) media, where appropriate. Memory 502 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In certain embodiments, memory 502 is non-volatile solid-state memory.
[0180] Memory may include read-only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, and electrical, optical or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions, and when the software is executed (e.g., by one or more processors), it is operable to perform the operations described with reference to a method according to an aspect of the present application.
[0181] The processor 501 reads and executes the computer program instructions stored in the memory 502 to implement any one of the handwritten signature recognition methods in the foregoing embodiments.
[0182] In one example, the handwritten signature recognition device may also include a communication interface 503 and a bus 510. As shown in Figure 6, the processor 501, the memory 502, and the communication interface 503 are connected through the bus 510 and communicate with one another.
[0183] The communication interface 503 is mainly used to implement communication between modules, apparatuses, units and/or devices in the embodiments of the present application.
[0184] Bus 510 includes hardware, software, or both, coupling the components of the handwritten signature recognition device to each other. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local (VLB) bus, another suitable bus, or a combination of two or more of the above. Bus 510 may include one or more buses, where appropriate. Although the embodiments of this application describe and illustrate a particular bus, this application contemplates any suitable bus or interconnect.
[0185] The handwritten signature recognition device can execute the handwritten signature recognition method in the embodiments of the present application, thereby implementing the handwritten signature recognition method and apparatus described in conjunction with Figure 3 and Figure 5.
[0186] In addition, in combination with the handwritten signature recognition method in the above embodiment, the embodiment of the present application may provide a computer storage medium for implementation. Computer program instructions are stored on the computer storage medium; when the computer program instructions are executed by the processor, any one of the handwritten signature recognition methods in the foregoing embodiments is implemented.
[0187] To be clear, the present application is not limited to the specific configurations and processes described above and illustrated in the figures. For the sake of brevity, detailed descriptions of known methods are omitted here. In the above-described embodiments, several specific steps are described and shown as examples. However, the method process of the present application is not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the sequence of steps after comprehending the spirit of the present application.
[0188] The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an application specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, elements of the present application are programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transmit information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio frequency (RF) links, and the like. The code segments may be downloaded via a computer network such as the Internet, an intranet, or the like.
[0189] It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above steps, that is, the steps may be performed in the order mentioned in the embodiment, or may be different from the order in the embodiment, or several steps may be performed simultaneously.
[0190] Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. The computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that execution of the instructions via the processor of the computer or other programmable data processing apparatus implements the functions/acts specified in one or more blocks of the flowchart and/or block diagrams. Such processors may be, but are not limited to, general-purpose processors, special-purpose processors, application-specific processors, or field-programmable logic circuits. It will also be understood that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks therein, can also be implemented by special-purpose hardware that performs the specified functions or actions, or by a combination of special-purpose hardware and computer instructions.
[0191] The above are only specific implementations of the present application. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the above-described systems, modules and units may refer to the corresponding processes in the foregoing method embodiments, and will not be repeated here. It should be understood that the protection scope of this application is not limited thereto; any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in this application, and such modifications or replacements shall all fall within the protection scope of this application.