Face image recognition method and device, electronic equipment and storage medium
A face image recognition technology, applied in the field of image processing, solves the problems of missed recognition and easy misrecognition, and achieves the effect of reducing misrecognition and improving accuracy.
Pending Publication Date: 2020-07-10
MIGU CO LTD +1
Problems solved by technology
[0005] Embodiments of the present invention provide a face image recognition method, device, electronic equipment, and storage medium to solve the problem that in the pr...
Abstract
The embodiment of the invention provides a face image recognition method and device, electronic equipment and a storage medium. The method comprises the steps of: after determining a reference image similar to a target face image, if the identity information corresponding to the similar reference image belongs to a white list, determining the identity information corresponding to the target face image according to the identity information belonging to the white list. The white list contains the identity information corresponding to face images already recognized in the video. Because the images in a video are correlated, the white list associates the recognition of a face image in the video with other images in the video, which improves the accuracy of face image recognition and, for low-quality face images and complex scenes, reduces misrecognition.
Application Domain
Character and pattern recognition
Technology Topic
Image identification, Computer graphics (images)
Examples
- Experimental program(1)
Example Embodiment
[0028] To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
[0029] This embodiment provides a method for recognizing a face image, which is used to identify persons appearing in a video (for example, a movie or a movie clip). In this way, a user can learn about the characters appearing in the video before watching it, or, based on the recognition of the face images appearing in the video, video clips containing only specific characters can be edited automatically, thereby improving editing efficiency. The method can be executed by any device, for example, a computer, a server, or a mobile phone. Figure 1 is a schematic flowchart of the face image recognition method provided by this embodiment. Referring to Figure 1, the method includes:
[0030] Step 101: Obtain the target face image to be recognized from the video, and determine similar reference images according to the similarity between each reference image in a database and the target face image, wherein the database stores the correspondence between identity information and reference images.
[0031] Reference images corresponding to each identity information are pre-stored in the database, with multiple reference images stored for each identity information. These reference images are photographs of the person corresponding to the identity information, in particular of the person's face, taken from different angles.
[0032] A similar reference image is a reference image with a high degree of similarity to the target face image among the reference images. The similarity between each reference image and the target face image can be calculated by Euclidean distance, which is not specifically limited in this embodiment.
[0033] Step 102: Determine whether there is identity information belonging to a white list in the identity information corresponding to each similar reference image, where the white list includes identity information corresponding to the recognized face images in the video.
[0034] The whitelist stores the identity information corresponding to the face images that have already been identified in the video. Because the frames of a video are correlated, identity information that has already been identified in the video has a relatively high probability of reappearing in it. Therefore, whitelist screening can greatly improve recognition accuracy and reduce misidentification caused by poor image quality or scene complexity. At the same time, whitelist screening narrows the scope of the further judgment process and improves recognition efficiency.
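As an illustrative sketch only (not part of the patented disclosure), the whitelist screening described above can be expressed in a few lines of Python; the function name `filter_by_whitelist` and the use of a set for the whitelist are assumptions for illustration:

```python
def filter_by_whitelist(candidate_ids, whitelist):
    """Keep, in order, only the candidate identities that already appear in
    the video-level whitelist of identities recognized earlier in the video."""
    return [ident for ident in candidate_ids if ident in whitelist]
```

If the returned list is non-empty, those identities become the candidate identity information of step 103; otherwise recognition falls back to the first-number screening described below.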
[0035] Step 103: If there is identity information belonging to the white list, the identity information belonging to the white list is used as the candidate identity information, and the identity information corresponding to the target face image is identified according to each candidate identity information.
[0036] The candidate identity information is identity information that requires further confirmation; there may be one or more candidates. If the candidate identity information is unique, it is confirmed as the identity information corresponding to the target face image. If the candidate identity information is not unique, the identity information corresponding to the target face image must be further confirmed based on other information.
[0037] Figure 2 is a schematic diagram of the overall flow of face image recognition for the video provided by this embodiment. Referring to Figure 2, after the TOPN neighbor candidate results (i.e., similar reference images) are determined through face detection and facial feature extraction, if the identity information corresponding to the TOPN results includes identity information belonging to the white list, the recognition result can be further determined, specifically by the voting method, contextual information, or the average similarity value. After this round of recognition, if the recognized identity information has not yet appeared in the white list, the white list is updated to add the newly appearing identity information. Further, the context information can also be updated, and the updated context information is used to further confirm subsequent recognition results.
[0038] According to the method for recognizing a face image provided by this embodiment, after a similar reference image similar to the target face image is determined, if the identity information corresponding to the similar reference image belongs to the white list, the identity information corresponding to the target face image is determined according to the identity information belonging to the white list. The whitelist contains the identity information corresponding to the face images already recognized in the video. Because the images in the video are correlated, the whitelist associates the recognition of a face image in the video with other images in the video, which improves the accuracy of face image recognition and reduces misrecognition for low-quality face images and complex scenes.
[0039] In the process of face image recognition, it may also happen that the identity information corresponding to each similar reference image does not belong to the white list (for example, when a face image acquired from the video is recognized for the first time, the white list is empty, so none of the identity information corresponding to the similar reference images is in the whitelist). Further, on the basis of the foregoing embodiment, the method further includes:
[0040] If there is no identity information belonging to the whitelist, determine the first number of similar reference images corresponding to the same identity information among the identity information corresponding to each similar reference image;
[0041] The identity information corresponding to the similar reference images whose first number is greater than or equal to the first threshold is acquired as the candidate identity information, and the identity information corresponding to the target face image is identified according to each candidate identity information.
[0042] Among the identity information corresponding to the similar reference images, the number of similar reference images corresponding to the same identity information is counted; the number of similar reference images corresponding to each identity information is its first number. For example, if there are 5 similar reference images in total, the identity information corresponding to 3 of them is identity information A, and the identity information corresponding to the other 2 is identity information B, then the number of similar reference images corresponding to identity information A is 3 (that is, the first number of identity information A is 3), and the number of similar reference images corresponding to identity information B is 2 (that is, the first number of identity information B is 2).
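A minimal sketch of this counting step (illustrative only; names such as `first_numbers` are not from the patent) using Python's standard library:

```python
from collections import Counter

def first_numbers(similar_ids):
    """Count, for each identity, how many similar reference images
    correspond to it: the 'first number' of that identity."""
    return dict(Counter(similar_ids))

def threshold_candidates(similar_ids, first_threshold):
    """Keep identities whose first number is greater than or equal to
    the first threshold (e.g. N/2 for N similar reference images)."""
    return {ident: n for ident, n in first_numbers(similar_ids).items()
            if n >= first_threshold}
```

With the example above (5 similar images: 3 of identity A, 2 of identity B) and a first threshold of N/2 = 2.5, only identity A survives the screening.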
[0043] The first threshold is a set value, for example, the first threshold is N/2, where N is the total number of similar reference images.
[0044] As shown in Figure 2, when the identity information corresponding to the TOPN results contains no identity information belonging to the whitelist, the recognition result is further determined according to the high-confidence identity information among the TOPN results (that is, the identity information whose first number of similar reference images is greater than or equal to the first threshold).
[0045] In this embodiment, in the case that there is no identity information belonging to the whitelist, the identity information is further screened based on the first number of similar reference images corresponding to each identity information, so as to realize further confirmation of the identity information.
[0046] In the process of face image recognition, it may also appear that the identity information corresponding to each similar reference image does not belong to the whitelist, and there is no identity information corresponding to the first number of similar reference images greater than or equal to the first threshold. In this case, further, on the basis of the foregoing embodiments, it further includes:
[0047] If there is no first number greater than or equal to the first threshold, then after the re-identification condition is met, it is determined again whether the identity information corresponding to each similar reference image contains identity information belonging to the whitelist; if so, the identity information belonging to the whitelist is used as the candidate identity information, and the identity information corresponding to the target face image is identified according to each candidate identity information; otherwise, the target face image is discarded;
[0048] wherein the re-identification condition is that, according to the playing order of the video, the identity information corresponding to the last frame of face image in the video has been identified, or that the second number of identity information items newly added to the white list is greater than or equal to the second threshold.
[0049] The re-identification condition is a preset condition for re-identifying the target face image. The second threshold is a set value; for example, if the second threshold is 3, then when the number of identity information items newly added to the whitelist is detected to be greater than or equal to 3, the identity information of the target face image is identified again based on its similar reference images. The re-identification condition can also be that the identity information corresponding to the last face image of the video has been identified according to the playback order (it should be noted that, regardless of whether recognition of the last face image of the video succeeds, once identification has been attempted on the last frame, the previously stored face images without identity information can be re-identified). As shown in Figure 2, when neither the whitelist nor the confidence condition is satisfied, the TOPN results can be temporarily stored until the end of the video; it is then judged whether the identity information corresponding to the TOPN results belongs to the whitelist. If so, further confirmation is performed; if not, the target face image is discarded and no identity information is identified.
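The re-identification trigger can be summarized as a small predicate (an illustrative sketch; the parameter names are assumptions, and the default second threshold of 3 follows the example in the text):

```python
def reidentify_due(video_finished, newly_whitelisted, second_threshold=3):
    """The re-identification condition holds when the last face frame of the
    video has been processed, or when at least `second_threshold` identities
    have been newly added to the whitelist."""
    return video_finished or newly_whitelisted >= second_threshold
```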
[0050] This embodiment performs temporary storage processing on some currently unrecognizable images, and uses the association between each frame of the video to re-recognize after satisfying the re-recognition condition, thereby increasing the probability of successful recognition of the target face image.
[0051] Further, on the basis of the foregoing embodiments, in the foregoing step 101, the determining similar reference images according to the similarity between each reference image in the database and the target face image includes:
[0052] Determine the face feature vector corresponding to the target face image, and calculate the similarity between the target face image and each reference image according to the face feature vector and the reference feature vector corresponding to each reference image;
[0053] In descending order of similarity, a number of reference images equal to the third threshold are obtained as similar reference images.
[0054] Further, the method further includes: extracting an unrecognized face image from the video as the target face image.
[0055] Further, the determining the face feature vector corresponding to the target face image includes:
[0056] Inputting the RGB image data of the target face image into a preset model, and the preset model determines the face feature vector corresponding to the target face image;
[0057] Wherein, the preset model is obtained by training a deep neural network with a face image as a sample, and identity information corresponding to the face image as a label.
[0058] Wherein, determining the face feature vector corresponding to the target face image by the preset model specifically includes: using a vector output by the last fully connected layer of the preset model as the face feature vector.
[0059] Further, calculating the similarity between the target face image and each reference image according to the face feature vector and the reference feature vector corresponding to each reference image includes:
[0060] For the target face image and any reference image, according to the face feature vector and the reference feature vector corresponding to the reference image, the formula
[0061] dist(x_i, x_j) = sqrt( Σ_k (x_{i,k} - x_{j,k})^2 )
[0062] is used to calculate the Euclidean distance between the target face image and the reference image;
[0063] where x_i and x_j are respectively the face feature vector and the reference feature vector corresponding to the reference image, and dist represents the Euclidean distance between the two feature vectors.
[0064] Further, obtaining, in descending order of similarity, a number of reference images equal to the third threshold as similar reference images includes:
[0065] obtaining, in ascending order of Euclidean distance, a number of reference images equal to the third threshold as similar reference images. For example, if the third threshold is N, the first N reference images in the ranking are obtained as similar reference images. Similarity is expressed by Euclidean distance: the smaller the Euclidean distance, the greater the similarity. Figure 3 is a schematic diagram, provided by this embodiment, of sorting Euclidean distances to obtain the TOPN candidate results. In Figure 3, dist(x, y_j) represents the calculated Euclidean distance between the target face image and the j-th reference image. On the right of Figure 3 is a list sorted by Euclidean distance from small to large (i.e., by similarity from large to small), where d_i is the i-th Euclidean distance in ascending order and identity_i is the personal identity information corresponding to each d_i. The TOPN candidate results are obtained from the list on the right of Figure 3.
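The distance computation and TOPN selection of this step can be sketched as follows (illustrative only; `euclidean` and `topn_similar` are hypothetical names, and plain lists stand in for feature vectors):

```python
import math

def euclidean(x, y):
    """dist(x_i, x_j): square root of the sum of squared component differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def topn_similar(target, refs, n):
    """Sort reference vectors by ascending Euclidean distance (i.e. descending
    similarity) and return the first n (index, distance) pairs."""
    dists = [(j, euclidean(target, ref)) for j, ref in enumerate(refs)]
    dists.sort(key=lambda jd: jd[1])
    return dists[:n]
```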
[0066] This embodiment obtains the top N reference images with the highest similarity as similar reference images through calculation and sorting of similarity, which provides a basis for subsequent calculations.
[0067] As a specific example, this embodiment provides a specific implementation process for determining the identity information corresponding to a face image, and the process includes the following content:
[0068] (1) Face detection. The face detection algorithm based on deep neural network is used to detect the face area in the image to be recognized, and obtain the RGB image data of the face area.
[0069] (2) Face feature extraction. Using the face feature extraction algorithm based on deep neural network, the face image is converted into a 512-dimensional feature vector.
[0070] (3) Spatial distance calculation of face feature vector. Calculate the distance between the face feature vector to be recognized and the standard face feature vector stored in the database, and take the top N candidate results sorted from small to large among all the calculation results.
[0071] For content (3), the Euclidean distance is used to measure the spatial distance of the face feature vector. The calculation method of Euclidean distance is as follows:
[0072] dist(x_i, x_j) = sqrt( Σ_{k=1}^{512} (x_{i,k} - x_{j,k})^2 )
[0073] where x_i and x_j each represent a 512-dimensional face feature vector, and dist represents the Euclidean distance between the two feature vectors. The Euclidean distance between the feature vector to be recognized and every feature vector in the database is calculated, and the first N minimum values among the results are taken. As shown in Figure 3, x is the face feature vector to be recognized, y_i is a standard face feature vector stored in the database, d_i is the i-th Euclidean distance in ascending order, and identity_i is the identity of the person corresponding to each d_i.
[0074] With regard to the process of further identifying the TOPN candidate results in the foregoing embodiments, on the basis of the foregoing embodiments, the identifying the identity information corresponding to the target face image according to the respective candidate identity information includes:
[0075] Determine the number of similar reference images corresponding to each candidate identity information as its number of votes. If the candidate identity information corresponding to the largest number of votes is unique, that candidate identity information is used as the identity information corresponding to the target face image;
[0076] If the candidate identity information corresponding to the largest number of votes is not unique, determine the identity information corresponding to the target face image according to historical identification information and/or similarity information;
[0077] Here, the historical identification information includes each two-tuple and the tag value corresponding to each two-tuple. A two-tuple consists of the identity information of a successfully recognized item and the identity information of a misrecognized item: when a face image is identified, the candidate identity information recognized as corresponding to the face image is the successfully recognized item, and the candidate identity information not recognized as corresponding to the face image is the misrecognized item. The similarity information includes, for each candidate identity information, the average similarity determined from the similarity between each corresponding similar reference image and the target face image.
[0078] When the candidate identity information corresponding to the largest number of votes is unique, the identity information corresponding to the target face image is determined by the voting method; otherwise, context information (that is, historical recognition information) and/or similarity information is used to determine the identity information corresponding to the target face image.
[0079] Further, the similarity information specifically includes: for any candidate identity information, calculating the average of the similarities between each similar reference image corresponding to that candidate identity information and the target face image, and taking the calculated average as the average similarity of that candidate identity information.
[0080] Figure 4 is a schematic diagram of determining identity information by the voting method provided in this embodiment. Referring to Figure 4, d_i is the Euclidean distance corresponding to each candidate result (i.e., similar reference image) among the TOPN candidate results, identity denotes identity information, and identities with the same subscript indicate the same identity information. The candidate identity information is grouped to obtain the number of votes corresponding to each candidate identity (i.e., m, n, s), and the identity corresponding to max(m, n, s) is the finally identified person identity information.
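The voting method can be sketched as follows (illustrative; a tie returns `None` to signal that context or average-similarity information must decide):

```python
from collections import Counter

def vote(candidate_ids):
    """Each similar reference image votes for its identity; the identity with
    the unique largest vote count wins. Returns None on a tie, signalling the
    fallback to historical identification or similarity information."""
    ranked = Counter(candidate_ids).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None
    return ranked[0][0]
```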
[0081] Further, on the basis of the foregoing embodiments, if the candidate identity information corresponding to the largest number of votes is not unique, determining the identity information corresponding to the target face image according to the historical identification information and similarity information includes:
[0082] if the candidate identity information corresponding to the largest number of votes is not unique, taking each candidate identity information in turn as the successfully recognized item and each of the other candidate identity information as the misrecognized item to form two-tuples, thereby obtaining all the two-tuples determined by the candidate identity information;
[0083] according to the historical identification information, determining, from all the two-tuples determined by the candidate identity information, the target two-tuple with the largest tag value; if the target two-tuple with the largest tag value is unique, using the identity information corresponding to the successfully recognized item in the target two-tuple as the identity information corresponding to the target face image;
[0084] if the target two-tuple with the largest tag value is not unique, determining the average similarity corresponding to each candidate identity information, and using the candidate identity information with the largest average similarity as the identity information corresponding to the target face image.
[0085] A two-tuple is a combination of two identity information items: one is the successfully recognized item and the other is the misrecognized item. For example, in the two-tuple (identity_i, identity_j), identity_i is the successfully recognized item and identity_j is the misrecognized item. The historical identification information (i.e., context information) stores the tag value of each two-tuple. These tag values make it possible to confirm the identity information when the candidate identity information corresponding to the largest number of votes is not unique.
[0086] Figure 5 is a schematic diagram, provided by this embodiment, of using two-tuples to assist in determining the identity of a person. Referring to Figure 5, according to the three identity information items appearing in the whitelist, identity_1, identity_2, and identity_3, all two-tuples are determined (as shown in the middle part of Figure 5). According to the tag value of each two-tuple in the historical identification information, the target two-tuple with the largest tag value is determined, and the identity information corresponding to the successfully recognized item in the target two-tuple is used as the final identification result.
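A sketch of the two-tuple tie-break (illustrative; `tag_values` is assumed to be a dict mapping ordered (success, misrecognition) pairs to their tag values):

```python
def context_tiebreak(tag_values, tied_ids):
    """Form ordered (success, misrecognition) two-tuples from the tied
    candidates, rank them by tag value, and return the success item of the
    unique highest-tagged pair; None means the maximum is not unique and the
    average-similarity fallback is needed."""
    pairs = [(a, b) for a in tied_ids for b in tied_ids if a != b]
    ranked = sorted(pairs, key=lambda p: tag_values.get(p, 0), reverse=True)
    if len(ranked) > 1 and tag_values.get(ranked[0], 0) == tag_values.get(ranked[1], 0):
        return None
    return ranked[0][0]
```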
[0087] Further, if the identity information of the target face image cannot be confirmed through the historical recognition information (i.e., context information), the determination continues using the similarity information. Figure 6 is a schematic diagram, provided by this embodiment, of determining the identity of a person by the average Euclidean distance. Referring to Figure 6, for each candidate identity information, the average Euclidean distance between all the similar reference images under that identity information and the target face image is calculated, and the identity information with the smallest average Euclidean distance is determined as the final identity information.
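The minimum-average-distance fallback can be sketched as follows (illustrative; distances are assumed to be grouped per identity beforehand):

```python
def min_average_distance(dists_by_identity):
    """For each tied identity, average the Euclidean distances between its
    similar reference images and the target face; the identity with the
    smallest average distance (largest average similarity) wins."""
    averages = {ident: sum(d) / len(d) for ident, d in dists_by_identity.items()}
    return min(averages, key=averages.get)
```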
[0088] This embodiment confirms the identity information corresponding to the target face image through the voting method, and uses historical identification information and/or similarity information to further confirm the identity information when the candidate identity information corresponding to the largest number of votes is not unique.
[0089] Regarding the method of setting the tag value of a two-tuple, further, on the basis of the foregoing embodiments, the method further includes:
[0090] after the identity information belonging to the whitelist is used as the candidate identity information and the identity information corresponding to the target face image is identified according to each candidate identity information, taking the candidate identity information recognized as corresponding to the target face image as the first identity information, and taking the candidate identity information not recognized as corresponding to the target face image as the second identity information;
[0091] for each second identity information, determining whether the two-tuples of the historical identification information contain a two-tuple in which the first identity information is the successfully recognized item and the second identity information is the misrecognized item; if so, increasing the tag value of that two-tuple by the first preset value; otherwise, adding the two-tuple to the historical identification information and setting its tag value to the second preset value.
[0092] The first preset value and the second preset value are set values, for example, both are 1.
[0093] As shown in Figure 2, not only the whitelist but also the context information (historical identification information) needs to be updated. Specifically, if the candidate identity information is identity information belonging to the whitelist, the identity information finally recognized as corresponding to the target face image among the candidates serves as the first identity information, and the other candidate identity information serves as the second identity information. For each second identity information, it is judged whether the two-tuple (first identity information, second identity information) already exists in the historical identification information; if so, the tag value of that two-tuple is increased by 1; otherwise, the two-tuple is stored in the historical identification information and its tag value is set to 1.
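The tag-value update can be sketched as follows (illustrative; both preset values default to 1 as in the text, and the function names are assumptions):

```python
def update_context(tag_values, first_identity, second_identities,
                   first_preset=1, second_preset=1):
    """For each second identity, increase the tag value of the existing
    (first, second) two-tuple by the first preset value, or create the
    two-tuple with the second preset value."""
    for second in second_identities:
        key = (first_identity, second)
        if key in tag_values:
            tag_values[key] += first_preset
        else:
            tag_values[key] = second_preset
    return tag_values
```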
[0094] This embodiment implements the setting of two-tuple tag values, providing a basis for determining identity information based on the historical identification information.
[0095] Specifically, on the basis of the above content (1)-(3), content (4) face identification includes:
[0096] 1) Determine the list of candidates with high confidence. Perform calculations in the following order:
[0097] 1.1) If person identities among the N candidate results are included in the whitelist, the confidence of the candidate results included in the whitelist is raised to high confidence, they are regarded as high-confidence candidates, and the process skips to step 2) to further determine the identity of the person.
[0098] 1.2) Group and count the identities of the N candidate results, sort them by count from large to small, and take the maximum of the grouping results (there may be multiple tied maxima). If the maximum of the grouping results meets the high-confidence threshold, take it as a high-confidence candidate result and skip to step 2) to further determine the identity of the person.
[0099] 1.3) If none of the above conditions is met, the recognition result has low confidence and the high-confidence candidate list is empty; skip to step (7) to temporarily store the low-confidence recognition result.
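Steps 1.1-1.3 together can be sketched as a single selection function (illustrative only; the threshold semantics are an assumption based on the description above):

```python
from collections import Counter

def high_confidence_candidates(topn_ids, whitelist, threshold):
    """Whitelist hits are promoted to high confidence (step 1.1); otherwise
    the most frequent identities qualify only if their group count reaches
    the high-confidence threshold (step 1.2); an empty result means the
    recognition result is temporarily stored (step 1.3 / step (7))."""
    hits = sorted({i for i in topn_ids if i in whitelist})
    if hits:
        return hits
    counts = Counter(topn_ids)
    best = max(counts.values())
    if best >= threshold:
        return sorted(i for i, c in counts.items() if c == best)
    return []
```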
[0100] 2) Determine the identity of the person. For the high-confidence candidate identity list output in step 1), the calculation is performed in the following order:
[0101] 2.1) Use the "voting method" to determine the identity of the person according to the principle of the minority obeying the majority; if recognition succeeds, skip to step (5). The identification process of the "voting method" is shown in Figure 4, where d_i is the Euclidean distance of each calculated high-confidence candidate result, identity_i is the person identity corresponding to d_i, m, n, s are the group counts corresponding to each person identity, and the identity corresponding to max(m, n, s) is the finally identified person identity.
[0102] 2.2) If multiple person identities tie for the highest number of votes and the "voting method" cannot determine the person's identity, the candidate identities are combined in pairs and the context two-tuples are queried; the person identity represented by the two-tuple with the largest count is selected as the recognition result. If recognition succeeds, skip to step (5). The process of identification using contextual information is shown in Figure 5, where d_i, m, n, s and identity have the same meanings as above, (identity_i, identity_j) is the high-confidence candidate two-tuple to be queried, and the identity_i of the two-tuple selected by the maximum count is the finally recognized person identity.
[0103] 2.3) If the person's identity cannot be determined directly through the "voting method" or the context recognition result, there are multiple person identities with the same confidence in the recognition result. The principle of minimum average distance is then used to determine the person's identity; after successful recognition, skip to step (5). The calculation process of the minimum-average-distance method is shown in Figure 6, where d_i, m, n, s and identity have the same meanings as above, and x, y, z are respectively the average Euclidean distances of the groups of identity_1, identity_2, and identity_3. The identity corresponding to min(x, y, z) is the finally identified person identity.
[0104] (5) Set up a "video-level" recognition whitelist. Based on the recognition result of step (4), if the number of the N results corresponding to the recognized identity meets the high-confidence threshold condition, the whitelist is updated and the corresponding person identity is added to the whitelist; identities added to the whitelist have a higher priority in the subsequent identification process.
[0105] (6) Set up "video-level" Context two-tuples. Based on the recognition result of step (4), construct <recognition success, misrecognition> two-tuples that store pairs of persons prone to misrecognition, where "recognition success" corresponds to the identity of the recognition result in step (4) and "misrecognition" corresponds to the identities among the N calculated results other than the successfully recognized one; the two-tuple count is incremented on every update.
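The two-tuple bookkeeping can be sketched as follows, using the same assumed (recognized, misrecognized) dict representation as in step 2.2:

```python
def update_pair_counts(pair_counts, recognized, candidates):
    """Record <recognition success, misrecognition> pairs from the N
    candidate results of step (4).

    pair_counts: dict mapping (recognized_id, misrecognized_id) -> count.
    recognized: identity of the successful recognition result.
    candidates: list of (identity, distance) tuples for the N results;
    every identity other than the recognized one counts as a
    misrecognition, and its pair count is incremented.
    """
    for cand_id, _ in candidates:
        if cand_id != recognized:
            key = (recognized, cand_id)
            pair_counts[key] = pair_counts.get(key, 0) + 1
    return pair_counts
```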
[0106] (7) Temporarily store low-confidence recognition results. If none of the N candidate results is in the whitelist and no candidate reaches the threshold condition for triggering high confidence, the confidence of the recognition result is low and cannot be raised through the whitelist; temporarily store such results until video recognition ends.
[0107] (8) Re-identify the temporarily stored low-confidence recognition results. After the entire video has been recognized, based on the recognition whitelist and the <recognition success, misrecognition> two-tuple information of the whole video, re-execute step (4) for all results temporarily stored in step (7); results that still fail to satisfy the conditions of step (4) are discarded.
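Steps (7) and (8) together amount to a deferred second pass. A sketch, where `recognize` stands in for the whole step (4) procedure as a hypothetical callback (all names here are illustrative):

```python
def reprocess_low_confidence(pending, whitelist, pair_counts, recognize):
    """Re-run recognition on temporarily stored low-confidence results
    once the whole video has been processed.

    pending: list of (frame_id, candidates) stored by step (7).
    whitelist / pair_counts: the completed video-level structures.
    recognize: callback implementing step (4); returns an identity
    or None. Results that still fail are discarded, per step (8).
    """
    final = {}
    for frame_id, candidates in pending:
        identity = recognize(candidates, whitelist, pair_counts)
        if identity is not None:
            final[frame_id] = identity  # discard on None
    return final
```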
[0108] This embodiment improves the traditional face recognition method, making it suitable for recognizing the faces of characters in movies and videos. First, during face feature vector matching, the N nearest neighbor vectors of the feature vector are computed as candidate results; second, the confidence of the recognition result is judged from the N candidates and a whitelist of recognizable persons is set; third, based on the high-confidence results among the N candidates, voting, context assistance or the minimum-average-distance method is used to determine the face identity; finally, when multiple identities among the candidate results have the same confidence, the <recognition success, misrecognition> two-tuple, which stores the identities of persons prone to confusion, is used to assist in correcting the recognition result. The proposed method increases the number of face recognition candidates and uses the context of earlier recognition results to correct subsequent ones, effectively reducing misrecognition and missed recognition and improving the precision and recall of the recognition results.
[0109] Figure 7 is a structural block diagram of the face image recognition device provided in this embodiment. Referring to Figure 7, the device includes an acquisition module 701, a judgment module 702 and an identification module 703, where:
[0110] The acquisition module 701 is configured to obtain a target face image to be recognized from a video, and determine similar reference images according to the similarity between each reference image in the database and the target face image;
[0111] The judging module 702 is configured to judge whether there is identity information belonging to a white list in the identity information corresponding to each similar reference image, wherein the white list includes identity information corresponding to the recognized face images in the video;
[0112] The identification module 703 is configured to, if there is identity information belonging to the whitelist, use the identity information belonging to the whitelist as candidate identification information, and identify the identity information corresponding to the target face image according to each candidate identification information;
[0113] Wherein, the database includes the corresponding relationship between the identity information and the reference image.
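The acquisition-judgment-identification flow of the three modules can be sketched end to end. The function name, the similarity callback and the 0.8 threshold are assumptions for illustration, not values from the patent:

```python
def recognize_target(target_image, database, whitelist, similarity,
                     threshold=0.8):
    """End-to-end sketch of the Figure 7 device flow.

    database: dict mapping identity -> reference image (the stored
    identity/reference-image correspondence).
    similarity: callback scoring a reference image against the target.
    Acquisition finds similar reference images, judgment keeps
    identities on the whitelist, identification returns one of them.
    """
    # Acquisition module: reference images passing the similarity threshold.
    similar = [identity for identity, ref in database.items()
               if similarity(ref, target_image) >= threshold]
    # Judgment module: which similar identities are whitelisted.
    candidates = [identity for identity in similar if identity in whitelist]
    # Identification module: pick a whitelisted candidate if one exists.
    return candidates[0] if candidates else None
```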
[0114] The facial image recognition apparatus provided in this embodiment is applicable to the facial image recognition method provided in the above-mentioned embodiments, and will not be repeated here.
[0115] According to the face image recognition device provided in this embodiment, after a similar reference image similar to the target face image is determined, if the identity information corresponding to the similar reference image belongs to the whitelist, the identity information corresponding to the target face image is determined according to the identity information belonging to the whitelist. The whitelist is the identity information corresponding to face images already recognized in the video. Because the images in a video are correlated, the whitelist associates the recognition of a face image in the video with other images in the video, which improves the accuracy of face image recognition and reduces misrecognition for face images of poor quality or in complex scenes.
[0116] Figure 8 is an example of the physical structure diagram of an electronic device. As shown in Figure 8, the electronic device may include: a processor 810, a communication interface (Communications Interface) 820, a memory 830, and a communication bus 840, where the processor 810, the communication interface 820 and the memory 830 communicate with each other through the communication bus 840. The processor 810 may call the logic instructions in the memory 830 to execute the following method: obtaining a target face image to be recognized from a video, and determining similar reference images according to the similarity between each reference image in the database and the target face image; judging whether identity information belonging to a whitelist exists in the identity information corresponding to each similar reference image, wherein the whitelist includes identity information corresponding to face images already recognized in the video; if identity information belonging to the whitelist exists, using the identity information belonging to the whitelist as candidate identity information and identifying the identity information corresponding to the target face image according to each piece of candidate identity information; wherein the database includes the correspondence between identity information and reference images.
[0117] It should be noted that, in specific implementation, the electronic device in this embodiment can be a server, a PC or another device, as long as its structure includes the processor 810, the communication interface 820, the memory 830 and the communication bus 840 shown in Figure 8, where the processor 810, the communication interface 820 and the memory 830 communicate with each other through the communication bus 840, and the processor 810 can call the logic instructions in the memory 830 to execute the above method. This embodiment does not limit the specific implementation form of the electronic device.
[0118] In addition, the aforementioned logic instructions in the memory 830 can be implemented in the form of software functional units and, when sold or used as independent products, can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.
[0119] Furthermore, an embodiment of the present invention discloses a computer program product. The computer program product includes a computer program stored on a non-transitory computer-readable storage medium, and the computer program includes program instructions. When the program instructions are executed by a computer, the computer can execute the methods provided in the above method embodiments, for example, including: obtaining a target face image to be recognized from a video, and determining similar reference images according to the similarity between each reference image in the database and the target face image; judging whether identity information belonging to a whitelist exists in the identity information corresponding to each similar reference image, wherein the whitelist includes identity information corresponding to face images already recognized in the video; if identity information belonging to the whitelist exists, using the identity information belonging to the whitelist as candidate identity information and identifying the identity information corresponding to the target face image according to each piece of candidate identity information; wherein the database includes the correspondence between identity information and reference images.
[0120] On the other hand, an embodiment of the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the method provided in the foregoing embodiments, for example, including: obtaining a target face image to be recognized from a video, and determining similar reference images according to the similarity between each reference image in the database and the target face image; judging whether identity information belonging to a whitelist exists in the identity information corresponding to each similar reference image, wherein the whitelist includes identity information corresponding to face images already recognized in the video; if identity information belonging to the whitelist exists, using the identity information belonging to the whitelist as candidate identity information and identifying the identity information corresponding to the target face image according to each piece of candidate identity information; wherein the database includes the correspondence between identity information and reference images.
[0121] The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.
[0122] Through the description of the above implementations, those skilled in the art can clearly understand that each implementation can be realized by means of software plus a necessary general hardware platform, and of course also by hardware. Based on this understanding, the above technical solution, in essence, or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in each embodiment or some parts of the embodiments.
[0123] Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or equivalently replace some of the technical features therein; these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.