Biological characteristic collection method, characteristic collection device and computer readable storage medium

A biological characteristic collection technology, applied in computer parts, calculations, instruments, etc., which addresses the problems of a small collected fingerprint range and low biometric collection efficiency, thereby ensuring accuracy, mitigating image overlap, and improving image collection efficiency.

Active Publication Date: 2019-10-08
CHIPONE TECH BEIJING CO LTD
10 Cites 0 Cited by

AI-Extracted Technical Summary

Problems solved by technology

However, the range of fingerprints collected by the above two collection methods is too small, so that it is necessary t...

Method used

In summary, in the biological feature collection method, feature collection device 10, and computer-readable storage medium 15 given in the embodiments of the present application, multiple collected images are obtained by performing multiple collections of one biological feature, and each collected image is iteratively processed to mitigate the image overlap caused by image magnification in the prior art and to ensure the accuracy of the biometric images corresponding to the biometric features collected multiple times. On this basis, each time...

Abstract

The embodiments of the invention provide a biological characteristic collection method, a characteristic collection device, and a computer-readable storage medium. The biological characteristic collection method is applied to the characteristic collection device, and the characteristic collection device comprises an image sensing unit and a plurality of light sources. The biological characteristic collection method comprises the steps of: when a biological characteristic is detected, conducting image collection on the biological characteristic multiple times through the image sensing unit to obtain a plurality of collected images, turning on light sources at a plurality of different positions for each image collection; and processing the plurality of collected images to obtain a biological characteristic image corresponding to the biological characteristic. According to the invention, high-efficiency collection of biological characteristics can be realized.

Application Domain

Character and pattern recognition

Technology Topic

Light source, Image sensing +1

Image

  • Biological characteristic collection method, characteristic collection device and computer readable storage medium

Examples

  • Experimental program(1)

Example Embodiment

[0041] In order to make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. The components of the embodiments of the present application generally described and illustrated in the drawings herein may be arranged and designed in a variety of different configurations.
[0042] Thus, the following detailed description of the embodiments of the application provided in the accompanying drawings is not intended to limit the scope of the application as claimed, but is merely representative of selected embodiments of the application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative efforts shall fall within the protection scope of the present application.
[0043] It should be noted that like numerals and letters refer to like items in the following figures, so once an item is defined in one figure, it does not require further definition and explanation in subsequent figures.
[0044] According to the applicant's research, in some embodiments, the feature collection device 10 may include, as shown in Figure 4, the transparent cover plate 11, the plurality of light sources 13, the image sensing unit 12, and the adhesive layer 14 between the transparent cover plate 11 and the image sensing unit 12. As shown in Figure 5(a), taking a fingerprint as an example, when the finger 17 is placed on the transparent cover 11, a light source 13 in the OLED (Organic Light-Emitting Diode) or TFT-LCD (Thin Film Transistor-Liquid Crystal Display) layer is lit, and the difference between the refractive index of the skin of the finger 17 and the refractive index of the air 18 is then exploited: fingerprint collection is carried out in the region where light incident from the air side is totally reflected but light incident on the finger skin is not. However, since the light intensity of the light source 13 decays rapidly as the distance increases, only a small effective range x of the fingerprint F is illuminated, as shown in Figure 5(a), and the resulting fingerprint image is shown as y in Figure 5(b). It can be calculated from the optical path shown in Figure 5(a) that the fingerprint image y is the image of x magnified (2+D/d) times with the light spot of the light source 13 as the center, where d is the thickness of the transparent cover 11 or the distance between the light source 13 and the biological feature, and D is the thickness of the adhesive layer 14. When the distance c between two lit light sources 13 is smaller than a threshold c0, the fingerprint images y1 and y2 shown in Figure 5(c) will overlap.
[0045] Please refer to Figure 5(a) and Figure 5(b) in combination. The above-mentioned threshold c0 can be calculated as follows: assume the refractive index of the transparent cover 11 is n_g and the refractive index of the finger 17 is n_f. According to the optical path diagram shown in Figure 5(a), the maximum effective radius of the fingerprint image y is the imaging radius at which the contact surface between the fingerprint and the transparent cover 11 is just totally reflected, that is, R = d*(2+D/d)*tan(arcsin(n_f/n_g)). The minimum non-overlapping distance c0 (i.e. the threshold c0) is then c0 = 2R = 2d*(2+D/d)*tan(arcsin(n_f/n_g)). Assuming D = 0.2d, n_g = 1.5, and n_f = 1.33~1.42 (n_f changes with the dry and wet state of the finger 17, etc.), it can be calculated that 8d ≤ c0 ≤ 12d. It should be noted that, in actual implementation, c0 may differ, for example when a protective film is attached to the transparent cover plate 11 or a cover plate is added at other positions; in such cases c0 can be calculated according to the actual optical path and refractive indices, and the calculation of c0 is not limited in this embodiment.
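The threshold formula above can be checked numerically. The sketch below (an illustration only; the function name and the choice of working in units of d are ours, not the patent's) evaluates c0 = 2d*(2+D/d)*tan(arcsin(n_f/n_g)) for the dry and wet finger cases:

```python
import math

def overlap_threshold(d, D, n_f, n_g=1.5):
    """Minimum non-overlapping spacing c0 = 2R between lit light sources,
    with R = d*(2 + D/d)*tan(arcsin(n_f/n_g)) the maximum effective
    imaging radius at the total-reflection limit."""
    R = d * (2 + D / d) * math.tan(math.asin(n_f / n_g))
    return 2 * R

d = 1.0                                          # work in units of d
lo = overlap_threshold(d, D=0.2 * d, n_f=1.33)   # dry finger
hi = overlap_threshold(d, D=0.2 * d, n_f=1.42)   # wet finger
```

With these inputs, c0 comes out at roughly 8.4d for a dry finger and 12.9d for a wet one, close to the 8d-12d range quoted above (the exact endpoints depend on rounding).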
[0046] To sum up, the embodiments of the present application provide a biometric feature collection method, a feature collection device, and a computer-readable storage medium to solve the following problem: in the prior art, avoiding overlapping imaging requires either ensuring that the distance c between the light sources 13 lit at the same time is greater than the threshold c0, or lighting only one light source 13 for each acquisition, which results in low biometric acquisition efficiency. The technical solution given in the present application is described in detail below with reference to the accompanying drawings.
[0047] Please refer to Figures 4 and 6. The feature acquisition device 10 provided in this embodiment of the present application may include, but is not limited to, a transparent cover 11, an image sensing unit 12, a processor 16, a computer-readable storage medium 15, and a plurality of light sources 13. The image sensing unit 12 is used to collect images of biological features to obtain a plurality of collected images. When the biometric feature is a fingerprint or a palm print, the multiple captured images may correspond to one touch operation of the fingerprint or palm print on the transparent cover plate 11. Optionally, the image sensing unit 12 may be, but is not limited to, a CMOS (Complementary Metal-Oxide-Semiconductor) sensor, a CCD (Charge-Coupled Device) sensor, or the like.
[0048] A plurality of light sources 13 are disposed between the transparent cover 11 and the image sensing unit 12 and are used to provide the image sensing unit 12 with background light when collecting images of biological features. In this embodiment, the plurality of light sources 13 may be, but are not limited to, light sources 13 in an LCD (Liquid Crystal Display), LED (Light-Emitting Diode), or OLED (Organic Light-Emitting Diode) panel. In addition, each light source 13 may be a visible light source or an invisible light source, such as a monochromatic or white light source among visible light sources, or an infrared or ultraviolet light source among invisible light sources, which is not limited in this embodiment.
[0049] In one embodiment, the plurality of light sources 13 may be arranged as shown in Figure 7. It should be noted that, in this embodiment of the present application, each light source 13 may be, but is not limited to, a point light source composed of one or more adjacent sub-pixels. In addition, the distance between adjacent light sources 13 may be, but is not limited to, less than 8d (mm), where d (mm) is the thickness of the transparent cover 11 or the distance between the light sources 13 and the biological feature. The spacing between adjacent light sources 13 may follow the equidistant distribution shown in Figure 7 or an unequal-interval distribution, which is not limited in this embodiment.
[0050] The computer-readable storage medium 15 is used to store computer-executable program instructions corresponding to the biological feature collection device 150; when these instructions are read and run by the processor 16, they execute the biological feature collection method provided in the embodiments of the present application. Optionally, the actual types of the computer-readable storage medium 15 and the processor 16 may be selected according to requirements, which are not limited in this embodiment. In addition, it should be noted that, besides being a part of the feature collection device 10, the computer-readable storage medium 15 may also exist independently of the feature collection device 10, which is not limited in this embodiment.
[0051] It can be understood that, in this embodiment, the feature collection device 10 may be, but is not limited to, an electronic device capable of collecting biometric features such as fingerprints, palm prints, iris features, and facial features, for example a smart phone, an iPad, a notebook computer, or a mobile Internet device (MID). In addition, the structures of the feature collection device 10 shown in Figures 4 and 6 are only for illustration; the feature collection device 10 may include more or fewer components than shown in Figures 4 and 6, or have a configuration different from that shown in Figure 6. The components shown in Figure 6 can be implemented in hardware, software, or a combination thereof.
[0052] Further, please refer to Figure 8, which shows a biological feature collection method applicable to the feature collection device 10 provided by the embodiment of the present application. It should be noted that the biometric collection method given in this application is not limited to Figure 8 or the specific order described below. It should be understood that the order of some steps in the biometric acquisition method of the present application may be exchanged according to actual needs, and some steps may be omitted or deleted.
[0053] In step S11, when a biological feature is detected, the image sensing unit 12 performs multiple image acquisitions on the biological feature to obtain a plurality of captured images.
[0054] The light sources 13 at multiple different positions are turned on each time an image is collected; in other words, during one image collection process, the light sources 13 at multiple different positions can be turned on so as to illuminate as large a biometric area as possible and maximize the effective information in each collected image, thereby improving the collection efficiency of the biometric images. In addition, the biometric feature described in step S11 may be, but is not limited to, at least one of fingerprints, palm prints, or facial features, and the biometric image may be, but is not limited to, at least one of palm print images, fingerprint images, or facial feature images; for example, one biometric image may include both a fingerprint image and a palm print image.
[0055] Further, the detection methods used by the feature collection device 10 may differ according to the biological feature. For example, when the biometric feature is a face feature, it can be determined that the biometric feature is detected when the projection of the face feature on the feature acquisition device 10 is detected; for another example, when the biometric feature is a fingerprint or a palm print, it can be determined that a biological feature is detected when a touch operation of the finger 17 or the palm on the feature collection device 10 is detected, which is not limited in this embodiment.
[0056] It should be noted that the multiple captured images may be obtained by performing multiple image acquisitions while the biometric feature is detected and does not move relative to the feature acquisition device 10. For example, when the finger 17 or the palm no longer slides or moves relative to the feature collection device 10 after coming into contact with it, multiple fingerprint images are obtained by performing multiple fingerprint image collections; for another example, after the feature collection device 10 detects a face, multiple face feature images are obtained by performing multiple collections while the face does not move relative to the feature collection device 10. This is not limited in this embodiment.
[0057] Further, as an implementation manner, when the feature acquisition device 10 acquires a collected image, there may be one or more unlit light sources 13 between the adjacent light sources 13 that are turned on, as shown in Figure 9(a). The black dots shown in Figure 9(a) are the light sources 13 that are turned on, and the others are light sources 13 that are not turned on; the light sources 13 at multiple different positions in step S11 may form, but are not limited to, the light source array shown in Figure 9(a). It should be understood that, in actual implementation, the light source array formed by the light sources 13 at different positions is not limited to the pattern shown in Figure 9(a); for example, the light source array may also be a line segment, rectangle, triangle, circle, or other irregular figure, with the light sources 13 as the vertices of the figure. Besides, the patterns formed by the plurality of light sources 13 that are turned on each time may be the same or different, which is not limited in this embodiment.
[0058] In addition, each time before the image sensing unit 12 performs image acquisition on the biometrics, a light source array as shown by the black dots in Figure 9(a) can be turned on, and there can be a preset offset between the two light source arrays turned on in adjacent image acquisitions. Optionally, the preset offset can be, but is not limited to, a value defined in terms of c, where c represents the distance between two adjacent light sources 13 in the lit light source array. Taking Figure 9(a) as an example, c represents the distance between two adjacent black dots shown in Figure 9(a).
[0059] For example, referring to Figures 9(a)-9(d), assume Figure 9(a) shows the light source array turned on in one collection of the biometric feature (the black dots are the light sources 13 that are turned on, and the others are not turned on). Then, in the previous or next image acquisition, the light source array that is turned on can be shifted to the right by a preset offset as shown in Figure 9(b), shifted to the left by a preset offset as shown in Figure 9(c), or shifted downward by a preset offset as shown in Figure 9(d).
[0060] It should be understood that, in addition to the left, right, and downward shifts by a preset offset shown in Figures 9(b)-9(d), the light source array turned on in the next acquisition can also be shifted to the upper left, lower left, or other directions by one or more preset offsets relative to the currently lit light source array shown in Figure 9(a). The number of light sources 13 in the light source arrays turned on each time may be the same or different, and the offset and direction of the light source arrays turned on each time may be the same or different; this embodiment imposes no restrictions. Additionally, it can be seen from Figures 9(a)-9(d) that, compared with the prior art, when biometric image acquisition is performed in the embodiment of the present application, multiple light sources 13 can be turned on for each image acquisition, and the distance c between adjacent light sources 13 can be smaller than the threshold c0, thereby effectively increasing the range of biometric features collected and improving the efficiency of feature image collection.
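The shifted light source arrays described above can be sketched as a regular grid of lit point sources plus a per-acquisition offset. The snippet below is an illustration only; the function name is ours, and the choice of pitch/2 merely stands in for the preset offset, whose exact value the source does not state:

```python
def lit_array(rows, cols, pitch, offset=(0, 0)):
    """(x, y) positions of the lit point light sources: a regular grid
    with spacing `pitch` (the distance c between adjacent lit sources),
    shifted by a preset offset between consecutive acquisitions."""
    dx, dy = offset
    return [(j * pitch + dx, i * pitch + dy)
            for i in range(rows) for j in range(cols)]

# First acquisition: unshifted grid; second acquisition: the same grid
# shifted right by half the pitch (a stand-in for the preset offset).
first = lit_array(3, 3, pitch=4)
second = lit_array(3, 3, pitch=4, offset=(2, 0))
```

Shifting the lit array between acquisitions means each region of the biometric is eventually covered by some acquisition even though each individual acquisition lights only a sparse grid.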
[0061] Step S12, processing a plurality of collected images to obtain biometric images corresponding to the biometrics.
[0062] Depending on the size relationship between the distance c between adjacent light sources 13 and the threshold c0, the processing for obtaining the biometric image from the multiple collected images differs. For example, when the distance c between adjacent light sources 13 is greater than or equal to the threshold c0, the biometric ranges on the collected images will not overlap, and the biometric image can be obtained by cropping and splicing the multiple collected images, which will not be repeated in this embodiment.
[0063] For another example, when the distance c between adjacent light sources 13 is smaller than the threshold c0, it can be seen from the collected images of the two acquisitions shown in Figures 10(a) and 10(b) that, although turning on light sources 13 at multiple different positions for each image acquisition increases the biometric range of each captured image (for example, the distance c between adjacent light sources 13 can be smaller than the threshold c0), the enlargement of the biometric image also produces overlapping areas on the collected image, as shown in Figure 10(c). In this regard, the embodiments of the present application can implement the processing of multiple captured images to obtain the biometric image through steps S120 to S122 shown in Figure 11, the contents of which are as follows.
[0064] Step S120, obtaining an iterative image for iterative processing through image initialization.
[0065] The initialized iterative image may be, but is not limited to, an all-black image or an all-white image, which serves as the reference image for performing deviation processing on the collected images in step S121. It should be noted that the specific process of image initialization can be set according to requirements: for example, an initialization image template corresponding to the biological feature can be called from a plurality of preset initialization image templates and used as the iterative image; for another example, the iterative image can be randomly generated according to preset image generation rules; for yet another example, the iterative image may be obtained by initializing one of the multiple collected images. This embodiment imposes no restrictions.
[0066] Step S121: perform deviation processing on each collected image according to the iterative image to obtain a plurality of deviation images corresponding to the collected images, and perform stitching processing on the deviation images to obtain a stitched image. The image deviation processing in step S121 can take various forms.
[0067] As one implementation, as shown in Figure 12, the image deviation processing in step S121 can be implemented through steps S1210 to S1215, the contents of which are as follows.
[0068] Step S1210: for each collected image, obtain the light source position of each light source 13 that was turned on when the image was collected. The light source position information of each light source 13 can be preset in the feature collection device 10; in actual implementation, the light source position information of each light source 13 in the light source array can be obtained by information calling, etc., which is not limited in this embodiment.
[0069] Step S1211: taking the light source position of each light source 13 as the center and the first preset value as the radius, divide the iterative image into biometric ranges to obtain a first image including a plurality of first sub-images, where the first sub-images correspond one-to-one to the light sources 13. The first preset value is r, with 2d ≤ r ≤ 4d, where d represents the thickness of the transparent cover 11 or the distance between the light source 13 and the biological feature, and D represents the thickness of the adhesive layer 14 between the transparent cover 11 and the image sensing unit 12.
[0070] Step S1212: enlarge the biometric range of each first sub-image in the first image to obtain a second image including a plurality of second sub-images; specifically, the biometric range of each first sub-image in the first image is enlarged by a magnification of (2+D/d) to obtain the second image.
[0071] Step S1213, performing difference processing between the second image and the acquired image to obtain a difference image.
[0072] Step S1214: respectively taking the light source position of each light source 13 as the center and the second preset value as the radius, divide the difference image into biometric ranges to obtain a third image including a plurality of third sub-images. The second preset value is ((2+D/d)*r), with 2d ≤ r ≤ 4d, where d represents the thickness of the transparent cover 11 or the distance between the light source 13 and the biological feature, and D represents the thickness of the adhesive layer 14 between the transparent cover 11 and the image sensing unit 12.
[0073] Step S1215: reduce the biometric range of each third sub-image in the third image to obtain a deviation image; specifically, the biometric range of each third sub-image can be reduced by a factor of (2+D/d), where d represents the thickness of the transparent cover 11 or the distance between the light source 13 and the biometrics, and D represents the thickness of the adhesive layer 14 between the transparent cover 11 and the image sensing unit 12.
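Steps S1210-S1215 can be sketched in one dimension. The toy below is an illustration only, not the claimed method: it assumes a single lit light source, integer magnification m standing in for (2+D/d), nearest-neighbour resampling, and function names of our own choosing:

```python
def deviation_1d(iterate, collected, center, r, m):
    """1-D sketch of steps S1210-S1215 for a single lit light source.

    iterate/collected: equal-length lists of pixel values.
    center: light source position (index); r: first preset value;
    m: magnification factor, standing in for (2 + D/d).
    """
    n = len(iterate)
    # S1211/S1212: magnify iterate[center-r .. center+r] by m about center
    second = [0.0] * n
    for k in range(n):
        j = int(round(center + (k - center) / m))  # inverse mapping
        if abs(j - center) <= r and 0 <= j < n:
            second[k] = iterate[j]
    # S1213: difference between the magnified image and the collected image
    diff = [s - y for s, y in zip(second, collected)]
    # S1214/S1215: take radius m*r of the difference and reduce it by m
    dev = [0.0] * n
    for k in range(n):
        j = int(round(center + (k - center) * m))
        if abs(j - center) <= m * r and 0 <= j < n:
            dev[k] = diff[j]
    return dev

# Toy run: all-black iterate, one bright pixel under the light source.
iterate = [0.0] * 21
collected = [0.0] * 21
collected[10] = 1.0
dev = deviation_1d(iterate, collected, center=10, r=2, m=3)
```

Note the two radii: the division of the iterative image uses r (the first preset value), while the division of the difference image uses m*r (the second preset value), mirroring the enlargement in between.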
[0074] As another implementation, as shown in Figure 13, the image deviation processing in step S121 can also be implemented through steps S1216 to S1219, the contents of which are as follows.
[0075] Step S1216 , for each captured image, obtain the light source position of each light source that was turned on when the captured image was captured.
[0076] Step S1217: respectively taking the light source position of each light source 13 as the center and the second preset value as the radius, divide the biometric range of the collected image to obtain a first image including a plurality of first sub-images, where the first sub-images correspond one-to-one to the light sources 13.
[0077] Step S1218 , reducing the biometric range of each first sub-image in the first image to obtain a second image including a plurality of second sub-images.
[0078] In step S1219, a difference image is obtained by performing difference processing between the second image and the iterative image.
[0079] It can be understood that the difference between the image deviation processing of steps S1216 to S1219 and that of steps S1210 to S1215 is as follows: in steps S1216 to S1219, the feature range is divided on the collected image, the image is reduced, and the result is differenced with the iterative image to obtain the difference image; whereas in steps S1210 to S1215, the feature range is divided on the iterative image, the image is enlarged, and the result is differenced with the collected image. Therefore, for a detailed description of steps S1216 to S1219, reference may be made to the descriptions of steps S1210 to S1215, which are not repeated in this embodiment of the present application.
[0080] Further, in order to cover as much of the biometric image as possible when multiple image acquisitions are performed on one biological feature, a preset offset is applied between the light source arrays turned on in adjacent acquisitions. As a result, there will be overlapping regions among the multiple captured images, and each captured image is only a partial image of the actual biometric image. Therefore, after obtaining the deviation images corresponding to the collected images, each deviation image must be stitched to obtain a stitched image.
[0081] As an implementation manner, the step of splicing each deviation image in step S121 to obtain a spliced image can be implemented through steps S1220 to S1222 shown in Figure 14, the contents of which are as follows.
[0082] Step S1220: among the deviation images, acquire the overlapping images that have overlapping regions and the overlap count of each overlapping image.
[0083] Step S1221: for each overlapping image, perform mean-value processing on the overlapping regions according to the overlap count to obtain a deduplicated image.
[0084] Step S1222: splice the deduplicated overlapping images and the non-overlapping images among the deviation images to obtain the spliced image.
[0085] In the above steps S1220 to S1222, assume n is the number of iterations, M is the number of collected images, and the deviation image corresponding to the i-th collected image is denoted Δx_i. The deviation images Δx_1, ..., Δx_M can then be stitched to obtain the stitched image ΔF_n: the deviation images corresponding to the collected images are first added directly, and then each overlapping area is averaged according to its overlap count to obtain the final stitched image ΔF_n.
[0086] For example, as shown in Figure 15, assume the two deviation images to be spliced are AB and BC, where the deviation image corresponding to AB is Δx1 and the deviation image corresponding to BC is Δx2. When the deviation images AB and BC are spliced, region B is collected twice, while regions A and C are each collected once; therefore, region B of the spliced image can be averaged according to its overlap count (2 in this case) to obtain the stitched image.
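The sum-then-average-by-overlap-count splicing of steps S1220-S1222 can be sketched as follows. This is an illustration only; representing each deviation image as a dict from region label to value is our simplification, and the values are arbitrary:

```python
def splice(deviation_images):
    """Splice deviation images (each a dict mapping region/pixel position
    to value) by summing and then dividing each position by its overlap
    count, as in steps S1220-S1222."""
    total, count = {}, {}
    for img in deviation_images:
        for pos, val in img.items():
            total[pos] = total.get(pos, 0.0) + val
            count[pos] = count.get(pos, 0) + 1
    return {pos: total[pos] / count[pos] for pos in total}

# The AB / BC example from the text: region B is covered twice and gets
# averaged; regions A and C are covered once and kept as-is.
dx1 = {"A": 4.0, "B": 2.0}   # deviation image covering regions A and B
dx2 = {"B": 6.0, "C": 8.0}   # deviation image covering regions B and C
stitched = splice([dx1, dx2])
```

Here region B ends up with the mean (2.0 + 6.0) / 2 = 4.0, while A and C pass through unchanged, matching the Figure 15 example.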
[0087] Step S122: Update the iterative image according to the stitched image to obtain the biometric image corresponding to the biometric.
[0088] The updated image F' is calculated by the formula F' = F - λΔF, where F represents the iterative image, ΔF represents the stitched image, and λ represents the iterative compensation amount (0 < λ).
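The update formula F' = F - λΔF is applied pixel by pixel. A minimal sketch (illustrative names and values, not the patent's):

```python
def update(iterate, stitched, lam):
    """Pixel-wise update F' = F - lam * dF (step S122), where F is the
    iterative image, dF the stitched image, and lam the iterative
    compensation amount."""
    return [f - lam * df for f, df in zip(iterate, stitched)]

F = [0.0, 0.0, 0.0]       # initialized (all-black) iterative image
dF = [-1.0, -0.5, 0.0]    # stitched deviation image
F_next = update(F, dF, lam=0.5)   # -> [0.5, 0.25, 0.0]
```

A negative deviation (collected brighter than the iterate) thus raises the corresponding pixel of the updated image, pulling the iterate towards the collected data.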
[0089] To sum up, the present application performs iterative processing on a plurality of collected images through the image processing steps given in steps S120 to S122, which solves the problem of image overlap caused by image enlargement in the prior art and further ensures the accuracy of the collected biometric images corresponding to the biometrics.
[0090] Further, in an implementation manner, one round of deviation processing, splicing processing, and update processing performed on the multiple collected images can be regarded as one iteration. Therefore, in order to improve the image quality of the biometric images, the biometric acquisition method may also include: judging whether the number of iterations performed on the plurality of collected images reaches a third preset value, or judging whether the image obtained after the update processing meets preset requirements. When the number of iterations has not reached the third preset value, or the updated image does not meet the preset requirements, the updated image is used as the iterative image for the next iteration, and the iterative processing of the multiple collected images continues based on that iterative image. When the number of iterations reaches the third preset value, or the updated image meets the preset requirements, the iterative processing stops, and the corresponding updated image is taken as the biometric image corresponding to the biometric. Optionally, the third preset value may be, but is not limited to, 5 times, 7 times, and the like. The preset requirement may be, but is not limited to, whether the image quality meets the requirement, whether the image error in the iterative process is less than a preset value, etc., which is not limited in this embodiment.
[0091] In addition, it should be noted that, in an implementation manner, when there are multiple iterations, the image F_{n+1} after the (n+1)-th update can be calculated by the formula F_{n+1} = F_n - λΔF_n, where n represents the number of iterations, F_n represents the image after the (n-1)-th update, and the deviation image obtained by performing the n-th deviation processing on the i-th collected image is denoted Δx_i, i = 1, 2, 3, ..., M, where M represents the number of collected images. It is understandable that when n is 1, F_n is the all-black or all-white image obtained by image initialization, while in each subsequent iteration the iterative image used is the image obtained from the previous iteration.
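The iteration loop of paragraphs [0090]-[0091] can be sketched as follows. This is an illustration only: deviation_fn stands in for the whole deviation-plus-splice step, and the toy deviation dF = F - target (which makes the update converge to a fixed image) is our own construction, not the patent's:

```python
def iterate_until(F0, deviation_fn, lam, max_iters=7, tol=1e-3):
    """Run the deviation -> splice -> update cycle until the iteration
    count reaches a preset value (max_iters, the 'third preset value')
    or the change per update falls below a preset requirement (tol)."""
    F = list(F0)
    for n in range(max_iters):
        dF = deviation_fn(F)                            # stitched deviation
        F_new = [f - lam * df for f, df in zip(F, dF)]  # F_{n+1} = F_n - lam*dF_n
        if max(abs(a - b) for a, b in zip(F_new, F)) < tol:
            return F_new, n + 1
        F = F_new
    return F, max_iters

# Toy deviation dF = F - target pulls the iterate towards `target`.
target = [1.0, 2.0, 3.0]
F_final, iters = iterate_until(
    [0.0, 0.0, 0.0],
    lambda F: [f - t for f, t in zip(F, target)],
    lam=0.5, max_iters=50)
```

With λ = 0.5 the per-iteration error halves, so the loop stops on the change criterion well before the 50-iteration cap.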
[0092] Based on the above description of the biometric collection method, steps S121 and S122 are briefly described below with reference to Figures 12 and 16. Here, take the captured image obtained by one image capture of the biometric as an example: assume the captured image of the M-th acquisition of a biometric is Y_M, and two light sources 13 were turned on when the image Y_M was acquired.
[0093] (1) After obtaining the light source positions of the two light sources 13 corresponding to the collected image Y_M, the collected image Y_M is divided into the first image shown in Figure 16, taking the light source positions of the two light sources 13 as centers and the first preset value as the radius. The first image includes two first sub-images representing different biometric ranges; for example, different dashed circles represent different biometric ranges.
[0094] (2) The two first sub-images in the first image are each enlarged by a factor of (2 + D/d) about their respective circle centers, to obtain the second image shown in Figure 16, which includes a plurality of second sub-images.
[0095] (3) Difference processing is performed between the collected image Y_M and the second image to obtain a difference image.
[0096] (4) Again taking the light source positions of the two light sources 13 as centers and the second preset value as the radius, the difference image is divided into the third image shown in Figure 16, which includes two third sub-images representing different biometric ranges. The two third sub-images are the biometric ranges represented by the two dashed circles shown in the difference image.
[0097] (5) The two third sub-images are each reduced by a factor of (2 + D/d) about the centers of the third sub-images in the third image, to obtain the deviation image shown in Figure 16.
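Steps (1)-(5) can be sketched for a single light-source position as below. The nearest-neighbour rescaling, the circular masks, and the choice to crop the collected image itself in step (1) are illustrative assumptions; the patent only fixes the (2 + D/d) scale factor and the two preset radii:

```python
import numpy as np

def scale_about(img, center, factor):
    """Nearest-neighbour rescale of `img` about `center` by `factor`,
    keeping the output the same size as the input (assumed behaviour)."""
    h, w = img.shape
    cy, cx = center
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.round(cy + (ys - cy) / factor).astype(int), 0, h - 1)
    sx = np.clip(np.round(cx + (xs - cx) / factor).astype(int), 0, w - 1)
    return img[sy, sx]

def deviation_for_source(collected, center, r1, r2, D, d):
    """Compute the deviation image for one light-source position,
    following steps (1)-(5)."""
    factor = 2.0 + D / d
    h, w = collected.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    first_sub = np.where(dist2 <= r1 ** 2, collected, 0.0)  # (1) divide, radius r1
    second = scale_about(first_sub, center, factor)         # (2) enlarge by (2 + D/d)
    diff = collected - second                               # (3) difference processing
    third_sub = np.where(dist2 <= r2 ** 2, diff, 0.0)       # (4) divide, radius r2
    return scale_about(third_sub, center, 1.0 / factor)     # (5) reduce by (2 + D/d)

dev = deviation_for_source(np.random.rand(32, 32), (16, 16),
                           r1=6, r2=14, D=2.0, d=1.0)
```

The per-source deviation images computed this way would then be spliced and fed into the update processing described above.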
[0098] It should be noted that after the deviation images corresponding to the collected images calculated in (1)-(5) above are spliced to obtain a spliced image, and the image is updated according to the spliced image, it may further be judged whether the number of iterations for each collected image Y_M has reached the third preset value, such as 5 or 7, or whether the updated image meets the preset requirement, for example whether the image resolution of the updated image meets a preset resolution, or whether the convergence speed of the image during iteration has reached a preset value, etc. If the number of iterations for the collected image Y_M reaches the third preset value, or the updated image meets the preset requirement, the updated image is used as the biometric image corresponding to the biological feature.
[0099] Conversely, when the number of iterations has not reached the third preset value, or the updated image does not meet the preset requirement, the updated image is used as a new iterative image, and steps S121 and S122 are repeated based on the new iterative image until the number of iterations reaches the third preset value or the updated image meets the preset requirement.
[0100] It should be noted here that, in addition to the biometric collection method given in steps S120 to S122 above, as one implementation, the feature collection method may also iteratively process each collected image to obtain a plurality of images to be spliced, and then splice the images to be spliced to obtain the biometric image corresponding to the biological feature. For the iterative process of processing each collected image to obtain an image to be spliced, refer to the detailed description of steps S120 to S122; for the process of splicing the images to be spliced, refer to the detailed description of steps S1220 to S1222, which is not repeated in this embodiment.
[0101] Further, as shown in Figure 17, a schematic flowchart of another biometric collection method provided by an embodiment of the present application is applied to a feature collection device 10, where the feature collection device 10 includes a transparent cover 11, an image sensing unit 12, and at least one light source 13 located between the transparent cover 11 and the image sensing unit 12. The biometric collection method includes:
[0102] Step S21: when a biological feature is detected, the image sensing unit 12 performs multiple image captures on the biological feature to obtain a plurality of collected images, wherein, during each image capture, a plurality of light spots are formed at the position corresponding to the biological feature. Optionally, the plurality of light spots can be obtained by, but not limited to, turning on at least one light source 13 located at different positions, and the shapes of the light spots may be the same or different, as shown for example in Figure 18.
[0103] In addition, in one implementation, during each image capture, at least one preset pattern is formed between the plurality of light spots, and the preset pattern may include at least one of a rectangle, a triangle, a circle, a polygon, or another regular figure. Moreover, across the multiple image captures, among the light spots formed during each capture, at least one distance between two adjacent light spots satisfies c_ij ≤ c_0, where c_ij represents the distance between light spot i and light spot j, 8d ≤ c_0 ≤ 12d, and d represents the thickness of the transparent cover or the distance between the light source and the biological feature.
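The spacing constraint c_ij ≤ c_0 with 8d ≤ c_0 ≤ 12d can be checked as follows; choosing c_0 = 10d is an illustrative assumption within the allowed range:

```python
import math

def spot_spacing_ok(spots, d, k=10.0):
    """Return True if at least one pair of light spots is no farther apart
    than c0 = k * d, where k must lie in [8, 12] so that 8d <= c0 <= 12d.
    `spots` is a list of (x, y) spot positions; `k` is a hypothetical choice."""
    if not 8.0 <= k <= 12.0:
        raise ValueError("c0 must satisfy 8d <= c0 <= 12d")
    c0 = k * d
    return any(math.dist(p, q) <= c0
               for i, p in enumerate(spots)
               for q in spots[i + 1:])
```

For example, with d = 1.0 the spots (0, 0) and (5, 0) satisfy the constraint (distance 5 ≤ c_0 = 10), while two spots 50 units apart would not.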
[0104] Step S22, processing the plurality of collected images to obtain biometric images corresponding to the biometrics.
[0105] It should be understood that the differences between the biometric collection method given in steps S21 to S22 above and the biometric collection method given in steps S11 to S12 are as follows:
[0106] (1) When one collected image is captured for a biological feature, a plurality of light spots located at different positions are formed on the feature collection device 10. The plurality of light spots may be obtained by turning on a single light source 13, or by turning on the light source 13 at the position corresponding to each light spot, which is not limited in this embodiment.
[0107] (2) When collecting biological features, the plurality of light spots may form, but is not limited to, an equidistant array, and the light spots may serve as, but are not limited to, the vertices of figures such as lines, triangles, and rectangles.
[0108] (3) When performing iterative processing on the plurality of collected images, the light spot position of each light spot needs to be obtained, and the biometric range of the collected images is divided with the position of each light spot as the center.
[0109] Except for the aforementioned three points, for the description of step S21 to step S22, reference may be made to the detailed description of the above-mentioned step S11 to step S12, which is not repeated in this embodiment.
[0110] Further, referring again to Figure 6, the biological feature acquisition device 150 provided in this embodiment may include an image acquisition module 151 and an image processing module 152.
[0111] The image acquisition module 151 is configured to perform multiple image acquisitions on the biological feature through the image sensing unit 12 to obtain multiple acquired images when a biological feature is detected; wherein, each image acquisition turns on a plurality of light sources 13 at different positions. In this embodiment, for the description of the image acquisition module 151 , reference may be made to the detailed description of the above step S11 , that is, the step S11 may be executed by the image acquisition module 151 , so no further description is given here.
[0112] The image processing module 152 is configured to process the multiple captured images to obtain biometric images corresponding to the biometrics. In this embodiment, for the description of the image processing module 152, reference may be made to the detailed description of the above step S12, that is, the step S12 may be executed by the image processing module 152, and therefore no further description is given here.
[0113] Optionally, as shown in Figure 19, the image processing module 152 may include an initialization unit 1520, an image difference unit 1521, and an image update unit 1522.
[0114] The initialization unit 1520 is used to initialize the image to obtain an iterative image for iterative processing; in this embodiment, the description of the initialization unit 1520 may refer to the detailed description of the above step S120, that is, the step S120 may be performed by the initialization unit 1520, Therefore, no further explanation is given here.
[0115] The image difference unit 1521 is used to perform deviation processing on each collected image according to the iterative image to obtain a plurality of deviation images corresponding to the collected images, and to splice the deviation images to obtain a spliced image. In this embodiment, for the description of the image difference unit 1521, reference may be made to the detailed description of step S121 above, that is, step S121 may be performed by the image difference unit 1521, so no further description is given here.
[0116] The image update unit 1522 is configured to update the image according to the spliced image, and to use the updated image as the biometric image corresponding to the biological feature. In this embodiment, for the description of the image update unit 1522, reference may be made to the detailed description of step S122 above, that is, step S122 may be performed by the image update unit 1522, so no further description is given here.
[0117] To sum up, in the biological feature collection method, feature collection device 10, and computer-readable storage medium 15 provided in the embodiments of the present application, a plurality of collected images is obtained by performing multiple captures of one biological feature, and the collected images are iteratively processed, which improves the image overlap problem caused by image enlargement in the prior art and ensures the accuracy of the biometric image corresponding to the biological feature collected multiple times. On this basis, light sources 13 at a plurality of different positions are turned on during each capture, thereby greatly improving the image collection efficiency of the biometric image.
[0118] The above are only various embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
