[0033] The present disclosure will be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the present disclosure are illustrated. The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to these drawings. Apparently, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
[0034] The terms "first", "second", and so on below are used only to distinguish between elements and have no other special meaning.
[0035] Figure 1 is a schematic flowchart of an embodiment of the beauty makeup processing method according to the present disclosure. As shown in Figure 1:
[0036] Step 101: Perform face recognition on the collected user image to obtain first face feature information.
[0037] In one embodiment, the user image is captured by the camera of a mobile phone, face recognition is performed on the user image, and the user's facial features are recognized to obtain the first face feature information. The face feature information may be a feature vector of the facial features.
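By way of illustration, the following is a minimal sketch of step 101 assuming the open-source `face_recognition` library (a dlib wrapper); the disclosure does not prescribe any particular library, and the image path is hypothetical.

```python
# Minimal sketch of step 101, assuming the open-source face_recognition
# library; the disclosure does not mandate this library.
import face_recognition

# Load the user image captured by the camera (path is illustrative).
image = face_recognition.load_image_file("user.jpg")

# One 128-dimensional embedding per detected face; this plays the role
# of the "first face feature vector" described in the text.
encodings = face_recognition.face_encodings(image)
first_face_feature_vector = encodings[0] if encodings else None
```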
[0038] Step 102: Obtain the second face feature information corresponding to the preset beauty makeup templates, and select, from multiple beauty makeup templates, a target beauty makeup template that matches the user image based on the matching result between the second face feature information and the first face feature information.
[0039] In one embodiment, beauty makeup templates are collected in advance by manual collection, web crawling, or the like, and the second face feature information of the person images in the beauty makeup templates is obtained. The second face feature information may be feature vectors of facial features or the like, from which a face feature vector library is built. The beauty makeup information (AR makeup information) corresponding to each beauty makeup template is obtained, including makeup and technique information for lipstick, blush, colored contact lenses, eyebrow pencil, eye shadow, eyeliner, mascara, foundation, hairstyle, and so on, and an AR makeup library is built based on the AR makeup information and the beauty makeup templates.
[0040] The second face feature information (the second face feature vector) is matched with the first face feature information (the first face feature vector), and based on the matching result, a target beauty makeup template that matches the user image is selected from the multiple beauty makeup templates in the AR makeup library.
[0041] Step 103: Determine the target processing area in the user image that requires makeup processing, obtain the beauty makeup information corresponding to the target beauty makeup template, and generate the target image texture corresponding to the target processing area based on the beauty makeup information.
[0042] In an embodiment, the target processing area may be an image area corresponding to the eyes, nose, mouth, cheeks, etc. in the user image, and the shape of the target processing area may be a rectangle or the like. The beauty makeup information corresponding to the target beauty makeup template is acquired, including information about lipstick, blush, foundation, eye makeup, and colored contact lenses. Based on the beauty makeup information, the target image textures corresponding to target processing areas such as the eyes, nose, and mouth are generated. The target image texture is the image texture of the mouth, cheeks, nose, eyes, etc. after lipstick, blush, foundation, eye makeup, and colored contact lens processing has been applied.
[0043] Step 104: Perform fusion processing on the target image texture and the corresponding target processing area, so as to perform beauty makeup processing on the user image.
[0044] In one embodiment, the picture stickers corresponding to the mouth, cheeks, nose, eyes, etc. that have been processed with lipstick, blush, foundation, eye makeup, and colored contact lenses are fused with the corresponding target processing areas (the mouth, cheeks, nose, eyes, etc.) in the user image, thereby completing the virtual makeup processing of the user image.
[0045] The first face feature information includes a first face feature vector and the like, and the second face feature information includes a second face feature vector and the like. Figure 2 is a schematic flowchart of selecting a target beauty makeup template in an embodiment of the beauty makeup processing method according to the present disclosure. As shown in Figure 2:
[0046] Step 201: Obtain the similarity between the first face feature vector and the second face feature vector.
[0047] Step 202: Select a target beauty makeup template from the multiple beauty makeup templates based on the similarity.
[0048] In one embodiment, the first face feature vector is extracted from the user image and compared with the second face feature vectors preset in the face feature vector library to determine which facial features have high similarity, and a target beauty makeup template is selected from the multiple beauty makeup templates based on the similarity.
[0049] A variety of existing methods can be used to extract the facial feature vectors. For example, the facial features include the eyebrows, eyes, ears, nose, mouth, and so on. The position or coordinates of the face area in the user image are obtained, the regions where the eyebrows, eyes, ears, nose, and mouth are located are divided, and the eyebrow, eye, ear, nose, and mouth regions are extracted.
[0050] Feature information of the eyebrows, eyes, ears, nose, and mouth in the user image is extracted, including the position, size, shape, color, and texture of these facial features. The feature information is expressed in vector form to form the first face feature vector. The first face feature vector may comprise feature vectors corresponding to the eyebrows, eyes, ears, nose, and mouth, respectively.
[0051] The first face feature vector is compared with the second face feature vector of each beauty makeup template to obtain the similarity corresponding to each organ. The closer the feature information of the same organ in the two face images, the higher the similarity of that organ between the two images. The similarity may be a cosine similarity or the like, for example A = a·b / (‖a‖ ‖b‖), where a is the first face feature vector and b is the second face feature vector. The closer A is to 1, the higher the similarity. The beauty makeup template with the highest similarity is selected from the multiple beauty makeup templates as the target beauty makeup template.
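For illustration, a minimal numpy sketch of this cosine-similarity matching and highest-similarity selection; the library of second face feature vectors is assumed to already exist.

```python
# Cosine similarity A = a·b / (||a|| ||b||) between the first and second
# face feature vectors, and selection of the most similar template.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_target_template(first_vector, template_vectors):
    # template_vectors: the second face feature vectors in the library.
    scores = [cosine_similarity(first_vector, v) for v in template_vectors]
    return int(np.argmax(scores))  # index of the target beauty makeup template
```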
[0052] As shown in Figure 3A, the user's face image is captured through the camera of a mobile phone. The first face feature vector corresponding to the user's face image is obtained, the similarity between the first face feature vector and the second face feature vector of each beauty makeup template is calculated, and the beauty makeup template with the highest similarity is selected from the multiple beauty makeup templates as the target beauty makeup template.
[0053] As shown in Figure 3B, the image of the target beauty makeup template may be an image of a celebrity with a beauty makeup effect or the like, and the facial features of the person in the target beauty makeup template are highly similar to the facial features in the user's face image. The beauty makeup information corresponding to the celebrity in the target beauty makeup template, that is, the makeup information used by the celebrity, is obtained, including lipstick, blush, foundation, eye makeup, colored contact lenses, and so on. The target image textures corresponding to the target processing areas in the user image are generated based on this beauty makeup information to complete the virtual makeup on the user image.
[0054] After the target beauty makeup template is selected, beauty makeup recommendation information is generated for display to the user. The beauty makeup recommendation information includes: beauty makeup type, beauty makeup tool information, beauty makeup tool product link information, and the like. The beauty makeup types may include lipstick, blush, foundation, and colored contact lenses. The beauty makeup tool information may be the brand, specification, price, picture, and similar details of the lipstick, eyebrow pencil, and foundation used by the celebrity in the beauty makeup template. The beauty makeup tool product link information may be e-commerce purchase links, shopping cart links, etc. for the lipstick, eyebrow pencil, and foundation.
[0055] Figure 4 is a schematic flowchart of making picture stickers in an embodiment of the beauty makeup processing method according to the present disclosure. As shown in Figure 4:
[0056] Step 401: Perform face detection processing on the user image, extract face feature points, and determine the target processing area. The face feature points include a plurality of feature points corresponding to the facial features; the target processing area includes a facial feature image area.
[0057] In one embodiment, a variety of existing AR face detection technologies can be used to perform face detection processing on the user image. More than 100 face feature points in the face image, as well as fine facial details and movements, can be obtained through the AR face detection technology, as shown in Figure 5. Face feature point extraction can use a variety of detection algorithms, such as explicit shape regression (ESR), 3D-ESR, and LBF (Local Binary Features). By obtaining the relative positions of the face feature points through the detection algorithm, the user's facial expression, such as anger, joy, or surprise, can be determined.
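As one hedged example, the sketch below extracts feature points with dlib's 68-point shape predictor; the text mentions 100+ points and algorithms such as ESR and LBF, so this is only one interchangeable option, and the model file path is an assumption.

```python
# Landmark extraction with dlib's 68-point predictor; only one of the
# many detection options the text mentions. The .dat path is assumed.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_feature_points(gray_image):
    faces = detector(gray_image)
    if not faces:
        return []
    shape = predictor(gray_image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```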
[0058] Step 402: Acquire the target image corresponding to the target processing area in the user image.
[0059] In one embodiment, the target processing area may be a rectangular image area corresponding to the eyes, nose, mouth, cheeks, etc. in the user image. The target image within the rectangular image area is acquired; the target image may be an image of the eyes, nose, mouth, cheeks, or the like.
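A minimal sketch of step 402, assuming the landmark points of an organ are already available; the rectangle is taken as their bounding box.

```python
# Crop the rectangular target region (e.g. the mouth) from the user
# image using the bounding box of that organ's landmark points.
import numpy as np

def crop_target_region(image: np.ndarray, points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    x0, y0 = max(min(xs), 0), max(min(ys), 0)
    x1, y1 = max(xs), max(ys)
    # Return the target image and its location in the user image.
    return image[y0:y1, x0:x1].copy(), (x0, y0, x1, y1)
```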
[0060] Step 403: Make a target picture sticker based on the beauty makeup information and the target image. The beauty makeup information includes: makeup type, beauty makeup tool information, beauty makeup image color and thickness information, etc.; the target picture sticker includes: facial feature picture stickers, etc.
[0061] In one embodiment, the beauty makeup information is acquired, including information about the tools to be used in the cosmetic processing of the user image, such as colored contact lenses, eyebrow pencil, and eye makeup, as well as color and thickness information for lipstick, blush, and the like. Editing software such as Photoshop can be used to make suitable facial feature picture stickers based on the beauty makeup information and the images of the eyes, nose, mouth, cheeks, etc. in the user image.
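The text describes authoring stickers in editing software such as Photoshop; purely as a programmatic stand-in, the sketch below tints the cropped target image with a makeup color at a given thickness (opacity). The function name and parameters are illustrative.

```python
# Hedged programmatic stand-in for step 403: blend a flat makeup color
# (e.g. a lipstick BGR value) over the target image at a given strength.
import numpy as np

def make_sticker(target_image: np.ndarray, color_bgr, strength: float):
    overlay = np.zeros_like(target_image)
    overlay[:, :] = color_bgr  # fill with the makeup color
    blended = target_image * (1.0 - strength) + overlay * strength
    return blended.astype(np.uint8)
```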
[0062] Figure 6 is a schematic flowchart of fusion processing in an embodiment of the beauty makeup processing method according to the present disclosure. As shown in Figure 6:
[0063] Step 601: Deform the target picture sticker based on the face feature points so that the target picture sticker is aligned with the corresponding target processing area.
[0064] In one embodiment, existing software such as Dlib and OpenCV can be used to perform face detection and face alignment processing on the user image. After multiple face feature points are obtained, the target picture sticker is deformed based on the face feature points and aligned with the corresponding target processing area to complete the makeup processing. A variety of existing deformation algorithms can be used to deform the target picture sticker, including the IDW transformation algorithm, the MLS transformation algorithm, and the RMLS transformation algorithm.
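The text names IDW, MLS, and RMLS as deformation options; as a simpler hedged sketch, the following aligns a sticker with an affine warp estimated from three corresponding landmarks using OpenCV.

```python
# Simplified alignment for step 601: warp the sticker so three of its
# landmarks land on the corresponding face landmarks (IDW/MLS/RMLS are
# the more flexible alternatives named in the text).
import cv2
import numpy as np

def align_sticker(sticker, sticker_pts, face_pts, out_h, out_w):
    # sticker_pts / face_pts: three corresponding (x, y) landmark pairs.
    m = cv2.getAffineTransform(np.float32(sticker_pts), np.float32(face_pts))
    return cv2.warpAffine(sticker, m, (out_w, out_h))
```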
[0065] Step 602: Perform layer fusion processing on the deformed target picture sticker and the corresponding target processing area.
[0066] In an embodiment, after the deformation processing is performed on the target picture sticker, layer fusion processing of the target picture sticker and the target processing area is performed. The layer fusion processing can adopt various existing fusion methods, such as alpha fusion. Because each person's facial features differ, because movements such as talking and blinking cause the feature coordinates to change and move, and because the human face is three-dimensional, it is necessary to deform target picture stickers such as eye makeup and eyebrow pencil stickers so that the makeup effect can be achieved better.
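A minimal sketch of the alpha fusion mentioned above; the per-pixel alpha mask (1 where the sticker applies) is assumed to be supplied with the sticker.

```python
# Alpha fusion (step 602): composite the aligned sticker onto the
# target processing area with a per-pixel alpha mask in [0, 1].
import numpy as np

def alpha_fuse(region: np.ndarray, sticker: np.ndarray, alpha: np.ndarray):
    # alpha has shape (H, W, 1) so it broadcasts over the color channels.
    fused = sticker * alpha + region * (1.0 - alpha)
    return fused.astype(np.uint8)
```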
[0067] The user image is detected to obtain light intensity information, light balance processing is performed on the beauty-makeup-processed user image based on the light intensity information, and various existing 3D rendering software is used to perform image rendering processing on the light-balanced user image. The light balance can use an existing AR engine to detect lighting conditions in real time, obtain the average light intensity and color correction of the camera image, and apply the same lighting as the surrounding environment to the beauty-makeup-processed user image to enhance realism. Processing such as 3D rendering and light balance ensures a realistic makeup effect and provides a more accurate and realistic makeup trial experience.
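As a hedged illustration of the light balance step, the sketch below scales the made-up image so its mean brightness matches an ambient estimate; in practice the estimate would come from an AR engine's light estimation, which is not shown here.

```python
# Minimal light-balance sketch: match the image's mean brightness to
# the ambient light intensity reported by the AR engine (value assumed).
import numpy as np

def light_balance(image: np.ndarray, ambient_mean: float) -> np.ndarray:
    gain = ambient_mean / max(float(image.mean()), 1e-6)
    return np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```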
[0068] In one embodiment, as shown in Figure 7, the present disclosure provides a beauty makeup processing device 70, including: a face recognition module 71, a template selection module 72, a texture generation module 73, and an image processing module 74. The face recognition module 71 performs face recognition on the collected user image to obtain first face feature information. The template selection module 72 acquires the second face feature information corresponding to the preset beauty makeup templates and selects, from the multiple beauty makeup templates, a target beauty makeup template that matches the user image based on the matching result between the second face feature information and the first face feature information.
[0069] The first face feature information includes a first face feature vector and the like; the second face feature information includes a second face feature vector and the like. The template selection module 72 obtains the similarity between the first face feature vector and the second face feature vector, and selects a target beauty makeup template from the multiple beauty makeup templates based on the similarity.
[0070] The texture generation module 73 determines the target processing area in the user image that requires makeup processing, obtains the beauty makeup information corresponding to the target beauty makeup template, and generates the target image texture corresponding to the target processing area based on the beauty makeup information. The image processing module 74 performs fusion processing on the target image texture and the corresponding target processing area so as to perform beauty makeup processing on the user image.
[0071] In one embodiment, as shown in Figure 8, the beauty makeup processing device 70 further includes: an image processing module 75 and a beauty makeup recommendation module 76. The image processing module 75 detects the user image and acquires light intensity information, performs light balance processing on the beauty-makeup-processed user image based on the light intensity information, and performs image rendering processing on the light-balanced user image. After the target beauty makeup template is selected, the beauty makeup recommendation module 76 generates beauty makeup recommendation information for display to the user. The beauty makeup recommendation information includes: beauty makeup type, beauty makeup tool information, beauty makeup tool product link information, etc.
[0072] In one embodiment, as shown in Figure 9, the texture generation module 73 includes: an area determination unit 731 and a picture making unit 732. The area determination unit 731 performs face detection processing on the user image, extracts face feature points, and determines the target processing area. The face feature points include a plurality of feature points corresponding to the facial features, and the target processing area includes a facial feature image area.
[0073] The picture making unit 732 acquires the target image corresponding to the target processing area in the user image and makes a target picture sticker based on the beauty makeup information and the target image. The beauty makeup information includes: makeup type, beauty makeup tool information, beauty makeup image color and thickness information, etc. The target picture sticker includes: facial feature picture stickers, etc. The image processing module 74 deforms the target picture sticker based on the face feature points so that the target picture sticker is aligned with the corresponding target processing area, and performs layer fusion processing on the deformed target picture sticker and the corresponding target processing area.
[0074] Figure 10 is a block diagram of another embodiment of the beauty makeup processing device according to the present disclosure. As shown in Figure 10, the device may include a memory 1001, a processor 1002, a communication interface 1003, and a bus 1004. The memory 1001 is used to store instructions; the processor 1002 is coupled to the memory 1001 and is configured to implement the above-mentioned beauty makeup processing method based on the instructions stored in the memory 1001.
[0075] The memory 1001 may be a high-speed RAM memory, a non-volatile memory, or the like, and may also be a memory array. The memory 1001 may also be divided into blocks, and the blocks may be combined into virtual volumes according to certain rules. The processor 1002 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the beauty makeup processing method of the present disclosure.
[0076] In one embodiment, the present disclosure provides a terminal including the beauty makeup processing device of any one of the above embodiments. The terminal may be a mobile phone, a tablet computer, or the like.
[0077] In one embodiment, the present disclosure provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the beauty makeup processing method of any one of the above embodiments.
[0078] In the beauty makeup processing method, device, terminal, and storage medium provided in the above embodiments, the target beauty makeup template is selected according to the matching result between the second face feature information of the beauty makeup templates and the first face feature information of the user image; the target image texture is generated based on the beauty makeup information of the target beauty makeup template, and the target image texture is fused with the target processing area to realize beauty makeup processing of the user image. Suitable makeup can thus be intelligently recommended according to the user's facial features and applied virtually, providing a more accurate and realistic makeup trial experience, achieving realistic makeup effects, improving the efficiency and effect of AR beauty makeup, and improving the user experience.
[0079] The methods and systems of the present disclosure may be implemented in many ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence described above, unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure can also be implemented as programs recorded in recording media, the programs including machine-readable instructions for realizing the method according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
[0080] The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or to limit the disclosure to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to better explain the principles and practical applications of the disclosure, and to enable others of ordinary skill in the art to understand the disclosure and to design various embodiments with various modifications suited to the particular use contemplated.