Image processing method and terminal

An image processing method and terminal, applied in the field of image processing, addressing the low reliability of prior approaches and achieving accurate removal of target elements.

Inactive Publication Date: 2017-06-20
SHENZHEN GIONEE COMM EQUIP


Abstract

The embodiments of the invention provide an image processing method and a terminal. The method comprises the following steps: acquiring the feature information of a target face image from which a target element needs to be removed; acquiring the position information of the target element corresponding to the feature information of the target face image according to a preset relative position correspondence between elements to be removed and feature points of the face image; and removing the target element according to the position information of the target element. The terminal provided by the embodiments of the invention can accurately identify a target element in a to-be-processed image and determine its position information.

Application Domain

Geometric image transformation

Technology Topic

Image processing, Computer science


Examples

  • Experimental program (1)

Example Embodiment

[0023] The technical solutions in the embodiments of the present invention will be clearly and completely described below in conjunction with the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
[0024] It should be understood that, when used in this specification and the appended claims, the terms "comprising" and "including" indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or collections thereof.
[0025] It should also be understood that the terms used in this specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention. As used in the specification of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
[0026] It should be further understood that the term "and/or" used in the specification and appended claims of the present invention refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
[0027] As used in this specification and the appended claims, the term "if" may be interpreted as "when", "once", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]", depending on the context.
[0028] In specific implementation, the terminals described in the embodiments of the present invention include, but are not limited to, portable devices such as mobile phones with touch-sensitive surfaces (for example, touch screen displays and/or touch pads), laptop computers, and tablet computers. It should also be understood that, in some embodiments, the device is not a portable communication device but a desktop computer with a touch-sensitive surface (for example, a touch screen display and/or a touch pad).
[0029] In the following discussion, a terminal including a display and a touch-sensitive surface is described. However, it should be understood that the terminal may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
[0030] The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk burning application, a spreadsheet application, a game application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
[0031] Various application programs that can be executed on the terminal can use at least one common physical user interface device such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within corresponding applications. In this way, the common physical architecture of the terminal (for example, a touch-sensitive surface) can support various applications with a user interface that is intuitive and transparent to the user.
[0032] Referring to FIG. 1, FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present invention. The execution subject of the image processing method in this embodiment is a terminal, which may be a mobile terminal such as a mobile phone or a tablet computer. As shown in FIG. 1, the image processing method may include the following steps:
[0033] S101: Obtain feature information of a target face image whose target element needs to be removed.
[0034] When the user needs to process a photo containing a face image to remove a target element, the image processing function for removing the target element is activated. The target element is an element whose position does not change significantly in the short term or the long term, and may include one or any combination of dark spots, moles, and scars. The number of target face images may be one or more; no limitation is imposed here.
[0035] It is understandable that the target elements that need to be removed in the same face image may be one type or at least two types.
[0036] When the terminal detects the target face image from which the user has determined that the target element needs to be removed, the terminal obtains the feature information of the target face image through face recognition technology. The feature information of the target face image includes facial contour information, facial feature information, and so on. For example, the feature information of the target face image includes feature point information for one or more features such as the left eye, right eye, nose, mouth, and facial contour.
[0037] Further, the terminal can obtain information of 68 feature points contained in the target face image. The information of the 68 feature points can describe facial contour features, lip features, nose features, eyebrow shapes, eye contours, etc.
[0038] Referring to FIG. 2, FIG. 2 is a schematic diagram of the facial features depicted by the 68 feature points in an embodiment of the present invention.
[0039] As shown in FIG. 2, the facial contour is depicted by 17 feature points (marked 1-17), the left eyebrow by 5 feature points (marked 18-22), the right eyebrow by 5 feature points (marked 23-27), the nose by 9 feature points (marked 28-36), the contour of the left eye by 6 feature points (marked 37-42), the contour of the right eye by 6 feature points (marked 43-48), and the contour of the lips by 20 feature points (marked 49-68). It can be understood that the reference mark corresponding to each feature point can be set according to actual conditions.
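The description does not specify how the 68 feature points are obtained. The following is a minimal sketch assuming dlib's pre-trained 68-point shape predictor, one common implementation whose landmark layout matches the description above (note that dlib indexes the points 0-67, while FIG. 2 labels them 1-68):

    # Minimal sketch: obtain the 68 feature points with dlib's pre-trained
    # 68-point shape predictor (an assumption; the patent names no detector).
    # dlib indexes the points 0-67, while FIG. 2 labels them 1-68.
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def get_feature_points(image):
        """Return the (x, y) feature points of the first detected face."""
        faces = detector(image)
        if not faces:
            return []
        shape = predictor(image, faces[0])
        return [(shape.part(i).x, shape.part(i).y) for i in range(68)]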
[0040] S102: Obtain the position information of the target element corresponding to the feature information of the target face image according to the preset relative position correspondence between the element to be removed and the feature point of the face image.
[0041] The preset relative position correspondence between the elements to be removed and the feature points of the face image is pre-stored in the terminal. The terminal may pre-store the preset relative position correspondence between each type of element to be removed and the feature point of the face image.
[0042] According to the feature information of the target face image, the terminal obtains the matching face image from the database, and selects, from the stored preset relative position correspondences between elements to be removed and feature points of face images, the preset relative position correspondence between the elements to be removed and the feature points of the matched face image. It then obtains, according to the type of the target element, the position information of the target element corresponding to the features of the matched face image, thereby obtaining the position information of the target element corresponding to the feature information of the target face image.
[0043] Specifically, the terminal may determine the location information of the target element through the location information of each feature point and the relative location relationship between the feature point and the target element.
[0044] Wherein, when there are at least two types of target elements contained in the target face image, the terminal obtains respective location information corresponding to each type of target element.
[0045] When the number of target elements of the same type contained in the target face image is multiple, the terminal obtains the position information corresponding to each target element of the type.
[0046] For example, when the user manually selects, for the first time, the position of an element to be removed in any photo that includes a face image, the type of the element and the position information of each element to be removed are recorded, thereby establishing the preset relative position correspondence between the elements to be removed and the feature points of the face image.
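As an illustration of this step, the sketch below records the correspondence as the element's offset from its nearest feature point; the dictionary layout and the nearest-point choice are assumptions, since the description only requires that the element type and relative position be recorded:

    # Sketch: establish the preset relative position correspondence when the
    # user first marks an element to be removed. The record layout is an
    # assumption; only type, feature point, and relative position are needed.
    import math

    def build_correspondence(feature_points, element_type, element_pos):
        """Record the element's offset from its nearest feature point."""
        idx = min(range(len(feature_points)),
                  key=lambda i: math.dist(feature_points[i], element_pos))
        fx, fy = feature_points[idx]
        ex, ey = element_pos
        return {"type": element_type,
                "feature_index": idx,
                "offset": (ex - fx, ey - fy)}  # preset relative position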
[0047] S103: Remove the target element according to the location information of the target element.
[0048] The terminal can use the open-source image inpainting algorithms provided in OpenCV to perform image processing according to the determined position information of the target element, thereby removing the target element. The existing Navier-Stokes or Alexandru Telea image restoration methods can be used to restore the target face image and remove the target element.
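A minimal sketch of this removal step using the OpenCV inpainting API named above; the circular mask shape and radii are assumptions:

    # Sketch: remove a target element with OpenCV inpainting. The mask marks
    # the pixels to restore; the circular shape and radii are assumptions.
    import cv2
    import numpy as np

    def remove_element(image, x, y, radius=6):
        """Inpaint a circular region around the target element's position."""
        mask = np.zeros(image.shape[:2], dtype=np.uint8)
        cv2.circle(mask, (int(x), int(y)), radius, 255, thickness=-1)
        # cv2.INPAINT_NS is the Navier-Stokes method; cv2.INPAINT_TELEA is
        # the Alexandru Telea method, both mentioned in the description.
        return cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)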
[0049] In the above solution, the terminal obtains the feature information of the target face image from which the target element needs to be removed; obtains, according to the preset relative position correspondence between the elements to be removed and the feature points of the face image, the position information of the target element corresponding to the feature information of the target face image; and removes the target element according to that position information. The terminal can thus accurately identify the target element in the image to be processed, determine its position information, and accurately remove it.
[0050] Referring to FIG. 3, FIG. 3 is a schematic flowchart of an image processing method according to another embodiment of the present invention. The execution subject of the image processing method in this embodiment is a terminal, which may be a mobile terminal such as a mobile phone or a tablet computer. As shown in FIG. 3, the image processing method may include the following steps:
[0051] S201: According to the feature points of the face image and the location information of the elements to be removed determined by the user in the face image, establish a preset relative position correspondence between the feature points of the face image and the elements to be removed.
[0052] When detecting that the user removes the first element or the first type of element from the first face image for the first time, the terminal acquires the feature information of the first face image, and extracts and records the feature points and their position information from that feature information. The terminal also obtains the position information of the elements to be removed determined by the user in the first face image, and establishes the preset relative position correspondence between the elements to be removed and the feature points of the face image based on the feature point identifiers, the position information of the feature points, and the types and position information of the elements to be removed.
[0053] It is understandable that the terminal can update the preset relative position correspondence relationship according to user operations.
[0054] Optionally, the image processing method may further include: pre-recording the first size information corresponding to the target element in the first face image.
[0055] For example, when acquiring the position information of the element to be removed that the user determines in the first face image, the terminal may also acquire and record the first size information corresponding to the element to be removed in the first face image.
[0056] The first face image is a face image that is pre-stored in the terminal and the user manually determines the location information of the element to be removed.
[0057] S202: Obtain feature information of the target face image from which the target element needs to be removed.
[0058] When the user needs to process a photo containing a face image to remove a target element, the image processing function for removing the target element is activated. The target element is an element whose position does not change significantly in the short term or the long term, and may include one or any combination of dark spots, moles, and scars. The number of target face images may be one or more; no limitation is imposed here.
[0059] It is understandable that the target elements that need to be removed in the same face image may be one type or at least two types.
[0060] When the terminal detects the target face image from which the user determines that the target element needs to be removed, the terminal obtains the feature information of the target face image through face recognition technology. The feature information of the target face image includes facial contour information, facial features information and so on. For example, the feature information of the target face image includes feature points of one or more features such as the left eye, right eye, nose, mouth, and facial contour.
[0061] Further, the terminal can obtain the information of 68 feature points contained in the target face image. The 68 feature points can describe facial contour features, lip features, nose features, eyebrow shape, eye contour information, and so on.
[0062] Referring to FIG. 2, FIG. 2 is a schematic diagram of the facial features depicted by the 68 feature points.
[0063] As shown in FIG. 2, the facial contour is depicted by 17 feature points (marked 1-17), the left eyebrow by 5 feature points (marked 18-22), the right eyebrow by 5 feature points (marked 23-27), the nose by 9 feature points (marked 28-36), the contour of the left eye by 6 feature points (marked 37-42), the contour of the right eye by 6 feature points (marked 43-48), and the contour of the lips by 20 feature points (marked 49-68). It can be understood that the reference mark corresponding to each feature point can be set according to actual conditions.
[0064] S203: Acquire the position information of the target element corresponding to the feature information of the target face image according to the preset relative position correspondence between the element to be removed and the feature point of the face image.
[0065] The preset relative position correspondence between the elements to be removed and the feature points of the face image is pre-stored in the terminal. The terminal may pre-store the preset relative position correspondence between each type of element to be removed and the feature point of the face image.
[0066] According to the feature information of the target face image, the terminal obtains the matching face image from the database, and selects, from the stored preset relative position correspondences between elements to be removed and feature points of face images, the preset relative position correspondence between the elements to be removed and the feature points of the matched face image. It then obtains, according to the type of the target element, the position information of the target element corresponding to the features of the matched face image, thereby obtaining the position information of the target element corresponding to the feature information of the target face image.
[0067] Specifically, the terminal may determine the location information of the target element through the location information of each feature point and the relative location relationship between the feature point and the target element. Wherein, when there are at least two types of target elements contained in the target face image, the terminal obtains respective location information corresponding to each type of target element.
[0068] When the number of target elements of the same type contained in the target face image is multiple, the terminal obtains the position information corresponding to each target element of the type.
[0069] Further, S203 may also include:
[0070] A: According to the preset relative position correspondence between the element to be removed and the feature point of the face image, detect whether each feature point contained in the feature information of the target face image corresponds to a target element;
[0071] B: Determine the position information of the target element according to the position information of the feature point corresponding to the target element and the preset relative position correspondence.
[0072] For example, the terminal traverses each feature point contained in the target face image and detects, according to the preset relative position correspondence between the elements to be removed and the feature points of the face image, whether each feature point contained in the feature information of the target face image corresponds to a target element. When it is detected that a feature point corresponds to a target element, the position information of that target element is determined according to the position information of the feature point and the preset relative position correspondence between the elements to be removed and the feature points of the face image.
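A sketch of steps A and B, reusing the correspondence records assumed earlier (and preceding the area-mapping refinement described below): traverse the feature points and derive element positions from the recorded offsets:

    # Sketch of steps A-B: for each feature point of the target face image,
    # look up any stored elements and derive their positions from the
    # recorded offsets. The record layout follows the earlier assumption.
    def find_target_elements(feature_points, correspondences):
        """Yield (element type, position) for every matched feature point."""
        by_index = {}
        for c in correspondences:
            by_index.setdefault(c["feature_index"], []).append(c)
        for i, (fx, fy) in enumerate(feature_points):
            for c in by_index.get(i, []):
                dx, dy = c["offset"]
                yield c["type"], (fx + dx, fy + dy)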
[0073] Further, step B may also include the following steps:
[0074] 1) Obtain a first face image matching the feature information of the target face image; wherein the first face image includes the target element;
[0075] The number of types of target elements contained in the first face image may be one or at least two, and the number of each target element may be one or at least two.
[0076] 2) Obtain the position information of the first feature point corresponding to the target element in the first face image;
[0077] Wherein, the first feature point may be any feature point in the first face image, or it may be a feature point that is closer or closest to the corresponding position of the target element in the first face image.
[0078] The closer the first feature point is to the position of the target element in the first face image, the more accurate the position information of the target element in the target face image calculated from the position information of the first feature point, particularly when the size of the target face image differs from that of the first face image.
[0079] 3) Determine the preset area to which the first feature point belongs according to the location information of the first feature point;
[0080] The terminal pre-stores the feature point information of the face image whose elements need to be removed, and divides the preset face image into multiple closed preset areas according to the position information of the feature points. The inside of each preset area is an exposed skin area, and together the closed preset areas cover all the exposed skin in the preset face image; that is, the preset areas correspond to exposed skin areas and do not include non-skin areas such as eyebrows, eyes, and teeth. The shape of a preset area can be a triangle, but is not limited thereto; it can also be any regular or irregular closed polygon.
[0081] The terminal judges whether the first feature point belongs to any one of the multiple preset areas according to the position information of the first feature point corresponding to the target element in the first face image.
[0082] Further, if the first feature point belongs to at least two preset areas of different sizes, the preset area with the smallest area is identified as the preset area corresponding to the first feature point, and the mapping area corresponding to that smallest preset area is determined.
[0083] It is understandable that, when the first feature point corresponding to the target element does not belong to any preset area, it is recognized that the target element corresponding to the first feature point does not belong to an exposed skin area; in this case, the target element does not need to be removed.
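A sketch of this area test, assuming triangular preset areas given as three (x, y) vertices: containment is checked with the sign method, the smallest containing triangle is kept per paragraph [0082], and None signals a non-skin position that needs no removal:

    # Sketch: determine the preset area to which the first feature point
    # belongs. Triangles are assumed to be given as three (x, y) vertices.
    def _sign(p, a, b):
        return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

    def point_in_triangle(p, tri):
        a, b, c = tri
        d1, d2, d3 = _sign(p, a, b), _sign(p, b, c), _sign(p, c, a)
        has_neg = d1 < 0 or d2 < 0 or d3 < 0
        has_pos = d1 > 0 or d2 > 0 or d3 > 0
        return not (has_neg and has_pos)   # same sign, or on an edge

    def triangle_area(tri):
        a, b, c = tri
        return abs((b[0] - a[0]) * (c[1] - a[1])
                   - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

    def preset_area_for(p, preset_areas):
        """Smallest preset triangle containing p, or None (non-skin area)."""
        hits = [t for t in preset_areas if point_in_triangle(p, t)]
        return min(hits, key=triangle_area) if hits else None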
[0084] 4) Determine the mapping area obtained by mapping the preset area to the target face image, and determine the position of the first feature point in the mapping area;
[0085] When confirming that the first feature point belongs to a first preset area, the terminal determines, according to a mapping model, the mapping area in the target face image that corresponds to the first preset area to which the first feature point in the first face image belongs; that is, the first preset area is mapped from the first face image onto the target face image to obtain the mapping area. The terminal then obtains the position of the first feature point in the mapping area.
[0086] When the preset area is a triangle, the mapping model can be an affine transformation model $\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = A \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} + t$, where $A$ is a $2 \times 2$ matrix and $t$ a translation vector. The mapping area of the first preset area is obtained by mapping the first preset area in the first face image onto the target face image.
[0087] Referring to FIG. 4, FIG. 4 is a schematic diagram of determining the position of a target element according to an affine transformation model provided by an embodiment of the present invention.
[0088] For any pair of triangles $m_0 n_0 p_0$ and $m_1 n_1 p_1$ shown in FIG. 4, substituting the three vertex pairs into the affine model yields six linear equations, which can be solved directly for the six unknowns of $A$ and $t$. The resulting transformation maps the vertices of the triangle from $m_0 n_0 p_0$ to $m_1 n_1 p_1$; at the same time, it establishes a dense mapping for the points inside the triangle, so the geometric mapping of the position of the target element to be removed inside each triangle is also uniquely determined.
[0089] The target element to be removed, $s_0$ in FIG. 4, belongs to the preset area formed by $m_0 n_0 p_0$ in the first face image; mapping that preset area from the first face image onto the target face image yields the mapping area formed by $m_1 n_1 p_1$.
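A sketch of this triangle-to-triangle mapping using OpenCV's cv2.getAffineTransform, which solves the 2x3 affine matrix from the three vertex pairs exactly as described; the point formats are assumptions:

    # Sketch of [0086]-[0089]: solve the affine matrix for the triangle pair
    # (m0, n0, p0) -> (m1, n1, p1) and map the element position s0 to s1.
    import cv2
    import numpy as np

    def map_element_position(m0n0p0, m1n1p1, s0):
        src = np.float32(m0n0p0)              # triangle in the first image
        dst = np.float32(m1n1p1)              # mapping area in the target
        A = cv2.getAffineTransform(src, dst)  # 2x3 matrix [A | t]
        x0, y0 = s0
        x1, y1 = A @ np.array([x0, y0, 1.0])  # apply to (x0, y0, 1)
        return float(x1), float(y1)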
[0090] 5) Determine the position information of the target element according to the mapping ratio, the position of the first feature point in the mapping area, and the preset relative position correspondence.
[0091] The terminal calculates the mapping ratio according to the size of the preset area to which the first feature point belongs and the size of the mapping area corresponding to the preset area.
[0092] According to the preset relative position correspondence between the elements to be removed and the feature points of the face image, the terminal obtains the relative position correspondence between the first feature point and the target element in the first face image. It then scales that correspondence by the mapping ratio to obtain the relative position correspondence between the first feature point and the target element in the target face image after the preset area has been mapped. Finally, the position information of the target element contained in the target face image is determined from the position of the first feature point in the mapping area and the relative position correspondence between the first feature point and the target element in the target face image.
[0093] As shown in FIG. 4, the position information of the target element corresponds to the position of $s_1$ in the mapping area $m_1 n_1 p_1$.
[0094] Further, when the terminal has pre-recorded the first size information corresponding to the target element in the first face image, it calculates the second size information corresponding to the target element in the target face image according to the first size information and the size ratio between the first face image and the target face image.
[0095] For example, the terminal obtains the first size information corresponding to the target element in the first face image matching the target face image, and calculates the size ratio between the first face image and the target face image, that is, the ratio of the size of the first face image to the size of the target face image (measured in the same unit).
[0096] The terminal calculates the second size information corresponding to the target element in the target face image according to the first size information corresponding to the target element in the first face image and the size ratio between the first face image and the target face image.
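A sketch of this size calculation, assuming the size is recorded as a single scalar measured in the same unit in both images:

    # Sketch of [0094]-[0096]: scale the first size information by the size
    # ratio between the first face image and the target face image.
    def second_size(first_size, first_face_size, target_face_size):
        """Scale the recorded size to the target face image."""
        ratio = first_face_size / target_face_size   # first : target
        return first_size / ratio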
[0097] S204: Remove the target element according to the location information of the target element.
[0098] The terminal can use the open-source image inpainting algorithms provided in OpenCV to perform image processing according to the determined position information of the target element, thereby removing the target element. The existing Navier-Stokes or Alexandru Telea image restoration methods can be used to restore the target face image and remove the target element.
[0099] Further, when the terminal calculates the second size information corresponding to the target element in the target face image, S204 may specifically include: removing the target element according to the position information of the target element and the second size information.
[0100] The terminal can use the open-source image inpainting algorithms provided in OpenCV to perform image processing according to the determined position information of the target element and the second size information, thereby removing the target element.
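A sketch of this step, deriving the inpainting mask from the mapped position and the second size information (a circular mask is assumed) before applying the same OpenCV inpainting:

    # Sketch of [0100]: build the mask from the element's mapped position
    # and second size information, then inpaint. Circular mask assumed.
    import cv2
    import numpy as np

    def remove_with_size(image, position, size):
        x, y = int(round(position[0])), int(round(position[1]))
        radius = max(1, int(round(size / 2)))   # second size -> mask radius
        mask = np.zeros(image.shape[:2], dtype=np.uint8)
        cv2.circle(mask, (x, y), radius, 255, thickness=-1)
        return cv2.inpaint(image, mask, 3, cv2.INPAINT_NS)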
[0101] In the above solution, the terminal obtains the feature information of the target face image from which the target element needs to be removed; obtains, according to the preset relative position correspondence between the elements to be removed and the feature points of the face image, the position information of the target element corresponding to the feature information of the target face image; and removes the target element according to that position information. The terminal can thus accurately identify the target element in the image to be processed, determine its position information, and accurately remove it.
[0102] Since the preset relative position correspondence between the elements to be removed and the feature points of the face image is pre-established and stored in the terminal, and the positions of the elements to be removed do not change significantly in the short term, the terminal can automatically extract the position information of the target element contained in the feature information of the target face image and accurately locate the target element. The user does not need to manually determine the positions of the target elements one by one every time a face image is processed, which reduces the workload of manually locating target elements and improves the efficiency of obtaining their position information. In addition, the terminal can remove target elements in batches from different photos of the same face, further improving image processing efficiency.
[0103] The terminal processes only the target elements belonging to a preset area, and determines the position information of the target element according to the mapping area in the target face image corresponding to the preset area to which the first feature point belongs, the position information of the first feature point, and the preset relative position correspondence. It can therefore accurately remove the target element from photos of the same face taken at different times or from different angles, improving the accuracy of obtaining the target element and its position information.
[0104] Image processing based on the second size information and location information of the target element can effectively and completely remove the target element.
[0105] Referring to FIG. 5, FIG. 5 is a schematic block diagram of a terminal according to an embodiment of the present invention. The terminal may be a mobile terminal such as a mobile phone or a tablet computer, but is not limited thereto, and may also be another terminal; no limitation is imposed here. The units included in the terminal 500 of this embodiment are used to execute the steps in the embodiment corresponding to FIG. 1; for details, refer to FIG. 1 and the relevant description of that embodiment, which will not be repeated here. The terminal of this embodiment includes: a first acquiring unit 510, a second acquiring unit 520, and an image processing unit 530.
[0106] The first acquiring unit 510 is configured to acquire feature information of the target face image from which the target element needs to be removed.
[0107] For example, the first acquiring unit 510 acquires the feature information of the target face image from which the target element needs to be removed, and the first acquiring unit 510 sends the feature information of the target face image to the second acquiring unit 520.
[0108] The second acquiring unit 520 is configured to receive the feature information of the target face image sent by the first acquiring unit 510, and to acquire, according to the preset relative position correspondence between the elements to be removed and the feature points of the face image, the position information of the target element corresponding to the feature information of the target face image.
[0109] For example, the second acquiring unit 520 receives the feature information of the target face image sent by the first acquiring unit 510, and acquires, according to the preset relative position correspondence between the elements to be removed and the feature points of the face image, the position information of the target element corresponding to the feature information of the target face image.
[0110] The second acquiring unit 520 sends the position information of the target element to the image processing unit 530.
[0111] The image processing unit 530 is configured to receive the location information of the target element sent by the second acquiring unit 520, and remove the target element according to the location information of the target element.
[0112] For example, the image processing unit 530 receives the location information of the target element sent by the second acquisition unit 520, and removes the target element according to the location information of the target element.
[0113] In the above solution, the terminal obtains the feature information of the target face image from which the target element needs to be removed; obtains, according to the preset relative position correspondence between the elements to be removed and the feature points of the face image, the position information of the target element corresponding to the feature information of the target face image; and removes the target element according to that position information. The terminal can thus accurately identify the target element in the image to be processed, determine its position information, and accurately remove it.
[0114] Since the preset relative position correspondence between the elements to be removed and the feature points of the face image is pre-stored in the terminal, and the position of the element to be removed does not change significantly in the short term, the terminal can automatically extract the position information of the target element contained in the feature information of the target face image and accurately locate the target element. The user does not need to manually determine the positions of the target elements one by one every time a face image is processed, which reduces the workload of manually locating target elements and improves the efficiency of obtaining their position information. In addition, the terminal can remove target elements in batches from different photos of the same face, further improving image processing efficiency.
[0115] Referring to FIG. 6, FIG. 6 is a schematic block diagram of a terminal according to another embodiment of the present invention. The terminal may be a mobile terminal such as a mobile phone or a tablet computer, but is not limited thereto, and may also be another terminal; no limitation is imposed here. The units included in the terminal 600 of this embodiment are used to execute the steps in the embodiment corresponding to FIG. 3; for details, refer to FIG. 3 and the relevant description of that embodiment, which will not be repeated here. The terminal of this embodiment includes: a setting unit 610, a recording unit 620, a first acquisition unit 630, a second acquisition unit 640, a calculation unit 650, and an image processing unit 660. The second acquisition unit 640 may include a detection unit 641 and a position determination unit 642.
[0116] The setting unit 610 is configured to establish the preset relative position correspondence between the elements to be removed and the feature points of the face image according to the feature points of the face image and the position information of the elements to be removed determined by the user in the face image. For example, the setting unit 610 establishes this correspondence accordingly, and sends the preset relative position correspondence between the elements to be removed and the feature points of the face image to the second acquiring unit 640.
[0117] The recording unit 620 is configured to pre-record the first size information corresponding to the target element in the first face image. For example, the recording unit 620 pre-records the first size information corresponding to the target element in the first face image. The recording unit 620 sends the first size information corresponding to the target element in the first face image to the calculation unit 650.
[0118] The first acquiring unit 630 is configured to acquire the feature information of the target face image from which the target element needs to be removed. For example, the first acquiring unit 630 acquires the feature information of the target face image, sends the feature information of the target face image to the second acquiring unit 640, and sends the information of the target face image to the calculation unit 650.
[0119] The second acquiring unit 640 is configured to receive the preset relative position correspondence between the elements to be removed and the feature points of the face image sent by the setting unit 610, and the feature information of the target face image sent by the first acquiring unit 630, and to acquire, according to that preset relative position correspondence, the position information of the target element corresponding to the feature information of the target face image.
[0120] For example, the second acquiring unit 640 receives the preset relative position correspondence sent by the setting unit 610 and the feature information of the target face image sent by the first acquiring unit 630, and acquires the position information of the target element corresponding to the feature information of the target face image accordingly.
[0121] Further, when the second acquiring unit 640 includes the detecting unit 641 and the position determining unit 642,
[0122] The detecting unit 641 is configured to detect whether each feature point contained in the feature information of the target face image corresponds to a target element according to the preset relative position correspondence between the element to be removed and the feature point of the face image; the position determining unit 642 is configured to determine the position information of the target element according to the position information of the feature point corresponding to the target element and the preset relative position correspondence.
[0123] Further, the position determining unit 642 is specifically configured to:
[0124] Acquiring a first face image matching the feature information of the target face image; wherein, the first face image includes the target element;
[0125] Acquiring location information of the first feature point corresponding to the target element in the first face image;
[0126] Determine the preset area to which the first feature point belongs according to the location information of the first feature point;
[0127] Determining a mapping area obtained by mapping the preset area to the target face image, and determining the position of the first feature point in the mapping area;
[0128] The position information of the target element is determined according to the mapping ratio, the position of the first feature point in the mapping area, and the preset relative position correspondence.
[0129] The second acquiring unit 640 sends the location information of the target element to the image processing unit 660.
[0130] The calculation unit 650 is configured to receive the target face image information sent by the first acquisition unit 630, and to calculate the second size information corresponding to the target element in the target face image according to the first size information and the size ratio between the first face image and the target face image. The calculation unit 650 sends the calculated second size information to the image processing unit 660.
[0131] The image processing unit 660 is configured to receive the location information of the target element sent by the second acquisition unit 640, and remove the target element according to the location information of the target element.
[0132] For example, the image processing unit 660 receives the location information of the target element sent by the second acquisition unit 640, and removes the target element according to the location information of the target element.
[0133] The image processing unit 660 is further configured to receive the second size information sent by the calculation unit 650, and remove the target element according to the position information of the target element and the second size information.
[0134] In the above solution, the terminal obtains the feature information of the target face image from which the target element needs to be removed; obtains, according to the preset relative position correspondence between the elements to be removed and the feature points of the face image, the position information of the target element corresponding to the feature information of the target face image; and removes the target element according to that position information. The terminal can thus accurately identify the target element in the image to be processed, determine its position information, and accurately remove it.
[0135] Since the preset relative position correspondence between the elements to be removed and the feature points of the face image is pre-stored in the terminal, and the position of the element to be removed does not change significantly in the short term, the terminal can automatically extract the position information of the target element contained in the feature information of the target face image and accurately locate the target element. The user does not need to manually determine the positions of the target elements one by one every time a face image is processed, which reduces the workload of manually locating target elements and improves the efficiency of obtaining their position information. In addition, the terminal can remove target elements in batches from different photos of the same face, further improving image processing efficiency.
[0136] The terminal processes only the target elements belonging to a preset area, and determines the position information of the target element according to the mapping area in the target face image corresponding to the preset area to which the first feature point belongs, the position information of the first feature point, and the preset relative position correspondence. It can therefore accurately remove the target element from photos of the same face taken at different times or from different angles, improving the accuracy of obtaining the position information of the target element.
[0137] Image processing based on the second size information and location information of the target element can effectively and completely remove the target element.
[0138] Referring to FIG. 7, FIG. 7 is a schematic block diagram of a terminal according to still another embodiment of the present invention. As shown in FIG. 7, the terminal 700 in this embodiment may include: one or more processors 710, one or more input devices 720, one or more output devices 730, and a memory 740. The processor 710, input device 720, output device 730, and memory 740 are connected through a bus 750.
[0139] The memory 740 is used to store program instructions.
[0140] The processor 710 is configured to perform the following operations according to the program instructions stored in the memory 740:
[0141] The processor 710 is configured to obtain feature information of the target face image from which the target element needs to be removed.
[0142] The processor 710 is further configured to obtain the location information of the target element corresponding to the feature information of the target face image according to the preset relative position correspondence between the element to be removed and the feature point of the face image.
[0143] The processor 710 is further configured to remove the target element according to the location information of the target element.
[0144] Further, the processor 710 is configured to establish the preset relative position correspondence between the elements to be removed and the feature points of the face image according to the feature points of the face image and the position information of the elements to be removed determined by the user in the face image.
[0145] Further, the processor 710 is further configured to detect, according to the preset relative position correspondence between the elements to be removed and the feature points of the face image, whether each feature point contained in the feature information of the target face image corresponds to a target element, and to determine the position information of the target element according to the position information of the feature point corresponding to the target element and the preset relative position correspondence.
[0146] Further, the processor 710 is specifically configured to: obtain a first face image matching the feature information of the target face image, wherein the first face image includes the target element; obtain the position information of the first feature point corresponding to the target element in the first face image; determine the preset area to which the first feature point belongs according to the position information of the first feature point; determine the mapping area obtained by mapping the preset area onto the target face image, and determine the position of the first feature point in the mapping area; and determine the position information of the target element according to the mapping ratio, the position of the first feature point in the mapping area, and the preset relative position correspondence.
[0147] Further, the processor 710 is further configured to pre-record the first size information corresponding to the target element in the first face image; calculate the second size information corresponding to the target element in the target face image according to the first size information and the size ratio between the first face image and the target face image; and remove the target element according to the position information of the target element and the second size information.
[0148] In the above solution, the terminal obtains the feature information of the target face image from which the target element needs to be removed; obtains, according to the preset relative position correspondence between the elements to be removed and the feature points of the face image, the position information of the target element corresponding to the feature information of the target face image; and removes the target element according to that position information. The terminal can thus accurately identify the target element in the image to be processed, determine its position information, and accurately remove it.
[0149] Since the preset relative position correspondence between the elements to be removed and the feature points of the face image is pre-stored in the terminal, and the position of the element to be removed does not change significantly in the short term, the terminal can automatically extract the position information of the target element contained in the feature information of the target face image and accurately locate the target element. The user does not need to manually determine the positions of the target elements one by one every time a face image is processed, which reduces the workload of manually locating target elements and improves the efficiency of obtaining their position information. In addition, the terminal can remove target elements in batches from different photos of the same face, further improving image processing efficiency.
[0150] The terminal processes only the target elements belonging to a preset area, and determines the position information of the target element according to the mapping area in the target face image corresponding to the preset area to which the first feature point belongs, the position information of the first feature point, and the preset relative position correspondence. It can therefore accurately remove the target element from photos of the same face taken at different times or from different angles, improving the accuracy of obtaining the position information of the target element.
[0151] Image processing based on the second size information and location information of the target element can effectively and completely remove the target element.
[0152] It should be understood that, in the embodiments of the present invention, the processor 710 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
[0153] The input device 720 may include a touch panel, a fingerprint sensor (used to collect user fingerprint information and fingerprint orientation information), a microphone, etc., and the output device 730 may include a display (LCD, etc.), a speaker, and the like.
[0154] The memory 740 may include a read-only memory and a random access memory, and provides instructions and data to the processor 710. A part of the memory 740 may also include a non-volatile random access memory. For example, the memory 740 may also store device type information.
[0155] In specific implementation, the processor 710, input device 720, and output device 730 described in this embodiment of the present invention can execute the implementations described in the first and second embodiments of the image processing method provided by the embodiments of the present invention, and can also implement the terminal described in the embodiments of the present invention, which will not be repeated here.
[0156] Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions using different methods for each specific application, but such implementations should not be considered as going beyond the scope of the present invention.
[0157] Those skilled in the art can clearly understand that, for the convenience and conciseness of description, the specific working process of the terminal and unit described above can refer to the corresponding process in the foregoing method embodiment, which will not be repeated here.
[0158] In the several embodiments provided in this application, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative. The division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the displayed or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
[0159] The steps in the method of the embodiment of the present invention can be adjusted, merged, and deleted in order according to actual needs.
[0160] In the embodiment of the present invention, the units in the terminal can be combined, divided, and deleted according to actual needs.
[0161] The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present invention.
[0162] In addition, the functional units in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above-mentioned integrated unit can be realized in the form of hardware or software functional unit.
[0163] If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods in the embodiments of the present invention. The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical discs, and other media that can store program code.
[0164] The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can easily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and such modifications or replacements shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
