Spectacle lens virtual try-on device and method based on edge detection
A technology combining edge detection with spectacle lenses, applied in image data processing, instruments, computer components, etc. It addresses the problems of try-on failure and inaccurate eye-position detection, achieving precise positioning detection.
Specific Embodiments
[0052] A data processor (2), comprising a data collector (21), an image processor (22) and a link generator (23), all of which are interactively coupled with the database (1). The data collector (21) collects user information data, frame information data, lens color data and try-on effect data into the database (1). The image processor (22) retrieves the user information data, frame information data and lens color data from the database (1), parses the avatar from the user information data, performs edge detection on the frame information data to obtain the frame image, then tints the inside of the frame with the lens color data, and outputs the adjusted glasses image and the avatar respectively. The user interaction terminal (3) provides human-machine interaction with the user to collect user information, and is also coupled to the data ...
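The tinting of the lens area inside the detected frame described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: it assumes the frame edges are already available as a binary mask, and uses a flood fill from the image border to separate the lens interior from the background; the function name `tint_lens` and the `alpha` blending parameter are invented for this sketch.

```python
from collections import deque

def tint_lens(edge_mask, image, lens_color, alpha=0.5):
    """edge_mask: 2D list of 0/1 (1 = frame edge); image: 2D list of
    (r, g, b) tuples; lens_color: (r, g, b). Returns a new tinted image."""
    h, w = len(edge_mask), len(edge_mask[0])
    outside = [[False] * w for _ in range(h)]
    q = deque()
    # Seed the flood fill from every non-edge border pixel: anything
    # reachable from the border without crossing a frame edge is "outside".
    for x in range(w):
        for y in (0, h - 1):
            if not edge_mask[y][x] and not outside[y][x]:
                outside[y][x] = True
                q.append((y, x))
    for y in range(h):
        for x in (0, w - 1):
            if not edge_mask[y][x] and not outside[y][x]:
                outside[y][x] = True
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and not edge_mask[ny][nx] and not outside[ny][nx]):
                outside[ny][nx] = True
                q.append((ny, nx))
    # Blend lens_color into every interior pixel (non-edge, not outside).
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not edge_mask[y][x] and not outside[y][x]:
                out[y][x] = tuple(
                    round((1 - alpha) * c + alpha * lc)
                    for c, lc in zip(image[y][x], lens_color)
                )
    return out
```

In practice the frame mask would come from the edge-detection step; the flood fill simply distinguishes pixels enclosed by the frame (the lens) from everything else.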
Embodiments
[0057] The present invention provides another embodiment, a virtual try-on method comprising the following steps:
[0058] Step 1: Input the avatar and frame pictures into the database 1 through the user interaction terminal 3.
[0059] Step 2: Perform face recognition on the input avatar picture with the data processor 2; if recognition succeeds, proceed to the next step; if it fails, return to Step 1 and re-input.
[0060] Step 3: With the data processor 2, perform pupil positioning on the avatar picture recognized in Step 2, locating the pupil positions of the face in the avatar picture.
[0061] Step 4: Translate and scale the frame picture with the data processor 2, adjusting it to fit the size and position of the face in the avatar picture.
[0062] Step 5: With the data processor 2, perform edge detection on the frame picture adjusted in Step 4 to identify the frame edges, and output a binary frame image.
[0063] Step 6: According to...
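Step 5's edge detection is not pinned to a particular operator in this excerpt. The following is a minimal sketch assuming a 3x3 Sobel gradient with magnitude thresholding to produce the binary frame image; the function name `binary_frame_edges` and the `threshold` default are illustrative, not from the patent.

```python
def binary_frame_edges(gray, threshold=128):
    """gray: 2D list of grayscale values (the adjusted frame picture).
    Returns a 2D 0/1 list marking detected frame edges."""
    # 3x3 Sobel kernels for horizontal (kx) and vertical (ky) gradients.
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in range(3):
                for dx in range(3):
                    v = gray[y + dy - 1][x + dx - 1]
                    gx += kx[dy][dx] * v
                    gy += ky[dy][dx] * v
            # Mark the pixel as an edge if the gradient magnitude is large.
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges
```

The resulting binary image would then serve as the frame mask for the subsequent lens-tinting step.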