An operation prompting method and glasses
A technology relating to operation prompting and operation modes, applied in the field of data processing, which solves problems such as cumbersome operation, low efficiency, and error-proneness, and achieves the effect of improved learning efficiency.
Examples
Embodiment 2
[0049] As a specific implementation of the 3D semantic map construction in Embodiment 1 of the present invention, in the embodiment of the present invention the environmental images to be collected include color images and depth images of the environment. As shown in Figure 2, Embodiment 2 of the present invention includes:
[0050] S201. Based on the color image and the depth image, obtain the user's position information and posture information, as well as the position information and item information of the items in the user's environment.
[0051] In the embodiment of the present invention, the smart glasses use an RGB-D camera as a sensor to acquire color images and depth images, and use a visual SLAM algorithm to perform autonomous positioning and pose estimation and optimization of the smart glasses (that is, acquisition of the user's position information and posture information), while simultaneously performing item detection to obtain the semantic information...
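To illustrate how a depth image lets detected items be placed in 3D, the sketch below back-projects a pixel with a metric depth value into a camera-frame point using the standard pinhole model. The intrinsics used here are illustrative assumptions; the patent's actual SLAM pipeline is not specified at this level of detail.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into a 3D camera-frame point.

    fx, fy, cx, cy are pinhole camera intrinsics, assumed known from calibration.
    """
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example: a pixel at the optical center maps straight down the optical axis.
p = backproject(320.0, 240.0, 2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(p)  # [0. 0. 2.]
```

Applying this to the pixels of a detected item's bounding box, together with the camera pose from SLAM, yields the item's position in the map frame.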
Embodiment 3
[0109] As a specific implementation of the user's gaze behavior recognition in Embodiment 1 of the present invention, as shown in Figure 3A, Embodiment 3 of the present invention includes:
[0110] S301. Acquire an eye image of the user, perform pupil positioning on the eye image, and determine the gaze area of the user in the 3D semantic map based on the obtained pupil position information.
[0111] In the embodiment of the present invention, the user's line of sight is tracked based on the user's pupil and the Purkinje spot, and the area the user gazes at is determined. Therefore, it is first necessary to determine the position of the pupil in the eye image. The specific pupil recognition algorithm can be set by technicians as needed, including but not limited to training a neural network model on sample pupil-image data to identify the pupil in the eye image, or referring to Embodiment 4 of the present inventi...
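Pupil-plus-Purkinje (corneal reflection) tracking is commonly reduced to mapping the pupil-to-glint difference vector to a gaze point via a calibration fit. The sketch below uses a simple affine least-squares fit as one such mapping; this is a generic simplification for illustration, not the patent's specific method, and all names here are hypothetical.

```python
import numpy as np

def fit_gaze_map(diff_vectors, scene_points):
    """Fit an affine map from pupil-glint difference vectors to scene points.

    diff_vectors: (K, 2) array of pupil-center minus glint-center vectors
    scene_points: (K, 2) array of known calibration gaze targets
    """
    # Augment with a bias column and solve D @ A ≈ scene in least squares.
    D = np.hstack([diff_vectors, np.ones((len(diff_vectors), 1))])
    A, *_ = np.linalg.lstsq(D, scene_points, rcond=None)
    return A

def estimate_gaze(pupil_center, glint_center, A):
    """Map one pupil/glint observation to an estimated gaze point."""
    d = np.asarray(pupil_center, dtype=float) - np.asarray(glint_center, dtype=float)
    return np.append(d, 1.0) @ A
```

In practice the user looks at a few known calibration targets to collect `diff_vectors`/`scene_points` pairs, after which `estimate_gaze` maps each new observation to a gaze point, which is then intersected with the 3D semantic map to find the gazed region.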
Embodiment 4
[0118] As an implementation of the pupil positioning in Embodiment 3 of the present invention, as shown in Figure 4A, Embodiment 4 of the present invention includes:
[0119] S401. Divide the eye image into N×M area images, and binarize the gray levels of all the area images to obtain corresponding N×M eye gray values, where N and M are both positive integers.
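Step S401 can be sketched as follows: split the eye image into an N×M grid and binarize each region's gray level. The use of each region's mean gray value and a fixed threshold are illustrative assumptions; the patent does not fix the binarization rule at this point.

```python
import numpy as np

def region_gray_values(eye_image, n, m, threshold=128):
    """Divide a grayscale eye image into an n×m grid of regions and
    binarize each region's mean gray level against a threshold."""
    h, w = eye_image.shape
    out = np.zeros((n, m), dtype=np.uint8)
    for i in range(n):
        for j in range(m):
            block = eye_image[i * h // n:(i + 1) * h // n,
                              j * w // m:(j + 1) * w // m]
            out[i, j] = 1 if block.mean() >= threshold else 0
    return out
```

The resulting N×M binary grid is the "eye gray values" array on which the rectangular features described next are computed; dark regions (below threshold) tend to correspond to the pupil.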
[0120] As shown in Figure 4B, the eye features in the embodiment of the present invention are described in detail as follows. The basic rectangular blocks in the figure are all of the same size: ABCD is the most primitive rectangular feature in the figure; E is composed of 3 basic rectangles; F is composed of 9 rectangles; G is a single rectangle; H and I are each composed of 4 basic rectangles; J is composed of 12 rectangles; and K and L are each composed of 4 rectangles. The value of each rectangular feature is calculated as the pixel sum of the black part of the figure minus the pixel sum of the white part. The feature G here ...