Human muscle movement perception based menu selection method for human-computer interaction interface

A human-computer interaction interface and menu selection technology, applied in the field of human-computer interaction. It addresses problems such as insufficient fit to the user, instability (shaking or even falling), and lack of humanization, and achieves the effects of convenient control, a simplified learning process, and humanized operation.

Inactive · Publication Date: 2015-07-08
JILIN UNIV
3 Cites · 5 Cited by

AI-Extracted Technical Summary

Problems solved by technology

A person can exercise and sing with the eyes closed, but walking that way is prone to instability, shaking, or even falling
[0005] At present, in the field of human-computer interaction there are many examples that rely on traditional methods such as gestures, body movements, and eye movements, but most of these methods require the user to go through a learning process.

Abstract

The invention discloses a human muscle movement perception based menu selection method for a human-computer interaction interface. The method comprises the steps of setting initialization parameters; determining the space in front of the user through a Kinect sensor; dividing and displaying that space; capturing the user's movement information; and determining the user's menu selection result on the human-computer interaction interface. The method adopts 3D motion-sensing technology: the Kinect sensor detects the space in front of the body and performs a series of divisions of that space, including division on the vertical plane and division in the depth (horizontal) direction; each divided space is calibrated with a spatial coordinate system; menu selection on the human-computer interaction interface is performed by the hands selecting a divided space; and the corresponding instruction and operation are sent to the machine, so the user does not need to memorize complex actions and gestures.

Application Domain

Input/output for user-computer interaction · Graph reading · +1 more

Technology Topic

Movement perception · Selection method · +5 more

Examples

  • Experimental program(1)

Example Embodiment

[0033] The technical solution of the present invention will be described in detail below in conjunction with the drawings and embodiments:
[0034] The menu selection method for a human-computer interaction interface based on human muscle movement perception requires at least a Kinect sensor, an upper computer (a computer) connected to the Kinect sensor, and a display screen connected to the upper computer. The overall flow of the method is shown in Figure 1 and includes the following steps:
[0035] Step 1. Initialization parameter setting: according to the user's needs, set the number of divisions of the space in front of the human body in the horizontal direction and in the vertical direction.
[0036] Connect the Kinect sensor and use it to obtain the positions of the joints of the human body; here, mainly the user's chest center (the midpoint between the neck and the waist), the left palm, and the right palm are obtained. The menu selection method of the human-computer interaction interface prompts the user, on the display screen, either to customize the parameter settings or to select the default settings:
[0037] 1) Custom parameter settings: the user selects the degree of spatial division along two orthogonal directions (the horizontal direction and the vertical direction) according to the strength of his or her own muscle movement perception, that is, selects the corresponding M and N values (M determines the number of divisions in the horizontal direction, N determines the number of divisions in the vertical direction). The user can also choose between one-handed operation and simultaneous two-handed operation. When two-handed operation is selected, the choice of M depends on usage habits and on the complexity of the operation; the left-hand and right-hand M values may differ, and they can be adjusted after use if they prove uncomfortable.
[0038] 2) Default settings: the system uses the default values M = 4 and N = 4 and adopts the control mode of simultaneous left- and right-hand operation; one possible parameter representation is sketched below.
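To make Step 1 concrete, the following is a minimal C# sketch of how the initialization parameters described in 1) and 2) might be held; the class name, property names, and the Default helper are hypothetical illustrations, not code taken from the patent.
    // Hypothetical container for the Step 1 initialization parameters (a sketch, not the patent's code).
    public class DivisionSettings
    {
        public int LeftHandM { get; set; } = 4;    // number of depth (horizontal) divisions for the left hand
        public int RightHandM { get; set; } = 4;   // number of depth (horizontal) divisions for the right hand
        public int N { get; set; } = 4;            // number of divisions of the vertical plane
        public bool TwoHandedOperation { get; set; } = true;  // simultaneous left- and right-hand control

        // Default setting described in 2): M = 4, N = 4, two-handed operation.
        public static DivisionSettings Default() { return new DivisionSettings(); }
    }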
[0039] Step 2. Determine the space in front of the user through the Kinect sensor: the user faces the Kinect sensor, which captures the position of the human body (in particular of the hands) and the spatial information around the body; the space is divided, marked, and displayed in real time. The user straightens the arms, the sensor captures the important joint points of the body, and the lengths of the user's arms are calculated. The available space in front of the user is then determined from the captured body position, the surrounding spatial information, and the calculated arm lengths.
[0040] 1) The user faces the Kinect sensor and straightens both arms forward, perpendicular to the body, to their maximum extent. The sensor detects three important joint points: the chest center (the midpoint between the neck and the waist), the left palm, and the right palm, recorded as Joint type data ShoulderCenter, LeftHand, and RightHand. On the screen the user can see the left hand, the right hand, and the chest center marked with blue dots that move with the body in real time. Figure 3 is a schematic diagram of the space division on the vertical plane in front of the user when N = 4.
[0041] 2) From the joint data of the chest center, the left palm, and the right palm detected in step 1) above, the lengths of the left and right arms are calculated, and the corresponding space division interval (the available space in front of the user) is determined so that the boundary of the available space lies within reach of the hands. The specific process of this step is:
[0042] Using the joint data of the chest center, the left palm, and the right palm detected by the sensor, call the C# system library function "System.Math.Abs()":
[0043] System.Math.Abs(ShoulderCenter.Position.Z-RightHand.Position.Z); and
[0044] System.Math.Abs(ShoulderCenter.Position.Z-LeftHand.Position.Z);
[0045] The lengths of the two arms are obtained from these two values, and the corresponding space division interval (the available space in front of the user) is determined from the calculated arm lengths, so that the boundary of the controlled space lies within reach of the hands.
[0046] Here the library function "System.Math.Abs()" returns the absolute value of its argument. ShoulderCenter and RightHand are Joint type data; Joint is a data type in the Kinect for Windows SDK used to describe body joint points. The Kinect sensor processes the collected data and returns it as Joint values, whose Position attribute contains the spatial position of the joint, that is, its X, Y, and Z coordinates, in meters, in a coordinate system with the Kinect sensor at the origin. The range of the coordinates depends on characteristics of the human body such as arm span and height. Through an assignment operation the system can implicitly convert the Joint type to the Point type; the Point type is similar to the Joint type but is provided by the C# framework, and its X and Y coordinate properties are mainly used to compare the positional relationship between the controls on the screen and the important joint points of the human body.
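The arm-length calculation in paragraphs [0042] to [0046] can be sketched as follows, assuming the Kinect for Windows SDK v1.x skeleton API; the helper class, the method name, and the out-parameter style are illustrative assumptions.
    using Microsoft.Kinect;   // Kinect for Windows SDK v1.x skeleton API

    static class ArmLengthHelper
    {
        // Sketch: estimate both arm lengths from the depth (Z) offset between the chest
        // joint and each palm while the arms are extended forward, as described above.
        public static void Estimate(Skeleton skeleton, out double leftArm, out double rightArm)
        {
            Joint shoulderCenter = skeleton.Joints[JointType.ShoulderCenter];
            Joint leftHand = skeleton.Joints[JointType.HandLeft];
            Joint rightHand = skeleton.Joints[JointType.HandRight];

            // Joint positions are in meters, with the Kinect sensor as the coordinate origin.
            rightArm = System.Math.Abs(shoulderCenter.Position.Z - rightHand.Position.Z);
            leftArm = System.Math.Abs(shoulderCenter.Position.Z - leftHand.Position.Z);
        }
    }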
[0047] Step 3. Division and display of the space in front of the user: according to the numbers of divisions set in step 1, the available space in front of the user determined in step 2 is divided in the horizontal direction and in the vertical direction, and a coordinate system of the divided space is established; the result of the space division is shown on the display screen.
[0048] 1) The user stretches the arms out to the left and right sides, and a yellow-bordered grid of N squares (N being the division value the user set for the vertical plane) appears on the display screen, together with a yellow scale bar, similar to a thermometer, on each of the left and right sides of the screen. The scale bars carry different numbers of divisions according to the left-hand and right-hand M values. The borders of the N-square grid and the scale bars are drawn with WPF built-in controls, using the Line and Rectangle controls. When the grid is drawn with the Line control (for straight lines), the two end points of each line segment are computed from the user's arm length and the coordinates of the chest center, which together yield the overall N-square grid. The scale bars are drawn with the Rectangle control (for rectangles); because the left-hand and right-hand M values chosen at the start of the program may differ, the numbers of rectangles drawn on the left and right sides of the screen, and therefore the ranges of the two scale bars, may also differ. The scale bars display the current depth of the left and right hands in real time.
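As an illustration of how the grid and scale bar in 1) might be drawn with the WPF Line and Rectangle controls, here is a hypothetical sketch; the canvas layout, sizes, and method names are assumptions rather than the patent's actual drawing code.
    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Shapes;

    static class GridDrawing
    {
        // Sketch: draw a yellow N-square grid (assumed square here, side = sqrt(N))
        // and an M-segment, thermometer-like scale bar on a WPF Canvas.
        public static void Draw(Canvas canvas, int n, int m, double cellSize)
        {
            int side = (int)System.Math.Sqrt(n);      // e.g. N = 4 gives a 2 x 2 grid
            double gridSize = side * cellSize;

            // Grid borders drawn with the Line control.
            for (int i = 0; i <= side; i++)
            {
                double offset = i * cellSize;
                canvas.Children.Add(new Line { X1 = offset, Y1 = 0, X2 = offset, Y2 = gridSize,
                                               Stroke = Brushes.Yellow, StrokeThickness = 2 });
                canvas.Children.Add(new Line { X1 = 0, Y1 = offset, X2 = gridSize, Y2 = offset,
                                               Stroke = Brushes.Yellow, StrokeThickness = 2 });
            }

            // Scale bar drawn with the Rectangle control: one segment per depth division (the M value).
            for (int k = 0; k < m; k++)
            {
                var segment = new Rectangle { Width = 30, Height = gridSize / m,
                                              Stroke = Brushes.Yellow, Fill = Brushes.Transparent };
                Canvas.SetLeft(segment, gridSize + 20);
                Canvas.SetTop(segment, k * (gridSize / m));
                canvas.Children.Add(segment);
            }
        }
    }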
[0049] Step 4. User motion information capture: the Kinect sensor captures the position of the hands within the available space in front of the user in real time. As the user moves the left and right hands to a spatial position, feedback appears on the display: the border of the corresponding cell of the N-square grid changes from yellow to red, and the fill of the scale-bar segment corresponding to the depth of the hand changes from yellow to red. This feedback tells the user, in real time, the exact position of the hands within the divided space.
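The yellow-to-red feedback of Step 4 might be updated as in the following sketch; representing the grid cells as Rectangle outlines and passing the active indices in as parameters are simplifying assumptions made here for illustration.
    using System.Windows.Media;
    using System.Windows.Shapes;

    static class Feedback
    {
        // Sketch: highlight in red the grid cell and the scale-bar segment that
        // currently contain the hand, and reset every other element to yellow.
        public static void Update(Rectangle[] cells, Rectangle[] scaleSegments,
                                  int activeCell, int activeDepthLayer)
        {
            for (int i = 0; i < cells.Length; i++)
                cells[i].Stroke = (i == activeCell) ? Brushes.Red : Brushes.Yellow;

            for (int k = 0; k < scaleSegments.Length; k++)
                scaleSegments[k].Fill = (k == activeDepthLayer) ? Brushes.Red : Brushes.Yellow;
        }
    }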
[0050] Step 5. Judgment of the user's menu selection on the human-computer interaction interface: when the Kinect sensor detects that the user's hand stays at some position in the space in front of the user for a short while (that is, the hand stops moving for longer than a preset time), it is determined that the user wants to make the corresponding menu selection. At that moment the positions of the left and right hands are detected to obtain their space coordinates, which are combined with the divided-space coordinate system established in step 3 to determine which divided space the hand coordinates fall into.
[0051] The divided-space judgment method is as follows: use the Kinect sensor to detect the position of the hand and obtain its three-dimensional coordinates within the space divided in step 3, then compare these coordinates with the coordinates of the divided-space coordinate system to determine which divided space the hand is in. Specifically, each small region of the divided space is a cube, so the hand coordinates can be compared with the coordinates of the four vertices of the corresponding square cell to decide whether the hand lies within that divided space. As shown in Figure 5, the region selected by the user is indicated by black shading.
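A minimal sketch of the dwell-time judgment in Step 5 follows; the movement tolerance and the exact threshold are assumptions, since the text only speaks of a preset time.
    using System;

    // Sketch: a selection is triggered once the hand stays (nearly) motionless
    // for longer than a preset time, as described in Step 5.
    class DwellDetector
    {
        private readonly TimeSpan threshold;   // the preset dwell time
        private readonly double tolerance;     // movement tolerance in meters (assumed)
        private DateTime stillSince;
        private double lastX, lastY, lastZ;

        public DwellDetector(TimeSpan threshold, double tolerance)
        {
            this.threshold = threshold;
            this.tolerance = tolerance;
            this.stillSince = DateTime.Now;
        }

        // Feed the current hand position; returns true when the dwell time is exceeded.
        public bool Update(double x, double y, double z, DateTime now)
        {
            bool moved = Math.Abs(x - lastX) > tolerance ||
                         Math.Abs(y - lastY) > tolerance ||
                         Math.Abs(z - lastZ) > tolerance;
            if (moved)
            {
                stillSince = now;
                lastX = x; lastY = y; lastZ = z;
            }
            return now - stillSince >= threshold;
        }
    }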
[0052] Taking as an example the case in which the vertical plane is divided into 4 square cells and the user operates with the right hand only, the judgment formulas for the position of the hand are:
[0053] 1) Judgment formula for the depth position of the current hand:
[0054] RightHand.Position.Z < P;
[0055] where P is the threshold used to control the depth selection; its value differs according to the degree of depth division (that is, the M value) and according to which depth layer is being judged.
[0056] 2) Judgment formula for the vertical plane area where the hand is currently located:
[0057] (RightHand.X < ShoulderCenter.X) && (RightHand.Y < ShoulderCenter.Y); upper left grid;
[0058] (RightHand.X < ShoulderCenter.X) && (RightHand.Y > ShoulderCenter.Y); lower left grid;
[0059] (RightHand.X > ShoulderCenter.X) && (RightHand.Y < ShoulderCenter.Y); upper right grid;
[0060] (RightHand.X > ShoulderCenter.X) && (RightHand.Y > ShoulderCenter.Y); lower right grid;
[0061] Here && denotes logical AND; when the conditions are satisfied, the color of the corresponding grid cell changes.
[0062] The formulas vary with the number of divisions of the vertical plane (that is, the N value); a generalized sketch is given below.
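For values of N and M other than 4, the same comparisons generalize to an index computation. The sketch below maps the hand position to a (row, column, depth layer) cell; the cell size, the depth-layer boundaries, and the screen-style Y axis (growing downward, as in the comparisons above) are assumptions made for illustration, not values taken from the patent.
    using System;

    static class DividedSpace
    {
        // Sketch: locate the hand in a side x side grid of cells (side = sqrt(N)) centered
        // on the chest joint, plus a depth layer chosen from M + 1 increasing Z boundaries.
        public static void LocateHand(double handX, double handY, double handZ,
                                      double centerX, double centerY,
                                      double cellSize, double[] depthBounds,
                                      int side, out int row, out int col, out int layer)
        {
            // X grows to the right and Y grows downward, matching the four-cell comparisons above.
            col = (int)Math.Floor((handX - centerX) / cellSize) + side / 2;
            row = (int)Math.Floor((handY - centerY) / cellSize) + side / 2;
            col = Math.Max(0, Math.Min(side - 1, col));
            row = Math.Max(0, Math.Min(side - 1, row));

            // depthBounds holds increasing Z thresholds; pick the layer containing handZ.
            layer = 0;
            for (int k = 0; k < depthBounds.Length - 1; k++)
                if (handZ >= depthBounds[k] && handZ < depthBounds[k + 1])
                    layer = k;
        }
    }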


Similar technology patents

Task breakpoint resuming method applied to data cleaning tool

Pending CN110928863A · Save computing resources and computing time · Humanized operation
Owner: WUXI SHILING TECH

Interactive digital TV set

Inactive CN101110927A · Realize the interactivity of communication · Humanized operation
Owner: 安迎建 +1

Method for setting proxy server to access to Internet based on Android platform

Inactive CN105306550A · Operate more · Humanized operation
Owner: ARCHERMIND TECH NANJING

Medical instrument rapid cleaning device

Active CN109590271A · Easy to operate · Humanized operation
Owner: HENAN UNIV OF CHINESE MEDICINE

Intelligent analysis and release method for commodity pictures

Pending CN113934688A · Easy to operate · Humanized operation
Owner: 杭州优批科技有限公司
