Character input device and method based on eye-gaze tracking and speech recognition

A speech-recognition and character-input technology in the field of image processing. It addresses problems of prior systems such as low pupil-image resolution, complicated character input and confirmation processes, and limited gaze accuracy, thereby overcoming restrictions on human-computer interaction functionality, mitigating the effect of limited gaze accuracy, and improving the character input rate.

Publication Date: 2013-05-01 (status: Inactive)
Owner: XIDIAN UNIV

AI Technical Summary

Problems solved by technology

This patented technology has two deficiencies. First, gaze accuracy is limited: the device detects the user's iris, fits an elliptical contour to it, and then computes the pupil's deviation relative to the eye corner from the ellipse parameters. The achievable eye-image processing accuracy therefore bounds the gaze accuracy, so the character key the user intends cannot be located in a single step.
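The ellipse-fitting step described above can be sketched as a direct least-squares conic fit followed by a center computation. A minimal illustration in Python/NumPy (the synthetic contour points, the eye-corner coordinates, and the function name are assumptions for illustration, not taken from the patent):

```python
import numpy as np

def fit_ellipse_center(xs, ys):
    """Fit a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to contour points by least squares (SVD null vector) and return
    the conic's center."""
    D = np.column_stack([xs**2, xs*ys, ys**2, xs, ys, np.ones_like(xs)])
    # The right-singular vector for the smallest singular value
    # minimizes ||D p|| subject to ||p|| = 1.
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]
    # Center of the conic: solve the gradient equations
    # 2a*x + b*y + d = 0 and b*x + 2c*y + e = 0.
    M = np.array([[2*a, b], [b, 2*c]])
    cx, cy = np.linalg.solve(M, [-d, -e])
    return cx, cy

# Synthetic iris contour: ellipse centered at (160, 120), semi-axes 40 and 25.
t = np.linspace(0, 2*np.pi, 60)
xs = 160 + 40*np.cos(t)
ys = 120 + 25*np.sin(t)
cx, cy = fit_ellipse_center(xs, ys)

# Pupil deviation relative to a hypothetical eye-corner reference point.
corner = np.array([110.0, 118.0])
deviation = np.array([cx, cy]) - corner
print(np.round([cx, cy], 2))   # close to [160, 120]
print(np.round(deviation, 2))  # close to [50, 2]
```

For exact contour points the recovered center matches the generating ellipse; with a noisy, low-resolution iris contour the fit degrades, which is the accuracy limitation the passage points out.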
Second, the method computes, for each possible character in the candidate set, the probability that it is the user's intended character, forming a spatial-domain candidate set; it then flashes the characters in that set in random order to evoke the user's P300 EEG potential, and collects and analyzes the EEG signal to compute the probability that each character is the target stimulus, forming a temporal-domain candidate set. Finally, it jointly selects the most probable character under both candidate sets as the intended key. This makes the character input and confirmation processes complex.
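The joint spatial/temporal selection described above amounts to fusing two probability estimates per character. A minimal sketch with hypothetical numbers (the probabilities and the product-fusion rule are illustrative assumptions, not taken from the patent):

```python
# Hypothetical per-character probabilities; not taken from the patent.
spatial = {"A": 0.50, "B": 0.30, "C": 0.20}   # gaze-based (spatial domain)
temporal = {"A": 0.20, "B": 0.65, "C": 0.15}  # P300-based (temporal domain)

# Joint score: product of the two estimates (independence assumed),
# renormalized into a probability distribution.
joint = {ch: spatial[ch] * temporal[ch] for ch in spatial}
total = sum(joint.values())
joint = {ch: p / total for ch, p in joint.items()}

best = max(joint, key=joint.get)
print(best)  # "B": 0.30*0.65 outweighs 0.50*0.20
```

The product rule assumes the gaze and EEG estimates are independent; the passage's point is that obtaining the second factor at all requires the extra flash-and-record confirmation stage, which is what makes the input process complex.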
This method has four deficiencies. First, iris edges are found with Canny edge detection, and the iris and its center are then located with the Hough transform, so the eye-image processing accuracy bounds the achievable gaze accuracy. Second, the camera is fixed on the display, and three points on a wearable calibration cap serve as reference points; camera motion is planned from the extracted positions of these three points to compensate for the user's head movement, but this motion compensation is limited, so head movement strongly degrades accuracy. Third, the method requires the user to stare at the desired character for 2 seconds to complete an input, which inevitably makes operation cumbersome and easily causes visual fatigue. Fourth, accuracy is limited: only 28 character keys are drawn on a display with a resolution of 1024x768, so each key on the interface is relatively large and only a limited character set is shown, which restricts the human-computer interaction functionality.
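The three-reference-point head-motion compensation can be illustrated by solving for the 2-D affine transform that three marker correspondences determine exactly. The marker coordinates below are hypothetical:

```python
import numpy as np

def affine_from_three_points(src, dst):
    """Solve the 2x3 affine transform A such that A @ [x, y, 1]^T maps
    each src point to the corresponding dst point (exact for 3 points)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.column_stack([src, np.ones(3)])  # 3x3 system matrix
    # Solve X @ A.T = dst for the 3x2 matrix A.T, then transpose.
    return np.linalg.solve(X, dst).T        # 2x3

# Hypothetical calibration-cap marker positions before and after head motion.
ref   = [(100, 100), (200, 100), (150, 180)]
moved = [(110, 105), (210, 105), (160, 185)]  # pure translation (+10, +5)

A = affine_from_three_points(ref, moved)
# Apply the estimated motion to any other point of interest.
p = np.array([150.0, 140.0, 1.0])
p_moved = A @ p
print(np.round(p_moved, 2))  # the point shifted by the head motion
```

Because three point correspondences determine a 2-D affine map exactly, any non-affine component of the head motion (e.g. moving toward or away from the camera, which is a perspective effect) is left uncompensated, which is the limitation the passage notes.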
This method has two shortcomings. First, the pupil image segmented from the face image has low resolution, which limits the positioning accuracy of the pupil center; moreover, the dual-light-source gaze-tracking calibration method based on similar triangles is itself of limited accuracy. Second, the method is used only for password input, so the set of input characters is limited, which restricts the human-computer interaction functionality.




Embodiment Construction

[0061] The device of the present invention is described further below with reference to Figure 1.

[0062] The character input device based on gaze tracking and speech recognition of the present invention comprises a helmet unit, an ARM core unit, an image acquisition card, a speech recognition module, a DSP core unit, a scene image processing module, a coordinate conversion module, and an interface module. The helmet unit is connected one-way to the ARM core unit, the image acquisition card, and the speech recognition module, and outputs the collected eye images, scene images, and user speech signals to those three components respectively. The ARM core unit and the DSP core unit are bidirectionally connected: the ARM core unit outputs unprocessed eye images to the DSP core unit and receives the processed eye images returned by the DSP core unit. The image acquisition card is connected to the scene image processing mod...



Abstract

The invention discloses a character input device and method based on eye-gaze tracking and speech recognition. The device comprises a helmet unit, an ARM (Advanced RISC Machine) core unit, an image acquisition card, a speech recognition module, a DSP (digital signal processor) core unit, a scene image processing module, a coordinate conversion module, and an interface module. The method comprises the following steps: on the basis of collecting and processing an eye image, a scene image, and the user's speech signal, calibrating to obtain calibration coefficients; solving a two-dimensional calibration equation and a coordinate transformation matrix to obtain the coordinates of the user's gaze point in the interface coordinate system; obtaining the character the user intends to input; and, in cooperation with the user's speech information, completing character input and the four arithmetic operations. The invention offers high gaze precision for character input, a large permissible head-movement range, simple operation, and good practicality and operability.
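The "two-dimensional calibration equation" step is commonly realized as a polynomial regression from eye-image features to screen coordinates, fitted while the user fixates known targets. A minimal sketch (the second-order polynomial form, the calibration points, and the helper names are assumptions for illustration, not the patent's actual equation):

```python
import numpy as np

def design(ex, ey):
    """Second-order polynomial terms of the eye-feature coordinates."""
    ex, ey = np.asarray(ex, float), np.asarray(ey, float)
    return np.column_stack([np.ones_like(ex), ex, ey, ex*ey, ex**2, ey**2])

# Calibration: the user fixates known screen targets while eye features
# (e.g. normalized pupil-center coordinates) are recorded.  All values here
# are made up for illustration.
eye_pts = np.array([[0.10, 0.20], [0.80, 0.20], [0.10, 0.90],
                    [0.80, 0.90], [0.45, 0.55], [0.30, 0.50]])
screen_pts = np.array([[64, 48], [960, 48], [64, 720],
                       [960, 720], [512, 430], [340, 400]])

# Least-squares fit of one coefficient vector per screen axis.
A = design(eye_pts[:, 0], eye_pts[:, 1])               # 6x6 here
coef, *_ = np.linalg.lstsq(A, screen_pts, rcond=None)  # 6x2 coefficients

def gaze_to_screen(ex, ey):
    """Map a new eye-feature measurement to a screen coordinate."""
    return design([ex], [ey]) @ coef  # 1x2 screen coordinate

print(np.round(gaze_to_screen(0.45, 0.55), 1))  # near [512, 430]
```

With six calibration points and six polynomial terms the fit interpolates the calibration data exactly; in practice more targets are used and the least-squares solution averages out measurement noise.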

Description

Technical Field

[0001] The invention belongs to the technical field of image processing, and further relates to a character input device and method based on gaze tracking and speech recognition in the technical field of human-computer interaction. The invention can be used in human-computer interaction to realize full-keyboard English character input and the four arithmetic operations through gaze tracking and speech recognition.

Background Art

[0002] Human-Computer Interaction (HCI) refers to the process of information exchange between humans and computers, using a certain dialogue language to complete certain tasks in a certain interactive way. Human-computer interaction based on eye-tracking technology is a natural and harmonious interaction method. Existing eye-tracking technology draws a keyboard on the computer screen, and analyzes and feeds back the characters the user is looking at through the eye-tr...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F3/01; G10L15/00
Inventors: 王军宁, 崔耀, 于明轩, 何迪, 高静, 魏雯婷
Owner: XIDIAN UNIV