
Lip reading technology based lip language input method

An input method based on lip-language technology, applied to the input/output of user/computer interaction, graphic reading, instruments, and related fields; the method offers strong practicability, high recognition accuracy, strong pertinence, and improved accuracy and speed.

Inactive Publication Date: 2013-05-08
NANKAI UNIV
Cites: 4 · Cited by: 36

AI Technical Summary

Problems solved by technology

[0009] In China, the Harbin Institute of Technology, the Institute of Acoustics of the Chinese Academy of Sciences, and other institutions are also committed to research on this topic, but their work is still at the laboratory stage. China therefore still needs to intensify and accelerate research in this area and strive to commercialize the results as soon as possible.



Examples


Embodiment Construction

[0025] The lip language input system based on lip-reading technology and its implementation method are introduced below:

[0026] First, the system's camera locates the speaker's lips and captures a lip-movement video containing only the lips; key frames are then extracted from the video stream using key-frame extraction technology.
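The key-frame step in [0026] can be sketched as frame differencing: keep a frame only when it differs sufficiently from the last kept frame. The patent does not specify the extraction criterion, so the mean-absolute-difference rule and the threshold value below are illustrative assumptions, written in pure NumPy for self-containment (in practice the video would be read with OpenCV's `cv2.VideoCapture`):

```python
import numpy as np

def select_key_frames(frames, diff_threshold=30.0):
    """Keep a frame when its mean absolute grayscale difference
    from the previously kept frame exceeds diff_threshold.
    frames: iterable of HxW (grayscale) or HxWx3 (color) arrays."""
    key_frames, prev = [], None
    for frame in frames:
        g = np.asarray(frame, dtype=float)
        if g.ndim == 3:              # crude grayscale: average the channels
            g = g.mean(axis=2)
        if prev is None or np.abs(g - prev).mean() > diff_threshold:
            key_frames.append(frame)
            prev = g
    return key_frames
```

The first frame is always kept (there is no previous reference), and near-duplicate frames of a static mouth are dropped, which matches the intent of reducing the video stream to a small set of representative lip shapes.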

[0027] The normalized static lip-color image is converted to grayscale and median-filtered using OpenCV library functions; the Otsu method is then used to compute a binarization threshold for the image, and this threshold is applied to binarize the smoothed grayscale image, so that the threshold is obtained adaptively. Each pixel of the binarized image is then scanned to determine whether it is an isolated point; isolated points are removed during the scan, which effectively denoises the binarized image. The pi...
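The preprocessing chain in [0027] can be illustrated as follows. With OpenCV this would typically be `cv2.medianBlur` followed by `cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)`, but the explicit pure-NumPy version below shows the adaptive-threshold and isolated-point logic directly. The 8-neighbour isolation test is an assumption about what the patent means by "isolated point":

```python
import numpy as np

def otsu_threshold(gray):
    """Adaptive binarization threshold: maximize between-class variance
    over the 256-bin histogram of a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                          # class-0 pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))    # class-0 intensity sums
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = cum[t] / total
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mean[t] / cum[t]
        mu1 = (cum_mean[-1] - cum_mean[t]) / (cum[-1] - cum[t])
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def remove_isolated_points(binary):
    """Flip any pixel whose 8-neighbourhood contains no pixel of the
    same value (assumed interpretation of 'isolated point' denoising)."""
    out = binary.copy()
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            nb = binary[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            if (nb == binary[y, x]).sum() == 1:   # only the pixel itself
                out[y, x] = 1 - binary[y, x]
    return out
```

A usage sketch: `mask = (gray > otsu_threshold(gray)).astype(int)` followed by `remove_isolated_points(mask)` yields the denoised binary lip image described in the text.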



Abstract

The invention relates to a lip language input method based on lip-reading technology, mainly aimed at commonly used Chinese characters and Arabic numerals. It belongs to intelligent computer recognition technology, is a typical problem of image pattern analysis, understanding, and classification, and involves pattern recognition, computer vision, intelligent human-computer interaction, cognitive science, and other disciplines. Key frames are extracted from the captured lip-movement video; the extracted images are normalized by grayscale conversion, median filtering, dynamic-threshold binarization, and scanning to remove noise points; feature vectors are then extracted to obtain parameters characterizing the lip shape, which are matched against a lip model library to recognize the images as a sequence of Chinese pinyin letters; finally, an input method module produces the corresponding Chinese characters or Arabic numerals.
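The matching step of the abstract — comparing extracted feature vectors against a lip model library to produce a pinyin-letter sequence — can be sketched as nearest-neighbour lookup. The library contents, the feature dimensions, and the vector values below are all hypothetical placeholders (the patent does not disclose them); only the match-by-minimum-distance structure is what the abstract describes:

```python
import numpy as np

# Hypothetical lip model library: pinyin letter -> reference feature vector.
# The two features (e.g. normalized mouth width and opening height) are
# illustrative assumptions, not values from the patent.
MODEL_LIBRARY = {
    "a": np.array([0.9, 0.8]),
    "o": np.array([0.6, 0.9]),
    "i": np.array([0.8, 0.2]),
}

def match_frame(feature):
    """Return the pinyin label whose reference vector is nearest
    to the extracted feature vector (Euclidean distance)."""
    return min(MODEL_LIBRARY,
               key=lambda k: np.linalg.norm(MODEL_LIBRARY[k] - feature))

def recognize_sequence(features):
    """Map a list of per-key-frame feature vectors to a pinyin string."""
    return "".join(match_frame(f) for f in features)
```

The resulting pinyin string would then be handed to an ordinary input method engine to select the Chinese characters or Arabic numerals, as the abstract's final step describes.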

Description

technical field

[0001] The invention relates to a lip language input method based on lip-reading technology, mainly aimed at commonly used Chinese characters and Arabic numerals. It belongs to computer intelligent recognition technology, is a typical problem of image pattern analysis, understanding, and classification, and involves pattern recognition, computer vision, intelligent human-computer interaction, cognitive science, and other disciplines. From the captured lip-movement video, key-frame extraction, image processing, and feature-vector extraction are used to obtain parameters with lip-shape characteristics, which are recognized as a sequence of Chinese pinyin letters and finally combined with an input method module to obtain the corresponding Chinese characters or Arabic numerals.

Background technique

[0002] With the development of science and technology, people need more humanized human-computer interaction methods. Nowadays, al...

Claims


Application Information

Patent Timeline
no application data
IPC(8): G06F 3/01; G06K 9/00; G06K 9/62
Inventors: 张金, 肖庆阳, 梁碧玮, 左闯, 范娟婷, 邸硕临
Owner NANKAI UNIV