Audio and video dual mode-based spoken language learning monitoring method

A dual-mode audio-video technology, applied in the field of spoken language learning monitoring, that addresses problems such as students being unable to receive individual guidance, limited teacher resources, and high labor costs, with the effect of improving learning efficiency and reducing dependence on teacher resources.

Publication Date: 2013-07-24 (status: Inactive)
Applicant: SHANGHAI ZHONGSHI TECH DEV

AI Technical Summary

Problems solved by technology

In language teaching, teachers are an effective source of feedback, but relying on them leaves some problems unsolved: language learning requires repeated training, and learners need to make effective use of fragmented time to practice anytime and anywhere; however, because teacher resources are limited, it is impossible for all students to receive one-on-one instruction at any time.
Online spoken language learning platforms are increasingly popular with ordinary users because they fit into free time and are low cost. For such platforms, if the number of teachers does not grow with the number of users, the teaching resources available to each individual user will inevitably become scarce. With labor costs rising steadily, how to effectively monitor users' learning has become an important issue for spoken language learning platforms.

Embodiment Construction

[0013] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0014] Figure 1 is a schematic diagram of the spoken language learning monitoring process based on audio and video dual modes according to the present invention.

[0015] Referring to Figure 1, the spoken language learning monitoring method based on audio and video dual modes provided by the present invention comprises the following steps:

[0016] S101: Establish the sound information base and the image feature information base of all standard pronunciation units, for example using Chinese phoneme units or finer sub-phoneme units as the standard pronunciation units. The standard pronunciation models are trained on a database that spans different age groups and genders, covers the pronunciation image information of all standard pronunciation units, and includes standard pronunciation annotations. A hidden Markov model is selected for the acoustic information bas...
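As a concrete illustration of S101, the following is a minimal sketch of training per-unit acoustic models, assuming MFCC features and one Gaussian-emission HMM per standard pronunciation unit. The library choices (librosa, hmmlearn), the 3-state topology, and all helper names are assumptions made for illustration, not the patent's prescribed implementation.

```python
# Sketch of S101: one GaussianHMM per standard pronunciation unit.
# Assumptions: 16 kHz audio, 13-dim MFCC features, annotated standard clips.
import numpy as np
import librosa
from hmmlearn import hmm


def extract_mfcc(wav_path, sr=16000, n_mfcc=13):
    """Return a (frames, n_mfcc) MFCC feature matrix for one recording."""
    audio, _ = librosa.load(wav_path, sr=sr)
    feats = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return feats.T  # hmmlearn expects (n_samples, n_features)


def train_unit_models(annotated_clips):
    """Train one HMM per pronunciation unit.

    annotated_clips maps a unit label (e.g. a phoneme) to the paths of the
    standard-pronunciation recordings annotated with that unit.
    """
    models = {}
    for unit, paths in annotated_clips.items():
        feats = [extract_mfcc(p) for p in paths]
        X = np.vstack(feats)                     # stack all segments
        lengths = [f.shape[0] for f in feats]    # per-segment frame counts
        model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
        model.fit(X, lengths)
        models[unit] = model
    return models


def acoustic_match_score(model, user_feats):
    """Per-frame log-likelihood of the user's segment under the unit's model."""
    return model.score(user_feats) / max(len(user_feats), 1)
```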

Abstract

The invention discloses an audio and video dual mode-based spoken language learning monitoring method, which comprises the following steps: (a) establishing a sound information base and an image feature information base for all standard pronunciation units; (b) acquiring the user's sound and video information in real time during spoken language learning, compression-coding it, and transmitting it to a server; (c) decoding and segmenting the user's sound at the server to obtain the sound information matching degree of each of the user's pronunciation units; and (d) extracting, at the server, the image action feature information corresponding to each pronunciation unit from the simultaneously acquired video information and giving its matching degree with the image feature information of the standard pronunciation unit. By acquiring sound and video information simultaneously and segmenting and comparing the sound and image feature information separately, the method can quickly and accurately find pronunciation defects and their causes, reduce dependence on teacher resources, and greatly improve learning efficiency.
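To make steps (c) and (d) more concrete, here is a minimal sketch of how a server might score each segmented pronunciation unit in both modalities, assuming the per-unit acoustic models from the sketch above and a simple cosine similarity between image action feature vectors. The segmentation step and all helper names are illustrative assumptions, not the patent's specified implementation.

```python
# Sketch of the server-side matching in steps (c) and (d).
# Assumption: `segments` is produced by decoding and segmenting the uploaded
# audio/video stream into (unit_label, audio_feats, image_feats) triples.
import numpy as np


def image_match_score(standard_feat, user_feat):
    """Cosine similarity between standard and user image action features."""
    denom = np.linalg.norm(standard_feat) * np.linalg.norm(user_feat)
    return float(np.dot(standard_feat, user_feat) / denom) if denom else 0.0


def monitor_utterance(segments, acoustic_models, standard_image_feats):
    """Score each segmented pronunciation unit in both modalities."""
    report = []
    for unit, audio_feats, image_feats in segments:
        sound_score = acoustic_models[unit].score(audio_feats) / max(len(audio_feats), 1)
        image_score = image_match_score(standard_image_feats[unit], image_feats)
        report.append({
            "unit": unit,
            "sound_match": sound_score,
            "image_match": image_score,
        })
    return report
```

Under this sketch, a unit with an acceptable sound_match but a low image_match would point to an articulation-posture (mouth-shape) issue rather than a purely acoustic one, which is the kind of cause the dual-mode comparison is intended to surface.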

Description

Technical Field

[0001] The invention relates to a user online learning monitoring method, and in particular to a spoken language learning monitoring method based on audio and video dual modes.

Background Technique

[0002] At present, under the general trend of globalization, spoken language education is becoming a huge industry worldwide. As far as China is concerned, the enthusiasm of Chinese people for learning foreign languages and of foreigners for learning Chinese keeps growing. On the one hand, foreign languages (especially English) are an indispensable tool in business communication, which drives working Chinese professionals to learn them. According to incomplete statistics, about 1% of employees in big cities such as Beijing and Shanghai spend more than 10% of their income on foreign language learning. On the other hand, the wave of English learning driven by globalization has been accompanied by new waves of its own, such as "China fev...

Application Information

Patent Type & Authority: Application (China)
IPC (8): G09B5/06
Inventor: 许东星
Owner: SHANGHAI ZHONGSHI TECH DEV