115 results about "Speech training" patented technology

Speech interactive training system and speech interactive training method

The invention relates to a speech interactive training system and a speech interactive training method. The system comprises a user selection module, a speech interactive training module, a user feedback module, a speech evaluation module and a result feedback module. The user selection module acquires the training content selected by a user; the speech interactive training module presents the training content to the user in a multimodal guiding mode to guide the user through speech training; the user feedback module collects the fed-back speech and the lip video corresponding to that speech; the speech evaluation module receives the speech fed back by the user and the corresponding lip video, automatically evaluates the user's speech training and produces an evaluation result; and the result feedback module returns the evaluation result to the user so that the user can correct and adjust the speech training. Because the system automatically evaluates the user's speech training and feeds the evaluation result back, the user can gauge his or her current speech level from the result and correct and adjust accordingly, which greatly enhances the rehabilitation training effect for speech impediments.
Owner:SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI
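The module pipeline described in this abstract can be sketched as a minimal control loop. All function names are hypothetical, and the evaluation here is a toy token-matching stand-in for the acoustic/lip-video models a real system would use:

```python
# Minimal sketch of the patent's module pipeline (all names hypothetical).

def select_content(options, choice):
    """User selection module: return the training content the user picked."""
    return options[choice]

def evaluate(speech, lip_video, reference):
    """Speech evaluation module: toy score comparing the fed-back speech
    to the reference content by counting matching tokens. A real system
    would score acoustic features and the lip video instead."""
    matches = sum(1 for a, b in zip(speech, reference) if a == b)
    return matches / max(len(reference), 1)

def training_round(options, choice, speech, lip_video):
    content = select_content(options, choice)     # user selection module
    score = evaluate(speech, lip_video, content)  # speech evaluation module
    return {"content": content, "score": score}   # result feedback module

result = training_round(["ba", "ma"], 0, speech="ba", lip_video=None)
```

The returned score is what the result feedback module would display so the user can correct and adjust the next round.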

Training method for hybrid frequency acoustic recognition model and speech recognition method

The invention discloses a training method for a hybrid frequency acoustic recognition model and a speech recognition method, and belongs to the technical field of speech recognition. The training method comprises the steps of obtaining a first type of speech features of a first speech signal, and processing the first type of speech features so as to obtain corresponding first speech training data; obtaining a first type of speech features of a second speech signal, and processing the first type of speech features so as to obtain corresponding second speech training data; obtaining a second type of speech features of the first speech signal and a second type of speech features of the second speech signal according to a power spectrum; forming a preliminary recognition model of the hybrid frequency acoustic recognition model through pre-training on the first speech signal and the second speech signal; and performing supervised parameter training on the preliminary recognition model according to the first speech training data, the second speech training data and the second type of speech features so as to form the hybrid frequency acoustic recognition model. The technical scheme has the beneficial effect that the recognition model has good robustness and generalization.
Owner:YUTOU TECH HANGZHOU
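The two-stage scheme above (pre-train a preliminary model on features from both signals, then run supervised parameter training on labelled data) can be illustrated with a toy one-parameter model. The feature extraction and the model are deliberately simplistic placeholders, not the patent's actual method:

```python
# Toy sketch of the two-stage training scheme (all details hypothetical).

def extract_features(signal):
    """Stand-in for the first type of speech features: mean-normalised samples."""
    mean = sum(signal) / len(signal)
    return [x - mean for x in signal]

def pretrain(features_a, features_b):
    """Pre-training: initialise a single model weight from pooled statistics
    of both signals' features (the 'preliminary recognition model')."""
    pooled = features_a + features_b
    return sum(abs(x) for x in pooled) / len(pooled)

def supervised_train(weight, data, labels, lr=0.1, epochs=50):
    """Supervised parameter training: fit y ~ weight * x by gradient descent."""
    for _ in range(epochs):
        for x, y in zip(data, labels):
            weight -= lr * (weight * x - y) * x
    return weight

w0 = pretrain(extract_features([1.0, 2.0, 3.0]), extract_features([2.0, 4.0]))
w = supervised_train(w0, data=[1.0, 2.0], labels=[2.0, 4.0])
```

The point of the sketch is the ordering: the pre-trained weight seeds the supervised stage rather than starting from scratch.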

Method, device and equipment for constructing speech recognition model and storage medium

The present invention relates to the field of artificial intelligence, and provides a method, a device and equipment for constructing a speech recognition model and a storage medium. The method comprises the following steps: acquiring a plurality of training speech samples; constructing the speech recognition model by using an independent convolution layer, a convolution residual layer, a full connection layer and an output layer; inputting speech training information to the speech recognition model, updating a weight value of neurons in the speech recognition model with the speech information and a text label corresponding to the speech information through a natural language processing (NLP) technology, and then obtaining a target model; evaluating an error of the target model by L(S) = -ln Π_{(h(x),z)∈S} p(z|h(x)) = -Σ_{(h(x),z)∈S} ln p(z|h(x)); adjusting the weight value of the neurons in the target model until the error is less than a threshold value; setting the weight value of the neurons with the error less than the threshold value as an ideal weight value; and deploying the target model and the ideal weight value on a client. The method of the present invention reduces the influence of tone in the speech information on the predicted text, and the computation burden of the speech recognition model during the recognition process.
Owner:PING AN TECH (SHENZHEN) CO LTD
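The error formula above is a negative log-likelihood over the training set S: the log of a product of per-pair probabilities p(z|h(x)) becomes a sum of logs. Assuming each pair's probability is already available, it can be computed directly:

```python
import math

def nll_loss(probs):
    """L(S) = -ln prod p(z|h(x)) = -sum ln p(z|h(x)) over pairs (h(x), z) in S.
    `probs` holds the model's probability p(z|h(x)) for each training pair."""
    return -sum(math.log(p) for p in probs)
```

Minimising this loss pushes every p(z|h(x)) toward 1; a perfect model (all probabilities 1) gives a loss of exactly 0.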

Chinese initial and final visualization method based on combination feature

Status: Inactive | Patent: CN102820037A | Benefits: Make up for indiscernibility; Make up for the shortcomings of memory | Tags: Speech analysis; Speech training; Algorithm
The invention relates to a Chinese initial and final visualization method based on a combination feature, which comprises the steps of: pre-processing a voice signal; calculating the frame number of the pre-processed voice signal as a length feature, representing a resonance strength feature by the correlation of the frequency-domain peak amplitude and the average amplitude to obtain a formant feature value for each frame signal, and calculating robust feature parameters WPTC1-WPTC20 and PMUSIC-MFCC1-PMUSIC-MFCC12; encoding image width information and image length information from the length feature and the resonance strength feature respectively; encoding the main color information from the formant feature; taking the 32 feature parameters as the input of a neural network whose output is the corresponding pattern information, where the output corresponds in sequence to the 23 initials and 24 finals; and fusing the width, length, main color and pattern information into an image displayed on a screen. The method helps deaf-mute people undergoing speech training to establish and improve auditory perception and form correct speech reflexes, so as to recover their speech function.
Owner:BOHAI UNIV
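The feature-to-image mapping described here can be sketched as a simple encoding function. The scaling constants and the argmax over classifier scores are illustrative assumptions, not values from the patent:

```python
# Hedged sketch of the visualisation mapping (scaling constants hypothetical).

def encode_image(length_feat, resonance_feat, formant_feat, pattern_scores):
    """Map acoustic features to image attributes as the abstract describes:
    length feature -> image width, resonance strength -> image length,
    formant feature -> main colour, and neural-network scores over the
    23 initials + 24 finals -> pattern index."""
    width = int(length_feat * 10)        # image width from frame count
    height = int(resonance_feat * 100)   # image length from resonance strength
    hue = int(formant_feat) % 360        # main colour from formant value
    pattern = max(range(len(pattern_scores)), key=pattern_scores.__getitem__)
    return {"width": width, "height": height, "hue": hue, "pattern": pattern}

img = encode_image(length_feat=12, resonance_feat=0.5,
                   formant_feat=800, pattern_scores=[0.1, 0.7, 0.2])
```

In the real method the pattern index would range over 47 classes (23 initials plus 24 finals); the three-score list here is only to keep the example short.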

Vibration feedback system and device for speech rehabilitation

Status: Inactive | Patent: CN108320625A | Benefits: Improve the effect of rehabilitation training; Solve the problem that the rehabilitation teacher must touch the throat by hand to know whether vocal cord vibration occurs | Tags: Speech analysis; Acquiring/recognising facial features; Data display; Speech rehabilitation
The invention relates to a vibration feedback system and device for speech rehabilitation, comprising a video acquisition module, a video processing module, a speech acquisition module, a speech processing module, a synchronous processing module, a data transmission module, a data display module, a vibration acquisition module, a signal processing module and a vibration feedback module. The invention provides an audio and video recognition system; the video acquisition module and the vibration feedback module are added to assist deaf students during hearing-speech rehabilitation training with a visual three-dimensional human-face speech production model and a vocal cord vibration simulation. The vibration feedback module feeds back the vocal cord vibration condition during the speech production of deaf children. The system is thus a rehabilitation system that fuses three-dimensional virtual simulation of the speech production process, vocal cord vibration simulation and vibration information feedback to assist the speech training of deaf students.
Owner:CHANGCHUN UNIV
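The core feedback idea, telling the learner whether the vocal cords are vibrating without a teacher touching the throat, can be sketched by thresholding per-frame signal energy. The threshold and the energy measure are illustrative assumptions:

```python
# Toy sketch of vibration feedback from frame energy (threshold hypothetical).

def vibration_feedback(frames, threshold=0.1):
    """Mark each audio frame as 'vibrating' (vocal cords active) when its
    mean-square energy exceeds a threshold, mimicking the on/off feedback
    the device would give the learner."""
    def energy(frame):
        return sum(x * x for x in frame) / len(frame)
    return [energy(f) > threshold for f in frames]

flags = vibration_feedback([[0.0, 0.0], [0.5, 0.5]])
```

A real device would derive this from the vibration acquisition module's sensor rather than audio energy, and the synchronous processing module would align it with the video and speech streams.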