
Voice recognition method and device and storage medium

A speech recognition and speech recognition model technology, applied in speech recognition, speech analysis, biological neural network models, etc. It addresses problems such as the lack of accurate data support in manual model design, insufficient accuracy and reliability of speech recognition, and unreliable voice wake-up.

Pending Publication Date: 2020-11-20
BEIJING XIAOMI PINECONE ELECTRONICS CO LTD

AI Technical Summary

Problems solved by technology

A neural network model is generally composed of multiple sub-modules. In the related art, each sub-module of the neural network model is selected manually. Because manual selection lacks accurate data support, the resulting network model performs poorly.
Moreover, neural network models in the related art generally reuse models from the computer vision field, which are not well suited to speech recognition.
As a result, the accuracy and reliability of speech recognition in the related art are insufficient. For example, when a user tries to wake up a terminal device by voice, inaccurate recognition of the spoken command may prevent the device from being woken up promptly, failing to meet the user's needs.




Embodiment Construction

[0038] Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numerals in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with aspects of the present disclosure as recited in the appended claims.

[0039] Figure 1 is a flow chart of a speech recognition method according to an exemplary embodiment. As shown in Figure 1, the method may include steps S101 and S102.

[0040] In S101, when voice information is received, the voice information is input into the generated voice recognition model.

[0041] In S102, a recognition result is output through the speech recognition mode...
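The two steps S101 and S102 can be sketched as follows. This is a purely illustrative sketch: the patent does not specify a model interface, so `ToyModel`, `preprocess`, `recognize`, and `on_voice_received` are all hypothetical names, and the keyword check stands in for a real trained recognition model.

```python
# Hypothetical sketch of S101/S102; all names are illustrative,
# not from the patent.

class ToyModel:
    """Stand-in for the generated speech recognition model."""

    def preprocess(self, voice_info):
        # Toy feature extraction: lowercase and tokenize
        return voice_info.lower().split()

    def recognize(self, features):
        # Toy keyword spotting: detect a wake word
        return "wake" if "hello" in features else "no-match"


class SpeechRecognizer:
    def __init__(self, model):
        self.model = model  # the generated voice recognition model

    def on_voice_received(self, voice_info):
        # S101: input the received voice information into the model
        features = self.model.preprocess(voice_info)
        # S102: output a recognition result through the model
        return self.model.recognize(features)


recognizer = SpeechRecognizer(ToyModel())
result = recognizer.on_voice_received("Hello device")
```

In a real system the model would operate on audio features (e.g. filterbank frames) rather than text, but the control flow mirrors the two claimed steps.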



Abstract

The invention relates to a voice recognition method and device and a storage medium. The method comprises: when voice information is received, inputting the voice information into a generated voice recognition model; and outputting a recognition result through the voice recognition model. The method for generating the voice recognition model comprises: training a super network, wherein the super network comprises multiple network layers, each network layer comprises M substructures, at least one of the M substructures comprises a time-sequence convolution network module, and M is a positive integer greater than or equal to 2; determining, from the M substructures of each network layer, a target substructure corresponding to that layer according to a training result; and generating the voice recognition model according to the target substructure corresponding to each network layer. Through this technical scheme, the performance of the voice recognition model is improved, the accuracy of voice recognition is ensured, and the recognition speed and response speed for voice information are improved.
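The model-generation procedure in the abstract (a super network of N layers, each holding M candidate substructures, with one target substructure chosen per layer after training) can be sketched as below. This is a hedged sketch only: the patent does not disclose how the "training result" scores candidates, so the `score_fn` here is an illustrative placeholder, and all function names are assumptions.

```python
# Illustrative sketch of super-network construction and per-layer
# target selection; the scoring metric is NOT specified by the
# patent and is a placeholder assumption here.

def build_supernet(num_layers, m):
    """Each layer holds M candidate substructures (M >= 2); per the
    abstract, at least one candidate is a time-sequence (temporal)
    convolution module."""
    assert m >= 2
    layers = []
    for _ in range(num_layers):
        candidates = [f"sub_{i}" for i in range(m)]
        candidates[0] = "temporal_conv"  # ensure a temporal-conv option
        layers.append(candidates)
    return layers


def select_targets(supernet, score_fn):
    """Pick the target substructure for each layer from its M
    candidates according to a (hypothetical) training score."""
    return [max(layer, key=score_fn) for layer in supernet]


supernet = build_supernet(num_layers=4, m=3)
# Placeholder score: prefer longer names, just to make the demo run.
model = select_targets(supernet, score_fn=len)
```

The chain of selected targets, one per layer, then forms the generated voice recognition model.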

Description

Technical Field

[0001] The present disclosure relates to the field of voice recognition, and in particular to a voice recognition method, device and storage medium.

Background

[0002] Speech recognition can be simply explained as the recognition of speech or sound signals, and it is widely used in many fields. For example, when starting a terminal device, the user can wake it up by speaking a short phrase, without pressing a switch or using fingerprint recognition; this way of starting the device is convenient and fast. In voice wake-up, the device is activated from the dormant state to the running state by detecting voice keywords, so the response speed and accuracy of voice wake-up directly affect the user's experience of the device.

[0003] Currently, speech recognition is usually performed through a neural network model, such as an end-to-end neural network recognition model. A neural network model is generally composed of mul...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L15/22, G06N3/04, G10L15/06, G10L15/16
CPC: G10L15/22, G10L15/063, G10L15/16, G06N3/045
Inventor: 张勃, 初祥祥, 李庆源
Owner: BEIJING XIAOMI PINECONE ELECTRONICS CO LTD