
Voice processing model training method and device, equipment and storage medium

A speech-processing model training technology, applied in the computer field, that addresses the problems of unsatisfactory translation results, low training efficiency, and the large amount of training data required, so as to improve recognition and translation speed, improve translation quality, and improve training efficiency.

Pending Publication Date: 2021-09-07
PING AN TECH (SHENZHEN) CO LTD

AI Technical Summary

Problems solved by technology

[0002] At present, most speech translation technologies first transcribe speech into text through automatic speech recognition (ASR) and then translate the transcribed text into the required target text through machine translation, so both an ASR model for transcription and a neural machine translation (NMT) model for translation are needed. Training these models requires a large amount of data and the training efficiency is not high; moreover, when the ASR model's transcription is not accurate enough, the translated output contains larger errors, so the speech-to-text translation results fall short of expectations.
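The cascaded pipeline criticised here can be summarised in a short sketch. This is a minimal illustration only; asr_model, nmt_model and their transcribe/translate methods are hypothetical placeholders, not APIs from this application or from any particular library.

```python
def cascaded_speech_translation(audio, asr_model, nmt_model):
    """Conventional two-stage speech translation: ASR transcription, then NMT.

    The two models are trained separately, each on its own large corpus, and any
    transcription error made by the ASR model is handed to the NMT model unchanged,
    so recognition mistakes compound into translation mistakes.
    """
    source_text = asr_model.transcribe(audio)       # step 1: speech -> source-language text
    target_text = nmt_model.translate(source_text)  # step 2: source text -> target-language text
    return target_text
```

The application's approach instead places the recognition and translation sub-models inside one voice processing model trained with a single loss, as summarised in the abstract below.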




Detailed Description of the Embodiments

[0031] The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments in this application, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the scope of protection of this application.

[0032] The flow charts shown in the drawings are only illustrations; they do not necessarily include all contents and operations/steps, nor must the operations/steps be performed in the order described. For example, some operations/steps can be decomposed, combined, or partly combined, so the actual order of execution may change according to the actual situation.

[0033] Embodiments of the present application provide a speech processing model training method, device, computer equipment, and computer-readable storage medium...



Abstract

The invention provides a voice processing model training method and device, equipment, and a computer-readable storage medium. The method comprises the steps of: obtaining sample data comprising voice of a source language and a target language sample text corresponding to that voice; inputting the voice of the source language into a voice recognition sub-model of a voice processing model to obtain a source language text; inputting the source language text into the word database of the voice processing model for traversal to obtain a word vector corresponding to the source language text; inputting the word vector and the target language sample text into the machine translation sub-model of the voice processing model to obtain a target language translated text; based on a preset loss function, calculating a loss value of the voice processing model according to the target language translated text and the target language sample text; and performing parameter adjustment on the voice processing model according to the loss value to obtain a trained voice processing model. According to the invention, the amount of training data needed by the model can be reduced and the training efficiency improved. The invention further relates to blockchain technology.
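Read as pseudocode, the abstract describes a single training loop over (source-language speech, target-language sample text) pairs. The PyTorch-style sketch below is a minimal illustration under several assumptions: the sub-module names, the modelling of the word database as an embedding lookup, and the use of cross-entropy as the "preset loss function" are hypothetical placeholders that the abstract does not fix.

```python
import torch
import torch.nn as nn


class SpeechProcessingModel(nn.Module):
    """Hypothetical composite model following the abstract: a speech recognition
    sub-model, a word database (modelled here as an embedding lookup), and a
    machine translation sub-model."""

    def __init__(self, asr_submodel, mt_submodel, vocab_size, embed_dim):
        super().__init__()
        self.speech_recognition = asr_submodel                    # speech -> source-language token ids
        self.word_database = nn.Embedding(vocab_size, embed_dim)  # token ids -> word vectors
        self.machine_translation = mt_submodel                    # word vectors + target text -> logits

    def forward(self, source_speech, target_sample_text):
        source_token_ids = self.speech_recognition(source_speech)  # source-language text (as token ids)
        word_vectors = self.word_database(source_token_ids)        # "traversal" of the word database
        # In this simplified sketch, gradients reach only the embedding and the
        # translation sub-model; the abstract does not specify how the ASR output
        # is kept differentiable.
        return self.machine_translation(word_vectors, target_sample_text)


def train_step(model, optimizer, source_speech, target_sample_text, pad_id=0):
    """One parameter update: compute the loss between the translated text and the
    target-language sample text, then adjust the model parameters. Cross-entropy
    is only an assumed stand-in for the 'preset loss function'."""
    criterion = nn.CrossEntropyLoss(ignore_index=pad_id)
    target_logits = model(source_speech, target_sample_text)       # (batch, seq_len, vocab)
    loss = criterion(target_logits.reshape(-1, target_logits.size(-1)),
                     target_sample_text.reshape(-1))
    optimizer.zero_grad()
    loss.backward()   # parameter adjustment according to the loss value
    optimizer.step()
    return loss.item()
```

A training run would call train_step once per batch of sample data with an optimizer such as torch.optim.Adam over model.parameters(); the parameters remaining after the loop constitute the trained voice processing model.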

Description

Technical field

[0001] The present application relates to the field of computer technology, and in particular to a training method, device, equipment, and computer-readable storage medium for a speech processing model.

Background technique

[0002] At present, most speech translation technologies first transcribe speech into text through automatic speech recognition (ASR) and then translate the transcribed text into the required target text through machine translation, so both an ASR model for transcription and a neural machine translation (NMT) model for translation are needed. Training these models requires a large amount of data and the training efficiency is not high; moreover, when the ASR model's transcription is not accurate enough, the translated output contains larger errors, so the speech-to-text translation results fall short of expectations.

Contents of the invention

[0003] T...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L15/06; G10L15/26; G06K9/62; G06N3/04; G06N3/08
CPC: G10L15/063; G10L15/26; G06N3/08; G06N3/045; G06F18/2415
Inventors: 陈霖捷, 王健宗, 黄章成
Owner: PING AN TECH (SHENZHEN) CO LTD