
An Adversarial Multi-Task Training Method for Spoken Language Understanding

A spoken language understanding technology and training method, in the field of adversarial multi-task training for spoken language understanding, which addresses the problem that in-domain labeled data is time-consuming and difficult to obtain, achieving the effects of reducing cost and avoiding heavy dependence on labeled data.

Active Publication Date: 2021-11-23
AISPEECH CO LTD

AI Technical Summary

Problems solved by technology

However, because data annotation is labor-intensive and time-consuming, it is difficult to obtain sufficient in-domain labeled data for training.




Embodiment Construction

[0016] To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

[0017] It should be noted that, provided there is no conflict, the embodiments of the present application and the features of those embodiments may be combined with one another.

[0018] The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program mo...



Abstract

The invention discloses an adversarial multi-task training method for spoken language understanding. Samples are drawn from both unlabeled and labeled data to train and update a language model and a shared space, and the first shared features obtained from the shared space are labeled as belonging to the language-model task in order to train and update a task discriminator and the shared space. Samples are then drawn from the labeled data to train and update a spoken language understanding model and the shared space, and the second shared features obtained from the shared space are labeled as belonging to the spoken-language-understanding task in order to train and update the task discriminator and the shared space. This adversarial multi-task training method can train the spoken language understanding model on unlabeled and labeled data simultaneously, avoiding the heavy dependence of traditional training methods on labeled data and reducing the cost overhead of large-scale annotation.
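The adversarial interplay described above, between a task discriminator that tries to tell which task a shared feature came from and a shared space that tries to make the tasks indistinguishable, is commonly implemented with a gradient-reversal trick. The following is a minimal NumPy sketch under that assumption; the linear encoder, logistic discriminator, dimensions, and all names are hypothetical illustrations, not the patent's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_SH = 8, 4  # input and shared-space dimensions (arbitrary for illustration)

W = 0.1 * rng.normal(size=(D_SH, D_IN))  # shared-space encoder, used by all tasks
v = 0.1 * rng.normal(size=D_SH)          # task discriminator: P(task = SLU | feature)

def encode(x):
    """Map an input vector into the shared feature space."""
    return np.tanh(W @ x)

def discriminate(h):
    """Sigmoid probability that a shared feature came from the SLU task."""
    return 1.0 / (1.0 + np.exp(-(v @ h)))

def adversarial_grads(x, task_label, lam=1.0):
    """Binary cross-entropy gradients for the task discriminator.

    The discriminator weights receive the ordinary gradient, while the
    shared encoder receives a *reversed* gradient (scaled by -lam), so the
    encoder is pushed to produce task-invariant shared features.
    """
    h = encode(x)
    p = discriminate(h)
    err = p - task_label                 # dL/dlogit for sigmoid + BCE
    g_v = err * h                        # ordinary gradient: learn to separate tasks
    g_W_plain = np.outer(err * v * (1.0 - h**2), x)  # chain rule through tanh
    return g_v, -lam * g_W_plain, g_W_plain          # reversed encoder gradient

# One alternating adversarial step, labeling features 0 = LM task, 1 = SLU task.
lr = 0.05
for task_label in (0.0, 1.0):            # LM-task batch, then SLU-task batch
    x = rng.normal(size=D_IN)            # stand-in for an encoded utterance batch
    g_v, g_W_rev, _ = adversarial_grads(x, task_label, lam=0.5)
    v -= lr * g_v                        # discriminator gets better at telling tasks apart
    W -= lr * g_W_rev                    # shared space moves *against* the discriminator
```

The key design point is the sign flip: the same discriminator loss is descended by the discriminator and ascended by the shared encoder, which is what makes the shared features usable by both the language-model and SLU heads.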

Description

technical field

[0001] The invention relates to the technical field of artificial intelligence, and in particular to an adversarial multi-task training method for spoken language understanding.

background technique

[0002] The spoken language understanding (SLU) module is a key component of a goal-oriented spoken dialogue system (SDS); it parses a user's utterance into corresponding semantic concepts. For example, the sentence "show me flights from Boston to New York" can be parsed as (departure city = Boston, arrival city = New York). SLU is generally treated as a slot filling task, assigning each word in an utterance a predefined semantic slot label.

[0003] Recent research on statistical slot filling for SLU has focused on recurrent neural networks (RNNs) and their extensions, such as long short-term memory (LSTM) networks and encoder-decoder models. These traditional methods require a large amount of labele...
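The slot filling formulation in paragraph [0002] assigns one label per word. A common labeling convention for this is the BIO scheme (Begin/Inside/Outside), and the flight example above can be illustrated with a toy rule-based tagger; the trigger-word rules and slot names here are hypothetical stand-ins for what the patent's neural model would learn from data.

```python
def slot_fill(tokens):
    """Toy rule-based slot filler illustrating per-word BIO slot labels.

    Illustrative only: "from"/"to" trigger departure/arrival city slots, and
    multi-word city names are continued with I- tags.
    """
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        if tokens[i] in ("from", "to") and i + 1 < len(tokens):
            slot = "departure_city" if tokens[i] == "from" else "arrival_city"
            tags[i + 1] = f"B-{slot}"        # first word of the city name
            j = i + 2
            while j < len(tokens) and tokens[j] not in ("from", "to"):
                tags[j] = f"I-{slot}"        # continuation of a multi-word city
                j += 1
            i = j
        else:
            i += 1
    return tags

tokens = "show me flights from boston to new york".split()
print(list(zip(tokens, slot_fill(tokens))))
```

Note how "new york" receives a B- tag followed by an I- tag, which is exactly the structure a sequence model such as an RNN or LSTM is trained to predict word by word.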

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F40/30
CPC: G06F40/30
Inventors: 俞凯, 兰鸥羽, 朱苏
Owner: AISPEECH CO LTD