
Structured memory graph network model for multiple rounds of spoken language understanding

A spoken language understanding network model in the field of human-computer dialogue. It addresses the problems of noise and low computational efficiency that arise in complex scenarios, and reduces the time and space cost of running the model.

Active Publication Date: 2021-01-05
NORTHWEST NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to provide a structured memory graph network model for multi-turn spoken language understanding that solves the problems of low computational efficiency and of noise generated in complex scenarios by models that depend on historical text information, and that reduces the time and space cost of model operation.




Detailed Description of the Embodiments

[0028] The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

[0029] The memory graph network of the present invention is composed of four parts, as shown in figure 2: an input encoding layer, a memory encoding layer, a feature aggregation layer and an output classification layer. Specifically:
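The patent does not give equations at this point, but the feature aggregation layer named above uses a graph attention network over the memory nodes. As a rough illustration only, a single-head GAT-style aggregation step can be sketched as follows; all dimensions, the projection `W`, the attention vector `a`, and the LeakyReLU slope are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_aggregate(utterance_vec, memory_nodes, W, a):
    """Single-head graph-attention aggregation (illustrative sketch).

    utterance_vec: (d,)   encoding of the current utterance (the query node)
    memory_nodes:  (n, d) encodings of the dialogue-act memory nodes
    W:             (d, d_out) shared linear projection
    a:             (2*d_out,) attention scoring vector
    """
    q = utterance_vec @ W                       # project the query node
    keys = memory_nodes @ W                     # project each memory node
    # GAT-style scores: LeakyReLU(a^T [q || k_i]) for each node i
    concat = np.concatenate([np.tile(q, (len(keys), 1)), keys], axis=1)
    raw = concat @ a
    scores = np.where(raw > 0, raw, 0.2 * raw)  # LeakyReLU, slope 0.2
    alpha = softmax(scores)                     # attention weights over nodes
    return alpha @ keys                         # weighted aggregation

rng = np.random.default_rng(0)
d, d_out, n = 8, 4, 3
out = gat_aggregate(rng.normal(size=d), rng.normal(size=(n, d)),
                    rng.normal(size=(d, d_out)), rng.normal(size=2 * d_out))
print(out.shape)  # (4,)
```

A real implementation would use multiple attention heads and learned parameters; the point of the sketch is that the aggregation is a learned attention-weighted sum over the structured memory nodes rather than a recurrent pass over history.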

[0030] Input encoding layer: BERT is used as the encoder of the input encoding layer. BERT is a multi-layer bidirectional Transformer encoder that encodes contextual information well. Because speaker-role information is helpful for multi-turn SLU in complex dialogues, instead of adding the classification marker [CLS] at the starting position as in the standard BERT setup, one of a pair of special markers [USR] or [SYS] is added ([USR] indicates that the current utterance comes from user input, [SYS] indicates that the current utterance is generated by the system). This aims to let the model learn to distinguish whether the current utterance comes from the user or from the system.
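The role-aware input construction described above can be sketched in a few lines. Tokenisation is faked here with whitespace splitting; a real implementation would use a BERT tokenizer with [USR] and [SYS] registered as additional special tokens. The function name and the trailing [SEP] convention are illustrative assumptions:

```python
# Illustrative sketch: the sequence starts with [USR] or [SYS] (marking the
# speaker) instead of BERT's usual [CLS] token.
def build_input(utterance: str, speaker: str) -> list[str]:
    assert speaker in ("user", "system")
    role_token = "[USR]" if speaker == "user" else "[SYS]"
    # Whitespace split stands in for real subword tokenisation.
    return [role_token] + utterance.lower().split() + ["[SEP]"]

print(build_input("Book a flight to Beijing", "user"))
# ['[USR]', 'book', 'a', 'flight', 'to', 'beijing', '[SEP]']
```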


Abstract

The invention discloses a structured memory graph network model for multi-turn spoken language understanding, composed of an input encoding layer, a memory encoding layer, a feature aggregation layer and an output classification layer. Dialogue acts produced by the spoken language understanding task, rather than raw text, are encoded as the memory nodes; a dialogue act is a formatted representation carrying the semantic-frame information, so unstructured text is converted into structured triples. A graph attention network replaces the recurrent neural network and attention mechanism for feature aggregation, preserving the sequence information between dialogue nodes and helping the model learn how to effectively use the structured memory nodes. By encoding dialogue acts instead of historical dialogue text as the memory units, the network model retains the original semantic-frame information to the greatest extent and solves the prior-art problems of noise in complex scenarios and low computational efficiency caused by models that depend on text information.
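The "dialogue act to structured triple" conversion mentioned in the abstract can be illustrated with a minimal sketch. The textual syntax `act(slot=value, ...)` used here is an assumption for illustration; the patent does not fix a concrete serialisation format:

```python
import re

def act_to_triples(dialogue_act: str) -> list[tuple[str, str, str]]:
    """Parse a formatted dialogue act such as inform(city=Beijing, date=Friday)
    into (act, slot, value) triples suitable for use as structured memory nodes.
    """
    m = re.fullmatch(r"(\w+)\((.*)\)", dialogue_act.strip())
    if not m:
        raise ValueError(f"unparsable dialogue act: {dialogue_act!r}")
    act, args = m.groups()
    triples = []
    for pair in filter(None, (p.strip() for p in args.split(","))):
        slot, _, value = pair.partition("=")
        triples.append((act, slot.strip(), value.strip()))
    return triples

print(act_to_triples("inform(city=Beijing, date=Friday)"))
# [('inform', 'city', 'Beijing'), ('inform', 'date', 'Friday')]
```

Storing these triples, rather than the raw utterance text, is what lets the memory encoding layer work with compact semantic-frame information instead of noisy history.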

Description

Technical field

[0001] The invention belongs to the technical field of human-machine dialogue, and relates to a structured memory graph network model for multi-turn spoken language understanding.

Background

[0002] With the rapid development of smart devices, human-computer dialogue has attracted extensive attention from academia and industry in recent years. Task-oriented dialogue technologies have been used in many products, such as Microsoft's Cortana and Apple's intelligent voice assistant Siri. In a task-oriented dialogue system, an important module is Spoken Language Understanding (SLU), which maps the user's natural-language input to a semantic representation with a specific structure, including domain, intent, slots, etc., before it is processed by downstream modules.

[0003] Most previous studies on spoken language understanding focus on single-turn dialogue scenarios. In a single-turn SLU task, ...

Claims


Application Information

IPC(8): G06F16/332, G06F16/901, G06F40/30, G06N3/08
CPC: G06F16/3329, G06F16/9024, G06F40/30, G06N3/084
Inventors: 张志昌, 于沛霖, 庞雅丽, 曾扬扬
Owner NORTHWEST NORMAL UNIVERSITY