
Task model training method, device and equipment

A task model training technology applied in the field of information processing, which solves problems such as limited functionality, insufficient flexibility, and inability to meet users' individual needs, thereby enriching functions, improving flexibility, and improving the user experience.

Active Publication Date: 2019-11-19
杭州蓦然认知科技有限公司

AI Technical Summary

Problems solved by technology

However, existing voice assistants are usually loaded into terminal products by device manufacturers and suffer from limited functionality, insufficient flexibility, and an inability to meet users' individual needs. To solve this problem, the present invention provides a task model training method, device, and equipment for a voice assistant: according to a source file selected by the user, a knowledge graph is extracted and used to train a task model for the user's specific needs or specific activities.

Method used

figure 2 is a flowchart of the task model training method of the voice assistant; image 3 shows the structure of the task model training device in the voice assistant.


Examples


Embodiment 1

[0041] In this embodiment, the function of each module in the voice assistant of the present invention is described with reference to a user's travel needs. The specific process described below is only intended to illustrate how the functions of each module are implemented and should not be regarded as limiting those modules.

[0042] The user sends a voice command to the voice assistant through the man-machine interface 101 of the voice assistant: please give a suggestion for a three-day trip to Hangzhou during the Qingming Festival. Depending on the user's habits and working environment, the instruction may also be a text instruction, a gesture instruction, or an image instruction. If the user sends a voice command, the man-machine interface 101 of the voice assistant sends the received voice command to the voice recognition module 102 for voice recognition, and sends the text information that is ...
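Under stated assumptions, the following minimal Python sketch illustrates this routing: an interface object forwards voice instructions to a recognition object for transcription, while text instructions are passed on directly. The class and method names (HumanMachineInterface, VoiceRecognitionModule, receive, transcribe) are hypothetical stand-ins for modules 101 and 102, not the patent's actual implementation.

```python
# Illustrative sketch only: routing a user instruction from the
# human-machine interface (101) to voice recognition (102).
from dataclasses import dataclass


@dataclass
class Instruction:
    modality: str   # "voice", "text", "gesture", or "image"
    payload: bytes  # raw audio/image bytes, or UTF-8 encoded text


class VoiceRecognitionModule:
    """Stand-in for module 102: turns audio into text."""

    def transcribe(self, audio: bytes) -> str:
        # A real system would call an ASR engine here.
        return ("Please give a suggestion for a three-day trip to "
                "Hangzhou during the Qingming Festival")


class HumanMachineInterface:
    """Stand-in for interface 101: routes instructions by modality."""

    def __init__(self, recognizer: VoiceRecognitionModule):
        self.recognizer = recognizer

    def receive(self, instruction: Instruction) -> str:
        if instruction.modality == "voice":
            # Voice commands go to the recognition module, which returns
            # text for downstream semantic understanding.
            return self.recognizer.transcribe(instruction.payload)
        # Text instructions (and, after their own decoding, gesture or
        # image instructions) are passed on directly.
        return instruction.payload.decode("utf-8")


if __name__ == "__main__":
    hmi = HumanMachineInterface(VoiceRecognitionModule())
    print(hmi.receive(Instruction(modality="voice", payload=b"<raw audio>")))
```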

Embodiment 2

[0048] See figure 2, which is a flowchart of the task model training method of the voice assistant. The task model training method for the voice assistant of the present invention comprises the following steps:

[0049] Step 1. The voice assistant receives the task instruction sent by the user, executes the task instruction and obtains the execution result;

[0050] In this step, the voice assistant receives the voice command from the user through the human-computer interaction interface, recognizes the voice command and performs semantic understanding on it, generates an executable task command according to the semantic understanding result, and executes the task command to obtain the execution result. Preferably, the voice assistant further screens the recommended execution results according to parameters such as the user's personal data and historical behavior.
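As a rough illustration of Step 1 (not the patented implementation), the sketch below assumes pluggable understand() and execute() callables standing in for the assistant's semantic-understanding and task-execution components, and screens the results with the user's personal data and historical behavior via a simple tag-preference filter:

```python
# Step 1 sketch under assumed helper names: recognize/understand the command,
# execute the resulting task instruction, then filter results for this user.
from typing import Callable


def run_task_instruction(command_text: str,
                         understand: Callable[[str], dict],
                         execute: Callable[[dict], list[dict]],
                         user_profile: dict) -> list[dict]:
    semantics = understand(command_text)   # semantic understanding result
    results = execute(semantics)           # executable task command is run

    # Screen recommendations using personal data / historical behavior
    # (a simple tag-preference match as an illustrative stand-in).
    liked = set(user_profile.get("preferred_tags", []))
    return [r for r in results if not liked or liked & set(r.get("tags", []))]


if __name__ == "__main__":
    results = run_task_instruction(
        "three-day trip to Hangzhou during the Qingming Festival",
        understand=lambda text: {"intent": "travel_plan", "query": text},
        execute=lambda sem: [{"title": "West Lake walk", "tags": ["scenery"]},
                             {"title": "Night market", "tags": ["food"]}],
        user_profile={"preferred_tags": ["scenery"]},
    )
    print(results)
```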

[0051] Step 2. Capture the relevant knowledge graph according to the execution result;

[0052] In this step, th...
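Although the explanation of this step is cut off here, the abstract states that a relevant knowledge graph is captured according to the execution result. One possible filtering heuristic, given purely as an assumption for illustration rather than the patent's method, is to keep only the triples whose subject is mentioned in the execution result:

```python
# Hypothetical Step 2 illustration: select knowledge-graph triples
# that are relevant to the text of the execution result.
def grab_relevant_subgraph(execution_result: str,
                           triples: list[tuple[str, str, str]]
                           ) -> list[tuple[str, str, str]]:
    """Return (subject, relation, object) triples mentioned in the result."""
    return [t for t in triples if t[0].lower() in execution_result.lower()]


if __name__ == "__main__":
    triples = [
        ("West Lake", "located_in", "Hangzhou"),
        ("Great Wall", "located_in", "Beijing"),
    ]
    print(grab_relevant_subgraph("Top sights: West Lake, Lingyin Temple", triples))
```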

Embodiment 3

[0086] See image 3, which shows the structure of the task model training device in the voice assistant involved in this embodiment; the device can implement the task model training method of the second embodiment. The task model training device includes: a data source receiving module, used to receive the data source selected by the user. The data source can be input directly to the voice assistant by the user through the human-computer interaction interface and then sent to the data source receiving module, or the voice assistant can execute the user's task instruction to obtain an execution result from which the user selects the data source.

[0087] The knowledge graph generation module is used to capture a relevant knowledge graph according to the selected data source;

[0088] A general intention generation module is used to obtain general intentions according to task types;

[0089] The real ...
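A structural sketch, under assumed names, of how the modules named above might be composed into a task model training device; each module is a small callable stub and the wiring simply mirrors the description in this embodiment, not the patent's code:

```python
# Hypothetical composition of the task model training device from
# the modules described in this embodiment.
from typing import Callable


class TaskModelTrainingDevice:
    def __init__(self,
                 receive_data_source: Callable[[str], str],
                 generate_knowledge_graph: Callable[[str], dict],
                 generate_general_intention: Callable[[str], str]):
        # Data source receiving module: gets the user-selected data source,
        # entered directly or chosen from an execution result.
        self.receive_data_source = receive_data_source
        # Knowledge graph generation module: captures a knowledge graph
        # relevant to the selected data source.
        self.generate_knowledge_graph = generate_knowledge_graph
        # General intention generation module: maps a task type to a
        # general (universal) intention.
        self.generate_general_intention = generate_general_intention

    def run(self, user_selection: str, task_type: str) -> tuple[dict, str]:
        source = self.receive_data_source(user_selection)
        kg = self.generate_knowledge_graph(source)
        intention = self.generate_general_intention(task_type)
        return kg, intention


if __name__ == "__main__":
    device = TaskModelTrainingDevice(
        receive_data_source=lambda s: s,
        generate_knowledge_graph=lambda src: {"source": src},
        generate_general_intention=lambda t: f"plan_{t}",
    )
    print(device.run("Hangzhou travel guide", "travel"))
```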



Abstract

An embodiment of the invention discloses a task model training method applied to a voice assistant. The method comprises the following steps: step 1, the voice assistant receives a task instruction sent by a user and executes the task instruction to obtain an execution result; step 2, a relevant knowledge graph is captured according to the execution result; step 3, a task type is obtained according to the task instruction, and a universal intention is obtained according to the task type; step 4, the universal intention is corrected according to the knowledge graph and/or a multi-round dialogue to obtain the real intention of the user; step 5, a slot position is generated according to the real intention of the user; step 6, the slot position is filled according to the knowledge graph and the multi-round dialogue; and step 7, a callable task model is generated according to the slot position, the task model is stored in the voice assistant, and the task model is triggered automatically according to the condition. According to the method, the flexibility of the voice assistant is improved, the special requirements of the user in specific activities can be met, and the user experience is improved.
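For orientation, here is a compact, hypothetical Python sketch of the seven steps. Every helper and all of the simple stand-in logic (keyword-based task typing, fixed slot names, substring-based knowledge graph filtering) are assumptions made for illustration only, not the patented implementation:

```python
# Hypothetical end-to-end sketch of steps 1-7 from the abstract.
from dataclasses import dataclass


@dataclass
class TaskModel:
    """A callable task model: an intention, its slots, and a trigger."""
    intent: str
    slots: dict[str, str]
    trigger: str  # e.g. a date/event that auto-triggers the stored model

    def __call__(self) -> str:
        return f"Executing '{self.intent}' with slots {self.slots}"


def train_task_model(instruction: str, knowledge_graph: dict[str, str],
                     dialogue_answers: dict[str, str]) -> TaskModel:
    # Step 1: execute the task instruction and obtain an execution result
    # (stubbed here as a plain string).
    execution_result = f"results for: {instruction}"

    # Step 2: capture a relevant knowledge graph according to the execution
    # result (here: keep entries mentioned in the result text).
    kg = {k: v for k, v in knowledge_graph.items()
          if k in execution_result or v in execution_result}

    # Step 3: derive the task type and a universal intention from it.
    task_type = "travel" if "trip" in instruction else "general"
    universal_intention = f"plan_{task_type}"

    # Step 4: correct the universal intention with the knowledge graph
    # and/or multi-round dialogue to get the user's real intention.
    real_intention = dialogue_answers.get("intent", universal_intention)

    # Step 5: generate slots from the real intention.
    slots = {name: "" for name in ("destination", "date", "duration")}

    # Step 6: fill the slots from the knowledge graph and the dialogue.
    for name in slots:
        slots[name] = kg.get(name) or dialogue_answers.get(name, "")

    # Step 7: generate a callable task model; the voice assistant stores it
    # and triggers it automatically when its condition is met.
    return TaskModel(intent=real_intention, slots=slots,
                     trigger=slots.get("date", ""))


if __name__ == "__main__":
    model = train_task_model(
        "three-day trip to Hangzhou",
        knowledge_graph={"destination": "Hangzhou", "duration": "3 days"},
        dialogue_answers={"date": "Qingming Festival"},
    )
    print(model())
```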

Description

Technical field

[0001] The embodiments of the present invention relate to the technical field of information processing, and in particular to a task model training method, device, equipment, and computer-readable storage medium.

Background technique

[0002] With the development of artificial intelligence technology, a large number of applications and software have emerged that use voice recognition technology to help users meet their needs. The voice assistant is one such intelligent application: it realizes human-computer interaction through intelligent dialogue and instant question and answer, helping users solve problems such as querying information, ordering, and navigation. However, existing voice assistants are usually loaded into terminal products by device manufacturers and suffer from limited functionality, insufficient flexibility, and an inability to meet users' individual needs. In order to ...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L15/06; G06F16/9535; G06F9/48
CPC: G10L15/063; G06F9/4881; G06F16/9535; G10L2015/0631; G10L2015/0638
Inventors: 常凌, 赵晓朝, 袁志伟
Owner: 杭州蓦然认知科技有限公司