Gold-medal speech recommendation method and device based on multivariate semantic fusion
A speech recommendation method and technology applied to semantic analysis, neural learning methods, and natural language data processing. It addresses the problems that existing approaches ignore customer portraits and historical dialogue data, which limits the accuracy of speech recommendation and makes intelligent speech navigation difficult to realize effectively.
Examples
Embodiment 1
[0052] As shown in Figure 1 and Figure 2, the present invention provides a gold-medal speech recommendation method based on multivariate semantic fusion, comprising:
[0053] Step 1: perform word segmentation and word-vector initialization on the historical dialogue, the current-round user question, and the user attributes;
[0054] Step 2: based on the hierarchical semantic encoding mechanism and the user attribute encoding mechanism, perform dialogue semantic encoding and user attribute semantic encoding on the initialization results to obtain the corresponding semantic representations;
[0055] Step 3: fuse the encoding results to obtain a fused semantic representation, and match it against the semantic representation of each speech in the gold-medal speech database to obtain the speech recommendation result.
[0056] In this embodiment, the historical dialogue consists of historical questions and the corresponding answer records; the user attributes are ...
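The matching in Step 3 above can be sketched as nearest-neighbor retrieval over the speech database. A minimal sketch, assuming the fused representation and each gold-medal speech representation are fixed-length vectors and cosine similarity is the matching score (the patent excerpt does not specify the metric):

```python
import numpy as np

def recommend_speech(fused_vec, speech_vecs, top_k=3):
    """Rank candidate speeches by cosine similarity to the fused semantic vector.

    fused_vec:   (d,)   fused representation of dialogue + user attributes
    speech_vecs: (n, d) representations of the gold-medal speech database
    """
    fused = fused_vec / np.linalg.norm(fused_vec)
    speeches = speech_vecs / np.linalg.norm(speech_vecs, axis=1, keepdims=True)
    scores = speeches @ fused           # cosine similarity per candidate speech
    order = np.argsort(-scores)         # highest similarity first
    return order[:top_k], scores[order[:top_k]]

# toy example: 4 candidate speeches in a 3-dimensional semantic space
db = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.7, 0.7, 0.0],
               [0.0, 0.0, 1.0]])
idx, sc = recommend_speech(np.array([1.0, 1.0, 0.0]), db, top_k=2)
print(idx)  # candidate 2 ranks first: it points in the same direction as the query
```

In practice the speech-database vectors would be pre-computed offline with the same encoder, so only the fused query vector and one matrix-vector product are needed per recommendation.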
Embodiment 2
[0059] On the basis of Embodiment 1, said Step 1, performing word segmentation and word-vector initialization on the historical dialogue, the current-round user question, and the user attributes, includes:
[0060] based on a preset word segmentation toolkit, taking the historical dialogue, the current-round user question, and the user attributes as input text and segmenting them to obtain the corresponding word sequences;
[0061] initializing the embedding representation of each word in a word sequence with pre-trained word vectors.
[0062] In this embodiment, the input historical dialogue is S = {s_1, s_2, ..., s_{t-1}}, where s_i denotes the utterance of the i-th round. In this step, word segmentation is applied to each round's utterance to obtain the corresponding word sequence s_i = (w_{i,1}, w_{i,2}, ..., w_{i,|s_i|}), where |s_i| denotes the length of the current utterance.
[0063] The current user question s_t is input, and its corresponding word sequence s_t = (w_{t,1}, w_{t,2}, ..., w_{t,|s_t|}) is obtained.
[0064]...
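The segmentation and initialization of Embodiment 2 can be sketched as follows. The excerpt does not name the word segmentation toolkit or the pre-trained vectors, so a trivial whitespace tokenizer and a toy embedding table stand in for them here; out-of-vocabulary words get a random initialization, a common fallback:

```python
import numpy as np

# stand-in for the preset word segmentation toolkit (hypothetical)
def segment(text):
    return text.split()

# stand-in for a pre-trained word-vector table (hypothetical, dim = 4)
pretrained = {"hello": np.ones(4), "world": np.zeros(4)}
rng = np.random.default_rng(0)

def embed_sequence(text, dim=4):
    """Segment the input text and initialize one embedding per word."""
    words = segment(text)
    emb = np.stack([
        pretrained[w] if w in pretrained else rng.standard_normal(dim)
        for w in words
    ])
    return words, emb

words, emb = embed_sequence("hello world unseen")
print(words)       # ['hello', 'world', 'unseen']
print(emb.shape)   # (3, 4): one row per word in the sequence
```

The same routine would be applied to each historical utterance s_i, the current question s_t, and the user-attribute text, yielding the word-embedding sequences consumed by Step 2.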
Embodiment 3
[0073] On the basis of Embodiment 1, as shown in Figure 3 and Figure 4, said Step 2, performing dialogue semantic encoding and user attribute semantic encoding on the initialization results based on the hierarchical semantic encoding mechanism and the user attribute encoding mechanism, includes:
[0074] processing the word-embedding sequences of the dialogue with the hierarchical semantic encoding mechanism to obtain the hidden semantic representations of the historical dialogue and the current utterance, and generating the dialogue semantic encoding;
[0075] processing the word-embedding sequence of the user attributes with the user attribute encoding mechanism to obtain the hidden semantic representation of the user-attribute information, and generating the user attribute semantic encoding.
[0076] In this embodiment, the dialogue semantics is the semantics of the word-embedding sequences of the dialogue text; the user attribute semantics ...
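The hierarchical encoding of Embodiment 3 can be sketched in two passes: a word-level pass that turns each utterance's embeddings into an utterance vector, then an utterance-level pass over those vectors. A minimal numpy sketch, in which a plain tanh recurrent cell stands in for whatever recurrent encoder (e.g. a GRU) the full patent specifies; all weights here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def rnn_encode(seq, W, U, b):
    """Encode a (T, d_in) sequence into its final (d_h,) hidden state
    with a plain tanh RNN cell (stand-in for the actual recurrent encoder)."""
    h = np.zeros(U.shape[0])
    for x in seq:
        h = np.tanh(W @ x + U @ h + b)
    return h

d_in, d_h = 4, 6
W = rng.standard_normal((d_h, d_in)) * 0.1   # word-level input weights
U = rng.standard_normal((d_h, d_h)) * 0.1    # word-level recurrent weights
b = np.zeros(d_h)

# hierarchical pass: word-level encoding per utterance, then utterance-level
dialogue = [rng.standard_normal((5, d_in)),  # utterance 1: 5 word embeddings
            rng.standard_normal((3, d_in))]  # utterance 2: 3 word embeddings
utterance_vecs = np.stack([rnn_encode(u, W, U, b) for u in dialogue])

W2 = rng.standard_normal((d_h, d_h)) * 0.1   # utterance-level weights
U2 = rng.standard_normal((d_h, d_h)) * 0.1
dialogue_vec = rnn_encode(utterance_vecs, W2, U2, np.zeros(d_h))

# user attribute encoding: the same kind of cell over attribute embeddings
attrs = rng.standard_normal((2, d_in))
attr_vec = rnn_encode(attrs, W, U, b)

print(dialogue_vec.shape, attr_vec.shape)  # (6,) (6,)
```

The two resulting vectors, the dialogue semantic encoding and the user attribute semantic encoding, are the inputs that Step 3 fuses before matching against the speech database.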



