Voice interaction method and system, storage medium and electronic equipment

A voice interaction and voice information technology, applied in voice analysis, voice recognition, and instruments. It addresses the problems that users have widely varying expression styles, that semantic understanding processing is insufficient, and that user intentions cannot be accurately understood, achieving the effect of accurate recognition.

Pending Publication Date: 2021-12-17
GREE ELECTRIC APPLIANCES INC +1

AI Technical Summary

Problems solved by technology

At present, typical semantic understanding models are built on general speech training models. However, because users differ in region, age and personality, their expression styles vary greatly, and so do the sentence patterns and structures used to express semantic intention. As a result, most semantic understanding processing is not accurate enough to correctly understand the user's intent.



Examples


Embodiment 1

[0049] According to an embodiment of the present invention, a voice interaction method is provided. Figure 1 shows a schematic flowchart of the voice interaction method proposed in Embodiment 1 of the present invention. As shown in Figure 1, the voice interaction method may include Step 110 to Step 160.

[0050] In step 110, voice information is acquired.

[0051] Here, the voice information refers to the voice dialogue between the user and the smart device. For example, if the user interacts with an air conditioner and utters the voice "help me check tomorrow's weather", then "help me check tomorrow's weather" serves as the voice information. The smart device may be a voice-enabled air conditioner, refrigerator, TV, range hood, or other smart device.

[0052] In step 120, the characteristic information of the speaker who sends out the voice information is determined; wherein, the characteristic information can be used to characterize the group category to which the speaker belongs.
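Steps 110 and 120 (acquiring voice information and determining the speaker's characteristic information) can be sketched as follows. This is a minimal illustration only: the patent does not specify which features are used, so the pitch/speech-rate features, the thresholds, and the group categories below are invented assumptions.

```python
# Hypothetical sketch of steps 110-120: map coarse voice features of the
# acquired speech to a speaker group category. Features, thresholds and
# category names are illustrative assumptions, not values from the patent.

def estimate_group_category(mean_pitch_hz: float, speech_rate_wps: float) -> str:
    """Classify the speaker into a group category from simple acoustic features."""
    if mean_pitch_hz > 250:        # higher average pitch: likely a child
        return "child"
    if speech_rate_wps < 1.5:      # slower speech (words per second): likely elderly
        return "elderly"
    return "adult"

# Step 110 would acquire the voice and extract such features from it;
# here we pass example feature values directly.
print(estimate_group_category(mean_pitch_hz=280.0, speech_rate_wps=2.4))  # child
```

A real implementation would derive these features from the audio signal (or use a trained speaker classifier) rather than hand-written thresholds.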

Embodiment 2

[0069] On the basis of the foregoing embodiments, Embodiment 2 of the present invention may further provide a voice interaction method, which may include Step 210 to Step 260.

[0070] In step 210, voice information is acquired.

[0071] Here, the voice information refers to the voice dialogue between the user and the smart device. For example, if the user interacts with an air conditioner and utters the voice "help me check tomorrow's weather", then "help me check tomorrow's weather" serves as the voice information. The smart device may be a voice-enabled air conditioner, refrigerator, TV, range hood, or other smart device.

[0072] In step 220, the characteristic information of the speaker who sends out the voice information is determined; wherein, the characteristic information can be used to characterize the group category to which the speaker belongs.

[0073] Wherein, the feature information includes at ...
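The later steps of the method (acquiring a corpus matching the group category, then obtaining the semantic intent from it) can be sketched as below. The corpora, example utterances, and intent labels are invented for illustration; the patent does not publish its corpus contents or matching algorithm.

```python
# Illustrative sketch: select a group-specific corpus, then look up the
# semantic intent that matches the voice information. All entries are
# made-up examples, not data from the patent.

CORPORA = {
    "child": {"what's the weather like tomorrow": "query_weather"},
    "adult": {"help me check tomorrow's weather": "query_weather"},
}

def match_intent(utterance: str, group_category: str):
    """Return the semantic intent for an utterance, or None if no match."""
    corpus = CORPORA.get(group_category, {})   # corpus matching the group
    return corpus.get(utterance.lower())       # intent matching the utterance

print(match_intent("Help me check tomorrow's weather", "adult"))  # query_weather
```

A production system would use fuzzy or model-based matching rather than exact dictionary lookup, but the control flow (group category → corpus → intent) follows the steps described above.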

Embodiment 3

[0121] According to an embodiment of the present invention, a voice interaction system is also provided, including:

[0122] A voice acquisition module, configured to acquire voice information;

[0123] A feature determination module, configured to determine the feature information of the speaker who sends out the voice information;

[0124] A group category determination module, configured to determine the group category to which the speaker who sends out the voice information belongs according to the feature information;

[0125] A corpus acquisition module, configured to acquire a corpus matching the group category;

[0126] A semantic intent determination module, configured to obtain a semantic intent that matches the voice information from the corpus;

[0127] A control module, configured to control the smart device to perform an action in response to the semantic intention.
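The six modules above can be wired together in a minimal end-to-end sketch. All module internals here are stand-ins (the feature-to-category rule, corpus contents, and controller are assumptions); only the module boundaries mirror Embodiment 3.

```python
# Minimal sketch of the voice interaction system of Embodiment 3.
# Module internals are illustrative stand-ins.

class VoiceInteractionSystem:
    def __init__(self, corpora, controller):
        self.corpora = corpora        # group category -> {utterance: intent}
        self.controller = controller  # control module: callable executing an intent

    def determine_group_category(self, features):
        # feature determination + group category determination modules
        return "child" if features.get("pitch_hz", 0) > 250 else "adult"

    def handle(self, utterance, features):
        group = self.determine_group_category(features)
        corpus = self.corpora.get(group, {})    # corpus acquisition module
        intent = corpus.get(utterance)          # semantic intent determination module
        if intent is not None:
            self.controller(intent)             # control module
        return intent

actions = []
system = VoiceInteractionSystem(
    corpora={"adult": {"help me check tomorrow's weather": "query_weather"}},
    controller=actions.append,
)
print(system.handle("help me check tomorrow's weather", {"pitch_hz": 120}))
# query_weather
```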



Abstract

The invention discloses a voice interaction method and system, a storage medium and electronic equipment, and relates to the technical field of voice interaction. The method comprises the steps of: obtaining voice information; determining feature information of the speaker producing the voice information; determining, according to the feature information, the group category to which that speaker belongs; obtaining a corpus matching the group category; obtaining a semantic intention matching the voice information from the corpus; and controlling the intelligent equipment to execute an action in response to the semantic intention. The method has the beneficial effect that the semantic intention to be expressed by the voice information is accurately recognized by using the corresponding corpus, so that semantic intention recognition accuracy is improved.

Description

Technical field [0001] The invention belongs to the technical field of voice interaction, and in particular relates to a voice interaction method, system, storage medium and electronic equipment. Background [0002] In the process of voice interaction, the user's dialogue serves as the link between input and response. What the user says through the client is converted into text by ASR (automatic speech recognition) and then enters the dialogue system. After semantic understanding and dialogue decision-making in the dialogue system, the specified service content is determined and the corresponding text content is output, which is then converted into voice through TTS (text-to-speech) and returned to the user on the client. At present, typical semantic understanding models are built on general speech training models. However, because users differ in region, age and personality, their expression styles vary greatly, and so do the sentence ...
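The ASR → dialogue system → TTS round trip described in [0002] can be sketched with trivial stand-ins. Real systems would call speech recognition and synthesis engines; here the ASR and TTS stubs, the intent table, and the reply text are all invented for illustration.

```python
# Sketch of the voice round trip: ASR -> dialogue system -> TTS.
# asr() and tts() are trivial stand-ins for real speech engines.

def asr(audio: bytes) -> str:
    """Stand-in ASR: pretend the audio bytes are already UTF-8 text."""
    return audio.decode("utf-8")

def dialogue_system(text: str) -> str:
    """Semantic understanding + dialogue decision, as a toy intent table."""
    intents = {"help me check tomorrow's weather": "Tomorrow will be sunny."}
    return intents.get(text, "Sorry, I did not understand.")

def tts(text: str) -> bytes:
    """Stand-in TTS: return the reply text as bytes instead of audio."""
    return text.encode("utf-8")

reply_audio = tts(dialogue_system(asr(b"help me check tomorrow's weather")))
print(reply_audio.decode())  # Tomorrow will be sunny.
```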

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L15/02; G10L15/07; G10L15/22; G10L15/26; G10L17/02; G10L17/14
CPC: G10L15/07; G10L15/22; G10L15/02; G10L17/02; G10L17/14; G10L15/26
Inventor: 杨昌品, 宋德超, 黄姿荣, 贾巨涛, 韩林峄
Owner GREE ELECTRIC APPLIANCES INC