
Voice interaction method and device

A voice interaction technology applied in voice analysis, voice recognition, and the acquisition/recognition of facial features. It addresses the problems that, for identical voice content, the response form is single and the flexibility is low, thereby improving flexibility, enriching voice response methods, and better meeting user needs.

Active Publication Date: 2015-07-22
HUAWEI TECH CO LTD
Cites: 10 · Cited by: 26

AI Technical Summary

Problems solved by technology

[0005] In the prior art, when the voice content is the same, the voice interaction system performs the same operation or returns the same result, so the response form for voice content is relatively limited and the flexibility is low.



Examples


Embodiment 1

[0069] This embodiment of the present invention provides a voice interaction method. Referring to Figure 1, the method flow provided by this embodiment includes:

[0070] 101. Acquire voice data of the user.

[0071] 102. Perform user attribute recognition on the voice data to obtain a first user attribute recognition result.

[0072] 103. Perform content recognition on the voice data to obtain a content recognition result of the voice data.

[0073] 104. Perform a corresponding operation according to at least the first user attribute recognition result and the content recognition result to respond to the voice data.

[0074] In the method provided in this embodiment, after the user's voice data is obtained, user attribute identification and content identification are performed separately on the voice data to obtain the first user attribute identification result and the content identification result of the voice data, and a corresponding operation is then performed according to at least the first user attribute identification result and the content identification result, so as to respond to the voice data.
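As a rough illustration of steps 101 to 104, the Python sketch below wires the flow together; the recognizer functions and their return values are hypothetical placeholders for illustration only, not APIs defined by the patent.

```python
# Minimal sketch of the four-step flow of Embodiment 1 (steps 101-104).
# All functions below are hypothetical stand-ins, not part of the patent text.

def recognize_user_attributes(voice_data: bytes) -> dict:
    """Step 102: estimate speaker attributes (e.g. age group, gender)."""
    # Placeholder result; a real system would use an acoustic classifier.
    return {"age_group": "adult", "gender": "unknown"}

def recognize_content(voice_data: bytes) -> str:
    """Step 103: content recognition (speech-to-text / intent) on the same data."""
    return "play music"  # placeholder transcription

def respond(attributes: dict, content: str) -> str:
    """Step 104: choose the operation from BOTH the attribute and content results."""
    if content == "play music" and attributes.get("age_group") == "child":
        return "playing a children's playlist"
    return f"performing '{content}' for a(n) {attributes.get('age_group', 'unknown')} user"

def voice_interaction(voice_data: bytes) -> str:
    attributes = recognize_user_attributes(voice_data)  # step 102
    content = recognize_content(voice_data)             # step 103
    return respond(attributes, content)                 # step 104

# Step 101: acquire the user's voice data (dummy bytes here).
print(voice_interaction(b"\x00\x01"))
```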

Embodiment 2

[0099] This embodiment of the present invention provides a voice interaction method, which is now explained in detail with reference to the first embodiment above and the voice interaction system illustrated in Figure 2. In Figure 2, the voice interaction system is divided into five parts: an image detection module, a user attribute recognition module, a face recognition module, a voice content recognition module, and a voice application module. The image detection module is used to detect the number of people in the collected user image; the user attribute recognition module is used to perform user attribute recognition on the user's voice; the face recognition module is used to recognize the face data in the user image when the number of people detected by the image detection module equals a preset value; the voice content recognition module is used to carry out ...
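The following Python sketch is a speculative illustration of how the five modules in Figure 2 could cooperate; the class names follow the module names in the text, while the preset person count and all internal logic are assumptions made purely for illustration.

```python
# Rough sketch of the five-module system of Figure 2.
# Module names come from the text; all behaviour below is assumed.

class ImageDetectionModule:
    def count_people(self, user_image) -> int:
        return 1  # stand-in for a person detector on the captured image

class UserAttributeRecognitionModule:
    def recognize(self, voice_data) -> dict:
        return {"age_group": "adult"}  # placeholder attribute result

class FaceRecognitionModule:
    def recognize(self, user_image) -> str:
        return "registered_user_42"  # placeholder identity

class VoiceContentRecognitionModule:
    def recognize(self, voice_data) -> str:
        return "what is the weather"  # placeholder transcription

class VoiceApplicationModule:
    def respond(self, attributes, identity, content) -> str:
        return f"answering '{content}' for {identity or 'unknown user'} ({attributes})"

def interact(voice_data, user_image, preset_people_count: int = 1) -> str:
    identity = None
    # Face recognition runs only when the detected people count equals the preset value.
    if ImageDetectionModule().count_people(user_image) == preset_people_count:
        identity = FaceRecognitionModule().recognize(user_image)
    attributes = UserAttributeRecognitionModule().recognize(voice_data)
    content = VoiceContentRecognitionModule().recognize(voice_data)
    return VoiceApplicationModule().respond(attributes, identity, content)

print(interact(b"\x00", user_image=object()))
```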

Embodiment 3

[0136] An embodiment of the present invention provides a voice interaction device, which is used to execute the method shown in the first or second embodiment above. Referring to Figure 5, the device includes: an acquisition module 501, a user attribute identification module 502, a content identification module 503, and an execution module 504.

[0137] The acquisition module 501 is used to obtain the user's voice data; the user attribute identification module 502 is connected to the acquisition module 501 and is used to perform user attribute identification on the voice data to obtain the first user attribute identification result; the content identification module 503 is connected to the user attribute identification module 502 and is used to perform content recognition on the voice data to obtain the content recognition result of the voice data; the execution module 504 is connected to the content identification module 503 and is used to perform a corresponding operation at least according to the first user attribute identification result and the content recognition result, so as to respond to the voice data.
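A minimal sketch of the device structure of Embodiment 3 follows, assuming simple placeholder behaviour for modules 501 to 504; the wiring mirrors the connections described above, but the method bodies are invented for illustration.

```python
# Sketch of the device of Embodiment 3: modules 501-504 connected in sequence.
# Method bodies are hypothetical placeholders.

class AcquisitionModule:                     # module 501
    def acquire(self) -> bytes:
        return b"raw voice samples"          # placeholder voice data

class UserAttributeIdentificationModule:     # module 502
    def identify(self, voice_data: bytes) -> dict:
        return {"gender": "female"}          # placeholder first attribute result

class ContentIdentificationModule:           # module 503
    def identify(self, voice_data: bytes) -> str:
        return "turn on the TV"              # placeholder content result

class ExecutionModule:                       # module 504
    def execute(self, attribute_result: dict, content_result: str) -> str:
        return f"executing '{content_result}' with attributes {attribute_result}"

class VoiceInteractionDevice:
    def __init__(self):
        self.acquisition = AcquisitionModule()
        self.attributes = UserAttributeIdentificationModule()
        self.content = ContentIdentificationModule()
        self.execution = ExecutionModule()

    def run(self) -> str:
        voice = self.acquisition.acquire()
        attr = self.attributes.identify(voice)
        content = self.content.identify(voice)
        return self.execution.execute(attr, content)

print(VoiceInteractionDevice().run())
```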



Abstract

The present invention discloses a speech interaction method and apparatus, and pertains to the field of speech processing technologies. The method includes: acquiring speech data of a user; performing user attribute recognition on the speech data to obtain a first user attribute recognition result; performing content recognition on the speech data to obtain a content recognition result of the speech data; and performing a corresponding operation according to at least the first user attribute recognition result and the content recognition result, so as to respond to the speech data. According to the present invention, after speech data is acquired, user attribute recognition and content recognition are separately performed on the speech data to obtain a first user attribute recognition result and a content recognition result, and a corresponding operation is performed according to at least the first user attribute recognition result and the content recognition result.

Description

Technical Field

[0001] The present invention relates to the technical field of voice processing, and in particular to a voice interaction method and device.

Background

[0002] With the continuous development of information technology, user interaction technology has been widely applied. As a new generation of user interaction following keyboard, mouse, and touch-screen interaction, voice interaction is gradually being accepted by users and, owing to its convenience and speed, has the potential for large-scale adoption. For example, there are more and more voice-related applications on smart mobile terminals, and smart TV manufacturers are also replacing traditional hand-held remote controls by introducing voice interaction technology.

[0003] In the prior art, voice interaction is based on voice recognition technology; that is, after receiving a piece of voice, the voice interaction system first performs content recognition on the voice data, ob...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L15/26
CPC: G10L17/22, G06K9/00302, G10L17/00, G06K9/00288, G10L15/22, G10L2015/227, G10L17/26, G06V40/70, G06V40/172, G10L15/183, G10L15/25, G10L17/10, G10L2015/223
Inventor: 金洪波, 江焯林
Owner: HUAWEI TECH CO LTD