
System and method for providing a natural language content dedication service

A technology for a natural language content dedication service, applied in the field of voice services. It addresses several problems: learning curves that prevent users from fully exploiting the capabilities of their electronic devices, complex interfaces that inhibit the mass adoption of many technologies, and constraints on the availability of natural language content and services.

Inactive Publication Date: 2015-06-18
VB ASSETS LLC

AI Technical Summary

Benefits of technology

The invention is a system and method for providing a natural language content dedication service. This service allows users to request to dedicate content through a voice-enabled device, such as a smartphone, and receive a customized version of the content. The system detects multi-modal device interactions, such as natural language utterances, and processes the interactions to identify the content requested for dedication. The system then sends the content to the user through a messaging interface. The invention can operate in a hybrid processing environment, where multiple devices can work together to interpret and process natural language interactions. The system can also use data repositories to identify relevant content and make recommendations to the user. Overall, the invention provides a convenient and efficient way for users to request and receive customized content dedications.

Problems solved by technology

Greater functionality also introduces trade-offs, however, including learning curves that often inhibit users from fully exploiting all of the capabilities of their electronic devices.
For example, many existing electronic devices include complex human to machine interfaces that may not be particularly user-friendly, which can inhibit mass-market adoption for many technologies.
Moreover, cumbersome interfaces often result in otherwise desirable features being difficult to find or use (e.g., because of menus that are complex or otherwise tedious to navigate).
As such, many users tend not to use, or even know about, many of the potential capabilities of their devices.
Consequently, the increased functionality of electronic devices often goes to waste: market research suggests that many users use only a fraction of the features or applications available on a given device.
Thus, as consumer demand intensifies for simpler mechanisms to interact with electronic devices, cumbersome interfaces that prevent quick and focused interaction become an important concern.
Nevertheless, the ever-growing demand for mechanisms to use technology in intuitive ways remains largely unfulfilled.
Even so, existing voice user interfaces, when they actually work, still require significant learning on the part of the user.
Furthermore, many existing voice user interfaces cause user frustration or dissatisfaction because of inaccurate speech recognition.
Similarly, by forcing a user to provide pre-established commands or keywords to communicate requests in ways that a system can understand, existing voice user interfaces do not effectively engage the user in a productive, cooperative dialogue to resolve requests and advance a conversation towards a satisfactory goal (e.g., when users may be uncertain of particular needs, available information, device capabilities, etc.).
As such, existing voice user interfaces tend to suffer from various drawbacks, including significant limitations on engaging users in a dialogue in a cooperative and conversational manner.
Additionally, many existing voice user interfaces fall short in utilizing information distributed across different domains, devices, and applications in order to resolve natural language voice-based inputs.
Thus, existing voice user interfaces suffer from being constrained to a finite set of applications for which they have been designed, or to devices on which they reside.
Although technological advancement has resulted in users often having several devices to suit their various needs, existing voice user interfaces do not adequately free users from device constraints.
For example, users may be interested in services associated with different applications and devices, but existing voice user interfaces tend to restrict users from accessing the applications and devices as they see fit.
Moreover, users typically can only practicably carry a finite number of devices at any given time, yet content or services associated with users' devices other than those currently being used may be desired in various circumstances.
Accordingly, although users tend to have varying needs, where content or services associated with different devices may be desired in various contexts or environments, existing voice technologies tend to fall short in providing an integrated environment in which users can request content or services associated with virtually any device or network.
As such, constraints on information availability and device interaction mechanisms in existing voice services environments tend to prevent users from experiencing technology in an intuitive, natural, and efficient way.
For instance, when a user wishes to perform a given function using a given electronic device, but does not necessarily know how to go about performing the function, the user typically cannot engage in cooperative multi-modal interactions with the device to simply utter words in natural language to request the function.
Furthermore, relatively simple functions can often be tedious to perform using electronic devices that do not have voice recognition capabilities.
In another example, users often listen to music or interact with other media in mobile environments, such that interest in purchasing music, media, or other content may be fleeting or often occur on an impulse basis.



Embodiment Construction

[0029]According to one aspect of the invention, FIG. 1 illustrates a block diagram of an exemplary voice-enabled device 100 that can be used for hybrid processing in a natural language voice services environment. As will be apparent from the further description to be provided herein, the voice-enabled device 100 illustrated in FIG. 1 may generally include an input device 112, or a combination of input devices 112, which may enable a user to interact with the voice-enabled device 100 in a multi-modal manner. In particular, the input devices 112 may generally include any suitable combination of at least one voice input device 112 (e.g., a microphone) and at least one non-voice input device 112 (e.g., a mouse, touch-screen display, wheel selector, etc.). As such, the input devices 112 may include any suitable combination of electronic devices having mechanisms for receiving both voice-based and non-voice-based inputs (e.g., a microphone coupled to one or more of a telematics device, pe...
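The device architecture described above, where a voice input device (e.g., a microphone) and one or more non-voice input devices (e.g., a touch screen) feed a single interaction handler, can be sketched as a minimal Python model. All class and method names here are illustrative assumptions, not terms from the patent itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeviceInteraction:
    """One multi-modal interaction: an optional utterance plus an optional non-voice event."""
    utterance: Optional[str] = None        # from a voice input device (e.g., a microphone)
    touch_selection: Optional[str] = None  # from a non-voice input device (e.g., a touch screen)

class VoiceEnabledDevice:
    """Hypothetical sketch of a device combining voice and non-voice inputs."""

    def receive(self, interaction: DeviceInteraction) -> str:
        # Combine both modalities when present, so a spoken request can be
        # disambiguated by what the user touched or selected on screen.
        if interaction.utterance and interaction.touch_selection:
            return f"voice+touch: '{interaction.utterance}' about '{interaction.touch_selection}'"
        if interaction.utterance:
            return f"voice only: '{interaction.utterance}'"
        if interaction.touch_selection:
            return f"touch only: '{interaction.touch_selection}'"
        return "no input"
```

For example, uttering "dedicate this" while touching a track in a playlist would arrive as one interaction carrying both an utterance and a touch selection, which is the "multi-modal manner" the paragraph describes.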



Abstract

The system and method described herein may provide a natural language content dedication service in a voice services environment. In particular, providing the natural language content dedication service may generally include detecting multi-modal device interactions that include requests to dedicate content, identifying the content requested for dedication from natural language utterances included in the multi-modal device interactions, processing transactions for the content requested for dedication, processing natural language to customize the content for recipients of the dedications, and delivering the customized content to the recipients of the dedications.
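The abstract's five-step flow (detect the dedication request, identify the content, process the transaction, customize for the recipient, deliver) can be sketched as a toy Python function. Everything below — the function name, the keyword-based detection, and the stubbed transaction id — is an illustrative assumption standing in for the patent's actual natural language processing, not a description of it.

```python
def dedicate_content(utterance: str, recipient: str) -> dict:
    """Toy sketch of the dedication pipeline from the abstract:
    detect request -> identify content -> process transaction -> customize -> deliver."""
    # 1. Detect a dedication request in the utterance (toy keyword check,
    #    standing in for real natural language understanding).
    if "dedicate" not in utterance.lower():
        return {"status": "no dedication request detected"}
    # 2. Identify the content requested for dedication (toy extraction:
    #    take whatever follows the word 'dedicate').
    content = utterance.lower().split("dedicate", 1)[1].strip().rstrip(".")
    # 3. Process the transaction for the identified content (stubbed id).
    transaction_id = f"txn-{len(content):04d}"
    # 4. Customize the content with a personalized message for the recipient.
    message = f"This one goes out to {recipient}!"
    # 5. Deliver the customized dedication (here, just return the package).
    return {"status": "delivered", "content": content,
            "transaction": transaction_id, "message": message}
```

A call such as `dedicate_content("Dedicate this song", "Alex")` walks all five stages and returns the delivered package, while an utterance with no dedication request short-circuits at step 1.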

Description

CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This application is a continuation of U.S. patent application Ser. No. 12/943,699, entitled “System and Method for Providing a Natural Language Content Dedication Service,” filed Nov. 10, 2010, which claims the benefit of U.S. Provisional Patent Application Ser. No. 61/259,820, entitled “System and Method for Providing a Natural Language Content Dedication Service,” filed Nov. 10, 2009, the contents of which are hereby incorporated by reference in their entirety.

FIELD OF THE INVENTION [0002] The invention generally relates to providing a natural language content dedication service in a voice services environment, and in particular, to detecting multi-modal device interactions that include requests to dedicate content, identifying the content requested for dedication from natural language utterances included in the multi-modal device interactions, processing transactions for the content requested for dedication, processing natural language t...

Claims


Application Information

Patent Type & Authority: Application (United States)
IPC(8): G10L 15/18; G06F 17/28
CPC: G06F 17/28; G10L 15/18; G06Q 30/0601; G06F 40/40
Inventors: KENNEWICK, MIKE; ARMSTRONG, LYNN ELISE
Owner VB ASSETS LLC