Greater functionality also introduces trade-offs, however, including learning curves that often inhibit users from fully exploiting all of the capabilities of their electronic devices.
For example, many existing electronic devices include complex human-to-machine interfaces that may not be particularly user-friendly, which can inhibit mass-market adoption of many technologies.
Moreover, cumbersome interfaces often result in otherwise desirable features being difficult to find or use (e.g., because of menus that are complex or otherwise tedious to navigate).
As such, many users tend not to use, or even know about, many of the potential capabilities of their devices.
Consequently, the increased functionality of electronic devices often goes to waste, as market research suggests that many users use only a fraction of the features or applications available on a given device.
Thus, as
consumer demand intensifies for simpler mechanisms to interact with electronic devices, cumbersome interfaces that prevent quick and focused interaction become an important concern.
Nevertheless, the ever-growing demand for mechanisms to use technology in intuitive ways remains largely unfulfilled.
Even where existing voice user interfaces do function reliably, they still require significant learning on the part of the user.
Similarly, by forcing a user to provide pre-established commands or keywords to communicate requests in ways that a
system can understand, existing voice user interfaces do not effectively engage the user in a productive, cooperative dialogue to resolve requests and advance a conversation towards a satisfactory goal (e.g., when users may be uncertain of particular needs, available information, device capabilities, etc.).
Accordingly, existing voice user interfaces tend to suffer from various drawbacks, including significant limitations on engaging users in cooperative, conversational dialogue.
Additionally, many existing voice user interfaces fall short in utilizing information distributed across different domains, devices, and applications in order to resolve natural language voice-based inputs.
Thus, existing voice user interfaces suffer from being constrained to a finite set of applications for which they have been designed, or to the devices on which they reside.
Although technological advancement has resulted in users often having several devices to suit their various needs, existing voice user interfaces do not adequately free users from device constraints.
For example, users may be interested in services associated with different applications and devices, but existing voice user interfaces tend to prevent users from accessing those applications and devices as they see fit.
Moreover, users typically can only practicably carry a finite number of devices at any given time, yet content or services associated with users' devices other than those currently being used may be desired in various circumstances.
Accordingly, although users tend to have varying needs, where content or services associated with different devices may be desired in various contexts or environments, existing voice technologies tend to fall short in providing an integrated environment in which users can request content or services associated with virtually any device or network.
As such, constraints on
information availability and device interaction mechanisms in existing voice services environments tend to prevent users from experiencing technology in an intuitive, natural, and efficient way.
For instance, when a user wishes to perform a given function using a given electronic device, but does not necessarily know how to go about performing the function, the user typically cannot engage in cooperative, multi-modal interactions with the device by simply uttering a natural language request for the function.
Furthermore, relatively simple functions can often be tedious to perform using electronic devices that do not have voice recognition capabilities.
Existing systems suffer from these and other problems.