307 results for "Speech applications" patented technology

The Speech Application Programming Interface (SAPI) is an API developed by Microsoft that allows Windows applications to use speech recognition and speech synthesis.
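
As a rough illustration of how a Windows program can call into SAPI, the sketch below drives the SpVoice automation object through COM; it assumes a Windows machine with the pywin32 package installed, and is a minimal sketch rather than a complete SAPI integration.

```python
# Minimal sketch: speaking a phrase through SAPI's COM automation interface.
# Assumes Windows with the pywin32 package installed (pip install pywin32).
import win32com.client

def speak(text: str) -> None:
    # "SAPI.SpVoice" is the standard ProgID for the SAPI text-to-speech object.
    voice = win32com.client.Dispatch("SAPI.SpVoice")
    voice.Speak(text)  # Synchronous by default; returns when playback finishes.

if __name__ == "__main__":
    speak("Hello from the Speech API.")
```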

Unified messaging system using web based application server for management of messages using standardized servers

A unified web-based voice messaging system uses an application server, configured for executing a voice application defined by XML documents, that accesses subscriber attributes from a standardized information database server (such as LDAP), and messages from a standardized messaging server (such as IMAP), regardless of message format. The application server, upon receiving a request from a browser serving a user, accesses the standardized database server to obtain attribute information for responding to the voice application operation request. The application server generates an HTML document having media content and control tags for personalized execution of the voice application operation based on the attribute information obtained from the standardized database server. The application server is also configured for storing messages for a called party in the standardized messaging server by storing, within the message, format information that specifies the corresponding message format. Hence, the application server can respond to a request for a stored message from a subscriber by accessing the stored message from the standardized messaging server, and generating an HTML document having media content and control tags for presenting the subscriber with the stored message in a prescribed format based on the message format and the capabilities of the access device used by the subscriber.
Owner:CISCO TECH INC
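
A minimal sketch of the retrieval flow described in this abstract, under the assumption of an LDAP directory reachable with the ldap3 package and a standard IMAP mailbox; the host names, attribute names, and the render_page helper are invented for illustration and are not taken from the patent.

```python
# Illustrative sketch: look up subscriber attributes in a standardized directory
# (LDAP), fetch a stored message from a standardized messaging store (IMAP), and
# build an HTML page for the requesting browser. Server names, attribute names,
# and the HTML template are assumptions, not the patent's design.
import imaplib
from email import message_from_bytes
from ldap3 import Server, Connection, ALL

def fetch_subscriber_attributes(user_id: str) -> dict:
    conn = Connection(Server("ldap.example.com", get_info=ALL), auto_bind=True)
    conn.search("ou=subscribers,dc=example,dc=com",
                f"(uid={user_id})", attributes=["preferredLanguage", "mailHost"])
    entry = conn.entries[0]
    return {"language": str(entry.preferredLanguage), "mail_host": str(entry.mailHost)}

def fetch_stored_message(attrs: dict, user_id: str, password: str, msg_num: str) -> str:
    imap = imaplib.IMAP4_SSL(attrs["mail_host"])
    imap.login(user_id, password)
    imap.select("INBOX")
    _, data = imap.fetch(msg_num, "(RFC822)")
    imap.logout()
    return message_from_bytes(data[0][1]).get("Subject", "")

def render_page(subject: str, attrs: dict) -> str:
    # Stand-in for the HTML document with media content and control tags.
    return f"<html lang='{attrs['language']}'><body><p>Message: {subject}</p></body></html>"
```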

Voice applications and voice-based interface

A system, method and computer program product are provided for initiating a tailored voice application according to an embodiment. First, a voice application is installed at a server. A request to instantiate the voice application is received from a user, together with user-specific configuration parameters. An instance of the voice application is instantiated in a modified form based on the user-specific configuration parameters. A system, method and computer program product also provide a voice-based interface according to one embodiment. A voice application is provided for verbally outputting content to a user. An instance of the voice application is instantiated, content is selected for output, and the content is output verbally using the voice application; the instance can pause and resume the output. A method for providing a voice habitat is also provided according to one embodiment. An interface to a habitat is provided, and a user is allowed to aggregate content in the habitat utilizing the interface. A designation of content for audible output is received from the user, and some or all of the designated content is output. The user is also allowed to aggregate applications in the habitat utilizing the interface. Spoken commands are received from the user and are interpreted using a voice application, and commands are issued to one or more of the applications in the habitat via the voice application.
Owner:NVIDIA INT
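
One way to picture the instantiation and pause/resume behaviour described above is a small class whose instances are modified by user-specific configuration parameters; the class and parameter names are assumptions made for this sketch, not the patent's terminology.

```python
# Illustrative model of instantiating a voice application in a form modified by
# user-specific configuration parameters, with pause/resume of verbal output.
# All names here are assumptions made for the sketch.
class VoiceApplicationInstance:
    def __init__(self, base_config: dict, user_config: dict):
        # The instance is a modified form of the installed application:
        # user-specific parameters override the server-wide defaults.
        self.config = {**base_config, **user_config}
        self.paused = False

    def output(self, content: str) -> None:
        if not self.paused:
            print(f"[{self.config.get('voice', 'default')}] speaking: {content}")

    def pause(self) -> None:
        self.paused = True

    def resume(self) -> None:
        self.paused = False

# Example: a user requests an instance with a preferred voice and speaking rate.
instance = VoiceApplicationInstance({"voice": "neutral", "rate": 1.0},
                                    {"voice": "en-GB", "rate": 0.9})
instance.output("Here is your aggregated content.")
instance.pause()
instance.output("This line is suppressed while paused.")
instance.resume()
```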

Application server providing personalized voice enabled web application services using extensible markup language documents

A unified web-based voice messaging system provides voice application control between a web browser and an application server via a Hypertext Transfer Protocol (HTTP) connection on an Internet Protocol (IP) network. The application server, configured for executing a voice application defined by XML documents, selects an XML document for execution of a corresponding voice application operation based on a determined presence of a user-specific XML document that specifies the corresponding voice application operation. The application server, upon receiving a voice application operation request from a browser serving a user, determines whether a personalized, user-specific XML document exists for the user and for the corresponding voice application operation. If the application server determines the presence of the personalized XML document for a user-specific execution of the corresponding voice application operation, the application server dynamically generates a personalized HTML page having media content and control tags for personalized execution of the voice application operation; however, if the application server determines an absence of the personalized XML document for the user-specific execution of the corresponding voice application operation, the application server dynamically generates a generic HTML page for generic execution of the voice application operation. Hence, a user can personalize any number of voice application operations, enabling a web-based voice application to be completely customized or merely partially customized.
Owner:CISCO TECH INC
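
The selection logic in this abstract reduces to a presence check: execute from the user-specific XML document when one exists, otherwise fall back to a generic document. The sketch below assumes a simple filesystem layout for the documents; the paths and the rendering helper are illustrative only, not taken from the patent.

```python
# Illustrative sketch of choosing between a personalized and a generic XML
# document for a voice application operation, then generating an HTML page.
# The directory layout and rendering are assumptions, not the patent's design.
from pathlib import Path
import xml.etree.ElementTree as ET

XML_ROOT = Path("voice_app_xml")  # hypothetical document store

def select_document(user_id: str, operation: str) -> Path:
    personalized = XML_ROOT / user_id / f"{operation}.xml"
    generic = XML_ROOT / "generic" / f"{operation}.xml"
    # Personalized execution only if a user-specific document is present.
    return personalized if personalized.exists() else generic

def generate_html(user_id: str, operation: str) -> str:
    doc = ET.parse(select_document(user_id, operation)).getroot()
    prompt = doc.findtext("prompt", default="Please make a selection.")
    # Stand-in for the dynamically generated page with media and control tags.
    return f"<html><body><p>{prompt}</p></body></html>"
```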

Applications Server and Method

A speech applications server is arranged to provide a user-driven service in accordance with an application program in response to user commands for selecting service options. The user is prompted by audio prompts to issue the user commands. The application program comprises a state machine operable to determine a state of the application program from one of a predetermined set of states defining a logical procedure through the user-selected service options, transitions between states being determined in accordance with logical conditions to be satisfied in order to change between one state of the set and another state of the set. The logical conditions include whether a user has provided one of a set of possible commands. A prompt selection engine is operable to generate the audio prompts for prompting the commands from the user in accordance with predetermined rules, and the prompt selected by the prompt selection engine is determined at run-time. Since the state machine and the prompt selection engine are separate entities and the prompts to be selected are determined at run-time, it is possible to effect a change to the prompt selection engine without influencing the operation of the state machine, enabling different customisations to be provided for the same user-driven services. In particular, this allows multilingual support, with the possibility of providing rules to adapt the prompt structure so that grammatical differences between two languages are taken into account, thus providing higher-quality support for multiple languages.
Owner:ORANGE SA (FR)
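
The central idea above is that the state machine and the prompt selection engine are separate, with prompts resolved at run-time so a language can be changed without touching the states. A minimal sketch under those assumptions follows; the states, commands, and prompt rules are invented for illustration.

```python
# Illustrative separation of a service state machine from a prompt selection
# engine: transitions depend only on the user's command, while the prompt text
# is resolved at run-time by language-specific rules. All names are assumptions.

# State machine: state -> {command: next_state}
TRANSITIONS = {
    "main_menu": {"messages": "list_messages", "settings": "settings_menu"},
    "list_messages": {"back": "main_menu"},
    "settings_menu": {"back": "main_menu"},
}

# Prompt selection engine: per-language rules, swappable without touching states.
PROMPT_RULES = {
    "en": {"main_menu": "Say 'messages' or 'settings'.",
           "list_messages": "Reading your messages. Say 'back' to return."},
    "fr": {"main_menu": "Dites « messages » ou « réglages ».",
           "list_messages": "Lecture de vos messages. Dites « retour »."},
}

def select_prompt(state: str, language: str) -> str:
    # Resolved at run-time, so adding a language only requires new rules.
    return PROMPT_RULES.get(language, PROMPT_RULES["en"]).get(state, "")

def next_state(state: str, command: str) -> str:
    # The logical condition here is simply whether the command is allowed.
    return TRANSITIONS.get(state, {}).get(command, state)

state = "main_menu"
print(select_prompt(state, "fr"))
state = next_state(state, "messages")
print(select_prompt(state, "en"))
```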