13,243 results about "Sound input/output" patented technology

Method and system for enabling connectivity to a data system

A method and system that provides filtered data from a data system. In one embodiment the system includes an API (application programming interface) and associated software modules to enable third party applications to access an enterprise data system. Administrators are enabled to select specific user interface (UI) objects, such as screens, views, applets, columns and fields to voice or pass-through enable via a GUI that presents a tree depicting a hierarchy of the UI objects within a user interface of an application. An XSLT style sheet is then automatically generated to filter out data pertaining to UI objects that were not voice or pass-through enabled. In response to a request for data, unfiltered data are retrieved from the data system and a specified style sheet is applied to the unfiltered data to return filtered data pertaining to only those fields and columns that are voice or pass-through enabled.
Owner:ORACLE INT CORP
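The filtering step described in the abstract above could be sketched as follows. This is a hypothetical Python stand-in (all names invented): a plain allowlist of voice-enabled fields plays the role of the automatically generated XSLT style sheet.

```python
# Minimal sketch, assuming each data record is a flat dict of UI fields.
# The allowlist stands in for the generated XSLT style sheet; field and
# record names are illustrative, not from the patent.

def filter_record(record, voice_enabled_fields):
    """Return only the fields an administrator has voice-enabled."""
    return {k: v for k, v in record.items() if k in voice_enabled_fields}

unfiltered = {"account": "Acme", "ssn": "123-45-6789", "phone": "555-0100"}
enabled = {"account", "phone"}            # selected via the GUI tree
filtered = filter_record(unfiltered, enabled)
# fields that were not voice-enabled (here "ssn") never reach the caller
```

In the patent the same effect is achieved declaratively: the style sheet is generated once from the administrator's selections and applied to every response.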

Intelligent Automated Assistant

An intelligent automated assistant system engages with the user in an integrated, conversational manner using natural language dialog, and invokes external services when appropriate to obtain information or perform various actions. The system can be implemented using any of a number of different platforms, such as the web, email, smartphone, and the like, or any combination thereof. In one embodiment, the system is based on sets of interrelated domains and tasks, and employs additional functionality powered by external services with which the system can interact.
Owner:APPLE INC
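The domain-and-task routing described above can be illustrated with a small dispatcher. A minimal sketch, assuming a parsed request arrives as a domain name plus task slots; the domain names and handler functions are invented for illustration.

```python
# Hypothetical routing of a parsed user request to an external service
# based on its domain. Handlers stand in for real service calls.

def book_restaurant(task):
    return f"reserving a table for {task['people']}"

def get_weather(task):
    return f"weather for {task['city']}"

SERVICES = {"dining": book_restaurant, "weather": get_weather}

def dispatch(domain, task):
    handler = SERVICES.get(domain)
    if handler is None:
        raise ValueError(f"no service registered for domain {domain!r}")
    return handler(task)

print(dispatch("weather", {"city": "Paris"}))  # weather for Paris
```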

Conversational computing via conversational virtual machine

A conversational computing system that provides a universal coordinated multi-modal conversational user interface (CUI) (10) across a plurality of conversationally aware applications (11) (i.e., applications that "speak" conversational protocols) and conventional applications (12). The conversationally aware applications (11) communicate with a conversational kernel (14) via conversational application APIs (13). The conversational kernel (14) controls the dialog across applications and devices (local and networked) on the basis of their registered conversational capabilities and requirements, and provides a unified conversational user interface and conversational services and behaviors. The conversational computing system may be built on top of a conventional operating system and APIs (15) and conventional device hardware (16). The conversational kernel (14) handles all I/O processing and controls conversational engines (18). The conversational kernel (14) converts voice requests into queries and converts outputs and results into spoken messages using conversational engines (18) and conversational arguments (17). The conversational application API (13) conveys all the information for the conversational kernel (14) to transform queries into application calls and, conversely, to convert output into speech, appropriately sorted before being provided to the user.
Owner:UNILOC 2017 LLC

Speech interface system and method for control and interaction with applications on a computing system

A speech processing system which exploits statistical modeling and formal logic to receive and process speech input, which may represent data to be received, such as dictation, or commands to be processed by an operating system, application or process. A command dictionary and dynamic grammars are used in processing speech input to identify, disambiguate and extract commands. The logical processing scheme ensures that putative commands are complete and unambiguous before processing. Context sensitivity may be employed to differentiate data and commands. A multi-faceted graphical user interface may be provided for interaction with a user to speech-enable interaction with applications and processes that do not necessarily have native support for speech input.
Owner:SAMSUNG ELECTRONICS CO LTD
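The completeness-and-ambiguity check described above can be sketched in a few lines. This is an illustrative Python reduction, not the patented scheme: prefix matching stands in for the dynamic grammars, and the command dictionary entries are invented.

```python
# A putative command is executed only if it matches exactly one entry in
# the command dictionary AND supplies every required slot; otherwise it
# is held as ambiguous or incomplete. All names are hypothetical.

COMMANDS = {
    "open": {"required": ["target"]},
    "save": {"required": []},
}

def validate(verb, slots):
    matches = [c for c in COMMANDS if c.startswith(verb)]
    if len(matches) != 1:
        return None  # ambiguous or unknown: do not execute
    spec = COMMANDS[matches[0]]
    if any(r not in slots for r in spec["required"]):
        return None  # incomplete: wait for more input
    return matches[0]

assert validate("op", {"target": "report.doc"}) == "open"
assert validate("open", {}) is None  # incomplete: missing "target"
```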

Multi-access mode electronic personal assistant

A system enables communication between server resources and a wide spectrum of end-terminals to enable access to the resources of both converged and non-converged networks via voice and/or electronically generated commands. An electronic personal assistant (ePA) incorporates generalizing/abstracting communications channels, data and resources provided through a converged computer/telephony system interface such that the data and resources are readily accessed by a variety of interface formats, including a voice interface or data interface. A set of applications provides dual interfaces for rendering services and data based upon the manner in which a user accesses the data. An electronic personal assistant in accordance with an embodiment of the invention provides voice/data access to web pages, email, file shares, etc. A voice-based resource server authenticates a user by receiving vocal responses to one or more requests variably selected and issued by a speaker recognition-based authentication facility. Thereafter an application proxy is created.
Owner:MICROSOFT TECH LICENSING LLC
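The variably selected voice challenges mentioned above might be sketched as follows. This is a hedged illustration only: the prompts are invented, and the `verify` callable is a stub standing in for the speaker-recognition engine.

```python
# Hypothetical sketch: draw a random subset of enrolled prompts, then
# score each spoken response with a speaker-recognition check (stubbed).
import random

PROMPTS = ["say your city", "say your birth month", "repeat: blue sky"]

def pick_challenges(n, rng=random):
    """Variably select n challenges so replay attacks are harder."""
    return rng.sample(PROMPTS, n)

def authenticate(responses, verify):
    # verify(prompt, response) -> bool stands in for the recognition engine
    return all(verify(p, r) for p, r in responses.items())
```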

Media manager with integrated browsers

Methods and systems that improve the way media is played, sorted, modified, stored and cataloged are disclosed. One aspect relates to a browse window that allows a user to navigate through and select images that are related to media items. Another aspect relates to a graphical user interface of a media management program that utilizes multiple browse windows. Another aspect relates to simultaneously displayed media browse windows whose operations are integrated together so that the content shown therein is automatically synched when selections are made. Another aspect relates to resetting browsed content to the currently playing media.
Owner:APPLE INC
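The integrated browse windows described above amount to a selection-listener pattern: a selection in one window automatically re-filters what a sibling window shows. A minimal Python sketch under that assumption; the catalog data and pane names are invented.

```python
# Hypothetical sketch: an artist selection in one browse pane
# automatically refreshes the album pane's contents.

CATALOG = [
    {"artist": "A", "album": "First"},
    {"artist": "A", "album": "Second"},
    {"artist": "B", "album": "Third"},
]

class BrowseWindow:
    """A browse pane that notifies listeners when its selection changes."""
    def __init__(self):
        self.listeners = []

    def select(self, value):
        for cb in self.listeners:
            cb(value)

def albums_by_artist(artist):
    return [r["album"] for r in CATALOG if r["artist"] == artist]

artist_pane = BrowseWindow()
album_pane_contents = []

def refresh_album_pane(artist):
    # integration point: keep the album pane synched with the selection
    album_pane_contents[:] = albums_by_artist(artist)

artist_pane.listeners.append(refresh_album_pane)
artist_pane.select("A")
# album_pane_contents is now ["First", "Second"]
```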

Dynamic audio ducking

Active · US20100211199A1 · Gain control · Speech analysis · Ducking · Loudness
Various dynamic audio ducking techniques are provided that may be applied where multiple audio streams, such as a primary audio stream and a secondary audio stream, are being played back simultaneously. For example, a secondary audio stream may include a voice announcement of one or more pieces of information pertaining to the primary audio stream, such as the name of the track or the name of the artist. In one embodiment, the primary audio data and the voice feedback data are initially analyzed to determine a loudness value. Based on their respective loudness values, the primary audio stream may be ducked during the period of simultaneous playback such that a relative loudness difference is generally maintained with respect to the loudness of the primary and secondary audio streams. Accordingly, the amount of ducking applied may be customized for each piece of audio data depending on its loudness characteristics.
Owner:APPLE INC
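The loudness-based ducking described above can be reduced to a small gain calculation. A minimal sketch, assuming loudness is measured in dB and that the goal is to keep the primary stream a fixed margin below the voice announcement; the margin value and function names are invented.

```python
# Hedged sketch: attenuate the primary stream just enough to keep a
# target loudness margin below the secondary (voice) stream during
# simultaneous playback. The 10 dB default margin is illustrative.

def duck_gain_db(primary_db, secondary_db, margin_db=10.0):
    """Gain (<= 0 dB) to apply to the primary stream during overlap."""
    target_primary = secondary_db - margin_db
    return min(0.0, target_primary - primary_db)

# A loud track is ducked more; a quiet one may need no ducking at all:
assert duck_gain_db(primary_db=-8.0, secondary_db=-14.0) == -16.0
assert duck_gain_db(primary_db=-30.0, secondary_db=-14.0) == 0.0
```

This matches the abstract's point that the amount of ducking is customized per piece of audio: the gain depends on the measured loudness of both streams, not on a fixed attenuation.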

Stereophonic reproduction maintaining means and methods for operation in horizontal and vertical A/V appliance positions

A display apparatus including a display and an orientation-sensitive interface mechanism is disclosed. In an exemplary embodiment, the orientation-sensitive interface includes first and second loudspeaker pairs. The first loudspeaker pair includes first and second loudspeakers, and the second loudspeaker pair includes the second and third loudspeakers. The first and second loudspeaker pairs are disposed along directions transverse to each other. The display apparatus comprises a switch which switches between the first loudspeaker pair and the second loudspeaker pair. By providing the respective loudspeaker pairs, and switching between them, it is possible to orient the display apparatus in transverse directions corresponding to the respective loudspeaker pairs, yet maintain a substantially stereophonic reproduction for each orientation.
Owner:HTC CORP
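The pair-switching logic above is simple enough to sketch directly: three loudspeakers form two stereo pairs along transverse axes, sharing the middle speaker, and the orientation sensor selects the active pair. Speaker and orientation labels are illustrative, not from the patent.

```python
# Hypothetical sketch of selecting the active stereo pair by orientation.
# The second speaker is shared between both pairs, as in the abstract.

PAIRS = {
    "landscape": ("speaker_1", "speaker_2"),
    "portrait": ("speaker_2", "speaker_3"),
}

def active_pair(orientation):
    """Return the loudspeaker pair the switch should route audio to."""
    return PAIRS[orientation]

assert active_pair("landscape") == ("speaker_1", "speaker_2")
```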

System and method for supporting interactive user interface operations and storage medium

There is provided a system for supporting interactive operations for inputting user commands to a household electric apparatus, such as a television set/monitor, and to information apparatuses. The system employs an animated character, called a personified assistant, that interacts with the user through speech synthesis and animation, realizing a user-friendly interface while making it possible to handle complex commands and to provide an entry point for services. Further, since the system provides a command scheme producing an effect close to natural human language, the user can easily operate the apparatus with a feeling close to ordinary human conversation.
Owner:SONY CORP

Interface for a Virtual Digital Assistant

The digital assistant displays a digital assistant object in an object region of a display screen. The digital assistant then obtains at least one information item based on a speech input from a user. Upon determining that the at least one information item can be displayed in its entirety in the display region of the display screen, the digital assistant displays the at least one information item in the display region, where the display region and the object region are not visually distinguishable from one another. Upon determining that the at least one information item cannot be displayed in its entirety in the display region of the video display screen, the digital assistant displays a portion of the at least one information item in the display region, where the display region and the object region are visually distinguishable from one another.
Owner:APPLE INC
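The fits-entirely decision described above can be sketched as a small layout function. A hedged illustration, assuming item and region heights are measured in some common unit (here, invented "line" counts); the return fields are hypothetical names.

```python
# Hypothetical sketch: if the information item fits entirely in the
# display region, show it all and keep the object/display regions
# visually indistinguishable; otherwise show a portion and make the
# boundary between the regions visible.

def layout(item_lines, region_lines):
    if item_lines <= region_lines:
        return {"shown": item_lines, "boundary_visible": False}
    return {"shown": region_lines, "boundary_visible": True}

assert layout(3, 10) == {"shown": 3, "boundary_visible": False}
assert layout(15, 10) == {"shown": 10, "boundary_visible": True}
```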

Systems and methods to select media content

Systems and methods to select media content are provided. A method includes dynamically selecting content items for presentation via a media player based on user media selection settings. The user media selection settings specify a proportion of a first category of media content to be presented and a proportion of at least one second category of media content to be presented. The at least one second category includes a user defined category. First media content is associated with the first category based on an intrinsic property of the first media content and second media content is associated with the user defined category based on a property that is not intrinsic to the second media content. The method also includes generating an output stream presenting the dynamically selected content items.
Owner:AT&T INTPROP I L P
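The proportional selection described above can be illustrated with a simple stream builder. A minimal sketch under invented assumptions: settings map category names (including a user-defined one) to proportions summing to 1, and each category's library is an ordered list of items.

```python
# Hypothetical sketch: fill the output stream so each category
# contributes its configured share of the requested length.
# Category names, items, and the rounding policy are illustrative.

def build_stream(settings, libraries, length):
    stream = []
    for cat, share in settings.items():
        n = round(share * length)
        stream.extend(libraries[cat][:n])
    return stream[:length]

settings = {"news": 0.25, "workout": 0.75}  # "workout" is user-defined
libs = {
    "news": ["n1", "n2"],
    "workout": ["w1", "w2", "w3", "w4"],
}
stream = build_stream(settings, libs, 4)
# stream holds 1 news item and 3 workout items, per the proportions
```

A production selector would also interleave categories and handle rounding remainders; this sketch only shows the proportion bookkeeping.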