44930 results about "Graph reading" patented technology

Flexible transparent touch sensing system for electronic devices

A transparent, capacitive sensing system particularly well suited for input to electronic devices is described. The sensing system can be used to emulate physical buttons or slider switches that are either displayed on an active display device or printed on an underlying surface. The capacitive sensor can further be used as an input device for a graphical user interface, especially if overlaid on top of an active display device like an LCD screen to sense finger position (X/Y position) and contact area (Z) over the display. In addition, the sensor can be made with flexible material for touch sensing on a three-dimensional surface. Because the sensor is substantially transparent, the underlying surface can be viewed through the sensor. This allows the underlying area to be used for alternative applications that may not necessarily be related to the sensing system. Examples include advertising, an additional user interface display, or apparatus such as a camera or a biometric security device.
Owner:SYNAPTICS INC
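
As a rough sketch of the position and contact-area sensing described in the abstract above (not taken from the patent itself), finger X/Y position can be estimated as the signal-weighted centroid of a grid of capacitance readings, and contact area Z as the total signal above a noise floor. The grid size and threshold below are assumed example values.

```python
import numpy as np

def estimate_touch(cap_grid: np.ndarray, threshold: float = 0.2):
    """Return (x, y, z): x/y are the signal-weighted centroid of the electrode
    grid, z approximates contact area via total signal above a noise threshold.
    Returns None when no touch is detected."""
    signal = np.clip(cap_grid - threshold, 0.0, None)
    total = signal.sum()
    if total == 0.0:
        return None  # no finger present
    rows, cols = np.indices(cap_grid.shape)
    x = (cols * signal).sum() / total   # column-weighted centroid
    y = (rows * signal).sum() / total   # row-weighted centroid
    z = total                           # larger contact area -> more signal
    return x, y, z

# Example: a synthetic 6x8 frame with a touch near the grid centre.
frame = np.zeros((6, 8))
frame[2:4, 3:5] = 0.9
print(estimate_touch(frame))
```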

Video hand image-three-dimensional computer interface with multiple degrees of freedom

A video gesture-based three-dimensional computer interface system that uses images of hand gestures to control a computer and that tracks motion of the user's hand or a portion thereof in a three-dimensional coordinate system with ten degrees of freedom. The system includes a computer with image processing capabilities and at least two cameras connected to the computer. During operation of the system, hand images from the cameras are continually converted to a digital format and input to the computer for processing. The results of the processing and attempted recognition of each image are then sent to an application or the like executed by the computer for performing various functions or operations. When the computer recognizes a hand gesture as a "point" gesture with one or two extended fingers, the computer uses information derived from the images to track three-dimensional coordinates of each extended finger of the user's hand with five degrees of freedom. The computer utilizes two-dimensional images obtained by each camera to derive three-dimensional position (in an x, y, z coordinate system) and orientation (azimuth and elevation angles) coordinates of each extended finger.
Owner:WSOU INVESTMENTS LLC +1
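
The abstract's recovery of 3D fingertip position and orientation from two camera views can be illustrated with a generic linear (DLT) triangulation, sketched below; this is a standard technique assumed for illustration, not the patent's specific algorithm. The projection matrices, focal length, and test point are made-up example values.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """DLT triangulation: P1, P2 are 3x4 projection matrices, uv1/uv2 are
    (u, v) pixel coordinates of the same fingertip seen by each camera."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                       # homogeneous -> (x, y, z)

def azimuth_elevation(tip, base):
    """Finger orientation from a base point toward the fingertip."""
    d = tip - base
    azimuth = np.arctan2(d[1], d[0])
    elevation = np.arctan2(d[2], np.hypot(d[0], d[1]))
    return azimuth, elevation

# Example: two cameras 0.2 m apart along x, both with an 800 px focal length.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.1, 0.05, 1.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))          # ~[0.1, 0.05, 1.0]
```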

System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs

A system and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs of a system user. The system comprises a computer-readable memory, a video camera for generating video signals indicative of the gestures of the system user and an interaction area surrounding the system user, and a video image display. The video image display is positioned in front of the system user. The system further comprises a microprocessor for processing the video signals, in accordance with a program stored in the computer-readable memory, to determine the three-dimensional positions of the body and principal body parts of the system user. The microprocessor constructs three-dimensional images of the system user and interaction area on the video image display based upon the three-dimensional positions of the body and principal body parts of the system user. The video image display shows three-dimensional graphical objects within the virtual reality environment, and movement by the system user permits apparent movement of the three-dimensional objects displayed on the video image display so that the system user appears to move throughout the virtual reality environment.
Owner:PHILIPS ELECTRONICS NORTH AMERICA
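
A hypothetical sketch of the navigation mapping described above: the user's tracked torso position within the interaction area drives apparent movement of the virtual camera, so stepping around the room reads as movement through the environment. The interaction-area centre and speed gain are assumptions, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def update_camera(cam: VirtualCamera, body_xyz, area_center, gain: float = 0.05):
    """Stepping away from the centre of the interaction area translates the
    virtual camera in the same horizontal direction, so the user appears to
    walk through the virtual environment."""
    dx = body_xyz[0] - area_center[0]
    dz = body_xyz[2] - area_center[2]
    cam.x += gain * dx
    cam.z += gain * dz
    return cam

cam = VirtualCamera()
print(update_camera(cam, body_xyz=(0.4, 1.6, 0.3), area_center=(0.0, 0.0, 0.0)))
```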

Man machine interfaces and applications

Affordable methods and apparatus are disclosed for inputting position, attitude (orientation) or other object characteristic data to computers for the purpose of Computer Aided Design, Painting, Medicine, Teaching, Gaming, Toys, Simulations, Aids to the disabled, and internet or other experiences. Preferred embodiments of the invention utilize electro-optical sensors, particularly TV cameras, providing optically inputted data from specialized datums on objects and/or natural features of objects. Objects can be either static or in motion; individual datum positions and movements can be derived from them, including with respect to other objects, both fixed and moving. Real-time photogrammetry is preferably used to determine relationships of portions of one or more datums with respect to a plurality of cameras, or to a single camera whose images are processed by a conventional PC.
Owner:PRYOR TIMOTHY R +1
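
The photogrammetric step can be illustrated with a generic single-camera pose estimate: given the known 3D layout of datum markers on an object and their observed 2D image positions, a perspective-n-point solver recovers the object's position and attitude. OpenCV's solvePnP is used here as a stand-in; the marker layout, camera matrix, and pixel coordinates are invented example values, not the patent's data or method.

```python
import numpy as np
import cv2

# Known datum positions on the object, in the object's own frame (metres).
object_datums = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.0, 0.1, 0.0],
                          [0.1, 0.1, 0.0]], dtype=np.float32)

# Where those datums appear in the camera image (pixels).
image_points = np.array([[320, 240], [400, 238], [322, 300], [402, 302]],
                        dtype=np.float32)

# Simple pinhole camera model with an assumed focal length and principal point.
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_datums, image_points, K, None)
if ok:
    print("object rotation (Rodrigues vector):", rvec.ravel())
    print("object translation (m):", tvec.ravel())
```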

System and apparatus for eyeglass appliance platform

The present invention relates to a personal multimedia electronic device, and more particularly to a head-worn device such as an eyeglass frame having a plurality of interactive electrical/optical components. In one embodiment, a personal multimedia electronic device includes an eyeglass frame having a side arm and an optic frame; an output device for delivering an output to the wearer; an input device for obtaining an input; and a processor comprising a set of programming instructions for controlling the input device and the output device. The output device is supported by the eyeglass frame and is selected from the group consisting of a speaker, a bone conduction transmitter, an image projector, and a tactile actuator. The input device is supported by the eyeglass frame and is selected from the group consisting of an audio sensor, a tactile sensor, a bone conduction sensor, an image sensor, a body sensor, an environmental sensor, a global positioning system receiver, and an eye tracker. In one embodiment, the processor applies a user interface logic that determines a state of the eyeglass device and determines the output in response to the input and the state.
Owner:CHAUM DAVID
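
A minimal sketch of the state-based user-interface logic described in the last sentence above, assuming invented state names and routing rules: the processor inspects the current device state and an incoming input event, then chooses which output device to use and what action to take.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    IN_CALL = auto()
    NAVIGATING = auto()

def select_output(state: State, event: str):
    """Return (output_device, action) for an input event given the device state."""
    if state is State.IN_CALL and event == "incoming_message":
        return "bone_conduction_transmitter", "quiet_chime"   # don't disturb the call audibly
    if state is State.NAVIGATING and event == "incoming_message":
        return "image_projector", "show_brief_overlay"        # keep ears free for surroundings
    if event == "incoming_message":
        return "speaker", "read_message_aloud"                # idle: audible delivery is fine
    return None, None

print(select_output(State.NAVIGATING, "incoming_message"))
```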

Visual audio mixing system and method thereof

A visual audio mixing system which includes an audio input engine configured to input one or more audio files, each associated with a channel. A shape engine is responsive to the audio input engine and is configured to create a unique visual image of a definable shape and/or color for each of the one or more audio files. A visual display engine is responsive to the shape engine and is configured to display each visual image. A shape select engine is responsive to the visual display engine and is configured to provide selection of one or more visual images. The system includes a two-dimensional workspace. A coordinate engine is responsive to the shape select engine and is configured to instantiate selected visual images in the two-dimensional workspace. A mix engine is responsive to the coordinate engine and is configured to mix the visual images instantiated in the two-dimensional workspace such that user-provided movement of one or more of the visual images in one direction represents volume and user-provided movement in another direction represents pan, providing a visual and audio representation of each audio file and its associated channel.
Owner:SHAPEMIX MUSIC
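
The workspace-to-mix mapping can be sketched directly: each channel's shape has an (x, y) position in the two-dimensional workspace, with one axis read as volume and the other as pan. The axis orientation and value ranges below are assumptions for illustration, not the patent's specification.

```python
def mix_parameters(x: float, y: float, width: float, height: float):
    """Map a shape's workspace position to (volume, pan).

    volume: 0.0 (bottom of workspace) .. 1.0 (top)
    pan:   -1.0 (full left) .. +1.0 (full right)
    """
    volume = max(0.0, min(1.0, 1.0 - y / height))   # screen y grows downward
    pan = max(-1.0, min(1.0, 2.0 * x / width - 1.0))
    return volume, pan

# A shape dragged to the upper right of an 800x600 workspace: loud, panned right.
print(mix_parameters(x=700, y=60, width=800, height=600))
```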

System and method for multi-modal focus detection, referential ambiguity resolution and mood classification using multi-modal input

Systems and methods are provided for performing focus detection, referential ambiguity resolution and mood classification in accordance with multi-modal input data, in varying operating conditions, in order to provide an effective conversational computing environment for one or more users.
Owner:IBM CORP
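
As a loose illustration of one piece of such a system (entirely an assumption, since the abstract gives no mechanism), referential ambiguity in a spoken command can be resolved by combining it with a simultaneous pointing gesture from another modality.

```python
def resolve_reference(utterance: str, pointed_object, on_screen: list):
    """Replace an ambiguous pronoun like 'that' with whatever the user is
    pointing at; fall back to the only visible candidate when unambiguous."""
    if "that" in utterance.split():
        if pointed_object is not None:
            return utterance.replace("that", pointed_object)
        if len(on_screen) == 1:
            return utterance.replace("that", on_screen[0])
    return utterance  # nothing to resolve, or still ambiguous

print(resolve_reference("delete that", pointed_object="chart_3",
                        on_screen=["chart_3", "table_1"]))
```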

Distributed sensing techniques for mobile devices

Methods and apparatus of the invention allow the coordination of resources of mobile computing devices to jointly execute tasks. In the method, a first gesture input is received at a first mobile computing device. A second gesture input is received at a second mobile computing device. In response, a determination is made as to whether the first and second gesture inputs form one of a plurality of different synchronous gesture types. If it is determined that the first and second gesture inputs form the one of the plurality of different synchronous gesture types, then resources of the first and second mobile computing devices are combined to jointly execute a particular task associated with the one of the plurality of different synchronous gesture types.
Owner:MICROSOFT TECH LICENSING LLC
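
A hypothetical sketch of the synchronous-gesture check described above: gesture reports from the two devices are compared by type and timestamp, and if they match a known synchronous gesture pattern the associated joint task is triggered. The gesture names, time window, and task table are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class GestureInput:
    device_id: str
    gesture: str      # e.g. "bump_right_edge", "pen_stitch"
    timestamp: float  # seconds

# Gesture pairs that together form a synchronous gesture, and the joint task each triggers.
SYNCHRONOUS_TYPES = {
    frozenset({"bump_right_edge", "bump_left_edge"}): "span_display_across_devices",
    frozenset({"pen_stitch"}): "transfer_file",
}

def detect_synchronous_gesture(a: GestureInput, b: GestureInput, window: float = 0.5):
    """Return the joint task name if the two inputs form a synchronous gesture."""
    if abs(a.timestamp - b.timestamp) > window:
        return None                               # not close enough in time
    return SYNCHRONOUS_TYPES.get(frozenset({a.gesture, b.gesture}))

task = detect_synchronous_gesture(
    GestureInput("tablet_A", "bump_right_edge", 10.02),
    GestureInput("tablet_B", "bump_left_edge", 10.11),
)
print(task)  # -> "span_display_across_devices"
```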

Method and apparatus for communication between humans and devices

This invention relates to methods and apparatus for improving communications between humans and devices. The invention provides a method of modulating operation of a device, comprising providing an attentive user interface for obtaining information about an attentive state of a user; and modulating operation of a device on the basis of the obtained information, wherein the operation that is modulated is initiated by the device. Preferably, the information about the user's attentive state is eye contact of the user with the device that is sensed by the attentive user interface.
Owner:QUEENS UNIV OF KINGSTON
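
A minimal sketch, assuming a notification use case, of how a device-initiated operation might be modulated by sensed eye contact: a message is spoken only when the user is looking at the device, and deferred with a subtle cue otherwise. This is an assumed design, not the patent's implementation.

```python
def deliver_notification(message: str, eye_contact: bool, queue: list):
    """Deliver or defer a device-initiated notification based on attention."""
    if eye_contact:
        for deferred in queue:                 # flush anything deferred earlier
            print(f"speak: {deferred}")
        queue.clear()
        print(f"speak: {message}")             # user is attending: interruption is acceptable
    else:
        queue.append(message)                  # defer until the user looks at the device
        print("blink status light")            # low-intrusion cue instead

pending = []
deliver_notification("Meeting in 5 minutes", eye_contact=False, queue=pending)
deliver_notification("Caller: A. Smith", eye_contact=True, queue=pending)
```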