59,060 results about "Input/output for user-computer interaction" patented technology

Gestures for touch sensitive input devices

Methods and systems for processing touch inputs are disclosed. The invention in one respect includes reading data from a multipoint sensing device such as a multipoint touch screen where the data pertains to touch input with respect to the multipoint sensing device, and identifying at least one multipoint gesture based on the data from the multipoint sensing device.
Owner:APPLE INC
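The multipoint gesture identification described above can be sketched as follows. The two-point frame format, the distance threshold, and the gesture labels are illustrative assumptions for a minimal example, not the patented implementation:

```python
import math

def classify_two_finger_gesture(frame_a, frame_b, threshold=10.0):
    """Classify a two-finger gesture from two successive multipoint frames.

    Each frame is a pair of (x, y) touch points read from a multipoint
    sensing device. Returns 'zoom_in' when the finger spread grows beyond
    the threshold, 'zoom_out' when it shrinks, and 'none' otherwise.
    """
    def spread(frame):
        (x1, y1), (x2, y2) = frame
        return math.hypot(x2 - x1, y2 - y1)

    delta = spread(frame_b) - spread(frame_a)
    if delta > threshold:
        return 'zoom_in'
    if delta < -threshold:
        return 'zoom_out'
    return 'none'
```

Comparing the finger spread across successive frames is the standard way a pinch/zoom gesture is distinguished from a two-finger drag, where the spread stays roughly constant.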

Mode-based graphical user interfaces for touch sensitive input devices

A user interface method is disclosed. The method includes detecting a touch and then determining a user interface mode when a touch is detected. The method further includes activating one or more GUI elements based on the user interface mode and in response to the detected touch.
Owner:APPLE INC

Information input and output system, method, storage medium, and carrier wave

A coordinate input device detects coordinates of a position by indicating a screen of a display device with fingers of one hand, and transfers information of the detected coordinates to a computer through a controller. The computer receives an operation that complies with the detected coordinates, and executes the corresponding processing. For example, when it is detected that two points on the screen have been simultaneously indicated, an icon registered in advance is displayed close to the indicated position.
Owner:RICOH KK

Cellular phone with special sensor functions

Specific ambient- and user-behaviour sensing systems and methods are presented to improve the friendliness and usability of electronic handheld devices, in particular cellular phones, PDAs, multimedia players and similar devices. The improvements and special functions include the following components:
a. The keypad is locked / unlocked (disabled / enabled) and / or the display activated based on the device inclination relative to its longitudinal and / or lateral axes.
b. The keypad is locked if objects are detected above the display (for example the boundary of a bag or purse).
c. The keypad is locked / unlocked (disabled / enabled) and / or the display activated based on electric-field displacement or bio-field sensing systems that recognize the user's hand in any position behind the handheld device.
d. The electric response signal generated by an electric field through the user's hand in contact with a receiver plate is used to identify the user and, if identification fails, lock the device.
e. The connection for incoming calls is automatically opened as soon as a hand is detected behind the device and the device is put close to the ear (proximity sensor).
f. The profile (ring-tone mode, volume and silent mode) can be changed simply by placing the device in a specific orientation (face up or face down).
g. The device has a lateral curved touchpad with tactile markings over several surfaces to control a mouse pointer / cursor or a selection with the thumb.
Owner:PIZZI DAVID
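The inclination-based keypad lock of item (a) can be sketched from raw accelerometer data. The axis convention (device normal along z) and the tilt threshold are illustrative assumptions, not values from the patent:

```python
import math

def keypad_enabled(ax, ay, az, max_tilt_deg=60.0):
    """Decide whether the keypad should be enabled from accelerometer data.

    (ax, ay, az) is the gravity vector in device coordinates. The device is
    treated as 'in use' when its face points upward within max_tilt_deg of
    vertical; otherwise (face sideways or down, e.g. in a pocket) the keypad
    stays locked.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0:
        return False  # no gravity reading available; stay locked
    # Angle between the device's screen normal (z axis) and gravity.
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, az / g))))
    return tilt <= max_tilt_deg
```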

Contextual responses based on automated learning techniques

Techniques are disclosed for using a combination of explicit and implicit user context modeling techniques to identify and provide appropriate computer actions based on a current context, and to continuously improve the providing of such computer actions. The appropriate computer actions include presentation of appropriate content and functionality. Feedback paths can be used to assist automated machine learning in detecting patterns and generating inferred rules, and improvements from the generated rules can be implemented with or without direct user control. The techniques can be used to enhance software and device functionality, including self-customizing of a model of the user's current context or situation, customizing received themes, predicting appropriate content for presentation or retrieval, self-customizing of software user interfaces, simplifying repetitive tasks or situations, and mentoring of the user to promote desired change.
Owner:MICROSOFT TECH LICENSING LLC

Method and apparatus for integrating manual input

Apparatus and methods are disclosed for simultaneously tracking multiple finger and palm contacts as hands approach, touch, and slide across a proximity-sensing, compliant, and flexible multi-touch surface. The surface consists of compressible cushion, dielectric, electrode, and circuitry layers. A simple proximity transduction circuit is placed under each electrode to maximize signal-to-noise ratio and to reduce wiring complexity. Such distributed transduction circuitry is economical for large surfaces when implemented with thin-film transistor techniques. Scanning and signal offset removal on an electrode array produces low-noise proximity images. Segmentation processing of each proximity image constructs a group of electrodes corresponding to each distinguishable contact and extracts shape, position and surface proximity features for each group. Groups in successive images which correspond to the same hand contact are linked by a persistent path tracker which also detects individual contact touchdown and liftoff. Combinatorial optimization modules associate each contact's path with a particular fingertip, thumb, or palm of either hand on the basis of biomechanical constraints and contact features. Classification of intuitive hand configurations and motions enables unprecedented integration of typing, resting, pointing, scrolling, 3D manipulation, and handwriting into a versatile, ergonomic computer input device.
Owner:APPLE INC
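The segmentation stage described above, which groups electrodes into distinguishable contacts, can be approximated with a threshold plus connected-component labeling. The 4-connected flood fill and the fixed threshold are illustrative stand-ins for the patent's segmentation processing:

```python
def segment_contacts(image, threshold=10):
    """Group above-threshold pixels of a proximity image into contacts.

    `image` is a 2D list of proximity values. Returns one dict per contact
    with its member pixel coordinates and centroid (a simple position
    feature), using a 4-connected flood fill.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    contacts = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] < threshold or seen[r][c]:
                continue
            stack, pixels = [(r, c)], []
            seen[r][c] = True
            while stack:
                y, x = stack.pop()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and image[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            cy = sum(p[0] for p in pixels) / len(pixels)
            cx = sum(p[1] for p in pixels) / len(pixels)
            contacts.append({'pixels': pixels, 'centroid': (cy, cx)})
    return contacts
```

The centroids produced here are the kind of per-group position feature that a path tracker could then link across successive images.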

Flexible transparent touch sensing system for electronic devices

A transparent, capacitive sensing system particularly well suited for input to electronic devices is described. The sensing system can be used to emulate physical buttons or slider switches that are either displayed on an active display device or printed on an underlying surface. The capacitive sensor can further be used as an input device for a graphical user interface, especially if overlaid on top of an active display device like an LCD screen to sense finger position (X / Y position) and contact area (Z) over the display. In addition, the sensor can be made with flexible material for touch sensing on a three-dimensional surface. Because the sensor is substantially transparent, the underlying surface can be viewed through the sensor. This allows the underlying area to be used for alternative applications that may not necessarily be related to the sensing system. Examples include advertising, an additional user interface display, or apparatus such as a camera or a biometric security device.
Owner:SYNAPTICS INC

Portable electronic device with multi-touch input

A portable communication device with multi-touch input detects one or more multi-touch contacts and motions and performs one or more operations on an object based on the one or more multi-touch contacts and / or motions. The object has a resolution that is less than a pre-determined threshold when the operation is performed on the object, and the object has a resolution that is greater than the pre-determined threshold at other times.
Owner:APPLE INC

Video hand image-three-dimensional computer interface with multiple degrees of freedom

A video gesture-based three-dimensional computer interface system that uses images of hand gestures to control a computer and that tracks motion of the user's hand or a portion thereof in a three-dimensional coordinate system with ten degrees of freedom. The system includes a computer with image processing capabilities and at least two cameras connected to the computer. During operation of the system, hand images from the cameras are continually converted to a digital format and input to the computer for processing. The results of the processing and attempted recognition of each image are then sent to an application or the like executed by the computer for performing various functions or operations. When the computer recognizes a hand gesture as a "point" gesture with one or two extended fingers, the computer uses information derived from the images to track three-dimensional coordinates of each extended finger of the user's hand with five degrees of freedom. The computer utilizes two-dimensional images obtained by each camera to derive three-dimensional position (in an x, y, z coordinate system) and orientation (azimuth and elevation angles) coordinates of each extended finger.
Owner:WSOU INVESTMENTS LLC +1
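Recovering a fingertip's (x, y, z) position from two cameras, as the interface above does, reduces in the simplest case to stereo triangulation. This sketch assumes rectified, parallel cameras with known focal length and baseline, a textbook simplification of the patent's two-camera setup:

```python
def triangulate(xl, xr, y, focal_px, baseline_m):
    """Recover a 3D point from a rectified stereo pair.

    xl / xr are the fingertip's horizontal pixel coordinates in the left
    and right images, y its (shared) vertical coordinate, focal_px the
    focal length in pixels, baseline_m the camera separation in meters.
    Depth follows from the disparity: z = f * B / (xl - xr).
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must be in front of both cameras")
    z = focal_px * baseline_m / disparity
    x = xl * z / focal_px
    y3 = y * z / focal_px
    return (x, y3, z)
```

A full implementation would first undistort and rectify the images and calibrate both cameras; orientation (azimuth and elevation) then follows from triangulating two points per finger.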

System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs

A system and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs of a system user. The system comprises a computer-readable memory, a video camera for generating video signals indicative of the gestures of the system user and an interaction area surrounding the system user, and a video image display. The video image display is positioned in front of the system user. The system further comprises a microprocessor for processing the video signals, in accordance with a program stored in the computer-readable memory, to determine the three-dimensional positions of the body and principal body parts of the system user. The microprocessor constructs three-dimensional images of the system user and interaction area on the video image display based upon the three-dimensional positions of the body and principal body parts of the system user. The video image display shows three-dimensional graphical objects within the virtual reality environment, and movement by the system user permits apparent movement of the three-dimensional objects displayed on the video image display so that the system user appears to move throughout the virtual reality environment.
Owner:PHILIPS ELECTRONICS NORTH AMERICA

Gesture-based computer interface

A system and method for manipulating virtual objects in a virtual environment, for drawing curves and ribbons in the virtual environment, and for selecting and executing commands for creating, deleting, moving, changing, and resizing virtual objects in the virtual environment using intuitive hand gestures and motions. The system is provided with a display for displaying the virtual environment and with a video gesture recognition subsystem for identifying motions and gestures of a user's hand. The system enables the user to manipulate virtual objects, to draw free-form curves and ribbons and to invoke various command sets and commands in the virtual environment by presenting particular predefined hand gestures and / or hand movements to the video gesture recognition subsystem.
Owner:LUCENT TECH INC

Apparatus for and method of controlling digital image processing apparatus

An apparatus for and method of controlling a digital image processing device in order to reduce power consumption by automatically recognizing a state in which power of a display device can be turned off or the display device can operate in a power saving mode, and in that case, turning off the display device or operating the display device in the power saving mode. The apparatus for controlling a digital image processing device includes: a motion sensing unit sensing a motion of the digital image processing device; and a control unit operating the digital image processing device in a power saving mode when it is determined that the digital image processing device moves.
Owner:SAMSUNG ELECTRONICS CO LTD
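The motion-sensing decision above can be sketched as a simple windowed heuristic: sustained variation in accelerometer magnitude is treated as "the device is moving", triggering the power-saving mode. The windowing and threshold are illustrative assumptions, not the patent's detection logic:

```python
def should_power_save(samples, threshold=0.5):
    """Decide whether to enter the power-saving mode from motion samples.

    `samples` is a recent window of accelerometer magnitudes (in g).
    Variation above `threshold` across the window is taken to mean the
    device is moving, so the display may be turned off or dimmed.
    """
    if len(samples) < 2:
        return False  # not enough data to judge motion
    variation = max(samples) - min(samples)
    return variation > threshold
```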

Man machine interfaces and applications

Affordable methods and apparatus are disclosed for inputting position, attitude (orientation) or other object characteristic data to computers for the purpose of Computer Aided Design, Painting, Medicine, Teaching, Gaming, Toys, Simulations, Aids to the disabled, and internet or other experiences. Preferred embodiments of the invention utilize electro-optical sensors, and particularly TV Cameras, providing optically inputted data from specialized datums on objects and / or natural features of objects. Objects can be both static and in motion, from which individual datum positions and movements can be derived, also with respect to other objects both fixed and moving. Real-time photogrammetry is preferably used to determine relationships of portions of one or more datums with respect to a plurality of cameras or a single camera processed by a conventional PC.
Owner:PRYOR TIMOTHY R +1

Human-machine dialog system

The invention concerns a human-machine dialog system (1) comprising:
- a support (3) having a plurality of identical docking stations (30), each docking station (30) being associated with a universal human-machine dialog device (4), each universal human-machine dialog device (4) comprising at least a display member (41) and a sensor member (40);
- a plurality of modular members (2), each modular member (2) being arranged to be positioned in a docking station (30) in a removable and interchangeable manner and comprising a human-machine dialog interface (20) arranged to cooperate with said display member (41) and / or said sensor member (40).
Owner:SCHNEIDER ELECTRIC IND SAS

Controlling and accessing content using motion processing on mobile devices

Various embodiments provide systems and methods capable of facilitating interaction with handheld electronics devices based on sensing rotational rate around at least three axes and linear acceleration along at least three axes. In one aspect, a handheld electronic device includes a subsystem providing display capability, a set of motion sensors sensing rotational rate around at least three axes and linear acceleration along at least three axes, and a subsystem which, based on motion data derived from at least one of the motion sensors, is capable of facilitating interaction with the device.
Owner:INVENSENSE
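Fusing rotational rate and linear acceleration, as the motion subsystem above does, is commonly done with a complementary filter: the gyroscope gives a responsive but drifting angle, the accelerometer a noisy but drift-free one. The single-axis formulation and blend factor here are illustrative assumptions, not InvenSense's algorithm:

```python
import math

def complementary_filter(pitch_prev, gyro_rate, ax, az, dt, alpha=0.98):
    """Fuse gyro rate and accelerometer gravity into a pitch estimate (deg).

    pitch_prev is the previous estimate, gyro_rate the rotational rate in
    deg/s, (ax, az) the gravity components in the pitch plane, dt the
    sample interval in seconds. alpha weights the integrated gyro angle
    against the accelerometer's tilt angle.
    """
    gyro_angle = pitch_prev + gyro_rate * dt          # integrate rotation rate
    accel_angle = math.degrees(math.atan2(ax, az))    # tilt from gravity
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

Calling this once per sensor sample keeps the estimate responsive to quick rotations while the accelerometer term slowly corrects gyro drift.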

System and apparatus for eyeglass appliance platform

The present invention relates to a personal multimedia electronic device, and more particularly to a head-worn device such as an eyeglass frame having a plurality of interactive electrical / optical components. In one embodiment, a personal multimedia electronic device includes an eyeglass frame having a side arm and an optic frame; an output device for delivering an output to the wearer; an input device for obtaining an input; and a processor comprising a set of programming instructions for controlling the input device and the output device. The output device is supported by the eyeglass frame and is selected from the group consisting of a speaker, a bone conduction transmitter, an image projector, and a tactile actuator. The input device is supported by the eyeglass frame and is selected from the group consisting of an audio sensor, a tactile sensor, a bone conduction sensor, an image sensor, a body sensor, an environmental sensor, a global positioning system receiver, and an eye tracker. In one embodiment, the processor applies a user interface logic that determines a state of the eyeglass device and determines the output in response to the input and the state.
Owner:CHAUM DAVID

Visual audio mixing system and method thereof

A visual audio mixing system which includes an audio input engine configured to input one or more audio files each associated with a channel. A shape engine is responsive to the audio input engine and is configured to create a unique visual image of a definable shape and / or color for each of the one or more audio files. A visual display engine is responsive to the shape engine and is configured to display each visual image. A shape select engine is responsive to the visual display engine and is configured to provide selection of one or more visual images. The system includes a two-dimensional workspace. A coordinate engine is responsive to the shape select engine and is configured to instantiate selected visual images in the two-dimensional workspace. A mix engine is responsive to the coordinate engine and is configured to mix the visual images instantiated in the two-dimensional workspace such that user provided movement of one or more of the visual images in one direction represents volume and user provided movement in another direction represents pan to provide a visual and audio representation of each audio file and its associated channel.
Owner:SHAPEMIX MUSIC
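The core mapping of the mix engine, one workspace axis to volume and the other to pan, can be sketched directly. Which axis maps to which parameter, and the output ranges, are illustrative assumptions about the described workspace:

```python
def position_to_mix(x, y, width, height):
    """Map a shape's 2D workspace position to (volume, pan).

    Vertical position controls volume (top edge = 1.0, bottom edge = 0.0);
    horizontal position controls pan (-1.0 hard left ... +1.0 hard right).
    Values are clamped so shapes dragged past an edge stay in range.
    """
    volume = max(0.0, min(1.0, 1.0 - y / height))
    pan = max(-1.0, min(1.0, 2.0 * x / width - 1.0))
    return volume, pan
```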

Gesture-controlled interfaces for self-service machines and other applications

A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measure is used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body / object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
Owner:JOLLY SEVEN SERIES 70 OF ALLIED SECURITY TRUST I
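The linear-in-parameters model and predictor-bin matching above can be sketched for the simplest case, a harmonic oscillator a'' = -w2 * x, which is linear in the single parameter w2. Fitting from paired position/acceleration samples and scoring bins by residual are illustrative readings of the abstract, not the disclosed method:

```python
def fit_oscillator(xs, accs):
    """Least-squares fit of a harmonic-oscillator parameter to a trajectory.

    Models the gesture as acc = -w2 * x and minimizes the squared residual
    sum((acc_i + w2 * x_i)^2), which has the closed form below. Returns
    (w2, residual).
    """
    num = -sum(a * x for a, x in zip(accs, xs))
    den = sum(x * x for x in xs)
    w2 = num / den
    residual = sum((a + w2 * x) ** 2 for a, x in zip(accs, xs))
    return w2, residual

def best_bin(xs, accs, bins):
    """Pick the predictor bin whose seeded parameter best explains the motion.

    `bins` maps gesture names to candidate w2 values; the bin with the
    lowest prediction residual against the observed trajectory wins.
    """
    def resid(w2):
        return sum((a + w2 * x) ** 2 for a, x in zip(accs, xs))
    return min(bins, key=lambda name: resid(bins[name]))
```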

Real-time typing assistance

An apparatus and method are disclosed for providing feedback and guidance to touch screen device users to improve the text entry user experience and performance through the use of indicators such as feedback semaphores. Also disclosed are suggestion candidates, which allow a user to quickly select next words to add to text input data, or replacement words for words that have been designated as incorrect. According to one embodiment, a method comprises receiving text input data, providing an indicator for possible correction of the text input data, displaying suggestion candidates associated with alternative words for the data, receiving a single touch screen input selecting one of the suggestion candidates, and modifying the input data using the word associated with the selected suggestion candidate.
Owner:MICROSOFT TECH LICENSING LLC
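The suggestion candidates described above can be approximated by ranking lexicon words by edit distance to the typed word. Using plain Levenshtein distance as the ranking signal is an illustrative stand-in for the patent's candidate generation:

```python
def edit_distance(a, b):
    """Levenshtein distance: insertions, deletions, substitutions cost 1."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def suggestion_candidates(word, lexicon, max_candidates=3):
    """Rank replacement candidates for a possibly misspelled word.

    Lexicon words sorted by distance to the input; in practice the ranking
    would also weight word frequency and keyboard adjacency.
    """
    return sorted(lexicon, key=lambda w: edit_distance(word, w))[:max_candidates]
```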

System and method for multi-modal focus detection, referential ambiguity resolution and mood classification using multi-modal input

Systems and methods are provided for performing focus detection, referential ambiguity resolution and mood classification in accordance with multi-modal input data, in varying operating conditions, in order to provide an effective conversational computing environment for one or more users.
Owner:IBM CORP

System and method for display of information using a vehicle-mount computer

A system and method displays information using a vehicle-mount computer. The system includes (i) a computer touch screen for inputting and displaying information; (ii) a motion detector for detecting vehicle motion; (iii) a proximity sensor for detecting proximity to an item; and (iv) a vehicle-mount computer in communication with the computer touch screen, the motion detector, and proximity sensor, the vehicle-mount computer including a central processing unit and memory. The vehicle-mount computer's central processing unit is configured to store information associated with user-selected information from the computer touch screen and to display a zoomed view of the user-selected information on the computer touch screen. Further, the vehicle-mount computer's central processing unit is configured to override screen-blanking when user-selected information is displayed.
Owner:HAND HELD PRODS

System and method for providing virtual desktop extensions on a client desktop

The system and method described herein may identify one or more virtual desktop extensions available in a cloud computing environment and launch virtual machine instances to host the available virtual desktop extensions in the cloud. For example, a virtual desktop extension manager may receive a virtual desktop extension request from a client desktop and determine whether authentication credentials for the client desktop indicate that the client desktop has access to the requested virtual desktop extension. In response to authenticating the client desktop, the virtual desktop extension manager may then launch a virtual machine instance to host the virtual desktop extension in the cloud and provide the client desktop with information for locally controlling the virtual desktop extension remotely hosted in the cloud.
Owner:MICRO FOCUS SOFTWARE INC

Interactive video display system

A device allows easy and unencumbered interaction between a person and a computer display system using the person's (or another object's) movement and position as input to the computer. In some configurations, the display can be projected around the user so that the person's actions are displayed around them. The video camera and projector operate on different wavelengths so that they do not interfere with each other. Uses for such a device include, but are not limited to, interactive lighting effects for people at clubs or events, interactive advertising displays, etc. Computer-generated characters and virtual objects can be made to react to the movements of passers-by, to generate interactive ambient lighting for social spaces such as restaurants, lobbies and parks, to drive video game systems, and to create interactive information spaces and art installations. Patterned illumination and brightness and gradient processing can be used to improve the ability to detect an object against a background of video images.
Owner:MICROSOFT TECH LICENSING LLC
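Detecting a person against the background of video images, as the last sentence describes, reduces in its simplest form to background subtraction. Plain per-pixel differencing with a fixed threshold is an illustrative baseline; the patent's patterned illumination and gradient processing are refinements on top of this idea:

```python
def detect_foreground(frame, background, threshold=25):
    """Mark pixels that differ from a background model of the scene.

    `frame` and `background` are 2D grayscale lists of equal size; returns
    a binary mask where 1 marks a pixel likely belonging to a person or
    object in front of the background.
    """
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

A production system would update the background model over time (e.g. a running average) to cope with lighting changes.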