978 results about "Eye position" patented technology

In the design of human-machine user interfaces (HMIs or UIs), the Design Eye Position (DEP) is the position from which the user is intended to view the workstation for an optimal view of the visual interface. The Design Eye Position represents the ideal but notional location of the operator's eyes and is usually expressed as a monocular point midway between the pupils of the average user.
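As a minimal illustration of this definition, the sketch below computes a design eye position as the midpoint between two notional pupil positions; the coordinate values, units, and reference frame are assumptions for the example only.

```python
# A minimal worked example of a Design Eye Position expressed as the monocular
# midpoint between the two pupils of a notional average user; the coordinate
# values (millimetres, workstation reference frame) are purely illustrative.
left_pupil  = (-32.0, 1200.0, 650.0)   # x, y, z of left pupil
right_pupil = ( 32.0, 1200.0, 650.0)   # assumes a 64 mm interpupillary distance

design_eye_position = tuple((l + r) / 2.0 for l, r in zip(left_pupil, right_pupil))
print(design_eye_position)              # (0.0, 1200.0, 650.0)
```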

System and methods for controlling automatic scrolling of information on a display or screen

A system for controlling the automatic scrolling of information on a computer display. The system includes a computer display, a gimbaled sensor system for tracking the position of the user's head and eyes, and a scroll-activating interface algorithm, implemented by the computer, that uses a neural network to find screen gaze coordinates. A scrolling function is performed based upon the screen gaze coordinates of the user's eye relative to activation area(s) on the display. The gimbaled sensor system consists of a platform mounted at the top of the display. It tracks the user's head and eye without requiring any attachments to the user, so the user remains free to move his head while tracking continues. A method of controlling automatic scrolling of information on a display includes the steps of finding the user's screen gaze coordinate on the display, determining whether the screen gaze coordinate is within at least one activated control region, and activating scrolling to provide a desired display of information when the gaze direction is within at least one activated control region. In one embodiment, the control regions are defined as upper, lower, right, and left control regions for controlling the scrolling in the downward, upward, leftward, and rightward directions, respectively. In another embodiment, control regions are defined by concentric rings, either maintaining the information in a stationary position or scrolling it towards the center of the display or screen.
Owner:LEMELSON JEROME H +1
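The following is a minimal sketch of the gaze-to-scroll decision described in the abstract above, assuming simple rectangular activation regions along the screen edges; the region thickness, screen size, and direction labels are illustrative, and the patent's neural-network gaze estimator is not reproduced here.

```python
# Map a screen gaze coordinate to a scroll command based on edge control
# regions, following the upper->downward, lower->upward, left->rightward,
# right->leftward mapping given in the abstract.  All sizes are assumptions.
SCREEN_W, SCREEN_H = 1920, 1080
REGION = 100  # thickness of each edge activation region, in pixels (assumed)

def scroll_command(gaze_x, gaze_y):
    """Return a scroll direction, or None when the gaze rests in the central
    (stationary) area of the display."""
    if gaze_y < REGION:
        return "scroll_down"     # upper control region
    if gaze_y > SCREEN_H - REGION:
        return "scroll_up"       # lower control region
    if gaze_x < REGION:
        return "scroll_right"    # left control region
    if gaze_x > SCREEN_W - REGION:
        return "scroll_left"     # right control region
    return None                  # gaze is inside no activated control region

# Example: gaze near the top edge triggers downward scrolling.
print(scroll_command(960, 40))   # -> "scroll_down"
```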

Augmented reality glasses for medical applications and corresponding augmented reality system

The invention describes augmented reality glasses (1) for medical applications configured to be worn by a user, comprising a frame (15) that supports a glasses lens (2a, 2b). The frame (15) comprises an RGB lighting system with RGB-emitting devices (16a, 16b, 16c) configured to emit light beams (B1, B2, B3), and first optical systems (17a, 17b, 17c) configured to at least partially collimate said beams (B1, B2, B3) into collimated beams (B1c, B2c, B3c). The frame (15) further comprises a display (3) configured to be illuminated by the RGB lighting system (16) by means of the collimated beams (B1c, B2c, B3c), to receive first images (I) from a first processing unit (10), and to emit the first images (I) as second images (IE1) towards the glasses lens (2a, 2b). The lens (2a, 2b) is configured to reflect the second images (IE1) coming from the display (3) as projected images (IP) towards an internal zone (51) of the glasses corresponding to the eye position zone of a user wearing the glasses in the configuration for use. The invention also describes an augmented reality system for medical applications on a user, comprising the augmented reality glasses (1) of the invention; biomedical instrumentation (100) configured to detect biomedical and/or therapeutic and/or diagnostic data of a user and to generate first data (D1) representative of operational parameters (OP_S) associated with the user; and transmitting means (101) configured to transmit the first data (D1) to the glasses (1). The glasses (1) comprise a first processing unit (10) equipped with a receiving module (102) configured to receive the first data (D1) comprising the operational parameters (OP_S) associated with the user.
Owner:BADIALI GIOVANNI +3
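Below is a minimal sketch of the data flow from the biomedical instrumentation to the glasses' processing unit, assuming the first data (D1) is a simple record of operational parameters (OP_S); all class and field names are illustrative and not taken from the patent.

```python
# Sketch of the D1 -> receiving module -> processing unit flow described in
# the abstract.  A real implementation would render display images rather
# than text overlays; everything here is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class FirstData:                      # "D1" in the abstract
    operational_params: dict          # "OP_S", e.g. {"heart_rate": 72, ...}

class ReceivingModule:                # receiving module (102) of the glasses
    def receive(self, d1: FirstData) -> dict:
        return d1.operational_params

class ProcessingUnit:                 # first processing unit (10)
    def __init__(self):
        self.rx = ReceivingModule()

    def first_images(self, d1: FirstData) -> list[str]:
        """Turn received parameters into text overlays standing in for the
        first images (I) sent to the display."""
        params = self.rx.receive(d1)
        return [f"{name}: {value}" for name, value in params.items()]

# Example: instrumentation sends patient data, the glasses build an overlay.
d1 = FirstData(operational_params={"heart_rate": 72, "SpO2": 98})
print(ProcessingUnit().first_images(d1))   # ['heart_rate: 72', 'SpO2: 98']
```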

Digital eye camera

A digital camera that combines the functions of a retinal camera and a corneal camera into one single, small, easy-to-use instrument. The single camera can acquire digital images of a retinal region of an eye and digital images of a corneal region of the eye. The camera includes a first combination of optical elements for making said retinal digital images and a second combination of optical elements for making said corneal digital images. Some of these elements are shared, including a first objective element of an objective lens combination, a digital image sensor, and at least one eyepiece for viewing either the retina or the cornea. The retinal combination includes a first changeable element of said objective lens system for focusing, in combination with said first objective element, portions or all of said retinal region at or approximately at a common image plane. The retinal combination also includes a retinal illuminating light source; an aperture within said frame and positioned within said first combination to form an effective retinal aperture located at or approximately at the lens of the eye, defining an effective retinal aperture position; an infrared camera for determining eye position; and an aperture adjustment mechanism for adjusting the effective retinal aperture based on position signals from said infrared camera. The corneal combination includes a second changeable element of said objective lens system for focusing, in combination with said first objective element, portions or all of said corneal region at or approximately at a common image plane.
Owner:CLARITY MEDICAL SYST
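The sketch below illustrates one way the aperture adjustment loop described above could work, assuming the infrared camera reports the eye's lateral offset from the optical axis and the mechanism simply re-centres the effective retinal aperture on the eye's lens; the gain, units, and function names are assumptions for the example.

```python
# A minimal proportional re-centring loop for the effective retinal aperture,
# driven by eye-position signals from the infrared camera.  Gain and units
# (millimetres) are illustrative, not taken from the patent.
GAIN = 1.0   # proportional gain of the adjustment mechanism (assumed)

def adjust_aperture(aperture_xy, eye_offset_xy):
    """Return the new aperture position given the current aperture position
    and the eye-position offset reported by the infrared camera."""
    ax, ay = aperture_xy
    ex, ey = eye_offset_xy
    return (ax + GAIN * ex, ay + GAIN * ey)

# Example: the IR camera sees the eye 0.4 mm right and 0.1 mm above the axis,
# so the effective retinal aperture is shifted to follow it.
print(adjust_aperture((0.0, 0.0), (0.4, 0.1)))   # -> (0.4, 0.1)
```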

Rapid computation of local eye vectors in a fixed point lighting unit

A rapid method for calculating a local eye vector in a fixed point lighting unit. For a given triangle primitive which is to be projected into a given viewport in screen space coordinates, the local eye vector corresponds to a given eye position and a first vertex of the given triangle primitive (a different local eye vector is calculated for each vertex of the given triangle primitive). The method first comprises generating a view vector matrix which corresponds to the given eye position and the corner coordinates of the given viewport, where the corner coordinates are expressed in screen space coordinates. The view vector matrix is usable to map screen space coordinates to an eye vector space which corresponds to the given viewport. The method next includes receiving a first set of coordinates (in screen space) which correspond to the first vertex. The first set of coordinates is then scaled to a numeric range which is representable by the fixed point lighting unit. Next, the first set of coordinates is transformed using the view vector matrix, which produces a non-normalized local eye vector within the eye vector space for the given viewport. The non-normalized local eye vector is normalized to form a normalized local eye vector, which is then usable for subsequent lighting computations, such as the computation of specular reflection values for infinite light sources, producing more realistic lighting effects than if an infinite eye vector were used. These more realistic lighting effects do not come at the cost of decreased performance, however, as the local eye vector may be calculated rapidly using this method.
Owner:ORACLE INT CORP
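The sketch below illustrates the overall flow of the method described above, assuming the view vector matrix is an affine map from scaled screen coordinates to non-normalized eye vectors, fitted from three viewport corners; the corner coordinates, the scaling range, and the use of floating point arithmetic (rather than an actual fixed point unit) are simplifications for the example.

```python
# Local eye vector per vertex: scale screen coordinates, transform them with a
# "view vector matrix" fitted from viewport corners, then normalize.  The
# affine construction and all coordinate values are illustrative assumptions.
import numpy as np

FIXED_POINT_RANGE = 1.0  # scale screen coords into [-1, 1] before transforming

def view_vector_matrix(eye_pos, corner_screen, corner_world):
    """Fit a 3x3 matrix M so that M @ [sx, sy, 1] equals eye_pos minus the
    world-space point for the three supplied viewport corners (screen
    coordinates already scaled)."""
    A = np.column_stack([np.append(s, 1.0) for s in corner_screen])   # 3x3
    V = np.column_stack([eye_pos - w for w in corner_world])          # 3x3
    return V @ np.linalg.inv(A)

def scale_to_fixed_range(screen_xy, width, height):
    """Map pixel coordinates into a small symmetric range that a fixed point
    unit could represent without overflow."""
    x, y = screen_xy
    return np.array([(2.0 * x / width  - 1.0) * FIXED_POINT_RANGE,
                     (2.0 * y / height - 1.0) * FIXED_POINT_RANGE])

def local_eye_vector(M, screen_xy, width, height):
    s = scale_to_fixed_range(screen_xy, width, height)
    v = M @ np.array([s[0], s[1], 1.0])      # non-normalized local eye vector
    return v / np.linalg.norm(v)             # normalized for lighting math

# Example: 640x480 viewport lying on the z = 0 plane, eye at (0, 0, 5).
eye = np.array([0.0, 0.0, 5.0])
corners_screen = [scale_to_fixed_range(c, 640, 480)
                  for c in [(0, 0), (640, 0), (0, 480)]]
corners_world  = [np.array([-1.0,  1.0, 0.0]),
                  np.array([ 1.0,  1.0, 0.0]),
                  np.array([-1.0, -1.0, 0.0])]
M = view_vector_matrix(eye, corners_screen, corners_world)
print(local_eye_vector(M, (320, 240), 640, 480))   # points toward the eye
```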