525 results about "Wears glasses" patented technology

Augmented reality glasses for medical applications and corresponding augmented reality system

The invention describes augmented reality glasses (1) for medical applications, configured to be worn by a user and comprising a frame (15) that supports a glasses lens (2a, 2b). The frame (15) comprises an RGB lighting system with RGB-emitting devices (16a, 16b, 16c) configured to emit light beams (B1, B2, B3), and first optical systems (17a, 17b, 17c) configured to collimate said beams (B1, B2, B3) at least partially into collimated beams (B1c, B2c, B3c). The frame (15) further comprises a display (3) configured to be illuminated by the RGB lighting system (16) by means of the collimated beams (B1c, B2c, B3c), to receive first images (I) from a first processing unit (10), and to emit the first images (I) as second images (IE1) towards the glasses lens (2a, 2b). The lens (2a, 2b) is configured to reflect the second images (IE1) coming from the display (3) as projected images (IP) towards an internal zone (51) of the glasses corresponding to the eye position of the user wearing the glasses in the configuration for use. The invention moreover describes an augmented reality system for medical applications comprising the augmented reality glasses (1) of the invention, biomedical instrumentation (100) configured to detect biomedical and/or therapeutic and/or diagnostic data of a user and to generate first data (D1) representative of operational parameters (OP_S) associated with the user, and transmitting means (101) configured to transmit the first data (D1) to the glasses (1). The glasses (1) comprise a first processing unit (10) equipped with a receiving module (102) configured to receive the first data (D1) comprising the operational parameters (OP_S) associated with the user.
Owner: BADIALI GIOVANNI +3
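
The abstract describes hardware and a data path rather than software, but the flow it names (biomedical instrumentation 100, transmitting means 101, receiving module 102 on the processing unit 10, display 3) can be illustrated with a minimal sketch. The Python mock below is purely hypothetical: the class names, the example parameters such as heart rate and SpO2, and the in-memory queue standing in for the transmitting means are all assumptions made for illustration, not details from the patent.

```python
# Illustrative sketch only: the patent describes hardware elements, not a software API.
# All names and example values here are hypothetical, mirroring the reference numerals.
from dataclasses import dataclass
from queue import Queue

@dataclass
class FirstData:               # "first data (D1)"
    heart_rate_bpm: float      # assumed operational parameter (OP_S), for illustration
    spo2_percent: float

class BiomedicalInstrumentation:   # element (100)
    def detect(self) -> FirstData:
        # A real device would read sensors; here we return fixed sample values.
        return FirstData(heart_rate_bpm=72.0, spo2_percent=98.5)

class ReceivingModule:             # element (102) on the first processing unit (10)
    def __init__(self) -> None:
        self.channel: "Queue[FirstData]" = Queue()   # stands in for transmitting means (101)

    def receive(self) -> FirstData:
        return self.channel.get()

def render_first_image(data: FirstData) -> str:
    # The processing unit (10) would turn D1 into the first images (I) for the display (3);
    # here we only format a text overlay.
    return f"HR {data.heart_rate_bpm:.0f} bpm | SpO2 {data.spo2_percent:.1f} %"

if __name__ == "__main__":
    instrument = BiomedicalInstrumentation()
    receiver = ReceivingModule()
    receiver.channel.put(instrument.detect())      # "transmit the first data (D1) to the glasses (1)"
    print(render_first_image(receiver.receive()))  # overlay projected towards the eye zone (51)
```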

Optical lens structure of wearable virtual-reality headset capable of displaying three-dimensional scene

Inactive · CN104808342A · Effects: increase room for optimization; improve clarity · Tags: Lens; Magnifying glasses; Eyewear; Engineering
The invention discloses an optical lens structure for a wearable virtual-reality headset capable of displaying a three-dimensional scene. The two lenses for the left eye and the right eye have the same structure, each comprising a double-convex positive lens and a crescent negative lens mounted coaxially at an interval. The double-convex positive lens close to the human eye is mounted on a fixed temple, the crescent negative lens is mounted on a movable temple, a guide rail is arranged on the inner wall of the fixed temple, and the movable temple is movably mounted on the guide rail. A display screen in front of the crescent negative lens is connected to the front portion of the fixed temple by a connecting frame, and the two lens structures are separated by an intermediate partition. The two lenses are made of different kinds of optical plastics, and the optical surfaces on the front and rear sides are aspheric. The optical lens structure can adjust diopter, so a user can see the content on the screen without wearing glasses; chromatic aberration and distortion of a single lens are eliminated; the images fed to the left and right screens do not need to be preprocessed; the image frames are improved; and the user can watch common left-right split-screen stereoscopic movies.
Owner: 杭州映墨科技有限公司
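
As a rough illustration of the diopter adjustment described above (sliding the crescent negative lens along the guide rail changes the lens separation and hence the overall power), the sketch below applies the standard two-thin-lens power formula. The focal lengths are invented example values, and the thin-lens approximation ignores the aspheric, thick-lens design in the patent, so this is a qualitative illustration only.

```python
# Minimal sketch, not the patented design: uses the two-thin-lens power formula
#   P = P1 + P2 - d * P1 * P2   (powers in diopters, separation d in metres)
# to show how changing the separation d (the guide-rail position) adjusts the diopter.
# The focal lengths below are invented example values.

def combined_power(f1_mm: float, f2_mm: float, separation_mm: float) -> float:
    """Combined power in diopters of two thin lenses separated by a given distance."""
    p1 = 1000.0 / f1_mm          # diopters (focal lengths given in mm)
    p2 = 1000.0 / f2_mm
    d = separation_mm / 1000.0   # metres
    return p1 + p2 - d * p1 * p2

if __name__ == "__main__":
    F_POSITIVE = 45.0    # double-convex positive lens, hypothetical focal length in mm
    F_NEGATIVE = -120.0  # crescent negative lens, hypothetical focal length in mm
    for sep in (5.0, 10.0, 15.0, 20.0):  # guide-rail positions, mm
        print(f"separation {sep:4.1f} mm -> {combined_power(F_POSITIVE, F_NEGATIVE, sep):6.2f} D")
```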

Unmanned aerial vehicle on-board first-person-view follow-up nacelle system based on VR interaction

Pending · CN106125747A · Effects: realize visual manipulation interaction; realize linkage · Tags: Attitude control; Wireless transceiver; Control system
The invention discloses an unmanned aerial vehicle (UAV) on-board first-person-view follow-up nacelle system based on VR interaction. The system comprises an on-board photoelectric nacelle, a UAV flight control system, a ground control station, wearable VR glasses, and a control handle. The VR glasses are worn on the head of the user and are connected to the ground control station through a USB bus. The control handle is operated manually by the user and is connected to the ground control station through Bluetooth. The on-board photoelectric nacelle and the UAV flight control system are each connected to the ground control station through a wireless transceiver device. As a novel UAV on-board follow-up nacelle system, it fully integrates the technical advantages of VR in human-machine interaction and differs substantially from a traditional third-person-view mission nacelle system: it realizes first-person-view visual control and enhances the user's immersive sensory impact. Furthermore, the system offers a relatively flexible implementation, simple operation, high control precision, low cost, and good real-time performance.
Owner: STATE GRID FUJIAN ELECTRIC POWER CO LTD +3
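
The follow-up behaviour described above (the on-board nacelle tracking the operator's head pose so the video feed stays first-person) can be sketched as a single mapping step. The Python sketch below is hypothetical: the pose format, function names, and gimbal limits are assumptions made for illustration; the patent does not define a software interface.

```python
# Hypothetical sketch of the follow-up mapping: the ground control station reads the head
# pose from the VR glasses and commands the on-board photoelectric nacelle to the same
# attitude. Pose fields and limit values are assumptions, not taken from the patent.
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw_deg: float     # left/right rotation of the operator's head
    pitch_deg: float   # up/down rotation

@dataclass
class NacelleCommand:
    yaw_deg: float
    pitch_deg: float

def _clamp(value: float, limit: float) -> float:
    return max(-limit, min(limit, value))

def follow_up(pose: HeadPose, yaw_limit: float = 170.0, pitch_limit: float = 60.0) -> NacelleCommand:
    """Map the operator's head pose to a nacelle attitude command within assumed gimbal limits."""
    return NacelleCommand(yaw_deg=_clamp(pose.yaw_deg, yaw_limit),
                          pitch_deg=_clamp(pose.pitch_deg, pitch_limit))

if __name__ == "__main__":
    # One iteration of the loop: VR glasses -> USB -> ground station -> wireless link -> nacelle.
    pose = HeadPose(yaw_deg=35.0, pitch_deg=-80.0)   # sample head pose from the VR glasses
    cmd = follow_up(pose)
    print(f"command nacelle: yaw {cmd.yaw_deg:.1f} deg, pitch {cmd.pitch_deg:.1f} deg")
```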

Fatigue state detection method based on sub-block characteristic matrix algorithm and SVM (support vector machine)

The invention discloses a fatigue state detection method based on a sub-block characteristic matrix algorithm and an SVM (support vector machine), and belongs to the technical field of image processing and pattern recognition. The method analyzes facial features to judge whether a driver is in a fatigue state. The method includes the steps of: first, acquiring a video image of the driver and performing illumination compensation and face area detection; second, detecting the eye and mouth areas within the face area. Feature extraction of the eye image is performed with an eye sub-block characteristic matrix algorithm, which reduces the influence of illumination conditions and of wearing glasses on the detection; feature extraction of the mouth image is performed with a mouth sub-block characteristic matrix algorithm, which reduces interference from visible teeth and beards; the extracted features are then classified by an SVM algorithm, which improves reliability when only a small training set is available. The method analyzes fatigue characteristics from the eyes and the mouth, transmits warning information when the driver is in a fatigue state, and can thereby reduce traffic accidents.
Owner: JILIN UNIV
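
To make the last two stages of the pipeline concrete, the sketch below assumes the sub-block characteristic matrix is the mean grey level of each cell in a 4x8 grid over the eye region and trains scikit-learn's SVC on synthetic data. The abstract does not specify the per-block statistic, the grid size, or the SVM parameters, so every such choice here is an assumption made only to show the data flow.

```python
# Illustrative sketch of the feature-extraction and classification stages, not the
# patented algorithm: per-block statistic, grid size, labels, and SVM settings are assumed.
import numpy as np
from sklearn.svm import SVC

def subblock_feature_matrix(gray_region: np.ndarray, rows: int = 4, cols: int = 8) -> np.ndarray:
    """Split a grayscale region into rows x cols sub-blocks and return the per-block means."""
    h, w = gray_region.shape
    feats = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = gray_region[i * h // rows:(i + 1) * h // rows,
                                j * w // cols:(j + 1) * w // cols]
            feats[i, j] = block.mean()
    return feats

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "eye regions": brighter images stand in for open eyes, darker for closed eyes.
    open_eyes = rng.uniform(100, 255, size=(40, 32, 64))
    closed_eyes = rng.uniform(0, 100, size=(40, 32, 64))
    X = np.array([subblock_feature_matrix(img).ravel()
                  for img in np.concatenate([open_eyes, closed_eyes])])
    y = np.array([0] * 40 + [1] * 40)          # 0 = alert, 1 = fatigued (assumed labels)
    clf = SVC(kernel="rbf").fit(X, y)          # trained on a small set, as the abstract emphasises
    test = subblock_feature_matrix(rng.uniform(0, 100, size=(32, 64))).ravel()
    print("predicted state:", "fatigued" if clf.predict([test])[0] == 1 else "alert")
```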