
15723 results about "Video camera" patented technology

A video camera is a camera used for electronic motion picture acquisition (as opposed to a movie camera, which records images on film), initially developed for the television industry but now common in other applications as well.

Apparatus and system for prompt digital photo delivery and archival

The invention comprises a wireless camera apparatus and system for automatic capture and delivery of digital image “messages” to a remote system at a predefined destination address. Initial transmission occurs via a wireless network, and the apparatus allows new messages to be captured while transmissions are in progress. The destination address may correspond to an e-mail account, or to a remote server from which the image and data can be efficiently processed and/or further distributed. In the latter case, data packaged with the digital message is used to control processing of the message at the server, based on a combination of predefined system and user options. Secured Internet access to the server allows flexible user access to system parameters for configuring message handling and distribution options, including the option to build named distribution lists that are downloaded to the wireless camera. For example, configuration data specified on the server may be downloaded to the wireless camera so that users can quickly specify storage and distribution options for each message, such as archival for later retrieval, forwarding to recipients in a distribution list group, and/or immediate presentation to a monitoring station for analysis and follow-up. The apparatus and system are designed to provide quick and simple digital image capture and delivery for business and personal use.
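The metadata-driven routing described above can be sketched in a few lines of Python. This is an illustrative mock-up, not the patented implementation: the `package_message` function, its field names, and the example address are all assumptions made for the sketch.

```python
import json

def package_message(image_bytes: bytes, destination: str,
                    distribution_list=None, archive=False) -> dict:
    """Bundle image data with options a remote server could use to
    control processing and distribution of the message (hypothetical
    schema, for illustration only)."""
    return {
        "destination": destination,  # e-mail account or server address
        "options": {
            "archive": archive,      # store for later retrieval
            # named recipients, e.g. a list configured on the server
            "distribution_list": distribution_list or [],
        },
        "payload_size": len(image_bytes),
    }

msg = package_message(b"\xff\xd8...", "alerts@example.com",
                      distribution_list=["ops-team"], archive=True)
print(json.dumps(msg["options"], sort_keys=True))
```

A message packaged this way carries its own handling instructions, so the server can archive, forward, or escalate it without further input from the camera.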

System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs

A system and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs of a system user. The system comprises a computer-readable memory, a video camera for generating video signals indicative of the gestures of the system user and an interaction area surrounding the system user, and a video image display. The video image display is positioned in front of the system user. The system further comprises a microprocessor for processing the video signals, in accordance with a program stored in the computer-readable memory, to determine the three-dimensional positions of the body and principal body parts of the system user. The microprocessor constructs three-dimensional images of the system user and interaction area on the video image display based upon the three-dimensional positions of the body and principal body parts of the system user. The video image display shows three-dimensional graphical objects within the virtual reality environment, and movement by the system user permits apparent movement of the three-dimensional objects displayed on the video image display so that the system user appears to move throughout the virtual reality environment.
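The core mapping — user displacement in the interaction area driving apparent motion through the virtual environment — can be sketched as follows. This is a minimal illustration under assumed conventions (a `navigate` function and a scalar `gain` are inventions of the sketch, not details from the patent).

```python
def navigate(camera_pos, user_pos, prev_user_pos, gain=2.0):
    """Move the virtual camera by the tracked user's 3D displacement,
    scaled by a gain, so the user appears to move through the scene."""
    return tuple(c + gain * (u - p)
                 for c, u, p in zip(camera_pos, user_pos, prev_user_pos))

# User steps 0.1 m forward along z; the virtual camera advances 0.2 units.
cam = navigate((0, 0, 0), (0, 0, 0.1), (0, 0, 0))
print(cam)  # (0.0, 0.0, 0.2)
```

In a full system the user position would come from the microprocessor's per-frame analysis of the video signals rather than being supplied by hand.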

System and method for displaying and selling goods and services

The ShopLive system supports existing merchants and malls in better serving customers by providing easy access to merchandise and sales assistance. The shopper accesses the ShopLive system through various portals: a PC, Web TV, mall kiosk, store kiosk, mobile terminal, screen telephone, or any other communication device capable of connecting to a communications network. When shoppers start a shopping mission they can log on or, if already enrolled, use a password for quick entry. They may choose to shop anonymously. A shopper can set up a shopping mission by defining class of goods, price, color, and the like, and set out to search for it either in their physical location or remotely. Once the items are located, video cameras present the merchandise to the shopper through the terminal. The cameras may be remotely operable to swing through different views to better display the goods, or they can show items according to pre-determined scan patterns. Sound and other sensory stimuli, such as tactile sensors, may be used to enhance the shopping experience. The shopper may also ask for help from an assistant (SLA) that acts just like a salesperson in a retail setting. This person can help select goods and can discuss the items selected. The SLA can also check product availability and help complete the purchase as in a normal sales transaction; alternatively, the shopper can use the ShopLive system to check out themselves. As shoppers move through the shopping mission, they can add items to their electronic shopping cart and have a one-stop checkout, or they can check out with each merchant. The shopper is also entered into the available loyalty programs and presented with coupons and rebates. At the end of the shopping mission the shopper can either physically pick up the selections or arrange shipping. The ShopLive system supports multiple selling activities, including auctions. It is also a rich database for merchants and allows targeted advertising.
A live browser reaches out to the shopper to present sales and incentives. The ShopLive system connects the shopper and the merchant to make the shopping experience more effective for both.

Algorithm for monitoring head/eye motion for driver alertness with one camera

Visual methods and systems are described for detecting alertness and vigilance of persons under conditions of fatigue, lack of sleep, and exposure to mind-altering substances such as alcohol and drugs. In particular, the invention has applications for truck drivers, bus drivers, train operators, pilots, watercraft controllers, stationary heavy equipment operators, and students and employees, during either daytime or nighttime conditions. The invention robustly tracks a person's head and facial features with a single on-board camera in a fully automatic system that initializes automatically, reinitializes when it needs to, and provides outputs in real time. The system classifies head rotation in all viewing directions, detects eye/mouth occlusion, detects eye blinking, and recovers the 3D (three-dimensional) gaze of the eyes. In addition, the system is able to track through occlusions such as eye blinking as well as through head rotation. Outputs can be visual and sound alarms to the driver directly. Additional outputs can slow the vehicle down and/or bring the vehicle to a full stop. Further outputs can send data on driver, operator, student, and employee vigilance to remote locations as needed for alarms and for initiating other actions.
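A common way to turn per-frame eye state into an alertness score is the PERCLOS measure (fraction of time the eyes are closed over a window). The patent's exact classifier is not reproduced here; this is a hedged sketch of that general approach, with the function name, threshold, and input format chosen for illustration.

```python
def perclos(eye_closed_flags, threshold=0.3):
    """Return the fraction of frames with eyes closed over a window,
    plus an alert flag when the fraction exceeds a chosen threshold.
    Each flag is 1 (closed) or 0 (open), e.g. from a per-frame eye
    detector running on the on-board camera."""
    frac = sum(eye_closed_flags) / len(eye_closed_flags)
    return frac, frac > threshold

# 4 of 10 recent frames show closed eyes -> 0.4, above the 0.3 threshold.
frac, alert = perclos([1, 0, 0, 1, 1, 0, 0, 0, 0, 1])
```

In practice the window would slide over a live video stream, and the alert flag would drive the visual/sound alarms or vehicle-control outputs described above.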

Dynamic target tracking and positioning method of unmanned plane based on vision

The invention discloses a vision-based dynamic target tracking and positioning method for an unmanned plane, belonging to the field of unmanned plane navigation. The method comprises the following steps: carrying out video processing, dynamic target detection, and image tracking; carrying out pan-tilt (gimbal) servo control; establishing a correspondence between the target in the image and the target in the real environment, and measuring the distance between the camera and the dynamic target to precisely position it; and enabling the unmanned plane control system to fly by automatically tracking the dynamic target on the ground. The method automatically realizes moving-target detection, image tracking, and automatic optical-axis deflection without full human participation, so that the dynamic target is always displayed at the center of the imaging plane; and the distance between the unmanned plane and the dynamic target is measured in real time according to an established model, on the basis of the unmanned plane's height information. The positioning of the dynamic target is thereby realized, and closed-loop control is formed by using the positioned dynamic target as a feedback signal to guide the tracking flight of the unmanned plane.
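The abstract's range measurement from aircraft height can be illustrated with simple look-down geometry: given the aircraft's altitude and the camera's depression angle toward a ground target, the horizontal distance follows from the tangent. The patent's actual model is not spelled out here, so this is an assumed flat-ground sketch; `ground_distance` is a name invented for the example.

```python
import math

def ground_distance(height_m, depression_angle_deg):
    """Horizontal distance to a ground target, assuming flat ground:
    d = h / tan(theta), where theta is the camera's look-down angle
    (derived from gimbal attitude) and h is the aircraft altitude."""
    return height_m / math.tan(math.radians(depression_angle_deg))

d = ground_distance(100.0, 45.0)  # 100 m altitude, 45-degree look-down
```

Feeding this range estimate back into the flight controller is what closes the tracking loop: the measured target position becomes the feedback signal that steers the aircraft.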

3D imaging system

The present invention provides a system (method and apparatus) for creating photorealistic 3D models of environments and/or objects from a plurality of stereo images obtained from a mobile stereo camera and optional monocular cameras. The cameras may be handheld, or mounted on a mobile platform, a manipulator, or a positioning device. The system automatically detects and tracks features in image sequences and self-references the stereo camera in 6 degrees of freedom by matching the features to a database to track the camera motion, while building the database simultaneously. A motion estimate may also be provided from external sensors and fused with the motion computed from the images. Individual stereo pairs are processed to compute dense 3D data representing the scene and are transformed, using the estimated camera motion, into a common reference and fused together. The resulting 3D data is represented as point clouds, surfaces, or volumes. The present invention also provides a system (method and apparatus) for enhancing 3D models of environments or objects by registering information from additional sensors, using a light pattern projector, to improve model fidelity or to augment the model with supplementary information. The present invention also provides a system (method and apparatus) for generating photorealistic 3D models of underground environments such as tunnels, mines, voids, and caves, including automatic registration of the 3D models with pre-existing underground maps.
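The fusion step above — transforming each stereo pair's 3D points into a common reference frame using the estimated camera pose — amounts to applying a rigid transform p' = R·p + t per point. A minimal pure-Python sketch (a real pipeline would use matrix libraries and handle pose uncertainty; the `transform` helper is an assumption of this example):

```python
def transform(points, R, t):
    """Apply p' = R @ p + t to each 3D point.
    R is a 3x3 rotation given as three row tuples; t is a 3-vector."""
    out = []
    for p in points:
        out.append(tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                         for i in range(3)))
    return out

# Identity rotation, camera translated 1 m along x: a point 2 m ahead
# of the camera lands at x=1 in the common frame.
I = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
fused = transform([(0.0, 0.0, 2.0)], I, (1.0, 0.0, 0.0))
# -> [(1.0, 0.0, 2.0)]
```

Concatenating the transformed points from all stereo pairs yields the fused point cloud that is later meshed into surfaces or volumes.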