293 results about "Exact location" patented technology

Automatic-tracking and automatic-zooming method for acquiring iris images

The invention relates to an automatic-tracking and automatic-zooming method for acquiring iris images. The method comprises the following steps: a wide-angle camera, a narrow-angle (long-focus) camera, and an infrared light source are mounted close together on a controllable pan-tilt head, which rotates the two cameras and the light source simultaneously. The system uses the wide-angle camera to continuously detect human faces; once a facial image is available, the eyes are detected within it to obtain their exact locations. The pan-tilt head is then rotated up-down and left-right so that the narrow-angle camera and the infrared light source aim at the eyes. At the same time, the user's distance and the picture quality are evaluated to control automatic zooming of the camera. Once a clear iris image is acquired, image processing and iris recognition are performed and the user's identity is determined. The system device comprises a computer, the controllable pan-tilt head, the wide-angle camera, the narrow-angle camera, the infrared light source, a power supply, an image-grabbing card, and other auxiliary equipment.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
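The detect-aim-zoom loop described in the abstract can be sketched as two small control decisions: a pan/tilt error derived from the detected eye position, and a zoom command derived from user distance and image sharpness. All function names, thresholds, and units below are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch of the tracking-and-zoom control loop. The eye detector,
# distance estimator, and sharpness metric are assumed to exist upstream.

def pan_tilt_error(eye_xy, frame_size):
    """Signed offset of the detected eye centre from the frame centre,
    normalised to [-1, 1] per axis; this drives the pan-tilt head."""
    (x, y), (w, h) = eye_xy, frame_size
    return ((x - w / 2) / (w / 2), (y - h / 2) / (h / 2))

def zoom_command(distance_m, sharpness, near_limit=0.25, sharp_ok=0.8):
    """Decide a zoom step from user distance and picture quality.
    Returns +1 (zoom in), -1 (zoom out) or 0 (hold; image is clear)."""
    if distance_m < near_limit:
        return -1          # user too close: widen the view
    if sharpness < sharp_ok:
        return +1          # iris not yet resolved: zoom in further
    return 0               # clear iris image: hand off to recognition

# Example: eye detected right of centre in a 640x480 wide-angle frame
err = pan_tilt_error((480, 240), (640, 480))   # -> (0.5, 0.0)
cmd = zoom_command(distance_m=0.6, sharpness=0.5)
```

A real system would feed `err` into the pan-tilt head's rate controller and repeat the loop until `zoom_command` returns 0.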

Robot system having image processing function

A robot system having an image processing function capable of detecting the position and/or posture of individual workpieces randomly arranged in a stack, in order to determine a posture, or posture and position, of a robot operation suited to the detected position and/or posture of the workpiece. Reference models are created from two-dimensional images of a reference workpiece captured from a plurality of directions by a first visual sensor, and stored. Also stored are the relative position/posture of the first visual sensor with respect to the workpiece at each image capture, and the relative position/posture at which a second visual sensor is to be situated with respect to the workpiece. Matching between an image of a stack of workpieces captured by the camera and the reference models is performed, and the image of a workpiece matched with one reference model is selected. A three-dimensional position/posture of the workpiece is determined from the image of the selected workpiece, the selected reference model, and the position/posture information associated with that reference model. The position/posture at which the second visual sensor is to be situated for measurement is determined from the determined position/posture of the workpiece and the stored relative position/posture of the second visual sensor, and the precise position/posture of the workpiece is then measured by the second visual sensor at that position/posture. Based on the measurement results of the second visual sensor, a robot can perform a picking operation to pick out individual workpieces from the randomly arranged stack.
Owner:FANUC LTD
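The matching-and-selection step above pairs each stored reference model with the sensor pose it was captured from, so the best-scoring match directly yields an initial pose estimate. A minimal sketch, with a toy similarity score and placeholder data structures (the real system would use 2-D template matching on images):

```python
# Hypothetical sketch of the reference-model matching step. The feature
# lists and capture poses are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class ReferenceModel:
    view_id: int
    template: list        # 2-D feature template (placeholder)
    capture_pose: tuple   # first-sensor pose relative to the workpiece
                          # at capture time: (x, y, z, rx, ry, rz)

def match_score(image_features, template):
    """Toy similarity: fraction of template features found in the image."""
    hits = sum(1 for f in template if f in image_features)
    return hits / len(template)

def select_workpiece(image_features, models):
    """Return the reference model best matching the stack image,
    mirroring the matching/selection step in the abstract."""
    return max(models, key=lambda m: match_score(image_features, m.template))

models = [
    ReferenceModel(0, ["edge_a", "hole_b"], (0, 0, 500, 0, 0, 0)),
    ReferenceModel(1, ["edge_a", "slot_c"], (0, 0, 500, 0, 30, 0)),
]
best = select_workpiece(["edge_a", "slot_c", "noise"], models)
# best.capture_pose then seeds the 3-D pose estimate and, combined with
# the stored relative pose, places the second sensor for fine measurement.
```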

Vision-guided AGV (Automatic Guided Vehicle) system and method based on an embedded system

The invention discloses a vision-guided AGV (Automatic Guided Vehicle) system and method based on an embedded system. Two cameras fixed on the trolley acquire guide-path information in real time: one camera is inclined forward at an angle to the ground and acquires forward-view images, while the other is mounted in the middle-front of the trolley's interior, perpendicular to the ground, and is used for secondary exact location. In the vision-guided AGV method, anti-metal radio-frequency tags are embedded in the ground surface at key positions; a vehicle-mounted radio-frequency card reader obtains the information in a tag when the trolley passes over it. A laser scanner scans for obstacles ahead in real time, realizing obstacle detection and avoidance. A control box of the embedded system serves as the core for image acquisition, image processing and policy control: the acquired images are subjected to Gaussian high-pass filtering, edge detection and a two-step Hough transform, from which the position deviation and angle deviation of the trolley relative to the current path are calculated, and the two-dimensional deviation of the AGV relative to a location reference point is fed back at each station point.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
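Once the Hough transform returns the guide-path line as (rho, theta) in image coordinates, the trolley's position and angle deviations fall out of simple geometry. The sketch below assumes a near-vertical path line and a known millimetres-per-pixel scale; these assumptions and all parameter names are illustrative, not from the patent.

```python
import math

# Illustrative deviation calculation from a Hough line (rho, theta).
# Assumes the camera looks straight down and the path line is roughly
# aligned with the travel direction (theta near 0).

def path_deviation(rho, theta, frame_width, mm_per_px):
    """Position deviation (mm) and angle deviation (degrees) of the
    trolley relative to the detected guide-path line."""
    # For a near-vertical line, its x-intersection with the image
    # centre row is approximately rho / cos(theta).
    x_cross = rho / math.cos(theta)
    offset_px = x_cross - frame_width / 2
    angle_deg = math.degrees(theta)   # 0 deg = parallel to travel
    return offset_px * mm_per_px, angle_deg

# Path line crossing 10 px right of centre, tilted 5 degrees,
# at a ground resolution of 2 mm per pixel:
off_mm, ang = path_deviation(rho=330 * math.cos(math.radians(5)),
                             theta=math.radians(5),
                             frame_width=640, mm_per_px=2.0)
```

The controller would steer to drive both `off_mm` and `ang` toward zero, and the two-dimensional deviation fed back at station points follows the same geometry.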