
1647 results about "Stereopsis" patented technology

Stereopsis (from the Greek στερεο- stereo- meaning "solid", and ὄψις opsis, "appearance, sight") is a term that is most often used to refer to the perception of depth and 3-dimensional structure obtained on the basis of visual information deriving from two eyes by individuals with normally developed binocular vision. Because the eyes of humans, and many animals, are located at different lateral positions on the head, binocular vision results in two slightly different images projected to the retinas of the eyes. The differences are mainly in the relative horizontal position of objects in the two images. These positional differences are referred to as horizontal disparities or, more generally, binocular disparities. Disparities are processed in the visual cortex of the brain to yield depth perception. While binocular disparities are naturally present when viewing a real 3-dimensional scene with two eyes, they can also be simulated by artificially presenting two different images separately to each eye using a method called stereoscopy. The perception of depth in such cases is also referred to as "stereoscopic depth".
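The disparity-to-depth relation underlying stereopsis can be sketched with the standard pinhole model for a rectified stereo pair. This is a minimal illustration; the function name and numeric values are my own, not from the source:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its horizontal disparity in a rectified stereo pair.

    Z = f * B / d, where f is the focal length in pixels, B the baseline
    (separation of the two camera centres, e.g. the interocular distance)
    and d the horizontal disparity in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point imaged 8 px apart by two cameras 6.5 cm apart (roughly the human
# interocular distance) with a 700 px focal length:
z = depth_from_disparity(700.0, 0.065, 8.0)
print(z)  # 5.6875 m — nearer points produce larger disparities
```

Note the inverse relation: disparity shrinks as depth grows, which is why stereoscopic depth cues are strongest at close range.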

Apparatus and method for determining orientation parameters of an elongate object

An apparatus and method employing principles of stereo vision for determining one or more orientation parameters, especially the second and third Euler angles θ, ψ, of an elongate object whose tip is contacting a surface at a contact point. The apparatus has a projector mounted on the elongate object for illuminating the surface with probe radiation in a known pattern from a first point of view, and a detector mounted on the elongate object for detecting a scattered portion of the probe radiation returning from the surface to the elongate object from a second point of view. The orientation parameters are determined from a difference between the projected and detected probe radiation, such as the difference between the shape of the feature produced by the projected probe radiation and the shape of the feature detected by the detector. The pattern of probe radiation is chosen to provide information for determining the one or more orientation parameters and can include asymmetric patterns such as lines, ellipses, rectangles and polygons, or symmetric cases including circles, squares and regular polygons. To produce the patterns the projector can use a scanning arrangement or a structured-light optic such as a holographic, diffractive, refractive or reflective element, or any combination thereof. The apparatus is suitable for determining the orientation of a jotting implement such as a pen, pencil or stylus.
Owner:ELECTRONICS SCRIPTING PRODS
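The patent infers orientation from the difference between the projected and detected feature shapes. As a simplified illustration of that principle (my own sketch, not the patent's algorithm), under an orthographic approximation a circular probe pattern falling on a plane tilted by θ foreshortens into an ellipse whose minor/major axis ratio is cos θ:

```python
import math

def tilt_from_ellipse(major_axis: float, minor_axis: float) -> float:
    """Recover a surface tilt angle (radians) from the foreshortening of a
    circular probe pattern.

    Orthographic approximation: a circle projected onto a plane tilted by
    theta appears as an ellipse with minor/major axis ratio cos(theta).
    """
    ratio = minor_axis / major_axis
    if not 0.0 < ratio <= 1.0:
        raise ValueError("axes must satisfy 0 < minor <= major")
    return math.acos(ratio)

# Detected ellipse axes of 10 and 5 (arbitrary units) imply 60° of tilt.
theta = tilt_from_ellipse(10.0, 5.0)
print(math.degrees(theta))  # 60.0
```

The actual apparatus works from two distinct points of view (projector and detector) rather than this single orthographic view, but the shape-difference idea is the same.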

Active safety type assistant driving method based on stereoscopic vision

Inactive · CN102685516A · Avoid tailgating · Prevent accidents such as frontal collisions · Image enhancement · Image analysis · Active safety · Driver/operator
The invention discloses an active safety assistant driving method based on stereoscopic vision. The active safety assistant driving system comprehensively utilizes OME information technology and consists of a stereoscopic vision subsystem, a real-time image-processing subsystem and a safety assistant driving subsystem, comprising two high-resolution CCD (charge-coupled device) cameras, an ambient light sensor, a two-channel video capture card, a synchronous controller, a data transmission circuit, a power supply circuit, a real-time image-processing algorithm library, a voice reminding module, a screen display module and an active safety driving control module. The method can accurately identify, in real time, lane separation lines and parameters such as the relative distance, relative speed and relative acceleration of dangerous objects such as vehicles, bicycles and pedestrians ahead, in sunny or cloudy weather, at night, and under severe weather conditions such as rain, snow and dense fog. The system can therefore prompt the driver by voice to take countermeasures, and can decelerate automatically and brake in emergency situations, ensuring safe travel around the clock.
Owner:李慧盈 +2
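The escalation described above, from a voice prompt to automatic emergency braking, can be sketched as a time-to-collision check on the measured relative distance and speed. The thresholds below are hypothetical placeholders, not the patent's tuned values:

```python
def time_to_collision(rel_distance_m: float, closing_speed_mps: float) -> float:
    """Naive time-to-collision: relative distance divided by closing speed.
    Returns infinity when the gap is not closing."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return rel_distance_m / closing_speed_mps

def driver_action(ttc_s: float, warn_s: float = 4.0, brake_s: float = 1.5) -> str:
    """Map time-to-collision onto escalating responses: voice warning first,
    emergency braking when critically close. Thresholds are illustrative."""
    if ttc_s <= brake_s:
        return "emergency_brake"
    if ttc_s <= warn_s:
        return "voice_warning"
    return "none"

# A vehicle 30 m ahead, closing at 12 m/s, gives 2.5 s to impact:
print(driver_action(time_to_collision(30.0, 12.0)))  # voice_warning
```

A production system would also use the measured relative acceleration and braking-distance models rather than a constant-speed TTC.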

Accurate part positioning method based on binocular microscopy stereo vision

The invention discloses an accurate part positioning method based on binocular microscopy stereo vision, which belongs to the technical field of computer vision measurement and relates to the accurate positioning of precision parts. A binocular microscopy stereo vision system is adopted: two CCD (charge-coupled device) cameras acquire images of the measured part, a stereo microscope magnifies the image information of the area to be measured, a checkerboard calibration board is used to calibrate the two CCD cameras, and a Harris corner detection algorithm together with a sub-pixel extraction algorithm extracts feature points. The extracted feature points undergo initial matching and correction of the matched point pairs, and the feature-point image coordinates are input to the calibrated system to obtain the actual spatial coordinates of the feature points. The method solves the measurement difficulties caused by the small size of the area to be measured, high positioning requirements, and the need for non-contact measurement, and accomplishes accurate positioning of precision parts with a non-contact binocular microscopy stereo vision measurement.
Owner:DALIAN UNIV OF TECH
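The pipeline above (calibrate, match feature points, back-project image coordinates to spatial coordinates) reduces, in the idealized case of a calibrated and rectified stereo pair, to classic triangulation. This sketch assumes such a rectified setup and illustrative parameter values; it is not the patent's calibration model:

```python
def triangulate_rectified(xl_px: float, xr_px: float, y_px: float,
                          focal_px: float, baseline_m: float):
    """Back-project a matched feature point from a calibrated, rectified
    stereo pair to metric 3-D coordinates in the left camera frame.

    Image coordinates are measured from the principal point. The disparity
    d = xl - xr gives depth Z = f*B/d; X and Y then follow from the
    pinhole model."""
    d = xl_px - xr_px
    if d <= 0:
        raise ValueError("non-positive disparity: point not in front of both cameras")
    z = focal_px * baseline_m / d
    x = xl_px * z / focal_px
    y = y_px * z / focal_px
    return (x, y, z)

# A feature at (450, -100) px in the left image and (50, -100) px in the
# right, with a 2000 px focal length and a 10 mm baseline:
print(triangulate_rectified(450.0, 50.0, -100.0, 2000.0, 0.010))
```

At microscope scale, the sub-pixel corner extraction the abstract mentions matters a great deal: a fraction of a pixel of disparity error translates directly into depth error via the same Z = f·B/d relation.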

Autonomously identifying and capturing method of non-cooperative target of space robot

Inactive · CN101733746A · Real-time prediction of motion status · Predict interference in real time · Programme-controlled manipulator · Tools · Kinematics · Target capture
The invention relates to a method for autonomously identifying and capturing a non-cooperative target with a space robot, comprising the main steps of (1) pose measurement based on stereoscopic vision, (2) autonomous path planning for target capture by the space robot, and (3) coordinated control of the space robot system. The pose measurement based on stereoscopic vision is realized by processing images from a left camera and a right camera in real time (including smoothing filtering, edge detection, line extraction, and the like) and computing the pose of the non-cooperative target satellite relative to the base and the end-effector. The autonomous path planning for target capture is realized by planning the motion trajectories of the joints in real time according to the pose measurement results. The coordinated control of the space robot system is realized by coordinately controlling the mechanical arms and the base to achieve optimal control of the whole system. In this method, a part of the spacecraft itself is used directly as the object to identify and capture, without installing a marker or corner reflector on the target satellite or knowing the geometric dimensions of the object, and the planned path effectively avoids dynamic and kinematic singularities.
Owner:HARBIN INST OF TECH
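The image-processing steps named above (smoothing filtering, then edge detection) can be sketched minimally in pure Python. These are an illustrative box filter and central-difference gradient, not the patent's actual algorithms:

```python
import math

def smooth3x3(img):
    """3x3 box filter (the smoothing-filter step), interior pixels only."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
    return out

def gradient_magnitude(img):
    """Central-difference gradient magnitude (a minimal edge detector)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = img[i][j + 1] - img[i][j - 1]
            gy = img[i + 1][j] - img[i - 1][j]
            out[i][j] = math.hypot(gx, gy)
    return out

# A vertical step edge: the gradient peaks along the boundary columns.
img = [[0, 0, 0, 1, 1, 1] for _ in range(5)]
edges = gradient_magnitude(img)
print(edges[2])  # [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
```

The subsequent line extraction the abstract mentions (e.g. a Hough-style vote over edge pixels) and the pose computation itself are substantially more involved and are omitted here.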

Substation equipment operation and maintenance simulation training system and method based on augmented reality

The invention provides a substation equipment operation and maintenance simulation training system and method based on augmented reality. The system comprises a training management server, an AR subsystem and a training live-action visual monitoring subsystem. The AR subsystem comprises a binocular stereoscopic vision three-dimensional perception unit for identifying, tracking and registering target equipment, a display unit that overlays a displayed virtual scene on the actual scene, a gesture recognition unit for recognizing the user's actions, and a voice recognition unit for recognizing the user's voice. The training live-action visual monitoring subsystem comprises a multi-view stereoscopic vision measurement and tracking camera and an equipment state monitoring video camera, both arranged in the actual scene. The equipment state monitoring video camera recognizes the working condition of the substation equipment, and the multi-view stereoscopic vision measurement and tracking camera obtains the three-dimensional spatial coordinates of the substation equipment in the actual scene, measures the three-dimensional spatial coordinates of trainees on the training site, and tracks and measures the trainees' walking trajectories.
Owner:JINZHOU ELECTRIC POWER SUPPLY COMPANY OF STATE GRID LIAONING ELECTRIC POWER SUPPLY +3
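Tracking and measuring a trainee's walking trajectory from the measured 3-D coordinates amounts, at its simplest, to summing segment lengths over the time-ordered position samples. A minimal sketch with a hypothetical helper, not the system's code:

```python
import math

def trajectory_length(points) -> float:
    """Total path length of a walking trajectory, given a time-ordered
    sequence of measured 3-D positions (x, y, z) in metres."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Three position samples: 5 m across the floor, then 2 m further.
path = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (3.0, 4.0, 2.0)]
print(trajectory_length(path))  # 7.0
```

A real multi-view tracker would first filter the raw measurements (e.g. with a Kalman filter) before accumulating distances, since per-frame coordinate noise inflates raw path length.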