97 results for "Absolute orientation" patented technology

Absolute orientation. [′ab·sə‚lüt ȯr·ē·ən′tā·shən] (navigation) The adjusting of a stereoscopic model or group of models to proper scale, the orienting of the model datum parallel to sea level or another given vertical datum, and the positioning of the model with reference to the horizontal datum.
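
In photogrammetric practice, the scaling, leveling and positioning described by this definition are usually solved as a seven-parameter similarity transform (scale, rotation, translation) that maps model coordinates onto ground control coordinates. The Python sketch below shows one common SVD-based solution (the Horn/Umeyama closed form); the function name, NumPy usage and example points are illustrative assumptions, not drawn from any of the patents listed here.

```python
import numpy as np

def absolute_orientation(model_pts, ground_pts):
    """Estimate scale s, rotation R, translation t so that
    ground ~= s * R @ model + t (least-squares, Horn/Umeyama)."""
    m = np.asarray(model_pts, dtype=float)   # N x 3 model coordinates
    g = np.asarray(ground_pts, dtype=float)  # N x 3 ground control coordinates
    mc, gc = m.mean(axis=0), g.mean(axis=0)  # centroids
    dm, dg = m - mc, g - gc                  # centred coordinates
    H = dm.T @ dg                            # 3 x 3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T                                           # optimal rotation
    s = np.trace(np.diag(S) @ D) / (dm ** 2).sum()               # optimal scale
    t = gc - s * R @ mc                                          # translation
    return s, R, t

# Example: recover a known transform from four control points
if __name__ == "__main__":
    model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
    ground = 2.0 * model @ true_R.T + np.array([10.0, 20.0, 5.0])
    s, R, t = absolute_orientation(model, ground)
    print(s, np.round(R, 3), t)   # expect 2.0, true_R, [10, 20, 5]
```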

Computer interface employing a manipulated object with absolute pose detection component and a display

A system that has a remote control, e.g., a wand, equipped with a relative motion sensor that outputs data indicative of a change in position of the wand. The system also has one or more light sources and a photodetector that detects their light and outputs data indicative of the detected light. The system uses one or more controllers to determine the absolute position of the wand based on the data output by the relative motion sensor and by the photodetector. These data enable determination of the absolute pose of the wand, which includes the absolute position of a reference point chosen on the wand and the absolute orientation of the wand. To properly express the absolute parameters of position and/or orientation of the wand, a reference location is chosen with respect to which the calculations are performed. The system is coupled to a display that shows an image defined by first and second orthogonal axes, such as two axes belonging to world coordinates (Xo,Yo,Zo). The one or more controllers are configured to generate signals that are a function of the absolute position of the wand in or along a third axis for rendering the display. To simplify the mapping of the real three-dimensional environment in which the wand is operated to the cyberspace of the application that the system is running, the third axis is preferably the third Cartesian coordinate axis of world coordinates (Xo,Yo,Zo).
Owner:ELECTRONICS SCRIPTING PRODS
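
The abstract above combines a relative motion sensor with optically derived absolute measurements to recover the wand's pose in world coordinates (Xo,Yo,Zo). The sketch below is one hypothetical way such a fusion could look in Python: dead-reckon on the relative data and blend in an absolute fix whenever the photodetector provides one. The class, field and method names are assumptions made for illustration, not the patent's implementation.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class WandPoseTracker:
    """Tracks absolute position by dead-reckoning on relative motion data
    and correcting with occasional absolute optical fixes (hypothetical sketch)."""
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))  # world coords (Xo, Yo, Zo)
    blend: float = 0.2  # weight given to an absolute fix when it arrives (assumed)

    def update_relative(self, delta_xyz):
        """Relative motion sensor reports a change in position since the last update."""
        self.position = self.position + np.asarray(delta_xyz, float)

    def update_absolute(self, optical_xyz):
        """A photodetector-derived absolute position pulls the estimate back,
        bounding the drift accumulated from the relative measurements."""
        optical_xyz = np.asarray(optical_xyz, float)
        self.position = (1.0 - self.blend) * self.position + self.blend * optical_xyz

    def depth_signal(self):
        """Signal that is a function of absolute position along the third axis (Zo),
        e.g. for rendering on the display."""
        return self.position[2]

# Example usage
tracker = WandPoseTracker()
tracker.update_relative([0.01, 0.0, 0.02])   # small motion from the relative sensor
tracker.update_absolute([0.0, 0.0, 0.5])     # absolute fix derived from the light sources
print(tracker.depth_signal())
```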

Two-dimensional code and vision-inertial combined navigation system and method for robot

The invention provides a two-dimensional code and a vision-inertial combined navigation system and method for a robot. A sealed assistant frame is arranged at the periphery of the two-dimensional code, and both the frame and the two-dimensional code are used for vision navigation. The two-dimensional code is used in the vision-inertial combined navigation system for the robot. The vision-inertial combined navigation method comprises the following steps: laying a plurality of two-dimensional codes, each with a sealed assistant frame at its periphery, on the ground; capturing images with imaging equipment as the robot moves forward; acquiring the absolute coordinates of the two-dimensional codes and, from them, the absolute position and absolute direction angle of the imaging equipment; determining the relative position of the robot with respect to the current starting point and its relative direction angle with respect to the current starting direction angle; acquiring the absolute position of the robot and taking it as the next starting point; and acquiring the absolute direction angle of the robot and taking it as the next starting direction angle.
Owner:BEIJING JIZHIJIA TECH CO LTD
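
The navigation loop described above alternates dead reckoning from the current starting point with an absolute reset whenever a two-dimensional code of known ground coordinates is imaged. A minimal Python sketch of that loop, with an assumed planar (x, y, heading) pose and hypothetical function names, is given below.

```python
import math

def dead_reckon(start_x, start_y, start_heading, dist, dturn):
    """Advance the pose by a travelled distance and heading change
    measured relative to the current starting point."""
    heading = start_heading + dturn
    x = start_x + dist * math.cos(heading)
    y = start_y + dist * math.sin(heading)
    return x, y, heading

def absolute_fix(code_x, code_y, code_heading, cam_dx, cam_dy, cam_dheading):
    """Combine the known absolute coordinates of an observed two-dimensional code
    with the camera's pose relative to that code to get the robot's absolute pose.
    The result becomes the next starting point / starting direction angle."""
    heading = code_heading + cam_dheading
    x = code_x + cam_dx * math.cos(code_heading) - cam_dy * math.sin(code_heading)
    y = code_y + cam_dx * math.sin(code_heading) + cam_dy * math.cos(code_heading)
    return x, y, heading

# Example: drive on odometry, then reset on a code laid at (5.0, 2.0) facing 0 rad
pose = (0.0, 0.0, 0.0)
pose = dead_reckon(*pose, dist=1.0, dturn=0.1)        # relative motion between codes
pose = absolute_fix(5.0, 2.0, 0.0, 0.3, -0.1, 0.05)   # code observed by the camera
print(pose)
```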

Determining the direction of travel of an automotive vehicle from yaw rate and relative steering wheel angle

A method of using relative steering wheel angle of an automotive vehicle, vehicle yaw rate, and vehicle speed to determine whether the vehicle is traveling forward or backward. Forward and backward steering wheel angles are calculated from vehicle speed and yaw rate (22). A difference between relative steering wheel angle and forward steering wheel angle (10), and a difference between relative steering wheel angle and backward steering wheel angle (12) are calculated. The difference between relative steering wheel angle and forward steering wheel angle is filtered (14), and a difference between the filtered and the unfiltered difference between relative steering wheel angle and forward steering wheel angle is calculated to obtain a forward net difference (18). The difference between relative steering wheel angle and backward steering wheel angle is filtered (16), and a difference between the filtered and the unfiltered difference between relative steering wheel angle and backward steering wheel angle is calculated to obtain a backward net difference (20). While repeatedly performing the foregoing steps, forward net difference values derived from the forward net differences are accumulated (24), and backward net difference values derived from the backward net differences are accumulated (26). The travel direction is determined by comparing the accumulation of forward net difference values and the accumulation of backward net difference values (28). Absolute steering wheel angle and road bank angle can also be calculated.
Owner:FORD GLOBAL TECH LLC
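
One plausible way to code the decision logic summarized above is sketched below. The expected forward steering wheel angle is derived from speed and yaw rate via a simple bicycle model (an assumption; the patent does not name the model), the backward expectation reverses its sign, and a first-order low-pass filter plus accumulation of the net differences picks the travel direction. Wheelbase, steering ratio, filter coefficient and the final comparison rule are illustrative choices.

```python
import math

WHEELBASE = 2.7        # m, assumed
STEERING_RATIO = 16.0  # steering wheel angle / road wheel angle, assumed
ALPHA = 0.1            # first-order low-pass filter coefficient, assumed

def expected_steering_angles(speed, yaw_rate):
    """Forward/backward steering wheel angles implied by speed and yaw rate
    under a simple bicycle model (illustrative, not the patented formulation)."""
    road_wheel = math.atan2(WHEELBASE * yaw_rate, max(abs(speed), 0.1))
    forward = STEERING_RATIO * road_wheel
    return forward, -forward   # backward travel reverses the sign

def travel_direction(samples):
    """samples: iterable of (relative_steering_wheel_angle, speed, yaw_rate)."""
    fwd_filt = bwd_filt = 0.0
    fwd_acc = bwd_acc = 0.0
    for rel_angle, speed, yaw_rate in samples:
        fwd_expected, bwd_expected = expected_steering_angles(speed, yaw_rate)
        fwd_diff = rel_angle - fwd_expected         # raw forward difference
        bwd_diff = rel_angle - bwd_expected         # raw backward difference
        fwd_filt += ALPHA * (fwd_diff - fwd_filt)   # filtered forward difference
        bwd_filt += ALPHA * (bwd_diff - bwd_filt)   # filtered backward difference
        fwd_acc += abs(fwd_diff - fwd_filt)         # accumulate forward net differences
        bwd_acc += abs(bwd_diff - bwd_filt)         # accumulate backward net differences
    # The hypothesis whose net differences accumulate less is consistent with
    # the actual motion (one plausible decision rule).
    return "forward" if fwd_acc < bwd_acc else "backward"
```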

Photogrammetry and remote sensing comprehensive teaching method and system

The invention discloses a photogrammetry and remote sensing comprehensive teaching method and system. The method comprises the following steps: aerial images of a preset flight-planning area are obtained and preprocessed; control points are laid out in the flight-planning area according to the preprocessed aerial images and control-point distribution standards, and their geographical coordinates are measured; exterior orientation elements of all the images are obtained by aerial triangulation from the measured geographical coordinates of the control points and the aerial images; relative orientation and absolute orientation are performed on all the images in an all-digital photogrammetry workstation according to the obtained exterior orientation elements, and a three-dimensional model is built; topography and feature data are collected in the built three-dimensional model, and a digital surveying and mapping product is generated from the collected data; and three-dimensional modeling and spatial analysis are conducted with the generated digital surveying and mapping product according to the needs of the project application. Both teaching efficiency and teaching effect can be improved.
Owner:HENAN UNIV OF URBAN CONSTR +1
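
The workflow above depends on the exterior orientation elements recovered by aerial triangulation: each image's projection centre and rotation matrix. The short Python sketch below evaluates the standard collinearity equations that tie a ground point to its image coordinates through those elements; the numeric values are made-up illustrations, not data from the patent.

```python
import numpy as np

def collinearity_project(ground_pt, Xs, Ys, Zs, R, f, x0=0.0, y0=0.0):
    """Project a ground point into image coordinates using the exterior
    orientation elements (projection centre Xs, Ys, Zs and rotation matrix R)
    and the interior orientation (focal length f, principal point x0, y0)."""
    d = np.asarray(ground_pt, float) - np.array([Xs, Ys, Zs])
    u, v, w = R @ d        # ground offset expressed in the image-space axes
    x = x0 - f * u / w     # standard collinearity equations
    y = y0 - f * v / w
    return x, y

# Illustrative near-vertical photograph: camera 1000 m above the terrain point
R = np.eye(3)   # no rotation (nadir view), assumed
x, y = collinearity_project([500.0, 300.0, 100.0],
                            Xs=400.0, Ys=250.0, Zs=1100.0,
                            R=R, f=0.153)   # 153 mm focal length, assumed
print(x, y)
```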

Method for automatically extracting height of building based on stereoscopic satellite image

The invention discloses a method for automatically extracting the height of a building based on a stereoscopic satellite image, and relates to the field of satellite image interpretation. The method comprises the following steps: obtaining an original stereoscopic satellite image pair, SRTM data and DOM data of a target building; preprocessing the original stereoscopic satellite image pair and performing relative orientation and absolute orientation in sequence to generate an epipolar image used for extracting DSM data; extracting initial DSM data of the target building, and referring to the DLG road layer, the landform layer and the DOM data of the target building to obtain check points that meet the requirements; obtaining, by filtering, DEM data from which the elevation of the target building has been removed; integrating the data; performing stereoscopic checking and correction on the rooftop elevation value and the foundation elevation value of the target building; and, combining the building layer of the target building in the DOM and DLG data, subtracting the foundation elevation value from the rooftop elevation value to obtain the height of the building. The method can extract building heights more quickly and more efficiently.
Owner:MAPUNI TECH CO LTD
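
The final step above reduces to differencing the rooftop elevation (from the DSM) and the foundation elevation (from the bare-earth DEM) within the building footprint. A minimal NumPy sketch of that differencing is given below; the toy rasters and percentile choices are assumptions for illustration.

```python
import numpy as np

def building_height(dsm, dem, footprint_mask):
    """Estimate building height as rooftop elevation (DSM) minus foundation
    elevation (DEM) inside the building footprint (boolean mask)."""
    roof = np.percentile(dsm[footprint_mask], 90)        # robust rooftop elevation
    foundation = np.percentile(dem[footprint_mask], 10)  # robust ground elevation
    return roof - foundation

# Toy 5 x 5 rasters: a 20 m roof standing on 100 m ground
dsm = np.full((5, 5), 100.0); dsm[1:4, 1:4] = 120.0
dem = np.full((5, 5), 100.0)
mask = np.zeros((5, 5), bool); mask[1:4, 1:4] = True
print(building_height(dsm, dem, mask))   # ~20.0
```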

Reflecting interface orientation quantitative determination method based on phased receiving directivity and device thereof

The invention provides a reflecting interface orientation quantitative determination method based on phased receiving directivity and a device thereof, and relates to the fields of petroleum geophysical exploration and acoustic wave signal processing. The method comprises the following steps: the multichannel received signals acquired by each array element of an arc array receiver are read and the reflected wave signal is obtained; according to the reflected wave signal, phased synthetic reflected waveforms are generated by phased synthesis in multiple equally spaced orientations over the full 360-degree circumference, and a coarse orientation range of the reflecting interface relative to the arc array receiver is determined from the relative sizes of the peak-to-peak values of the target reflection-mode wave in each phased synthetic reflected waveform; and within the coarse orientation range, a phased synthetic waveform is obtained in each orientation at a preset first orientation step through phased superposition, an orientation-variation curve of reflected-wave amplitude is built from the peak-to-peak value of the target reflection-mode wave in each waveform, and the orientation of the reflecting interface relative to the arc array receiver, and hence its absolute orientation, is quantitatively determined from the orientation indicated by the maximum of the curve.
Owner:BC P INC CHINA NAT PETROLEUM CORP +1
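
The coarse-to-fine azimuth search described above relies on phased (delay-and-sum) synthesis: element signals are time-shifted for an assumed arrival azimuth, summed, and the steering azimuth whose synthesized waveform has the largest peak-to-peak amplitude indicates the reflector orientation. The Python sketch below illustrates that idea for a circular-arc array; the geometry, sound speed, sampling rate and the simple wrap-around shift are assumptions, not the patented processing.

```python
import numpy as np

C = 1500.0     # m/s, fluid sound speed (assumed)
FS = 1.0e5     # Hz, sampling rate (assumed)
RADIUS = 0.05  # m, arc-array radius (assumed)

def phased_synthesis(signals, element_azimuths, steer_azimuth):
    """Delay-and-sum the element signals for one steering azimuth.
    signals: (n_elements, n_samples) array of received waveforms."""
    out = np.zeros(signals.shape[1])
    for sig, az in zip(signals, element_azimuths):
        # Element nearest the steering azimuth receives the reflection earliest;
        # delay it so all elements align before summing (wrap-around ignored).
        delay = RADIUS * np.cos(az - steer_azimuth) / C
        shift = int(round(delay * FS))
        out += np.roll(sig, shift)
    return out

def reflector_azimuth(signals, element_azimuths, step_deg=5.0):
    """Scan 360 degrees in equal steps and return the azimuth whose
    synthesized waveform has the largest peak-to-peak amplitude."""
    steer = np.deg2rad(np.arange(0.0, 360.0, step_deg))
    p2p = [np.ptp(phased_synthesis(signals, element_azimuths, a)) for a in steer]
    return np.rad2deg(steer[int(np.argmax(p2p))])
```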