
2138 results about "Pose" patented technology

In computer vision and robotics, a typical task is to identify specific objects in an image and to determine each object's position and orientation relative to some coordinate system. This information can then be used, for example, to allow a robot to manipulate an object or to avoid moving into the object. The combination of position and orientation is referred to as the pose of an object, even though this concept is sometimes used only to describe the orientation. Exterior orientation and translation are also used as synonyms of pose.
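In practice, a pose is usually packaged as a 4×4 homogeneous transformation matrix that combines the orientation (a 3×3 rotation) with the position (a translation vector). A minimal sketch in Python with NumPy (function names are illustrative, not from any patent below):

```python
import numpy as np

def make_pose(rotation, translation):
    """Assemble a 4x4 homogeneous pose matrix from a 3x3 rotation
    (orientation) and a 3-vector translation (position)."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def apply_pose(T, point):
    """Transform a 3D point from the object frame to the reference frame."""
    p = np.append(point, 1.0)
    return (T @ p)[:3]

# A 90-degree rotation about z followed by a translation of (1, 0, 0)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T = make_pose(Rz, [1.0, 0.0, 0.0])
print(apply_pose(T, [1.0, 0.0, 0.0]))  # → [1. 1. 0.]
```

Composing two poses is then just a matrix product, which is why the homogeneous form is the standard representation in both robotics and computer vision.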

Method and apparatus for determining absolute position of a tip of an elongate object on a plane surface with invariant features

A method and apparatus for determining the pose of an elongate object and the absolute position of its tip while the tip is in contact with a plane surface having invariant features. The surface and features are illuminated with probe radiation, and a scattered portion, e.g., the back-scattered portion, of the probe radiation returning from the plane surface and the features to the elongate object at an angle τ with respect to an axis of the object is detected. The pose is derived from the response of the scattered portion to the surface and the features, and the absolute position of the tip on the surface is obtained from the pose and knowledge about the features. The probe radiation can be directed from the object to the surface at an angle σ to the axis of the object in the form of a scan beam. The scan beam can be made to follow a scan pattern with the aid of a scanning arrangement with one or more arms and one or more uniaxial or biaxial scanners. Angle τ can also be varied, e.g., with the aid of a separate scanning arrangement or the same one used to direct the probe radiation to the surface. The object can be a pointer, a robotic arm, a cane or a jotting implement such as a pen, and the features can be edges, micro-structure or macro-structure belonging to, deposited on or attached to the surface which the tip of the object is contacting.
Owner:ELECTRONICS SCRIPTING PRODS
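The abstract's final step, obtaining the tip's absolute position from the pose, amounts to applying the recovered pose to the tip's fixed offset in the object frame. A hedged sketch with invented numbers (the patent's actual optical geometry is more involved):

```python
import numpy as np

def tip_position(pose, tip_offset_obj):
    """Absolute (x, y) position of the tip on the plane surface, given the
    object's 4x4 pose in the surface frame and the tip's fixed offset in
    the object frame. The tip lies on the plane, so z of the result is ~0."""
    p = pose @ np.append(tip_offset_obj, 1.0)
    return p[:2]

# A 30-degree tilt about y; the object origin is held exactly one
# tip-length above the surface so that the tip touches the plane.
c, s = np.cos(np.radians(30)), np.sin(np.radians(30))
pose = np.array([[  c, 0.0,   s, 2.0],
                 [0.0, 1.0, 0.0, 3.0],
                 [ -s, 0.0,   c, 0.1 * c],
                 [0.0, 0.0, 0.0, 1.0]])
print(tip_position(pose, [0.0, 0.0, -0.1]))  # tip 10 cm down the object axis
```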

Autonomously identifying and capturing method of non-cooperative target of space robot

Inactive · CN101733746A · Real-time prediction of motion status; real-time prediction of interference · Programme-controlled manipulators; tools; kinematics; target capture
The invention relates to a method for a space robot to autonomously identify and capture a non-cooperative target, comprising the main steps of (1) pose measurement based on stereoscopic vision, (2) autonomous path planning for target capture by the space robot, and (3) coordinated control of the space robot system. The pose measurement based on stereoscopic vision is realized by processing the images of a left camera and a right camera in real time and computing the pose of the non-cooperative target satellite relative to the base and the end effector, wherein the processing comprises smoothing filtering, edge detection, line extraction, and the like. The autonomous path planning for target capture is realized by planning the motion trajectories of the joints in real time according to the pose measurement results. The coordinated control of the space robot system is realized by coordinately controlling the mechanical arms and the base to achieve optimal control performance of the whole system. In this method, a part of the target spacecraft itself is directly used as the identification and capture object, without installing a marker or corner reflector on the target satellite or knowing the geometric dimensions of the object, and the planned path can effectively avoid dynamic and kinematic singularities.
Owner:HARBIN INST OF TECH
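The stereo pose-measurement stage above ultimately rests on triangulation: for a rectified left/right camera pair, the depth of a matched feature follows from its horizontal disparity as Z = f·B/d. A toy illustration with invented parameters:

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of a matched feature from its horizontal disparity in a
    rectified stereo pair: Z = f * B / (x_left - x_right)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity

# A feature seen at x=640 in the left image and x=600 in the right image,
# with an 800 px focal length and a 0.12 m baseline:
print(stereo_depth(640, 600, focal_px=800, baseline_m=0.12))  # → 2.4 (metres)
```

Recovering a full 6-DOF pose additionally requires several such triangulated points and a rigid-body fit, but the disparity-to-depth step is the core of the measurement.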

Pose alignment system and method for aircraft parts based on four locators

The invention discloses an aircraft component pose adjusting system based on four locators, and a method thereof. The pose adjusting system comprises four three-coordinate locators, a spherical technical connector, the aircraft component to be adjusted, a laser tracker and a target reflecting sphere; each three-coordinate locator comprises a bottom plate and, arranged in sequence from bottom to top, an X-direction motion mechanism, a Y-direction motion mechanism, a Z-direction motion mechanism and a displacement sensor. The pose adjusting method comprises the following steps: firstly, a global coordinate system OXYZ is established, and the current pose and the target pose of the aircraft component are calculated; secondly, the path of the aircraft component from the current pose to the target pose is planned; thirdly, the trajectories of the motion mechanisms in all directions are generated according to the path; and fourthly, the four locators are moved in coordination and the pose adjustment is realized. The method has the following advantages: firstly, the aircraft component to be adjusted can be supported; secondly, its pose can be adjusted automatically; and thirdly, fine (inching) adjustment of its pose can be realized.
Owner:ZHEJIANG UNIV +1
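The second step, planning a path from the current pose to the target pose, can be sketched in its simplest form as linear interpolation of the six pose parameters into per-step setpoints for the locators' motion mechanisms. This is a crude stand-in for the patent's planner; all names and values are invented:

```python
import numpy as np

def plan_pose_path(current, target, steps):
    """Linearly interpolate a pose given as (x, y, z, roll, pitch, yaw)
    from the current to the target value, yielding per-step setpoints
    that the locators' X/Y/Z motion mechanisms can track."""
    current = np.asarray(current, float)
    target = np.asarray(target, float)
    return [current + (target - current) * k / steps for k in range(1, steps + 1)]

# Move 0.4 m in x, 0.2 m in z, and yaw by 2 degrees, in 4 steps
path = plan_pose_path([0, 0, 0, 0, 0, 0],
                      [0.4, 0.0, 0.2, 0, 0, np.radians(2)], steps=4)
print(path[-1])  # the final setpoint equals the target pose
```

Real pose interpolation treats the rotational part with care (e.g. quaternion slerp) rather than interpolating Euler angles; for the small adjustment angles involved here, the linear form illustrates the idea.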

Industrial robot visual recognition positioning grabbing method, computer device and computer readable storage medium

The invention provides an industrial robot visual recognition, positioning and grabbing method, a computer device and a computer readable storage medium. The method comprises the following steps: image contour extraction is performed on an acquired image; when object contour information exists in the contour extraction result, the target object is positioned and identified by using an edge-based template matching algorithm; when the target object pose information is the preset target object pose information, the target object pose information is corrected by using a camera calibration method; and coordinate system conversion is performed on the corrected pose information by using a hand-eye calibration method. The computer device comprises a controller which implements the above method when executing the computer program stored in the memory. A computer program is stored in the computer readable storage medium, and when the computer program is executed by the controller, the above method is implemented. The method provided by the invention achieves higher recognition and positioning stability and precision.
Owner:GREE ELECTRIC APPLIANCES INC
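The final step, converting the corrected pose information into the robot's coordinate system using the hand-eye calibration result, is a single matrix multiplication once that result is expressed as a 4×4 homogeneous transform. A sketch with toy values (the transform here is invented):

```python
import numpy as np

def camera_to_base(T_base_cam, p_cam):
    """Convert a point from the camera coordinate system to the robot
    base coordinate system using the hand-eye calibration result
    T_base_cam (a 4x4 homogeneous transform)."""
    return (T_base_cam @ np.append(p_cam, 1.0))[:3]

# Toy hand-eye result: camera frame 0.5 m above the base origin, axes aligned
T = np.eye(4)
T[2, 3] = 0.5
print(camera_to_base(T, [0.1, 0.2, 0.3]))  # → [0.1 0.2 0.8]
```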

Cascaded convolutional neural network-based quick detection method of irregular-object grasping pose of robot

Inactive · CN108510062A · Improved real-time performance; guaranteed dynamic scalability · Image enhancement; image analysis; color images; neural networks
The invention relates to a cascaded convolutional neural network-based method for quickly detecting the grasping pose of irregular objects by a robot. Firstly, a cascaded two-stage convolutional neural network model of coarse-to-fine position-attitude form is constructed: in the first stage, a region-based fully convolutional network (R-FCN) is adopted to realize grasp positioning and rough estimation of the grasping angle, and in the second stage, accurate calculation of the grasping angle is realized by constructing a new Angle-Net model. Then, current scene images containing the objects to be grasped are collected as original on-site image samples for training, and the two-stage model is trained by means of a transfer learning mechanism. In online operation, each collected monocular color image is input to the cascaded two-stage model, and finally the end effector of the robot is driven by the obtained grasping position and attitude to perform object grasping control. The method achieves high grasp detection accuracy, effectively increases the detection speed of the grasping pose of irregular objects, and improves the real-time performance of the grasping pose detection algorithm.
Owner:SOUTHEAST UNIV
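The coarse-to-fine angle scheme can be illustrated without the networks themselves: stage one effectively picks a coarse angle bin, and stage two regresses a fine offset within it. The decoding below is a hypothetical reconstruction for illustration, not the published Angle-Net formulation:

```python
def decode_grasp_angle(coarse_bin, fine_offset_deg, n_bins=18):
    """Combine the outputs of the two cascaded stages into one grasp angle:
    the first stage selects a coarse bin over [0, 180) degrees, and the
    second stage regresses a fine offset inside that bin.
    (Hypothetical decoding; bin count and layout are assumptions.)"""
    bin_width = 180.0 / n_bins
    return coarse_bin * bin_width + fine_offset_deg

# Coarse stage says bin 7 (70-80 degrees), fine stage adds 3.5 degrees
print(decode_grasp_angle(coarse_bin=7, fine_offset_deg=3.5))  # → 73.5
```

Splitting the regression this way keeps each stage's output range small, which is the usual motivation for cascaded coarse-to-fine angle estimation.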

Online offset correction method and device for robot hand-eye calibration

The invention relates to an online offset correction method and device for robot hand-eye calibration. The method comprises the steps that the coordinate values of the centers of nine circles on a calibration plate are obtained in both the camera coordinate system and the base coordinate system; a transformation equation of each circle center from the camera coordinate system to the robot base coordinate system is established, and, using the offset coordinates of each circle, the least square method is adopted to calculate the homogeneous transformation matrix of the camera coordinate system relative to the robot base coordinate system; and, according to the calibrated pose values of the camera coordinate system relative to the base coordinate system, the errors of the offset-corrected calibration results of the nine circles are analyzed through the vector two-norm formula, thereby evaluating the precision of the method. The method corrects offsets in the robot hand-eye calibration process, enables flexible, precise and rapid adjustment on a production line, achieves highly repeatable and precise grabbing operations, and can be applied to the operation of a SCARA robot hand-eye device, with simple, efficient and high-precision results.
Owner:WUHAN COBOT TECHNOLOGY CO LTD
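The least-squares step above, fitting a rigid transform from nine corresponding circle centres, is classically solved in closed form via the SVD (the Kabsch/Umeyama solution). A self-contained sketch on a synthetic 3×3 grid of centres (the patent's exact formulation may differ):

```python
import numpy as np

def rigid_transform_lsq(P_cam, P_base):
    """Least-squares rigid transform (R, t) mapping camera-frame points to
    base-frame points via the SVD (Kabsch) solution. P_* are N x 3 arrays."""
    P_cam, P_base = np.asarray(P_cam, float), np.asarray(P_base, float)
    c_cam, c_base = P_cam.mean(0), P_base.mean(0)
    H = (P_cam - c_cam).T @ (P_base - c_base)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_base - R @ c_cam
    return R, t

# Nine circle centres on a toy calibration plate, seen in the camera frame,
# then rotated 90 degrees about z and shifted to simulate the base frame.
grid = np.array([[i, j, 0.0] for i in range(3) for j in range(3)], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
base_pts = grid @ Rz.T + np.array([1.0, 2.0, 0.5])
R, t = rigid_transform_lsq(grid, base_pts)
err = np.linalg.norm(grid @ R.T + t - base_pts)  # two-norm of the residual
print(err)  # ≈ 0 on noise-free data
```

The residual two-norm in the last line mirrors the abstract's error analysis: on real, noisy measurements it quantifies the precision of the calibration.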

Visual 3D taking and placing method and system based on cooperative robot

The invention provides a visual 3D taking and placing method and system based on a cooperative robot. The method comprises the following steps: the internal and external parameters of the cameras of a binocular structured light three-dimensional scanner are calibrated; the hand-eye relationship of the cooperative robot is calibrated, and a calibration result matrix is obtained; a three-dimensional digital model of the target objects to be taken and placed is collected; the calibrated binocular structured light three-dimensional scanner is used to obtain point cloud data of the target objects stacked in a scattered manner, and the point cloud is segmented to obtain scene point clouds of the multiple target objects; the target object with the highest grabbing success rate is selected as the grabbing target according to the scene point clouds; the three-dimensional digital model of the grabbing target and the scene point pair features are registered, the pre-defined taking and placing pose points are registered into the scene, and the registered pose estimation result is obtained and serves as the grabbing pose of the grabbing target; and a preliminary grabbing path trajectory of the cooperative robot is planned. The target object can be accurately recognized, and the grabbing positioning precision is high.
Owner:XINTUO 3D TECHNOLOGY (SHENZHEN) CO LTD
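The criterion for the object "with the highest grabbing success rate" is not detailed here; a common heuristic for scattered piles is to pick the segmented cluster whose topmost point is highest, since the top object is usually the least occluded. A hedged stand-in for that selection step:

```python
import numpy as np

def pick_grasp_target(clusters):
    """From a list of per-object scene point clouds (N_i x 3 arrays),
    return the index of the cluster whose topmost point is highest.
    (Heuristic stand-in for the patent's success-rate criterion.)"""
    return max(range(len(clusters)), key=lambda i: clusters[i][:, 2].max())

# Two toy object clouds: object b sits on top of object a
a = np.array([[0.00, 0.0, 0.10], [0.01, 0.0, 0.12]])
b = np.array([[0.20, 0.0, 0.30], [0.21, 0.0, 0.28]])
print(pick_grasp_target([a, b]))  # → 1 (the higher object)
```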

Synchronous positioning and mapping algorithm based on point cloud segmentation matching and closed-loop correction

The invention discloses a synchronous positioning and mapping (SLAM) algorithm based on point cloud segmentation matching and closed-loop correction, belonging to the technical field of robot autonomous navigation and computer graphics. In the algorithm, inter-frame matching is carried out on feature points extracted from the three-dimensional point cloud to obtain the relative pose transformation of the robot; meanwhile, the obtained poses are stored at the back end in graph form, and the point cloud is then registered based on the poses to form a map. The point cloud is fragmented and stored by using a point cloud segmentation and description algorithm, and the point cloud fragments are matched by using a random forest algorithm to form closed-loop constraints; finally, the historical poses and the map are corrected through a graph optimization algorithm to realize synchronous positioning and mapping. While ensuring local positioning precision, the method stores and corrects the historical poses and map, effectively reducing accumulated errors in outdoor long-distance environments, and thus achieves synchronous positioning and mapping with good global consistency.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
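The closed-loop correction of historical poses can be shown in miniature: once loop closure reveals the accumulated drift, a crude alternative to full graph optimization is to distribute that drift linearly back over the trajectory. The patent uses graph optimization proper; this sketch only illustrates the idea of correcting history after a closed loop:

```python
import numpy as np

def correct_drift(poses, loop_error):
    """Distribute the drift detected at loop closure linearly over the
    trajectory: each pose is pulled back in proportion to how far along
    the path it lies (0 at the start, the full error at the end)."""
    poses = np.asarray(poses, float)
    n = len(poses)
    weights = np.arange(n) / (n - 1)
    return poses - np.outer(weights, np.asarray(loop_error, float))

# The robot returns to the origin, but odometry claims (0.4, 0.2):
traj = np.array([[0.0, 0.0], [1.0, 0.05], [1.0, 1.1], [0.4, 0.2]])
corrected = correct_drift(traj, loop_error=[0.4, 0.2])
print(corrected[-1])  # → [0. 0.]  the loop is now closed
```

Graph optimization generalizes this by weighting each pose's correction by the uncertainty of the constraints attached to it, rather than by path position alone.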

Real-time human body action recognizing method and device based on depth image sequence

Active · CN103246884A · Eliminates the normalization step; avoids action recognition failures · Character and pattern recognition; human body; action training
The invention relates to the technical field of pattern recognition, in particular to a real-time human body action recognition method and device based on depth image sequences. The method comprises the following steps: S1, extracting target action sketches from a target depth image sequence and training action sketches from a training depth image set; S2, performing pose clustering on the training action sketches and carrying out action calibration on the clustering result; S3, computing the pose features of the target and training action sketches; S4, performing pose training based on a Gaussian mixture model by combining the pose features of the training action sketches, and constructing a pose model; S5, computing the transition probabilities among the poses of the clustering result in each action, and constructing an action graph model; and S6, performing action recognition on the target depth image sequence according to the pose features of the target action sketches, the pose model and the action graph model. The method improves the efficiency of action recognition as well as its accuracy and robustness.
Owner:TSINGHUA UNIV
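Step S5, computing transition probabilities among clustered poses to build the action graph model, reduces to counting consecutive pose-label pairs and row-normalizing the counts. A minimal sketch (labels and sequence invented):

```python
import numpy as np

def transition_matrix(pose_sequence, n_poses):
    """Estimate inter-pose transition probabilities for an action graph
    model from a sequence of clustered pose labels: count consecutive
    label pairs, then normalize each row to sum to 1."""
    M = np.zeros((n_poses, n_poses))
    for a, b in zip(pose_sequence[:-1], pose_sequence[1:]):
        M[a, b] += 1
    row_sums = M.sum(axis=1, keepdims=True)
    return np.divide(M, row_sums, out=np.zeros_like(M), where=row_sums > 0)

# A toy label sequence over three clustered poses
seq = [0, 0, 1, 2, 2, 1, 0]
P = transition_matrix(seq, n_poses=3)
print(P[0])  # from pose 0: half the time stay, half move to pose 1
```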