332 results about "Visually guided" patented technology

Robot trajectory tracking control method based on visual guidance

Active · CN111590594A · Solve poor trajectory accuracy · Optimize layout · Programme-controlled manipulator · Vision based · Vision sensor
The invention relates to a robot trajectory tracking control method based on visual guidance. The method comprises the following steps: establishing a robot visual servo control system; establishing a binocular vision unified measurement field; observing with a binocular vision device to obtain the pose transformation relationship between the end-effector coordinate system and the measurement coordinate system, and converting that relationship to the robot base coordinate system through the binocular vision measurement field; smoothing the observed pose of the end effector with a Kalman filter; calculating the pose error of the end effector; and designing a visual servo controller based on fuzzy PID, which processes the pose error to obtain the expected pose at the next moment and sends it to the robot system to control the motion of the end effector. The technical scheme is oriented to the flexible machining of large aerospace parts and the application requirements of high-precision robotic machining equipment: the pose of the end effector is sensed in real time by a vision sensor, forming a closed-loop feedback system that greatly improves the trajectory motion precision of a six-degree-of-freedom serial robot.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
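The pipeline above (Kalman-smoothed pose observations feeding a servo controller that outputs the next expected pose) can be sketched as follows. This is a minimal illustration only: a scalar Kalman filter per pose component and a plain PID loop, not the patent's actual binocular measurement field or fuzzy-PID design; all gains are assumed values.

```python
import numpy as np

def kalman_smooth(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter: smooth one noisy pose component over time.
    q = process noise variance, r = measurement noise variance (assumed)."""
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p += q                  # predict: uncertainty grows
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with the new observation
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

class PID:
    """Plain PID stand-in for the fuzzy-PID servo controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt=0.01):
        self.integral += err * dt
        d = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * d
```

In the closed loop, the smoothed observed pose is compared with the desired trajectory point, and the controller output is used to form the expected pose sent to the robot at the next control cycle.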

Two-dimensional code guidance control method of autonomous mobile robot

The invention relates to a two-dimensional code guidance control method for an autonomous mobile robot, and belongs to the technical field of mobile robot guidance control. The method comprises the steps of: marking guidance target positions on a constructed map with a plurality of two-dimensional codes, based on two-dimensional code vision combined with the mobile robot's real-time positioning and mapping; moving the autonomous mobile robot to a two-dimensional code area to complete rough positioning; and then identifying the two-dimensional code, calculating its three-dimensional coordinate point and the pose coordinates of that point, and adjusting the robot's forward speed and direction according to the identified posture of the two-dimensional code and its spatial position relative to the camera, so that the robot moves to the guidance target position and completes target guidance. The method has the advantages that a two-dimensional code label cooperates with the camera to perform visual guidance, so the posture characteristics are distinct, processing is fast, hardware cost is low, and guidance accuracy is high.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
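The final approach step — adjusting forward speed and direction from the code's pose relative to the camera — can be sketched as a simple proportional controller. The gains, the stop distance, and the yaw-blending factor below are illustrative assumptions, not values from the patent:

```python
import math

def guidance_command(tag_x, tag_z, tag_yaw,
                     k_lin=0.5, k_ang=1.5, stop_dist=0.05):
    """Given the 2-D code's lateral offset tag_x, forward distance tag_z
    (metres) and yaw relative to the camera, return (forward_speed, turn_rate).
    Proportional-control sketch: steer toward the code, slow on approach."""
    if tag_z < stop_dist:
        return 0.0, 0.0                         # reached the code: stop
    heading_err = math.atan2(tag_x, tag_z)      # bearing from camera to code
    v = k_lin * tag_z                           # slow down as distance shrinks
    w = -k_ang * (heading_err + 0.3 * tag_yaw)  # also align with the code's normal
    return v, w
```

A real controller would additionally clamp `v` and `w` to the robot's velocity limits and fuse the code pose with the robot's odometry.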

Camera pose calibration method based on spatial point location information

The invention discloses a camera pose calibration method based on spatial point location information. For a robot vision system with an independently installed camera, a sphere is placed at the end of the robot to serve as a calibration object. The robot is then operated to change the position and posture of the sphere, moving it to different point locations, while images and point clouds of the table-tennis ball at the robot's end are collected; the sphere centre is fitted to serve as a spatial point, and the corresponding robot pose is recorded at the same time. The transformation between the camera coordinate system and the robot base coordinate system is then calculated by solving the equation relating the specific point position changes; points collected in the camera coordinate system are converted into points in the robot base coordinate system, directly achieving vision-guided target grasping by the robot. Because a sphere is used as the calibration object, the operation is simple, flexible and portable, and the tedious calibration process is simplified; compared with conversion via a calibration plate or an intermediate calibration coordinate system, the method has higher precision and introduces neither an intermediate transformation relationship nor extra error factors.
Owner:EUCLID LABS NANJING CORP LTD
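Once the sphere centres are known in both frames (camera frame from the fitted point clouds, base frame from the recorded robot poses), the camera-to-base transformation can be recovered as a least-squares rigid transform between the two point sets. A standard Kabsch/SVD solution — one common way to solve this correspondence problem, not necessarily the patent's exact equation — looks like:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    via the Kabsch/SVD method. src, dst: (N, 3) corresponding sphere centres."""
    src_c = src - src.mean(axis=0)          # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])              # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

At least three non-collinear point locations are required; more (non-coplanar) points average out sphere-fitting noise.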

Visual guide method for object grabbing of robot hand

Active · CN109623821A · Efficient and accurate grabbing · Programme-controlled manipulator · Robot hand · Visually guided
The invention relates to the technical field of visual guide methods, and discloses a visual guide method for object grabbing by a robot hand. The method comprises the following steps: 1) performing position calibration of a vision camera positioned on a conveyor line, wherein the camera takes a picture of an object conveyed on the line, processes the picture, and measures the coordinate positions of the pictured object; 2) converting the picture coordinates into robot-hand coordinates to obtain the absolute position of the object conveyed by the line; and 3) correcting the absolute position of the object and stopping the robot hand at that position. In this method, the picture of the object on the conveyor line is taken by the vision camera and converted, through measurement and calculation, into robot-hand coordinates — the absolute position of the object — thereby visually guiding the motion of the robot hand so that it can grab the object efficiently and accurately.
Owner:RIZHAO YUEJIANG TECH CO LTD
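Step 2 — mapping picture coordinates to robot-hand coordinates — is commonly done by fitting a planar affine transform from a few calibration pairs (pixel position of a marker vs. the robot position touching it). A minimal least-squares sketch, with all calibration values assumed for illustration:

```python
import numpy as np

def fit_pixel_to_robot(pixels, robots):
    """Fit a 2-D affine map (pixel -> robot XY) from >= 3 calibration pairs.
    pixels: (N, 2) pixel coordinates; robots: (N, 2) robot coordinates."""
    A = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous pixels (N, 3)
    M, *_ = np.linalg.lstsq(A, robots, rcond=None)      # affine matrix (3, 2)
    return M

def pixel_to_robot(M, px):
    """Convert one pixel coordinate to a robot XY coordinate."""
    return np.array([px[0], px[1], 1.0]) @ M
```

An affine map absorbs camera scale, rotation and offset over a flat conveyor; lens distortion or a tilted camera would require a full camera calibration instead.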

Dispensing method, system and device based on visual guidance and storage medium

The invention discloses a dispensing method, system and device based on visual guidance, and a storage medium. The method comprises the following steps: calibrating a camera and determining the relation between the image coordinate system and the world coordinate system; calculating the distance between the camera and the dispensing needle; obtaining dispensing track information from a provided CAD standard glue-path drawing file and registering it as a standard template; collecting image information of the product workpiece with the camera and comparing it with the standard template to determine a deviation value; calculating the deviated dispensing track of the workpiece from the deviation value; and calculating the actual dispensing track of the workpiece from the deviated track information, the camera-to-needle distance and the mechanical coordinates at which the camera captured the image, thereby assisting the dispensing operation. The invention significantly reduces the time and workload of manually adjusting the dispensing track; dispensing precision is high, the track is stable, and production quality and efficiency are greatly improved.
Owner:GUANGDONG AOPUTE TECH CO LTD
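The deviation-and-correction step can be sketched in a few lines. This toy version measures only a translational offset between template features and the features observed on the workpiece, then shifts the CAD path by that offset; a production system would also estimate rotation and work in calibrated world coordinates:

```python
import numpy as np

def path_offset(template_pts, observed_pts):
    """Deviation of the workpiece vs. the registered template:
    mean 2-D shift between corresponding feature points (translation only)."""
    return observed_pts.mean(axis=0) - template_pts.mean(axis=0)

def corrected_path(cad_path, offset):
    """Apply the measured deviation to the CAD dispensing trajectory."""
    return cad_path + offset
```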

User health condition comprehensive evaluation method and system

The invention relates to a comprehensive user health evaluation method and system. The system comprises a user information input module, a knowledge graph reasoning module and a result analysis module. Through the graphical user information input module it collects information on the user's genetic diseases, behaviour characteristics, living environment and physiological state, and maps the input information to specific user tags. After the tags are fed into the knowledge graph reasoning module, the module reasons over them to obtain the user's high-risk diseases, and then continues reasoning to derive examination items suited to those diseases as well as suggestions and improvement methods for the user's unhealthy habits. The result analysis module calculates the user's current health score from the reasoning result, guiding the user more visually in judging his or her health condition; it can also compare the user's health before and after behaviour improvement, providing help and motivation for the user to correct bad habits.
Owner:ZHEJIANG HELIAN NETWORK TECH CO LTD
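The tags-to-risks-to-score pipeline can be illustrated with a tiny rule table standing in for the knowledge graph; the tags, diseases, weights and scoring formula below are all invented for illustration:

```python
# Toy rule table: user tag -> [(disease, risk weight)] (illustrative only).
RULES = {
    "smoker":          [("lung_disease", 0.30)],
    "family_diabetes": [("diabetes", 0.25)],
    "sedentary":       [("cardiovascular", 0.15)],
}

def infer_risks(tags):
    """Accumulate disease risk weights for all matched tags."""
    risks = {}
    for tag in tags:
        for disease, w in RULES.get(tag, []):
            risks[disease] = risks.get(disease, 0.0) + w
    return risks

def health_score(risks):
    """Illustrative score: 100 minus accumulated risk, floored at 0."""
    return max(0.0, 100.0 - 100.0 * sum(risks.values()))
```

Re-running the pipeline after a behaviour change (e.g. dropping the "smoker" tag) yields the before/after comparison the result analysis module provides.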