2308 results about "Visual positioning" patented technology

Indoor mobile robot positioning system and method based on two-dimensional code

The present invention relates to an indoor mobile robot positioning system and method based on two-dimensional codes. The system includes a two-dimensional code positioning controller mounted on a mobile robot, a two-dimensional code acquisition device, and two-dimensional code labels distributed throughout the indoor environment. The positioning controller consists of a microprocessor and a communication interface connected together; through the interface, the microprocessor controls the acquisition device to capture two-dimensional code images, receives the captured images, and performs precise positioning. The method obtains the actual position of the mobile robot by photographing the indoor two-dimensional code labels, transforming coordinates, and mapping code values. By organically combining visual positioning, two-dimensional code positioning, and two-degree-of-freedom measurement, the method achieves precise positioning of the mobile robot and overcomes the complex image processing and inaccurate positioning of traditional vision positioning systems.
Owner:爱泊科技(海南)有限公司
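The abstract's "transforming coordinates and mapping code values" step can be sketched as follows. This is an illustrative guess, not the patent's actual implementation: the tag table, the pixels-per-metre scale, and all names (`TAG_WORLD_POS`, `robot_world_pose`) are hypothetical.

```python
import math

# Hypothetical sketch: each QR label's payload indexes its known world
# position, and the tag's pixel offset in the downward camera image is
# converted to a metric offset and rotated into the world frame.
TAG_WORLD_POS = {"tag_042": (3.0, 5.0)}   # assumed map: code value -> world (x, y) in metres
PIXELS_PER_METRE = 400.0                  # assumed scale from camera calibration

def robot_world_pose(code_value, tag_px_offset, robot_yaw_rad):
    """Robot position = tag's world position minus the tag's offset as
    seen by the camera, rotated by the robot's yaw."""
    tx, ty = TAG_WORLD_POS[code_value]
    dx = tag_px_offset[0] / PIXELS_PER_METRE
    dy = tag_px_offset[1] / PIXELS_PER_METRE
    # rotate the camera-frame offset into the world frame
    wx = dx * math.cos(robot_yaw_rad) - dy * math.sin(robot_yaw_rad)
    wy = dx * math.sin(robot_yaw_rad) + dy * math.cos(robot_yaw_rad)
    return (tx - wx, ty - wy)

print(robot_world_pose("tag_042", (200.0, 0.0), 0.0))  # -> (2.5, 5.0)
```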

Unmanned aerial vehicle-based express delivery delivering system and method

The invention discloses an unmanned aerial vehicle (UAV)-based express delivery system and method, comprising a ground control center, a UAV group, and a network of intelligent express cabinets. During delivery, an express company sends a delivery request to the ground control center, which assigns a specific intelligent express cabinet to the order and generates a task flight route for a UAV. After reaching the target position, the UAV applies to the express cabinet for communication verification; once verified, and if the cabinet is unoccupied, the UAV switches to its visual positioning module. After successful positioning, the cabinet's electronic door opens automatically and the UAV hovers over or lands on a landing stage to complete the delivery. The cabinet transfers the parcel to a pre-assigned compartment, generates a pick-up code, and sends it to the user, who picks up the parcel with that code. The system and method greatly improve delivery efficiency, lower manpower and time costs, and better guarantee the security and privacy of express deliveries.
Owner:CENT SOUTH UNIV

Accurate visual positioning and orienting method for rotor wing unmanned aerial vehicle

Inactive · CN104298248A · Precision hover · Flexible and convenient hovering · Position/course control in three dimensions · Visual field loss · Visual recognition
The invention discloses an accurate visual positioning and orienting method for a rotor-wing unmanned aerial vehicle based on an artificial marker. The method includes the following steps: a marker with a special pattern is installed on the surface of an artificial facility or natural object; the camera is calibrated; the proportional mapping among the marker's actual size, the relative distance between marker and camera, and the marker's size in the camera image is established, and the hold distance between the UAV and the marker is set; the UAV is guided to the hover position and adjusted so that the marker pattern enters the camera's field of view, and visual recognition is started; a vision processing computer compares the geometrical features of the currently captured pattern with a standard pattern to obtain their difference, and transmits the difference to the flight control computer, which generates a control law that adjusts the UAV to eliminate deviations in position, height, and course, achieving accurate positioning and orienting hover. The method offers high autonomy, good stability, and high reliability, and benefits the safe operation of the UAV near artificial facilities and natural objects.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
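The "proportional mapping" above is the standard pinhole relation between a marker's real size, its imaged size, and its range. A minimal sketch, assuming a calibrated focal length in pixels (function names are illustrative, not from the patent):

```python
def marker_distance(real_size_m, pixel_size_px, focal_length_px):
    """Pinhole proportion: a marker of known real size imaged at a given
    pixel width yields the camera-to-marker range."""
    return focal_length_px * real_size_m / pixel_size_px

# e.g. a 0.5 m marker imaged 100 px wide with an 800 px focal length
print(marker_distance(0.5, 100.0, 800.0))  # -> 4.0 (metres)

def range_error(current_px, standard_px, hold_distance_m):
    """Sign of the range correction: the marker appearing larger than the
    standard pattern means the UAV is closer than the hold distance."""
    measured = hold_distance_m * standard_px / current_px
    return measured - hold_distance_m
```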

Global positioning system (GPS) and machine vision-based integrated navigation and positioning system and method

The invention discloses a global positioning system (GPS) and machine vision-based integrated navigation and positioning system and method. The system comprises: a GPS positioning device, which performs GPS positioning for the navigated vehicle, acquires its position coordinates, course angle, and driving speed, and transmits the acquired information to a fusion positioning device; a machine vision positioning device, which collects images of farmland along the navigation path, processes the collected images, extracts the navigation path to obtain the position coordinates of known points on it, and transmits those coordinates to the fusion positioning device; and the fusion positioning device, which applies spatial and temporal alignment to the information from the two devices and filters it to obtain the final positioning information. The method and system offer high positioning accuracy, simple operation, and good applicability to real-time field operation.
Owner:CHINA AGRI UNIV
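The fusion step can be illustrated with the scalar Kalman update that combines two position estimates by their variances; the patent does not specify its filter, so this is a generic sketch with hypothetical names and numbers:

```python
def fuse(gps_pos, gps_var, vis_pos, vis_var):
    """Variance-weighted fusion of two position estimates along one axis
    (the scalar Kalman measurement update): the lower-variance sensor
    dominates the fused result."""
    k = gps_var / (gps_var + vis_var)          # Kalman gain
    fused = gps_pos + k * (vis_pos - gps_pos)  # pull toward the better sensor
    fused_var = (1 - k) * gps_var              # fused uncertainty shrinks
    return fused, fused_var

pos, var = fuse(10.0, 4.0, 12.0, 1.0)  # vision (var 1.0) trusted over GPS (var 4.0)
print(pos, var)  # -> 11.6 0.8 (approximately)
```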

Binocular vision positioning method and binocular vision positioning device for robots, and storage medium

The invention discloses a binocular vision positioning method and device for robots, and a storage medium. The method includes: acquiring current binocular images and the robot's current pose; if the current binocular images are key images, acquiring historical key images from a key library; splicing the current binocular images and the historical key images according to the current pose and the historical key poses to obtain a vision point cloud map; acquiring a pre-built laser point cloud map; and optimizing the current pose according to the vision point cloud map and the laser point cloud map. The historical key images acquired from the key library share an overlapping field of view with the current binocular images and are associated with the historical key poses. Because pose estimates at key-frame moments are optimized against the pre-built laser point cloud map, accumulated pose estimation errors are continuously corrected during long-term robot operation; importing the accurate laser point cloud map into the optimization gives the method and device high positioning accuracy.
Owner:HANGZHOU JIAZHI TECH CO LTD
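The "splicing according to pose" step amounts to transforming each frame's points into a common world frame and taking their union. A minimal 2D sketch under that assumption (the patent works in 3D; names here are illustrative):

```python
import math

def to_world(points_cam, pose):
    """Transform camera-frame 2D points into the world frame using the
    robot pose (x, y, yaw). Applying this to every key frame and merging
    the results is the point-cloud 'splicing' step."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    # rotate each point by yaw, then translate by the robot position
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in points_cam]

# a point 1 m ahead of a robot at (2, 3) facing +y lands at (2, 4)
cloud = to_world([(1.0, 0.0)], (2.0, 3.0, math.pi / 2))
print(cloud)
```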

Robot distributed-representation intelligent semantic map establishment method

The invention discloses a method for establishing a distributed-representation intelligent semantic map for robots. The method comprises the steps of: first, traversing an indoor environment with a robot and positioning both the robot and artificial landmarks bearing QR codes, using a visual positioning method based on an extended Kalman filter together with a radio frequency identification (RFID) system based on a boundary virtual label algorithm, to construct a measurement layer; then optimizing the coordinates of sampling points by least squares, classifying the positioning results with adaptive spectral clustering, and constructing a topological layer; and finally updating the semantic properties of the map according to the QR code semantic information read by a camera, constructing a semantic layer. When detecting the state of objects in the indoor environment, the use of artificial landmarks with QR codes greatly improves the efficiency of semantic map building and reduces its difficulty; meanwhile, combining QR codes with RFID improves the precision of robot positioning and the reliability of map building.
Owner:BEIJING UNIV OF CHEM TECH

Pole tower model matching and visual navigation-based power unmanned aerial vehicle and inspection method

The invention discloses a power-line inspection unmanned aerial vehicle based on pole tower model matching and visual navigation, and an inspection method. In the UAV, a binocular vision sensor acquires a depth image of the area ahead and measures the distance between the UAV and objects in front of it; a gimbal-mounted camera acquires images of the surroundings to identify objects; and the UAV's flight controller controls its flight attitude. The method comprises the steps of: building pole tower models for different types of power transmission line towers; automatically identifying towers and their types during flight, and matching and loading the corresponding pre-built tower model; performing visual positioning on the towers to obtain the relative position of the UAV and each tower; and conducting flight inspection along an optimal flight path. The UAV greatly reduces modeling workload and improves model universality; the inspection method does not depend on absolute-coordinate flight, greatly improving flexibility, reducing cost, and improving the safety of power facilities.
Owner:NARI TECH CO LTD

Implementation method for workpiece grasping of industrial robot based on visual positioning

The invention relates to an implementation method for workpiece grasping by an industrial robot based on visual positioning. A workpiece image is acquired by a fixed global CCD camera and transmitted to the robot control system through an Ethernet interface; the control system processes the image and acquires workpiece position vector information; the robot then performs Cartesian and joint coordinate transformations according to that information to position the end gripper and grasp the workpiece. In the workpiece position calculation, contour region screening is performed on all detected contours: isolated and short continuous edges are deleted and non-target contours removed, improving target contour identification accuracy. When the workpiece position is calculated, the long side of the workpiece is identified and the robot is controlled to grasp across it, avoiding grasp failures caused by clamping the short side and improving grasping efficiency.
Owner:SHENYANG GOLDING NC & INTELLIGENCE TECH CO LTD
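The long-side judgment above can be sketched from the corners of the workpiece's bounding rectangle. This is a generic geometric illustration, not the patent's algorithm; the corner ordering and function name are assumptions:

```python
import math

def long_side_grasp_angle(corners):
    """Given four corners of a workpiece's bounding rectangle, in order,
    return the angle of its LONG side, so the gripper is oriented to
    close across the long side rather than pinch the short one."""
    e1 = (corners[1][0] - corners[0][0], corners[1][1] - corners[0][1])
    e2 = (corners[2][0] - corners[1][0], corners[2][1] - corners[1][1])
    # pick the longer of the two adjacent edges
    long_edge = e1 if math.hypot(*e1) >= math.hypot(*e2) else e2
    return math.atan2(long_edge[1], long_edge[0])

# axis-aligned 40 x 10 rectangle: the long side lies along x
rect = [(0, 0), (40, 0), (40, 10), (0, 10)]
print(long_side_grasp_angle(rect))  # -> 0.0
```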

Task collaborative visual navigation method of two unmanned aerial vehicles

Active · CN102628690A · Control the amount of information transmitted · Improve match · Navigation instruments · Information transmission · Uncrewed vehicle
The invention provides a task-collaborative visual navigation method for two unmanned aerial vehicles. The method comprises the following steps: determining an interactive communication mode between a first UAV, used for visual positioning, and a second UAV, used for environment identification and route planning; fusing the visual positioning information generated by the first UAV with the route information generated by the second UAV to generate each UAV's flight control instructions at every moment; and transferring the flight control instructions to the corresponding UAVs via the interactive communication mode for safe, visually navigated flight. The method effectively controls the volume of real-time video and image transmission in cooperative visual navigation, offers good matching capability and reliability, and is an effective technique for UAV clusters to cooperatively navigate around risks and obstacles.
Owner:TSINGHUA UNIV

Urban area and indoor high-precision visual positioning system and method

The invention provides an urban-area and indoor high-precision visual positioning system and method in which image information of the surrounding environment is acquired by a visual sensor. The method comprises the steps: after the visual sensor captures scene image information, distinguishing feature information of the image is computed and extracted; according to this feature information, similarity recognition and matching are carried out against the feature information base of a digital three-dimensional model; from the matched coordinate information recorded during image feature matching, the geometric mapping from the three-dimensional scene to the two-dimensional image is recovered, and a camera intersection model from two-dimensional image coordinates to three-dimensional space coordinates is established to determine the three-dimensional position and attitude of the visual sensor and the dynamic user; multi-source image information captured by the visual sensor is received; the digital three-dimensional model of the real scene is reconstructed or updated; and the feature information base is updated. The invention provides robust, sustainable positioning even as the surrounding environment changes.
Owner:WUHAN UNIV
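The forward half of the "camera intersection model" is the pinhole projection of a 3D scene point to image coordinates; the patent inverts this over many matched features to recover pose. A minimal sketch with assumed intrinsics:

```python
def project(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3D point (camera frame, Z forward) to
    pixel coordinates (u, v). Pose recovery inverts this mapping over
    many 2D-3D feature matches."""
    X, Y, Z = point_cam
    return (fx * X / Z + cx, fy * Y / Z + cy)

# assumed intrinsics: fx = fy = 800 px, principal point (320, 240)
print(project((1.0, 0.5, 2.0), 800.0, 800.0, 320.0, 240.0))  # -> (720.0, 440.0)
```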

Binocular vision positioning and three-dimensional mapping method for indoor mobile robot

Inactive · CN103926927A · Strong ability to express the environment · Solves high complexity and poor real-time performance · Position/course control in two dimensions · 3D modelling · Odometer · Visual positioning
The invention discloses a binocular vision positioning and three-dimensional mapping method for an indoor mobile robot. The corresponding system is mainly composed of a two-wheeled mobile robot platform, an odometer module, analog cameras, an FPGA core module, an image acquisition module, a wireless communication module, a storage module, a USB module, and a remote data-processing host computer. The FPGA core module controls the image acquisition module to collect left and right images and sends them to the host computer, which obtains the distance between imaged objects and the robot using a fast belief propagation algorithm, builds a three-dimensional environment map, recognizes specific markers using an improved SIFT algorithm, and then determines the robot's position by fusing this information in a particle filter. The system is compact in structure and can connect to an intelligent space system, so the indoor mobile robot can sense the environment in real time and provide more accurate, position-aware services.
Owner:CHONGQING UNIV
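Once the belief propagation step yields a disparity per pixel, range follows from the classic stereo equation. A one-line sketch (baseline and focal length here are assumed example values):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Stereo range equation: depth = f * B / d, where f is the focal
    length in pixels, B the camera baseline in metres, and d the
    left-right disparity in pixels."""
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 12 cm baseline, 21 px disparity
print(stereo_depth(700.0, 0.12, 21.0))  # ≈ 4.0 m
```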

Combination manufacturing method and device for injection mold with conformal cooling water path

The invention discloses a combined manufacturing method and device for an injection mold with a conformal cooling water path. The device comprises a beam focusing system, a near-wavelength coaxial vision positioning system, a powder spreading system, and a gas protection system. The gas protection system comprises a sealed forming chamber, a shielding gas device connected to one side of the chamber, and a powder purification device connected to the other side. The method is a combined process that unites selective laser melting with precision cutting, retaining the flexibility of selective laser melting while exploiting the precision of high-speed cutting. During selective laser melting, laser surface remelting is applied to each layer, improving the density and surface quality of the mold. A variable-density rapid manufacturing technique further improves manufacturing efficiency. Precision mold components with internal conformal cooling water paths and complex inner cavity structures can be processed integrally in one setup.
Owner:SOUTH CHINA UNIV OF TECH

Transformer substation inspection robot path planning navigation method

The invention relates to a path planning and navigation method for a transformer substation inspection robot. The method comprises the steps: the robot walks one circuit around the substation and generates a two-dimensional grid map of it; substation equipment image information is scanned, and feature images are selected as the basis for road and equipment identification; an optimal inspection path is planned; surrounding environment information is scanned to generate a two-dimensional grid map of the surroundings, and the robot's position is identified by comparing the surrounding-environment map with the substation map, achieving coarse positioning; surrounding equipment image information is then acquired and compared with the equipment feature images to identify the equipment position, correcting errors from the map matching step and achieving higher-precision positioning; finally, whether map-matching positioning and visual positioning fall in the same area is checked, confirming the position if they do and repeating both steps if they do not. The reliability and accuracy of positioning and navigation are thereby greatly improved.
Owner:WUHAN UNIV
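The final cross-check above can be sketched as a simple distance test between the two independent fixes; the 0.5 m tolerance is an assumed example, not a value from the patent:

```python
import math

def positions_consistent(map_pos, vis_pos, tol_m=0.5):
    """The abstract's sanity check: accept the fix only when map-matching
    positioning and visual positioning agree within a tolerance;
    otherwise both steps are re-run."""
    return math.dist(map_pos, vis_pos) <= tol_m

print(positions_consistent((10.0, 4.0), (10.3, 4.3)))  # -> True  (~0.42 m apart)
print(positions_consistent((10.0, 4.0), (11.0, 4.0)))  # -> False (1.0 m apart)
```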

Autonomous sound source searching and locating method

The invention relates to an autonomous sound source searching and positioning method based on a mobile robot, comprising the following steps. First, a microphone array performs initial positioning of the target sound source: an array of four microphones is mounted on the robot's simulated head, with the microphones placed at the four vertices of the largest inscribed square of the head's outer circle, so that they are equally spaced and form the robot's left and right ears, each pair collecting a two-channel auditory signal from the target source; the initial position of the target sound source is then obtained by time-delay-based mathematical processing of these signals. Second, auditory and visual positioning are fused: once the initial position is known, the robot's head is rotated horizontally using the azimuth information and vertically using the pitch information, or the robot body is moved, so that the target sound source enters the robot's field of view. Finally, visual signals are used to perform accurate visual positioning of the target sound source.
Owner:SHAANXI JIULI ROBOT MFG
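The time-delay step above is the classic far-field bearing estimate for one microphone pair: the inter-microphone delay implies an extra path length, and the azimuth follows by arcsine. A minimal sketch with an assumed 0.2 m spacing (the patent's geometry is a four-microphone square):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def azimuth_from_delay(delay_s, mic_spacing_m):
    """Far-field bearing from one microphone pair: the extra acoustic
    path is c * tau, so azimuth = asin(c * tau / d) radians."""
    return math.asin(SPEED_OF_SOUND * delay_s / mic_spacing_m)

# a 291.5 microsecond delay across microphones 0.2 m apart
print(math.degrees(azimuth_from_delay(2.915e-4, 0.2)))  # ≈ 30 degrees
```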