643 results about "Visual navigation" patented technology

Visual navigation/inertial navigation full combination method

The invention relates to a full visual navigation / inertial navigation integration method. The method comprises the following steps: first, visual navigation computation: observation equations are formed from the collinearity equations, carrier position and attitude parameters are obtained by least-squares adjustment, and the variance-covariance matrices of the parameters are calculated; second, inertial navigation computation: navigation is computed in the local horizontal coordinate frame to obtain the carrier position, velocity, and attitude parameters at each epoch, and the variance-covariance matrices of the parameters are calculated; third, correction of the inertial navigation system by the visual system: Kalman filtering estimates the navigation-parameter errors and device errors of the inertial navigation system, which are compensated and fed back, yielding optimal estimates of all inertial navigation parameters; fourth, correction of the visual system by the inertial navigation system: all visual-system parameters are corrected by sequential adjustment. Compared with the prior art, the method offers rigorous theory, stable performance, and high efficiency.
Owner:TONGJI UNIV
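The third step above, correcting the inertial solution with a visual fix via Kalman filtering, can be sketched as a single measurement update. This is a minimal illustration only: the 2-state model (position, velocity), the variable names, and the numbers are assumptions for the sketch, not taken from the patent.

```python
import numpy as np

def kalman_correct(x_ins, P_ins, z_vis, R_vis, H):
    """One Kalman measurement update: a visual fix z_vis corrects the
    inertial state estimate x_ins with covariance P_ins (illustrative names)."""
    y = z_vis - H @ x_ins                      # innovation (vision minus prediction)
    S = H @ P_ins @ H.T + R_vis                # innovation covariance
    K = P_ins @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_ins + K @ y                      # compensated state (feedback correction)
    P_new = (np.eye(len(x_ins)) - K @ H) @ P_ins
    return x_new, P_new

# Toy example: INS estimates position 10 m, velocity 1 m/s; vision observes position 12 m
x = np.array([10.0, 1.0])
P = np.diag([4.0, 1.0])          # INS variance-covariance matrix
H = np.array([[1.0, 0.0]])       # the visual system measures position only
z = np.array([12.0])
R = np.array([[1.0]])            # visual measurement noise
x_c, P_c = kalman_correct(x, P, z, R, H)   # position pulled toward 12, variance shrinks
```

Note how the corrected position lands between the inertial and visual values, weighted by their variances, which is the role the variance-covariance matrices computed in steps one and two play.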

Double-manipulator fruit and vegetable harvesting robot system and fruit and vegetable harvesting method thereof

The invention discloses a double-manipulator fruit and vegetable harvesting robot system and a harvesting method using it. In the system, a binocular stereoscopic vision system provides visual navigation for the robot's walking motion and obtains the positions of harvest targets and obstacles; a manipulator device grasps and separates fruit according to those positions; a robot mobile platform moves autonomously in the working environment; and a main control computer serves as the control center, integrating the control interface and all software modules and controlling the whole system. The binocular stereoscopic vision system comprises two color cameras, an image acquisition card, and an intelligent pan-tilt unit; the manipulator device comprises two five-degree-of-freedom manipulators, joint servo drivers, actuator motors, and the like; the robot mobile platform comprises a wheeled body, a power source and power control device, and a fruit and vegetable harvesting device. By combining binocular vision with a bionic, human-like double-manipulator design, the robot achieves autonomous navigation, walking, and automatic harvesting of fruit and vegetable targets.
Owner:Liyang Changda Technology Transfer Center Co., Ltd.

Visual navigation method and system of mobile robot as well as warehouse system

The invention discloses a visual navigation method and system for a mobile robot. According to the method, a scene image of the environment where the mobile robot is located is acquired in real time and converted into a grayscale image; a two-dimensional code in the grayscale image is identified and decoded to obtain state-transition information and speed-change information; meanwhile, the contour center line of a stripe in the same grayscale frame is determined, and the offset distance and offset angle between that center line and the center line of the grayscale image are calculated; the linear velocity and motion direction of the mobile robot are adjusted according to the state-transition and speed-change information, while its angular velocity is corrected in real time according to the offset distance and offset angle. Because the two-dimensional code on the stripe and the scene image used for correction are captured in the same frame, preset motion control and real-time correction are handled together, which significantly simplifies the control and correction method, increases speed, and makes the system more stable.
Owner:NINGBO INST OF MATERIALS TECH & ENG CHINESE ACADEMY OF SCI
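The real-time angular-velocity correction from the stripe offset can be sketched as a simple proportional law. The gains and function name below are illustrative assumptions; the patent does not specify the control law.

```python
def correct_angular_velocity(offset_dist, offset_angle, k_d=1.5, k_a=2.0):
    """Hypothetical proportional corrector: steer back toward the stripe
    using the lateral offset (m) and heading offset (rad) measured from
    the same grayscale frame. Gains k_d and k_a are illustrative only."""
    return -(k_d * offset_dist + k_a * offset_angle)

# Robot drifted 0.05 m to one side of the stripe and is yawed 0.1 rad off:
omega = correct_angular_velocity(0.05, 0.1)   # negative value steers back
```

A robot centered on the stripe with zero heading error receives zero correction, so the preset motion from the two-dimensional code proceeds undisturbed.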

Graphical user interface utilizing three-dimensional scatter plots for visual navigation of pictures in a picture database

A novel graphical user interface (GUI) uses metadata to generate three-dimensional scatter plots (100, 200, 300, 400) for the efficient and aesthetic navigation and retrieval of pictures in a picture database. The first and second dimensions (102, 104, 202, 204, 302, 304, 402, 404) are the abscissa and ordinate, corresponding to two picture characteristics chosen by the user. Distinguishing characteristics of the icons (108-126, 208-230, 308-326, 408-430) in the scatter plot (100, 200, 300, 400), each of which represents a group of pictures, indicate the third dimension, also chosen by the user. In the preferred embodiment, the third dimension is indicated by the color of the icon (108-126, 208-230, 308-326, 408-430). Among many other possibilities, the three dimensions of a scatter plot (100, 200, 300, 400) can represent combinations of the “Who,” “What,” “When,” “Where,” and “Why” picture-characteristic information contained in the picture metadata. Activating an icon (108-126, 208-230, 308-326, 408-430) produces thumbnails of the pictures in the group represented by that icon (108-126, 208-230, 308-326, 408-430). Updating one display dimension dynamically updates the other display dimensions.
Owner:MONUMENT PEAK VENTURES LLC
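The grouping behind such a scatter plot, where pictures sharing the user-chosen x and y characteristics collapse into one icon whose color encodes the third characteristic, can be sketched with a plain dictionary. The metadata field names below are illustrative assumptions, not from the patent.

```python
from collections import defaultdict

def group_for_scatter(pictures, x_key, y_key, color_key):
    """Group pictures by the two axis characteristics plus the color
    characteristic; each key identifies one icon in the scatter plot
    and maps to the files whose thumbnails the icon would show."""
    icons = defaultdict(list)
    for pic in pictures:
        icons[(pic[x_key], pic[y_key], pic[color_key])].append(pic["file"])
    return dict(icons)

pics = [
    {"file": "a.jpg", "who": "Alice", "when": 2003, "where": "Paris"},
    {"file": "b.jpg", "who": "Alice", "when": 2003, "where": "Paris"},
    {"file": "c.jpg", "who": "Bob",   "when": 2004, "where": "Rome"},
]
# x = "Who", y = "When", color = "Where"
icons = group_for_scatter(pics, "who", "when", "where")
# Activating the ("Alice", 2003, "Paris") icon would show thumbnails a.jpg and b.jpg
```

Swapping any of the three keys regenerates the grouping, which mirrors the patent's point that updating one display dimension dynamically updates the others.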

Pole tower model matching and visual navigation-based power unmanned aerial vehicle and inspection method

The invention discloses a pole tower model matching and visual navigation-based power unmanned aerial vehicle and an inspection method. In the unmanned aerial vehicle, a binocular vision sensor acquires a depth image of the area ahead and measures the distance to objects in front; a pan-tilt unit and camera acquire images of the surroundings for object identification; and the flight controller controls the flight attitude of the vehicle. The method comprises the steps of: building pole tower models for different types of power transmission line pole towers; automatically identifying pole towers and their types during flight, then matching and loading the pre-built pole tower model; visually positioning the pole towers to obtain the relative position of the unmanned aerial vehicle and the towers; and performing flight inspection along an optimal flight path. The unmanned aerial vehicle greatly reduces the modeling workload and improves model universality; and because the inspection method does not depend on absolute-coordinate flight, flexibility is greatly improved, cost is reduced, and power facility safety is improved.
Owner:NARI TECH CO LTD

Ethernet-exchange-bus-based unmanned plane flight control system and method

The invention provides an Ethernet-exchange-bus-based unmanned aerial vehicle flight control system and method. The system comprises a flight control computer (ARM), a flight indicator lamp control module (LED+), an ultrasonic sensor, an optical flow sensor, a power management unit (PMU), a storage battery, an inertial measurement unit (IMU), a satellite navigation unit (GPS/BD+), motors, and steering engines. An Ethernet switch chip (LAN switch) is embedded in the flight control computer (ARM) and connected by LAN both to the flight control computer and to the flight control peripheral modules. The system and method have the following beneficial effects: the communication link between the unmanned aerial vehicle and the ground is simplified; high-definition digital images can be returned to the ground in real time, uploaded to the Internet, or passed back to a command center; visual navigation, obstacle avoidance, and image target identification and tracking are supported; the communication demands of unmanned aerial vehicle formation flight are satisfied; traditional IOSD equipment and the associated investment are saved; and modular, smooth extension of flight control functions is supported.
Owner:Shenzhen Damoda Intelligent Control Technology Co., Ltd.

Task collaborative visual navigation method of two unmanned aerial vehicles

CN102628690A (Active)
The invention provides a task-collaborative visual navigation method for two unmanned aerial vehicles. The method comprises the following steps: determining an interactive communication mode between a first unmanned aerial vehicle, used for visual positioning, and a second unmanned aerial vehicle, used for environment identification and route planning; fusing the visual positioning information generated by the first unmanned aerial vehicle with the route information generated by the second to produce flight control instruction information for each vehicle at every moment; and transferring the flight control instruction information to the corresponding vehicle via the interactive communication mode to achieve safe, visually navigated flight. The method effectively controls the volume of real-time video and image transmission in cooperative visual navigation, offers good matching capability and reliability, and is an effective technique for cooperative visual navigation of unmanned aerial vehicle clusters to avoid risks, obstacles, and the like.
Owner:TSINGHUA UNIV

Visual navigation based multi-crop row detection method

The invention relates to a visual-navigation-based multi-crop-row detection method, which belongs to the fields of machine vision navigation and image processing and aims to quickly and accurately extract multiple ridge lines in farmland, meeting the real-time navigation and positioning requirements of agricultural machinery. The method comprises the steps of: calibrating camera parameters, acquiring video and image frames, and correcting image distortion; segmenting the crop ridge line area, extracting navigation positioning points with the vertical projection method, and calculating the world coordinates of the positioning points; applying random straight-line detection to the positioning points to detect the straight lines on which the crop ridge rows lie; and calculating, from the slope and intercept parameters of those lines, the position of each crop ridge row in the world coordinate system relative to the agricultural machinery. Compared with traditional techniques, the method greatly reduces time and space complexity and improves the accuracy and real-time performance of navigation.
Owner:CHINA AGRI UNIV
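The vertical-projection step above can be sketched as summing a binarized crop/soil image down each column and taking the strongest columns as candidate row centers. This is a hypothetical helper under simplified assumptions (one peak per row, pre-binarized image), not the patent's exact procedure.

```python
import numpy as np

def row_peaks_by_projection(binary_img, n_rows):
    """Sum the binarized image down each column (vertical projection)
    and return the n_rows columns with the strongest vegetation
    response, sorted left to right."""
    profile = binary_img.sum(axis=0)          # vertical projection profile
    peaks = np.argsort(profile)[-n_rows:]     # indices of the strongest columns
    return np.sort(peaks)

# Toy 6x9 binary image with vegetation concentrated in columns 2 and 6
img = np.zeros((6, 9), dtype=int)
img[:, 2] = 1
img[:, 6] = 1
centers = row_peaks_by_projection(img, 2)     # → array([2, 6])
```

In the full method these image-column centers would then be mapped to world coordinates via the calibrated camera parameters before line fitting.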

Vehicle guide system of electric vehicle charging station and guide method

The invention relates to a guide method for a vehicle guide system of an electric vehicle charging and battery-swap station. The guide system comprises a monitoring center, one or more vehicle terminals, and one or more ground marks. The monitoring center comprises a monitoring server; the vehicle terminal comprises a central control processing unit connected respectively to an RFID (radio frequency identification) module, an image acquisition module, a video encoding module, a vehicle data acquisition module, a vehicle control module, a display module, a GPS (global positioning system) module, a wireless communication module (II), a power supply module, and a storage module; and the ground mark comprises an image mark line and an RFID mark. The guide method comprises the following steps: (1) vehicle information reading and processing; (2) path planning and map information return; (3) visual navigation; (4) RFID-located charging and battery swapping. By combining image-mark navigation with RFID-mark positioning, the method guides electric vehicles to park quickly and accurately at charging or battery-swap positions, guaranteeing stable operation of the station.
Owner:STATE GRID CORP OF CHINA +1

Navigation control system based on vision and ultrasonic waves

The invention relates to a navigation control system based on vision and ultrasonic waves, comprising a tracked robot body, an ultrasonic ranging subsystem, a vision subsystem, and a motion control subsystem. The ultrasonic ranging subsystem is mounted at the front of the robot, the vision subsystem above it, and the motion control subsystem on top of the robot. The ultrasonic ranging subsystem comprises ultrasonic transmitters, ultrasonic receivers, and a signal processing circuit, with multiple transmitters and receivers distributed at equal spacing. The vision subsystem comprises a CCD (charge-coupled device) camera, an analog-to-digital conversion circuit, and a DSP (digital signal processor) image processor; the motion control subsystem comprises a DSP motion controller and its peripheral circuits. Information processed by the vision and ultrasonic ranging subsystems is transmitted to a fuzzy controller embedded in the motion control subsystem, which outputs control signals to drive the robot's motion. The system combines the advantages of visual and ultrasonic navigation, increasing navigation accuracy; the high-speed DSP processor improves real-time performance and expandability; and the fuzzy control method enhances anti-jamming capability.
Owner:NORTHWEST A & F UNIV
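The fuzzy controller fusing the sensor readings can be sketched with triangular membership functions and a weighted-average defuzzification. The rule base, membership breakpoints, and output values below are illustrative assumptions; the patent does not disclose its rule set.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_turn(distance_m):
    """Toy single-input rule base (illustrative, not the patented controller):
    NEAR obstacle -> sharp turn (1.0 rad/s), FAR obstacle -> go straight (0.0).
    Defuzzified by a weighted average of the rule outputs."""
    near = tri(distance_m, 0.0, 0.2, 0.8)
    far = tri(distance_m, 0.4, 1.0, 10.0)
    w = near + far
    return (near * 1.0 + far * 0.0) / w if w else 0.0

turn = fuzzy_turn(0.3)   # obstacle 0.3 m away: only the NEAR rule fires
```

At intermediate distances both rules fire partially and the output blends smoothly between turning and going straight, which is the property that gives fuzzy control its robustness to noisy range readings.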

Unmanned logistics vehicle based on deep learning

The invention relates to an unmanned logistics vehicle based on deep learning. The unmanned logistics vehicle comprises a logistics vehicle body, an ultrasonic obstacle avoidance module, a binocular stereo vision obstacle avoidance module, a motor drive module, an embedded system, a power supply module, and a visual navigation processing system. The binocular stereo vision obstacle avoidance module detects distant obstacles in the road scene; the ultrasonic obstacle avoidance module detects nearby obstacles; and the obstacle distance information obtained by the two modules is collectively called obstacle avoidance information. In the visual navigation processing system, a deep learning model trained on a sample set processes the collected road image data and outputs control command information. Finally, a decision model integrates the control command information with the obstacle avoidance information to control the motor drive module, realizing unmanned driving of the logistics vehicle. No auxiliary roadside equipment needs to be installed: the deep learning model perceives and understands the road environment through the learned sample set, realizing the unmanned driving function of the logistics vehicle.
Owner:NORTHEASTERN UNIV
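The final decision step, where obstacle avoidance information can override the vision model's command, can be sketched as a simple priority rule. The function name, the stop-distance threshold, and the command fields are assumptions for illustration, not from the patent.

```python
def fuse_decision(steer_cmd, obstacle_dist_m, stop_dist=0.5):
    """Illustrative decision model: the deep learning model's steering
    command drives the vehicle unless a range sensor reports an obstacle
    closer than stop_dist, in which case the vehicle stops."""
    if obstacle_dist_m < stop_dist:
        return {"steer": 0.0, "speed": 0.0}   # safety override: emergency stop
    return {"steer": steer_cmd, "speed": 1.0}  # follow the vision command

decision = fuse_decision(0.2, 2.0)   # obstacle far away: vision command passes through
```

Keeping the safety override outside the learned model is a common design choice: the deep network handles perception, while a deterministic rule guarantees the stop behavior.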

Multi-sensor fusion-based visual navigation AGV system

The invention relates to a multi-sensor-fusion-based visual navigation AGV system comprising a vehicle body. A long-range ultrasonic ranging module and an image acquisition device are installed at the front of the vehicle body, and near-field ultrasonic ranging modules are uniformly distributed along its two sides. A GPS positioning module, a power supply module, a motor drive module, and an upper computer are installed on the vehicle body. The AGV visual guidance method includes phase one, an adaptive learning phase executed once at system initialization or after a set condition is triggered, and phase two, a road-surface detection and path-planning phase executed continuously while the system runs. The system offers the following advantages: no manual guidance markers need to be laid; application is flexible and universality is high; the integrated construction cost of the AGV system is effectively lowered; the system suits various complex road conditions and weather conditions; and the adaptive learning algorithm effectively eliminates the influence of factors such as illumination, shadow, and lane lines on road identification.
Owner:Zhejiang Ketai Robot Co., Ltd.