
739 results for "Robot vision" patented technology

High-resolution polarization-sensitive imaging sensors

An apparatus and method to determine the surface orientation of objects in a field of view is provided by utilizing an array of polarizers and a means for microscanning an image of the objects over the polarizer array. In the preferred embodiment, a sequence of three image frames is captured using a focal plane array of photodetectors. Between frames the image is displaced by a distance equal to a polarizer array element. By combining the signals recorded in the three image frames, the intensity, percent of linear polarization, and angle of the polarization plane can be determined for radiation from each point on the object. The intensity can be used to determine the temperature at a corresponding point on the object. The percent of linear polarization and angle of the polarization plane can be used to determine the surface orientation at a corresponding point on the object. Surface orientation data from different points on the object can be combined to determine the object's shape and pose. Images of the Stokes parameters can be captured and viewed at video frequency. In an alternative embodiment, multi-spectral images can be captured for objects with point source resolution. Potential applications are in robotic vision, machine vision, computer vision, remote sensing, and infrared missile seekers. Other applications are detection and recognition of objects, automatic object recognition, and surveillance. This method of sensing is potentially useful in autonomous navigation and obstacle avoidance systems in automobiles and automated manufacturing and quality control systems.
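The three-frame combination described in the abstract can be sketched as follows. This is an illustrative reconstruction assuming ideal linear polarizers sampled at 0°, 45° and 90°, not the patent's actual processing chain:

```python
import math

def stokes_from_three_frames(i0, i45, i90):
    """Recover total intensity, degree of linear polarization and angle of
    the polarization plane from three intensity samples behind linear
    polarizers at 0, 45 and 90 degrees, using the Malus-law model
    I(t) = 0.5 * (S0 + S1*cos(2t) + S2*sin(2t))."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal-vs-vertical preference
    s2 = 2.0 * i45 - s0      # +45-vs-minus-45 preference
    dolp = math.hypot(s1, s2) / s0   # degree (fraction) of linear polarization
    aop = 0.5 * math.atan2(s2, s1)   # angle of the polarization plane, radians
    return s0, dolp, aop

# Fully horizontally polarized light of unit intensity:
# i0 = 1, i45 = 0.5, i90 = 0  ->  S0 = 1, DoLP = 1, AoP = 0
print(stokes_from_three_frames(1.0, 0.5, 0.0))
```

The DoLP and AoP values computed this way per pixel are what the abstract then maps to surface orientation.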
Owner:THE UNITED STATES OF AMERICA AS REPRESENTED BY THE SECRETARY OF THE NAVY

Robot audiovisual system

A robot visuoauditory system is disclosed that processes data in real time to track an object both visually and auditorily, integrates visual and auditory information so that the object can be kept tracked without fail, and visualizes the real-time processing. In the system, the audition module (20), in response to sound signals from microphones, extracts pitches, separates the sound sources from each other, and locates them so as to identify each source as at least one speaker, thereby extracting an auditory event (28) for each speaker. The vision module (30), on the basis of an image taken by a camera, identifies each such speaker by face and locates the speaker, thereby extracting a visual event (39). The motor control module (40), which turns the robot horizontally, extracts a motor event (49) from the rotary position of the motor. The association module (60), which controls these modules, forms an auditory stream (65) and a visual stream (66) from the auditory, visual and motor control events, and then associates these streams with each other to form an association stream (67). The attention control module (64) effects attention control to plan the course in which to control the drive motor, e.g., upon locating the sound source for the auditory event and the face for the visual event, thereby determining the direction in which each speaker lies. The system also includes a display (27, 37, 48, 68) for displaying at least a portion of the auditory, visual and motor information. The attention control module (64) servo-controls the robot on the basis of the association stream or streams.
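The association step, pairing auditory and visual events that point in the same direction, might be sketched as below. The angle-matching rule and tolerance are hypothetical simplifications of the patent's stream-association logic:

```python
def associate_streams(auditory_deg, visual_deg, tol_deg=10.0):
    """Pair each auditory event (sound-source azimuth, degrees) with the
    nearest visual event (face azimuth, degrees) when the two directions
    agree within tol_deg; unmatched events stay unassociated."""
    pairs = []
    for a in auditory_deg:
        # nearest visual azimuth to this sound source, if any
        best = min(visual_deg, key=lambda v: abs(v - a), default=None)
        if best is not None and abs(best - a) <= tol_deg:
            pairs.append((a, best))
    return pairs

# One speaker both heard and seen near 10-12 degrees; a second sound
# source at 80 degrees with no matching face stays unpaired.
print(associate_streams([12.0, 80.0], [10.0, 45.0]))  # [(12.0, 10.0)]
```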
Owner:JAPAN SCI & TECH CORP

Three-dimensional imaging method and device utilizing planar lightwave circuit

The invention discloses a three-dimensional imaging method and a device utilizing a planar lightwave circuit. In the method, coherent light emitted from a coherent light source is converted into a two-dimensional point-light-source array in which the position of every point source is randomly distributed; the three-dimensional image is discretized into a large number of voxels, which are divided into several groups from high to low according to brightness; a phase-regulation amount for each point source is calculated from the distance between that source and each voxel of a group, so that the lightwaves from all point sources are in the same phase when they reach the voxel; the complex amplitudes contributed by all point sources are accumulated to obtain the amplitude regulation for generating each voxel; and the amplitude regulator and phase regulator of every point source are driven to generate each group of voxels by constructive interference. The imaging device is formed by the coherent light source, the planar lightwave circuit, a conductive-glass front panel and a back driving circuit. The method and device can be widely applied in fields such as three-dimensional computer and television display, three-dimensional human-machine interaction, and robot vision.
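The in-phase condition at a voxel can be written down directly: if each point source is driven with phase −k·d (mod 2π), where d is its distance to the voxel and k the wavenumber, all wavefronts arrive in phase. A minimal sketch, with illustrative positions and wavelength:

```python
import math

def emitter_phases(sources, voxel, wavelength):
    """Compute the drive phase of each point source so that all wavefronts
    arrive at the voxel in phase (constructive interference). sources and
    voxel are (x, y, z) tuples in the same length unit as wavelength."""
    k = 2.0 * math.pi / wavelength      # wavenumber
    phases = []
    for s in sources:
        d = math.dist(s, voxel)         # propagation distance source -> voxel
        phases.append((-k * d) % (2.0 * math.pi))
    return phases

# Three example emitters targeting one voxel; wavelength 0.633 (arbitrary unit)
phases = emitter_phases([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)],
                        (0.5, 0.5, 3.0), 0.633)
```

Adding k·d back to each drive phase gives the arrival phase, which is zero (mod 2π) for every source — the constructive-interference condition the abstract relies on.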
Owner:李志扬

Industrial robot visual recognition positioning grabbing method, computer device and computer readable storage medium

The invention provides an industrial robot visual recognition, positioning and grabbing method, a computer device and a computer-readable storage medium. The method comprises the steps of performing image contour extraction on an acquired image; when object contour information exists in the contour-extraction result, positioning and identifying the target object by using an edge-based template matching algorithm; when the target-object pose information is the preset target-object pose information, correcting the pose information by using a camera calibration method; and performing coordinate-system conversion on the corrected pose information by using a hand-eye calibration method. The computer device comprises a controller, which implements the industrial robot visual recognition, positioning and grabbing method when executing the computer program stored in the memory. A computer program is stored in the computer-readable storage medium, and when the program is executed by the controller, the method is carried out. The method provides higher stability and precision in recognition and positioning.
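The final coordinate-system conversion amounts to applying the hand-eye transform to the pose found in the camera frame. A minimal sketch with a hypothetical, translation-only transform (a real hand-eye calibration would supply a full rotation and translation):

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform (row-major nested lists) to a 3-D
    point, e.g. mapping a camera-frame grasp point into the robot base
    frame after hand-eye calibration."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

# Hypothetical calibration result: camera offset 0.5 m along base x, no rotation
T_base_cam = [[1, 0, 0, 0.5],
              [0, 1, 0, 0.0],
              [0, 0, 1, 0.0],
              [0, 0, 0, 1.0]]
print(transform_point(T_base_cam, (0.1, 0.2, 0.3)))  # (0.6, 0.2, 0.3)
```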
Owner:GREE ELECTRIC APPLIANCES INC

Real-time closed loop predictive tracking method of maneuvering target

Status: Inactive | Publication: CN102096925A | Effects: reliable tracking; continuous and stable tracking | Tags: image analysis; prediction algorithms; closed loop
The invention discloses a real-time closed-loop predictive tracking method for a maneuvering target: a closed-loop, real-time, self-adaptive processing method for on-line predictive tracking in a maneuvering small-target imaging tracking system, used mainly in fields such as photoelectric imaging tracking, robot vision and intelligent traffic control. With the method, a captured target is extracted to establish a flight track, the track is filtered, and the position of the target at the next collection time is predicted; the platform processes data on line in real time using the high performance of a DSP main processor and an FPGA coprocessor; a prediction algorithm that copes with target maneuvers with higher accuracy predicts the motion state of the target in real time, and the prediction result drives a piezoelectric-ceramic-motor two-dimensional motion stage to carry out overcompensation, thereby realizing self-adaptive predictive tracking. The method overcomes the increased tracking error caused by system delay and can maintain continuous, stable tracking when the target maneuvers or is temporarily sheltered.
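The predict-then-correct loop described above can be illustrated with a one-dimensional alpha-beta filter under a constant-velocity model. The gains and the filter choice are illustrative assumptions, not the patent's algorithm:

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.3):
    """Illustrative 1-D alpha-beta tracker: after each noisy measurement,
    correct the position/velocity estimate and predict the target position
    one frame ahead (the value used to drive the compensation stage)."""
    x, v = measurements[0], 0.0
    predictions = []
    for z in measurements[1:]:
        x_pred = x + v * dt             # predict position at this frame
        r = z - x_pred                  # innovation (measurement residual)
        x = x_pred + alpha * r          # correct position estimate
        v = v + (beta / dt) * r         # correct velocity estimate
        predictions.append(x + v * dt)  # one-frame-ahead prediction
    return predictions

# A target moving at constant speed: predictions converge toward the
# true next-frame position as the velocity estimate settles.
print(alpha_beta_track([0.0, 1.0, 2.0, 3.0, 4.0]))
```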
Owner:SHANGHAI INST OF TECHNICAL PHYSICS - CHINESE ACAD OF SCI

Robot vision servo control method based on image mixing moment

The invention discloses a robot vision servo control method based on image mixed moments. First, mixed-moment features are constructed in one-to-one correspondence with the spatial attitudes of the target object as imaged under the robot's expected pose. Then a target-object image is obtained under an arbitrary attitude and the current mixed-moment feature value is calculated; the deviation of the mixed-moment feature value is computed from the information of the expected image and of the current image. If the deviation is smaller than a preset threshold, the robot has reached the expected pose; otherwise, an image Jacobian matrix relevant to the mixed-moment features is derived, and visual servoing moves the robot toward the expected pose until the feature deviation falls below the threshold, at which point the control process ends. The method introduces the image-field mixed-moment feature corresponding to the robot's spatial movement track as the control input, completes visual servo control of an eye-in-hand robot system when the workspace model is unknown, and can be widely applied to robot intelligent control based on machine vision.
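The servo step described above follows the classic image-based visual-servoing law v = −λ J⁻¹ e (here for a square 2×2 image Jacobian; the general case uses the pseudo-inverse). The Jacobian and gain below are illustrative placeholders, not the patent's mixed-moment Jacobian:

```python
def ibvs_velocity(J, error, lam=0.5):
    """Compute the commanded camera velocity v = -lam * inv(J) @ e for a
    2x2 image Jacobian J (nested lists) and a feature-error pair."""
    (a, b), (c, d) = J
    det = a * d - b * c                         # assume J is invertible
    inv = [[d / det, -b / det],
           [-c / det, a / det]]
    e1, e2 = error
    return [-lam * (inv[0][0] * e1 + inv[0][1] * e2),
            -lam * (inv[1][0] * e1 + inv[1][1] * e2)]

# Identity Jacobian, feature error (2, -4) -> velocity (-1.0, 2.0):
# the command drives the feature error exponentially toward zero.
print(ibvs_velocity([[1.0, 0.0], [0.0, 1.0]], (2.0, -4.0)))
```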
Owner:CENT SOUTH UNIV

Mobile vision robot and measurement and control method thereof

The invention provides a mobile vision robot and a measurement and control method thereof. Positioning and state information of the mobile robot and of its operating target is obtained through machine-vision detection, and the robot's movement and other operations are controlled according to the information provided by the vision images. The robot's vision images are provided by two cameras: one fixed to the mobile platform, acquiring images of the robot's movement path, and one fixed to the end of the manipulator, acquiring detailed images of the manipulator's operating target. From the acquired image information, the positioning and state information of the mobile robot and the specific target is calculated through image-distortion correction and pattern recognition, and the robot is then controlled to complete the instructed action on the target object. The invention also provides a method for eliminating, through vision images, the accumulative errors generated while the robot operates. The mobile vision robot offers high precision and strong anti-interference capability, requires no complex environment support, and is suitable for various laboratory and factory environments.
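The image-distortion correction mentioned above is commonly a radial (Brown) model; a first-order sketch with a hypothetical coefficient follows. The fixed-point iteration inverts the forward distortion x_d = x_u·(1 + k1·r²):

```python
def undistort_point(xd, yd, k1):
    """Undo first-order radial lens distortion for a normalized image point
    (principal point at origin) via fixed-point iteration. k1 is an
    illustrative coefficient, not a calibrated camera value."""
    xu, yu = xd, yd                      # initial guess: distorted coords
    for _ in range(10):                  # converges quickly for small k1
        r2 = xu * xu + yu * yu
        xu = xd / (1.0 + k1 * r2)
        yu = yd / (1.0 + k1 * r2)
    return xu, yu
```

Applying the forward model to a point and then undistorting it should recover the original coordinates, which is how such a routine is typically sanity-checked.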
Owner:XI AN JIAOTONG UNIV

Robot parallel polishing system

The invention discloses a robot parallel polishing system comprising a coarse polishing system, a fine polishing system, a base component, a system control cabinet, a pneumatic control cabinet, a robot vision self-positioning system, workpieces and a working table. Workpieces with large free-form curved surfaces can be coarsely and finely polished by the two polishing systems at the same time, and two identical workpieces can be polished simultaneously, which effectively guarantees polishing accuracy and improves polishing efficiency. Before the system works, path-generation software divides the polishing areas according to the three-dimensional models of the workpieces and generates the polishing paths; the datums of the workpieces are rapidly and accurately calibrated by the robot vision self-positioning system; and two industrial robots drive pneumatic polishing heads to coarsely and finely polish the areas along the planned paths. Pneumatic compliant force control, normal polishing-force control and real-time path-calibration compensation are adopted in the polishing process, effectively guaranteeing the polishing accuracy and the consistency of the polishing quality.
Owner:中科君胜(深圳)智能数据科技发展有限公司

Panoramic three-dimensional photographing device

The invention discloses a panoramic three-dimensional photographing device built by integrating four omnidirectional photographing devices with identical imaging parameters. A plane connects the four devices so that the fixed single viewpoints of the four ODVSs (omnidirectional vision sensors) lie on the same plane: four hyperboloidal mirrors with identical parameters are fixed on a transparent glass face, and four cameras with identical intrinsic and extrinsic parameters are fixed on the same plane. A microprocessor performs three-dimensional imaging processing on the images of the four ODVSs and comprises a panoramic-image reading and preprocessing unit, a perspective unfolding unit and a panoramic three-dimensional image output unit. The device can be widely applied in fields such as robot vision, animated films and games. The invention provides a panoramic three-dimensional photographing device with high cost performance that is simple to operate and can photograph panoramic three-dimensional video images in real time.
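The perspective unfolding unit essentially remaps the donut-shaped omnidirectional image into a rectangular panorama. A sketch of the polar-to-rectangular lookup table, with illustrative center and radius values (a real ODVS would derive the row-to-radius mapping from the mirror geometry):

```python
import math

def unwarp_mapping(width, height, cx, cy, r_in, r_out):
    """Build a lookup table mapping each pixel of a width x height panorama
    back to source coordinates in the omnidirectional image: columns sweep
    the azimuth 0..2*pi, rows sweep radius r_in..r_out around (cx, cy)."""
    table = []
    for row in range(height):
        rho = r_in + (r_out - r_in) * row / max(height - 1, 1)
        for col in range(width):
            theta = 2.0 * math.pi * col / width
            table.append((cx + rho * math.cos(theta),
                          cy + rho * math.sin(theta)))
    return table
```

Sampling the source image at these coordinates (with interpolation) produces the unfolded panorama that the stereo stage then consumes.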
Owner:ZHEJIANG UNIV OF TECH

Flexible robot vision recognition and positioning system based on deep learning

The present invention discloses a flexible robot vision recognition and positioning system based on deep learning. The system is implemented in the following steps: obtaining an image of a part and carrying out binarization processing to extract the outer contour of the part image; finding the axis-aligned circumscribed rectangle of the outer contour edge, determining the areas to be recognized, and normalizing them to a standard image; rotating the standard image step by step at equal angles and finding the rotation angle alpha at which the axis-aligned circumscribed rectangle of the outer contour edge has minimum area; using a deep learning network to extract the outer contour edge at rotation angle alpha and recognize the part and its pose; and, from the rotation angle alpha and the recognized pose, calculating the actual pose of the part before rotation and transmitting the pose data to a flexible robot so that it can pick up the part. The system uses the deep learning network to automatically extract, layer by layer, the contour shape features contained in the part image data, so that the accuracy and adaptability of part recognition and positioning are greatly improved under complicated conditions.
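The equal-angle rotation search for the minimum-area bounding rectangle can be sketched on a point set (here rotating the points rather than the image, which is equivalent for finding alpha; the step size is an illustrative choice):

```python
import math

def min_bbox_angle(points, step_deg=1.0):
    """Rotate the contour points in step_deg increments over [0, 90) and
    return the rotation angle whose axis-aligned bounding box has minimum
    area -- the angle alpha used to normalize the part's orientation."""
    best_angle, best_area = 0.0, float("inf")
    a = 0.0
    while a < 90.0:
        t = math.radians(a)
        xs = [x * math.cos(t) - y * math.sin(t) for x, y in points]
        ys = [x * math.sin(t) + y * math.cos(t) for x, y in points]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if area < best_area:
            best_angle, best_area = a, area
        a += step_deg
    return best_angle
```

For a rectangular part photographed at a 30° tilt, the search returns 60°, since rotating by a further 60° brings the rectangle back to an axis-aligned orientation.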
Owner:CHONGQING UNIV OF TECH

Three-dimensional virtual fitting system

The invention discloses a three-dimensional virtual fitting system and relates to the technical fields of robot vision and digitized costume design. The object of the three-dimensional virtual fitting is a three-dimensional human body model built by photographing and processing a real human body. A customer is photographed by a camera to obtain multiple groups of images; contour extraction and feature extraction are carried out on the images in a computer; and, based on axial deformation of viewpoints, with the human body model as reference and further factors such as body type considered, a real-person model of the customer is finally obtained. The model is input into the three-dimensional fitting system; through two-way selection, the style, cloth, design and color of the clothes are determined by the customer and a designer; these data are input into a "custom-made system" to complete the costume design; and an individual fashion pattern is generated automatically and fed back into the fitting system, where the custom ready-to-wear garment is virtually sewn and worn on the real-person model to display the individual fitting effect intuitively. An evaluation system carries out relative scoring, and the scores serve as a reference index for the customer and the designer. The design level and efficiency of costume design are thereby greatly improved.
Owner:JIANGNAN UNIV

Vision localization and navigation method and system for an inspection robot of a transformer substation

The invention discloses a vision localization and navigation method and system for an inspection robot of a transformer substation. The method comprises the following steps: preprocessing a collected two-dimensional color image of the environment, extracting feature points, and computing descriptors for the feature points; matching the descriptors of two adjacent frames to obtain a feature-matching result and determine the relative position and posture change of the robot; establishing a three-dimensional space model of the substation from the relative position and posture change, the positions of the feature points, and the depth values of the feature points at the corresponding depth-image positions; and matching the two-dimensional color image against the three-dimensional space model to locate the robot, constructing a two-dimensional occupancy grid, and determining an obstacle-avoidance route. Based on image-information analysis and the relevant algorithms, the accurate position of the robot can be determined, obstacles can be detected automatically over the whole area, and the route can be planned according to the obstacles and their positions. The method is easy to implement, low in cost, good in stability and high in precision.
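The frame-to-frame descriptor matching step can be sketched with binary descriptors (ORB-style bit strings, represented here as plain ints) compared by Hamming distance; the distance threshold is an illustrative assumption:

```python
def match_descriptors(desc_a, desc_b, max_dist=30):
    """Nearest-neighbor matching of binary feature descriptors from two
    adjacent frames by Hamming distance, keeping only matches within
    max_dist differing bits. Returns (index_a, index_b, distance) triples."""
    matches = []
    for i, a in enumerate(desc_a):
        best_j, best_d = -1, max_dist + 1
        for j, b in enumerate(desc_b):
            d = bin(a ^ b).count("1")   # Hamming distance via XOR popcount
            if d < best_d:
                best_j, best_d = j, d
        if best_j >= 0 and best_d <= max_dist:
            matches.append((i, best_j, best_d))
    return matches

# Both 4-bit descriptors in frame A match the first descriptor in frame B.
print(match_descriptors([0b1010, 0b1111], [0b1011, 0b0000], max_dist=2))
```

The matched pairs feed the relative-pose estimation between the two frames.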
Owner:GUANGDONG UNIV OF TECH