
9368 results about "Machine vision" patented technology

Machine vision (MV) is the technology and the methods used to provide imaging-based automatic inspection and analysis for applications such as quality inspection, process control, and robot guidance, usually in industry. Machine vision encompasses many technologies, software and hardware products, integrated systems, actions, methods, and expertise. As a systems engineering discipline, machine vision can be considered distinct from computer vision, a branch of computer science: it integrates existing technologies in new ways and applies them to solve real-world problems. The term is the prevalent one for these functions in industrial automation environments, but it is also used for them in other contexts such as security and vehicle guidance.

System and method for three-dimensional alignment of objects using machine vision

This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing it. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least one pair of the rectified, preprocessed images at a time by locating a predetermined feature in a first image and then locating the same feature in the other image. 3D points are computed for each camera pair to derive a 3D point cloud, which is generated by transforming the 3D points of each camera pair into world 3D space using the world calibration. The amount of 3D data in the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. HLGS found at runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses, and the remaining candidates are subjected to a further, more refined scoring process. The surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points; the closest match is the best refined three-dimensional pose.
Owner:COGNEX CORP
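The coarse-scoring step described above — ranking candidate poses by how well the transformed model agrees with the runtime point cloud — can be sketched as follows. This is a minimal illustration, not the patented algorithm: the tolerance, the brute-force nearest-neighbour search, and the fraction-of-inliers score are all illustrative choices.

```python
import numpy as np

def pose_score(pose_R, pose_t, model_pts, scene_pts, tol=0.05):
    """Coarse score: fraction of transformed model points that have a
    scene point within `tol` (a stand-in for the patent's scoring step)."""
    transformed = model_pts @ pose_R.T + pose_t      # apply candidate pose
    # Nearest-neighbour distance from each transformed model point
    d = np.linalg.norm(transformed[:, None, :] - scene_pts[None, :, :], axis=2)
    return float((d.min(axis=1) < tol).mean())

def prune_poses(candidates, model_pts, scene_pts, keep=2):
    """Keep only the best-scoring (R, t) candidates, as in the coarse prune."""
    scored = [(pose_score(R, t, model_pts, scene_pts), R, t)
              for R, t in candidates]
    scored.sort(key=lambda s: s[0], reverse=True)
    return scored[:keep]
```

A candidate whose transformed model lands on the cloud scores near 1.0 and survives the prune; badly misaligned candidates score near 0 and are discarded before the refined scoring stage.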

Multi-sensor information fusion-based collision and departure pre-warning device and method

The invention discloses a forward-collision and lane-departure pre-warning device based on multi-sensor information fusion, comprising a millimeter-wave radar, a communication interface module, a video camera, an image acquisition module, a display module, a warning module, and a vehicle-mounted processing unit; the processing unit fuses the radar signals and machine-vision information arriving through the communication interface module and the image acquisition module. The device warns of forward collision and lane departure, and the pre-warning method comprises the steps of driving the devices, receiving and processing radar and image data, fusing the information, and finally displaying the fused result. With the disclosed device, potential collision risks while the vehicle is running can be detected by the radar and the video camera so as to provide warning information to the driver. Because the disclosed method combines visual sensing with radar, the accuracy of collision and lane-departure prevention is fundamentally improved; experiments show the accuracy can be increased by 5-20 percent.
Owner:CHINA AUTOMOTIVE TECH & RES CENT
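A minimal sketch of the fusion rule implied above: radar supplies range and closing speed, the camera confirms the target and estimates lateral lane offset, and a warning fires when the time-to-collision or the offset crosses a threshold. The function names and all thresholds are illustrative assumptions, not values from the patent.

```python
def time_to_collision(range_m, closing_speed_mps):
    """TTC from radar range and closing speed; infinite if the gap is opening."""
    if closing_speed_mps <= 0:
        return float("inf")
    return range_m / closing_speed_mps

def fuse_and_warn(radar_range_m, radar_closing_mps, camera_confirms_target,
                  lane_offset_m, ttc_threshold_s=2.5, lane_threshold_m=0.5):
    """Tiny fusion rule: warn of forward collision only when the camera
    confirms the radar target; warn of lane departure from the camera's
    lateral-offset estimate. Thresholds are illustrative."""
    warnings = []
    if camera_confirms_target and \
            time_to_collision(radar_range_m, radar_closing_mps) < ttc_threshold_s:
        warnings.append("forward_collision")
    if abs(lane_offset_m) > lane_threshold_m:
        warnings.append("lane_departure")
    return warnings
```

Requiring agreement between radar and camera before raising the collision warning is one way such a fusion scheme suppresses single-sensor false alarms.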

Machine vision and inertial navigation fusion-based mobile robot motion attitude estimation method

The invention discloses a mobile-robot motion attitude estimation method based on the fusion of machine vision and inertial navigation, comprising the following steps: synchronously acquiring binocular camera images and triaxial inertial navigation data from the mobile robot; extracting image features from consecutive frames and matching them to estimate the motion attitude; computing the pitch and roll angles from the inertial navigation data; building a Kalman filter model to fuse the vision and inertial-navigation attitude estimates; adaptively adjusting the filter parameters according to the estimation variance; and carrying out attitude-corrected accumulated dead reckoning. The method provides a real-time extended-Kalman-filter attitude estimation model in which inertial navigation combined with the gravity-acceleration direction supplements the visual odometer, decoupling its three-axis attitude estimation and correcting its accumulated attitude error. The filter parameters are adjusted by fuzzy logic according to the motion state to realize adaptive filtering, which reduces the influence of acceleration noise and effectively improves the positioning precision and robustness of the visual odometer.
Owner:ZHEJIANG UNIV
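The two ingredients above — pitch/roll from the gravity direction, and a Kalman correction of the vision estimate — can be sketched in scalar form. This is a simplified 1-D stand-in for the patent's extended Kalman filter; the noise parameters `q` and `r` are illustrative.

```python
import math

def accel_pitch_roll(ax, ay, az):
    """Pitch and roll (radians) from the gravity direction measured by an
    accelerometer at rest, used to correct visual-odometry drift."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

class Kalman1D:
    """Scalar Kalman filter: predict with the vision attitude increment,
    correct with the accelerometer angle (gains are illustrative)."""
    def __init__(self, q=1e-4, r=1e-2):
        self.x, self.p, self.q, self.r = 0.0, 1.0, q, r

    def step(self, vision_delta, accel_angle):
        self.x += vision_delta           # predict from visual odometry
        self.p += self.q
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (accel_angle - self.x)
        self.p *= (1.0 - k)
        return self.x
```

The adaptive element in the patent (fuzzy-logic tuning of the filter parameters by motion state) would amount to varying `q` and `r` between calls to `step`.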

Safe state recognition system for people on basis of machine vision

The invention discloses a machine-vision-based system for recognizing the safe state of people, aiming to solve the problem that the prior art cannot formulate intelligent vehicle-driving control decisions according to the safe state of people. The method comprises the following steps: collecting a vehicle-mounted dynamic video image; detecting and recognizing a pedestrian in a region of interest in front of the vehicle; tracking the moving pedestrian; detecting and calculating the distance to the pedestrian in front of the vehicle; obtaining the real-time vehicle speed; and recognizing the safe state of the pedestrian. Recognizing the pedestrian's safe state comprises: building a critical conflict area; judging the safe state when the pedestrian is outside the conflict area during relative motion; and judging the safe state when the pedestrian is inside the conflict area during relative motion. Whether the pedestrian will enter a dangerous area can be predicted from the relative speed and relative position of the motor vehicle and the pedestrian, obtained by the vision sensor in the above steps. The system can assist drivers in taking measures to avoid colliding with pedestrians.
Owner:JILIN UNIV
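The prediction step above can be sketched as follows: extrapolate the pedestrian's position relative to the vehicle and test whether it enters a rectangular critical conflict area ahead of the vehicle. The zone dimensions, horizon, and time step are illustrative assumptions, not values from the patent.

```python
def pedestrian_state(rel_pos, rel_vel, zone_half_width=1.5, zone_length=15.0,
                     horizon_s=3.0, dt=0.1):
    """Classify the pedestrian as 'safe' or 'dangerous' by rolling the
    relative track forward and testing entry into a rectangular critical
    conflict area in front of the vehicle (zone sizes are illustrative)."""
    x, y = rel_pos            # x: lateral offset (m), y: distance ahead (m)
    vx, vy = rel_vel          # relative velocity of pedestrian vs. vehicle
    t = 0.0
    while t <= horizon_s:
        if abs(x) <= zone_half_width and 0.0 <= y <= zone_length:
            return "dangerous"
        x += vx * dt          # constant-velocity extrapolation
        y += vy * dt
        t += dt
    return "safe"
```

A pedestrian walking toward the vehicle's path is flagged before entering it, giving the driver time to react; a stationary pedestrian on the sidewalk is classified as safe.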

Cucumber picking robot system and picking method in greenhouse

The invention discloses a cucumber picking robot system for a greenhouse environment. The robot system comprises a binocular stereo vision system, a mechanical arm device, and a robot mobile platform: the binocular stereo vision system acquires cucumber images, processes them in real time, and obtains the position information of the picking targets; the mechanical arm device grasps and detaches the targets according to that position information; and the robot mobile platform moves autonomously in the greenhouse environment. The binocular stereo vision system comprises two black-and-white cameras, a dual-channel real-time vision processor, a lighting device, and an optical filtering device; the mechanical arm device comprises an actuator, a motion control card, and joint actuators; and the robot mobile platform comprises a running mechanism, a motor actuator, a pan-tilt camera, a processor, and a motion controller. The invention also discloses a cucumber picking method for the greenhouse environment. By combining machine vision with agricultural machinery, the cucumber picking robot system is suited to the greenhouse environment, realizing automatic robot navigation and automatic cucumber harvesting and reducing human labor intensity.
Owner:SUZHOU AGRIBOT AUTOMATION TECH
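The core of the binocular localization above is standard stereo triangulation: with a rectified camera pair, depth follows from disparity as Z = f·B/d. The sketch below uses the usual pinhole/stereo relations; the function name and parameter values in the usage note are illustrative, not taken from the patent.

```python
def stereo_position(xl, yl, xr, focal_px, baseline_m, cx, cy):
    """Recover a 3-D point (camera frame) from a matched pixel pair in a
    rectified binocular rig: Z = f*B/d with disparity d = xl - xr."""
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    z = focal_px * baseline_m / d          # depth from disparity
    x = (xl - cx) * z / focal_px           # back-project through the pinhole
    y = (yl - cy) * z / focal_px
    return x, y, z
```

For example, with an 800 px focal length, a 0.1 m baseline, and a 40 px disparity, a matched cucumber feature resolves to a depth of 2.0 m, which the arm controller can then target.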

Device and method for detecting micro defects on bright and clean surface of metal part based on machine vision

The invention relates to a machine-vision device and method for detecting micro defects on the bright, clean surface of a metal part. The device comprises an imaging, positioning, and adjusting mechanism and a processing unit. The imaging, positioning, and adjusting mechanism comprises a base plate, a guide rod, a fixed support, a sliding support, a stepping motor, a CCD (Charge Coupled Device) camera, a telecentric lens, and parallel light sources, with the imaging and coaxial lighting of the CCD camera preliminarily adjusted; an image acquisition card, an industrial personal computer, an equipment control card, and an alarm are electrically connected in the processing unit and are used to acquire, transmit, store, process, and display images and to raise alarms. The method comprises coaxial-lighting adjustment and image processing: coaxial-lighting adjustment comprises triggering the equipment control card via software on the industrial personal computer to drive the stepping motor, adjusting the rotation angles of the parallel light sources until the coaxial lighting condition is satisfied; image processing comprises detecting defects on the inner surface of the inspected part, separately detecting large and small defects on the outer edge of the part's surface, displaying the processed images in real time, and judging the results.
Owner:ANHUI ZHONGKE INTELLIGENT HIGH-TECH CO LTD
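A minimal sketch of the defect-detection step described above: threshold the deviation from the surface's median intensity, then group defect pixels into connected blobs and separate them by size, echoing the patent's split between large and small defects. The thresholds, the 4-connectivity, and the median reference are illustrative stand-ins for the patented processing.

```python
import numpy as np
from collections import deque

def find_defects(gray, thresh=40, min_area=2):
    """Detect dark defects on a bright surface: threshold the deviation
    from the median intensity, then collect 4-connected blobs and keep
    those whose area is at least `min_area`. Returns the blob areas."""
    mask = (np.median(gray) - gray.astype(int)) > thresh
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                area, queue = 0, deque([(i, j)])   # BFS over one blob
                seen[i, j] = True
                while queue:
                    r, c = queue.popleft()
                    area += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and \
                                mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            queue.append((rr, cc))
                if area >= min_area:
                    blobs.append(area)
    return blobs
```

Raising `min_area` rejects isolated noise pixels while keeping genuine micro defects, which is the kind of size-based judgement the abstract refers to.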

High-resolution polarization-sensitive imaging sensors

An apparatus and method to determine the surface orientation of objects in a field of view is provided by utilizing an array of polarizers and a means for microscanning an image of the objects over the polarizer array. In the preferred embodiment, a sequence of three image frames is captured using a focal plane array of photodetectors. Between frames the image is displaced by a distance equal to a polarizer array element. By combining the signals recorded in the three image frames, the intensity, percent of linear polarization, and angle of the polarization plane can be determined for radiation from each point on the object. The intensity can be used to determine the temperature at a corresponding point on the object. The percent of linear polarization and angle of the polarization plane can be used to determine the surface orientation at a corresponding point on the object. Surface orientation data from different points on the object can be combined to determine the object's shape and pose. Images of the Stokes parameters can be captured and viewed at video frequency. In an alternative embodiment, multi-spectral images can be captured for objects with point source resolution. Potential applications are in robotic vision, machine vision, computer vision, remote sensing, and infrared missile seekers. Other applications are detection and recognition of objects, automatic object recognition, and surveillance. This method of sensing is potentially useful in autonomous navigation and obstacle avoidance systems in automobiles and automated manufacturing and quality control systems.
Owner:THE UNITED STATES OF AMERICA AS REPRESENTED BY THE SECRETARY OF THE NAVY
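The reduction from three polarizer-displaced frames to intensity, percent of linear polarization, and polarization-plane angle follows from the Malus-law relation I_th = (S0 + S1·cos 2th + S2·sin 2th)/2. The sketch below assumes polarizer orientations of 0°, 60°, and 120° — one common choice for a three-element array, not necessarily the patent's layout.

```python
import math

def stokes_from_three(i0, i60, i120):
    """Linear Stokes parameters from intensities through polarizers at
    0, 60, and 120 degrees, then the degree of linear polarization (DoLP,
    i.e. percent polarization / 100) and the angle of polarization (rad)."""
    s0 = 2.0 * (i0 + i60 + i120) / 3.0        # total intensity
    s1 = 2.0 * i0 - s0                        # from I_0 = (S0 + S1)/2
    s2 = 2.0 * (i60 - i120) / math.sqrt(3.0)  # from the 60/120 difference
    dolp = math.hypot(s1, s2) / s0
    aop = 0.5 * math.atan2(s2, s1)
    return s0, dolp, aop
```

Applied per pixel across the microscanned frame sequence, these three quantities are exactly the intensity, percent polarization, and plane angle the abstract uses for temperature and surface-orientation estimation.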

Multi-scale small object detection method based on deep-learning hierarchical feature fusion

The invention relates to object detection technology in the machine vision field, and in particular to a multi-scale small-object detection method based on deep-learning hierarchical feature fusion. To address the defects that existing object detection has low precision in real scenes, is constrained by object scale, and performs poorly on small objects, the invention proposes a multi-scale small-object detection method based on deep-learning hierarchical feature fusion. The detection method comprises the following steps: taking an image of a real scene as the research object; extracting features from the input image with a constructed convolutional neural network; producing fewer candidate regions with a region-proposal network; mapping the candidate regions onto the feature maps generated by the convolutional neural network to obtain each candidate region's features; and, after a pooling layer produces features of fixed size and dimension, feeding them to a fully connected layer whose two branches respectively output the recognized class and the regressed position. The disclosed method is suitable for object detection in the machine vision field.
Owner:HARBIN INST OF TECH
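The "hierarchical feature fusion" idea — combining a fine shallow feature map with an upsampled coarse deep one so small objects keep spatial detail — can be sketched with plain arrays. This is a generic illustration of the technique, not the patented network; nearest-neighbour upsampling and channel concatenation are assumptions.

```python
import numpy as np

def fuse_features(shallow, deep):
    """Hierarchical feature fusion sketch: nearest-neighbour upsample the
    coarse deep map to the shallow map's spatial size, then concatenate
    along the channel axis. Both maps are (channels, height, width)."""
    _, sh, sw = shallow.shape
    _, dh, dw = deep.shape
    up = deep.repeat(sh // dh, axis=1).repeat(sw // dw, axis=2)
    return np.concatenate([shallow, up], axis=0)
```

The fused map keeps the shallow layer's resolution, so region proposals for small objects can be pooled from features that still carry the deep layer's semantics.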