922 results about "Machine vision system" patented technology

A machine vision system enables a computer to recognize and evaluate images. It is analogous to voice recognition technology, but operates on images instead of sound. A machine vision system typically consists of digital cameras and back-end image processing hardware and software.

System and method for servoing robots based upon workpieces with fiducial marks using machine vision

A system and method for servoing robots using fiducial marks and machine vision provides a machine vision system having a machine vision search tool that is adapted to register a pattern, namely a trained fiducial mark, that is transformed by at least two translational degrees of freedom and at least one non-translational degree of freedom. The fiducial is applied to a workpiece carried by an end effector of a robot operating within a work area. When the workpiece enters an area of interest within the field of view of a camera of the machine vision system, the tool recognizes the fiducial based upon a previously trained and calibrated image stored within the tool. The machine vision system derives the location of the workpiece from the viewed location of the fiducial. The location of the found fiducial is compared with a desired location for the fiducial, which can be based upon a standard or desired position of the workpiece. If the found and desired locations differ, the difference is calculated with respect to each translational axis and the rotation. The difference is then transformed into robot-based coordinates and passed to the robot controller, which adjusts the workpiece movement accordingly. Fiducial location and adjustment continue until the workpiece is located at the desired position with minimum error.
Owner:COGNEX TECH & INVESTMENT
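The correction step the abstract describes — differencing the found and desired fiducial poses and re-expressing the result in robot coordinates — can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name, the (x, y, theta) pose convention, and the assumption that the camera and robot frames differ by a pure rotation are all ours.

```python
import math

def servo_correction(found, desired, cam_to_robot_angle=0.0):
    """Compute the (dx, dy, dtheta) correction that moves the found
    fiducial pose onto the desired pose, expressed in robot coordinates.
    Poses are (x, y, theta_radians) in camera/image coordinates."""
    dx = desired[0] - found[0]
    dy = desired[1] - found[1]
    dtheta = desired[2] - found[2]
    # Rotate the translational difference into the robot frame
    # (illustrative: assumes the frames differ only by a rotation).
    c, s = math.cos(cam_to_robot_angle), math.sin(cam_to_robot_angle)
    rx = c * dx - s * dy
    ry = s * dx + c * dy
    return rx, ry, dtheta

# Fiducial found 2 mm left of and 1 mm below target, same rotation:
print(servo_correction((8.0, 4.0, 0.1), (10.0, 5.0, 0.1)))  # (2.0, 1.0, 0.0)
```

In practice this correction would be recomputed each cycle until the residual falls below a tolerance, matching the abstract's "minimum error" loop.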

Robot mechanical picker system and method

Embodiments of the invention comprise a system and method that enable robotic harvesting of agricultural crops. One approach to automating the harvesting of fresh fruits and vegetables is to use a robot comprising a machine-vision system containing rugged solid-state digital cameras to identify and locate the fruit on each tree, coupled with a picking system to perform the picking. In one embodiment of the invention, a robot first moves through a field to “map” it, determining plant locations, the number and size of fruit on the plants, and the approximate positions of the fruit on each plant. A robot employed in this embodiment may comprise a GPS sensor to simplify the mapping process. At least one camera on at least one arm of the robot may be mounted in an appropriately shaped protective enclosure so that the camera can be physically moved into the canopy of a plant, if necessary, to map fruit locations from inside the canopy. Once the map of the fruit is complete for a field, the robot can plan and implement an efficient picking plan for itself or another robot. In one embodiment of the invention, a scout robot or harvest robot determines a picking plan in advance of picking a tree. This may be done if the map is finished hours, days, or weeks before a robot is scheduled to harvest, or if the selected picking-plan algorithm requires significant computational time and cannot be run in “real time” by the harvesting robot as it picks the field. If the selected picking algorithm is less computationally intense, the harvester may calculate the plan as it harvests. The system harvests according to the selected picking plan, which may be generated on the scout robot, the harvest robot, or a server. Each of the elements in the system may be configured to communicate with the others using wireless communications technologies.
Owner:VISION ROBOTICS
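The abstract leaves the picking-plan algorithm unspecified. As one concrete stand-in, a greedy nearest-neighbor ordering over the mapped fruit positions is cheap enough to run "in real time" while harvesting; the function and its 2D-coordinate representation are illustrative assumptions, not the patented method.

```python
import math

def plan_picking_route(fruit_positions, start=(0.0, 0.0)):
    """Order mapped fruit positions by repeatedly visiting the nearest
    unpicked fruit (greedy nearest-neighbor heuristic)."""
    remaining = list(fruit_positions)
    route, current = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

fruit = [(5, 5), (1, 1), (3, 2)]
print(plan_picking_route(fruit))  # [(1, 1), (3, 2), (5, 5)]
```

A more computationally intense planner (e.g. an exact traveling-salesman solver) would match the abstract's precomputed-plan scenario instead.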

Multi-sensor fusion-based autonomous obstacle avoidance unmanned aerial vehicle system and control method

The invention discloses a multi-sensor fusion-based autonomous obstacle avoidance unmanned aerial vehicle system and a control method. The system includes an environment information real-time detection module, which detects the surrounding environment in real time using multi-sensor fusion and transmits the detected information to an obstacle data analysis processing module; the obstacle data analysis processing module, which builds a structural representation of the environment from the received information in order to identify obstacles; and an obstacle avoidance decision-making module, which determines an avoidance maneuver from the output of the analysis module and executes it via the flight control system driving the power modules. Binocular machine vision systems arranged around the body of the unmanned aerial vehicle enable 3D space reconstruction, and an ultrasonic device and a millimeter-wave radar facing the direction of travel operate in cooperation, making the obstacle avoidance method more comprehensive. The system offers real-time obstacle detection, a long visual detection range, and high resolution.
Owner:STATE GRID INTELLIGENCE TECH CO LTD
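The fusion-then-decide pipeline the abstract describes can be sketched at its simplest as a conservative combination of per-sensor range estimates followed by a threshold rule. The sensor names, the keep-the-closest fusion rule, and the 2.0 m threshold are illustrative assumptions, not details from the patent.

```python
def fuse_obstacle_distance(readings):
    """Conservatively fuse per-sensor range estimates: ignore sensors
    reporting no return (None) and keep the closest valid obstacle."""
    valid = [d for d in readings.values() if d is not None]
    return min(valid) if valid else None

def avoidance_decision(distance, stop_threshold=2.0):
    """Trivial decision rule: trigger avoidance when an obstacle is near."""
    if distance is None:
        return "continue"
    return "avoid" if distance < stop_threshold else "continue"

readings = {"stereo_vision": 6.5, "ultrasonic": None, "mmwave_radar": 1.8}
d = fuse_obstacle_distance(readings)
print(d, avoidance_decision(d))  # 1.8 avoid
```

A real implementation would weight sensors by confidence and fuse full 3D reconstructions rather than scalar ranges, but the control flow — detect, fuse, decide, actuate — is the same.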

Human-machine-interface and method for manipulating data in a machine vision system

This invention provides a Graphical User Interface (GUI) that operates in connection with a machine vision detector or other machine vision system and presents a highly intuitive, industrial machine-like appearance and layout. The GUI includes a centralized image frame window surrounded by panes with buttons and specific interface components that the user employs in each step of setting up and running a machine vision system. One pane lets the user view and manipulate a recorded filmstrip of image thumbnails taken in sequence, with specialized highlighting (colors or patterns) that conveys useful information about the underlying images. The system is set up and run using a sequential series of buttons or switches that the user activates in turn to perform each required step: connecting to a vision system, training the system to recognize or detect objects/parts, configuring the logic that handles recognition/detection signals, setting up system outputs based upon the logical results, and finally running the programmed system in real time. Logic is programmed in a programming window that includes a ladder logic arrangement. A thumbnail window on the programming window displays an image from a filmstrip, focused on the locations of the image (and the underlying viewed object/part) to which the selected contact element applies.
Owner:COGNEX CORP
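The ladder logic the GUI exposes is, at its core, an AND of contact conditions along each rung. A minimal evaluator is sketched below; the "!" prefix for normally-closed contacts and the signal names are our own conventions for illustration, not part of the patented interface.

```python
def eval_rung(contacts, inputs):
    """Evaluate one ladder-logic rung as an AND of its contacts.
    A normally-closed contact (prefixed "!") passes when its input is False."""
    result = True
    for c in contacts:
        if c.startswith("!"):
            result = result and not inputs[c[1:]]
        else:
            result = result and inputs[c]
    return result

# Energize a reject output only when a part is present and it failed
# inspection (signal names are illustrative).
inputs = {"part_present": True, "inspection_pass": False}
print(eval_rung(["part_present", "!inspection_pass"], inputs))  # True
```

Parallel branches (OR logic) and output coils would extend this, but the rung-as-AND evaluation is the essential semantics behind the drag-and-drop programming window described above.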

Method and apparatus for nondestructively testing food synthetic quality

The invention discloses a non-destructive inspection method and device for the comprehensive quality of food. Image information reflecting characteristics of the inspected object, such as color, texture, size, and shape, is acquired by a machine vision system, while spectral information reflecting physical and chemical indexes of the sample, such as moisture, sugar, protein, lipid, and pH value, is obtained by a spectrographic detection system. The acquired image and spectral information undergo preprocessing on the data layer and information integration on the characteristic layer or the decision layer; together with a built-in food classification and grading expert system, the quality of the inspected object is comprehensively graded. By combining optical image information and spectral information to inspect both the appearance and the inner quality of food, the invention enables quick, convenient, non-destructive, and objective inspection of comprehensive food quality. The method and device are widely applicable to classifying food materials, monitoring food processing, and grading food, ensuring food quality and contributing to good quality at a low price.
Owner:ZHEJIANG UNIV
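Characteristic-layer (feature-level) fusion, as described above, amounts to combining the vision-derived and spectroscopy-derived feature vectors before classification. The sketch below uses simple concatenation and a toy threshold grader; the normalization to [0, 1], the thresholds, and the grade labels are illustrative assumptions, far simpler than the patent's expert system.

```python
def fuse_features(image_feats, spectral_feats):
    """Feature-level fusion: concatenate normalized image-derived
    features (color, size, ...) with spectral features (moisture, sugar, ...)."""
    return image_feats + spectral_feats

def grade(features, thresholds=(0.8, 0.5)):
    """Toy grading rule on the mean fused score (illustrative only)."""
    score = sum(features) / len(features)
    if score >= thresholds[0]:
        return "grade A"
    if score >= thresholds[1]:
        return "grade B"
    return "grade C"

fused = fuse_features([0.9, 0.8], [0.7, 0.9])  # mean = 0.825
print(grade(fused))  # grade A
```

Decision-layer fusion, the alternative the abstract mentions, would instead grade each modality separately and combine the two verdicts.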

Seedling replanting system based on machine vision

The invention discloses a seedling transplanting system based on machine vision, consisting of seedling-transport, machine-vision identification, control, and transplanting components. A conveyor belt in the seedling-transport component conveys seedling trays to the transplanting position and advances automatically once transplanting is complete. Color images obtained from the machine vision system are used to measure several appearance and growth indicators, such as seedling size and leaf count, to judge comprehensively whether each seedling is suitable for transplanting and to determine the seedlings' locations, which a computer transfers to the control system through an RS-232 serial communication interface. The PLC of the control system then sends commands to an end effector to transplant the seedlings automatically. The end effector uses shovel-shaped fingers driven by a linear cylinder, and the finger angle can be adjusted to accommodate seedling trays of different sizes. The invention uses the vision system to obtain the growth and location information of the seedlings and realizes transplantation under the control of the computer and the PLC.
Owner:ZHEJIANG UNIV
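The suitability judgment above — combining measured indicators to decide which tray cells to transplant — can be sketched as a simple filter over vision measurements. The dictionary keys, the leaf-count and size thresholds, and the (row, col) cell addressing are illustrative assumptions, not values from the patent.

```python
def select_transplantable(seedlings, min_leaves=2, min_size_mm=10.0):
    """Filter machine-vision measurements to seedlings fit for
    transplanting, returning their tray (row, col) locations."""
    return [s["cell"] for s in seedlings
            if s["leaves"] >= min_leaves and s["size_mm"] >= min_size_mm]

tray = [
    {"cell": (0, 0), "leaves": 3, "size_mm": 14.0},
    {"cell": (0, 1), "leaves": 1, "size_mm": 12.0},   # too few leaves
    {"cell": (0, 2), "leaves": 2, "size_mm": 8.0},    # too small
]
print(select_transplantable(tray))  # [(0, 0)]
```

The resulting cell coordinates are what would be serialized over the RS-232 link for the PLC to drive the end effector.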

Imaging for a machine-vision system

Manufacturing lines include inspection systems for monitoring the quality of the parts produced. Manufacturing lines for making semiconductor devices generally inspect each fabricated part, and the information obtained is used to correct manufacturing problems in the semiconductor fabrication plant. A machine-vision system for inspecting devices includes a light source for propagating light to the device and an image detector that receives light from the device. Also included is a light sensor assembly that receives a portion of the light from the light source and produces an output signal responsive to the intensity of the light it receives. In response to that output, a controller keeps the amount of light received by the image detector within a desired intensity range. The image detector may include an array of imaging pixels, and the imaging system may also include a memory device that stores correction values for at least one of the pixels in the array. To minimize or control thermal drift of the signals output from the pixel array, the machine-vision system may also include a cooling element attached to the imaging device. The light source may be strobed; the image detector remains in a fixed position with respect to the strobed light source, and a translation element moves the strobed light source and image detector with respect to the device. The strobed light may be alternated between a first and a second level.
Owner:ISMECA SEMICONDUCTOR HOLDING SA
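Two mechanisms in this abstract lend themselves to a short sketch: the closed loop that keeps the sensed light intensity in a target range, and the stored per-pixel correction values. Both functions below are minimal illustrations; the normalized intensity range, the step size, and the offset/gain correction model are our assumptions, not the patented design.

```python
def regulate_exposure(exposure, sensed_intensity, target=(0.4, 0.6), step=0.05):
    """Closed-loop light control: nudge the exposure (or strobe energy)
    until the light-sensor reading falls inside the desired range."""
    lo, hi = target
    if sensed_intensity < lo:
        return exposure + step   # too dark: raise the light level
    if sensed_intensity > hi:
        return exposure - step   # too bright: lower it
    return exposure              # in range: hold

def correct_pixel(raw, offset, gain):
    """Apply a stored per-pixel correction (offset then gain), as the
    pixel-correction memory suggests."""
    return (raw - offset) * gain

print(regulate_exposure(1.0, 0.3))       # 1.05 (too dark: raised)
print(correct_pixel(120.0, 20.0, 1.5))   # 150.0
```

In a real system the regulation step would run once per strobe cycle, and the correction table would hold one (offset, gain) pair per pixel.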

System and method for excluding extraneous features from inspection operations performed by a machine vision inspection system

Systems and methods for a machine vision metrology and inspection system are provided for excluding extraneous image features from various inspection or control operations of the machine vision system. The extraneous image features may be in close proximity to other image features to be inspected. One aspect of various embodiments of the invention is that no filtering or other image modifications are performed on the “non-excluded” original image data in the region of the feature to be inspected. Another aspect of various embodiments of the invention is that a region of interest associated with a video tool provided by the user interface of the machine vision system can encompass a region or regions of the feature to be inspected, as well as regions having excluded data, making the video tool easy to use and robust against reasonably expected variations in the spacing between the features to be inspected and the extraneous image features. In various embodiments of the invention, the extraneous image excluding operations are concentrated in the region of interest defining operations of the machine vision system, such that the feature measuring or characterizing operations of the machine vision system operate similarly whether there is excluded data in the associated region of interest or not. Various user interface features and methods are provided for implementing and using the extraneous image feature excluding operations when the machine vision system is operated in a learning or training mode used to create part programs usable for repeated automatic workpiece inspection. The invention is of particular use when inspecting flat panel display screen masks having occluded features to be inspected.
Owner:MITUTOYO CORP
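The key idea above — measuring only the non-excluded pixels of a region of interest while leaving the original image data unmodified — can be sketched as follows. The list-of-rows image representation, the rectangular ROI tuple, and the mean-intensity measurement are illustrative assumptions standing in for the patent's video-tool operations.

```python
def measure_in_roi(image, roi, exclude):
    """Average intensity over a rectangular region of interest,
    skipping pixels flagged as extraneous. The original pixel values
    are never filtered or modified, merely excluded. 'image' is a list
    of rows; roi = (row0, row1, col0, col1); exclude is a set of (r, c)."""
    vals = [image[r][c]
            for r in range(roi[0], roi[1])
            for c in range(roi[2], roi[3])
            if (r, c) not in exclude]
    return sum(vals) / len(vals)

img = [[10, 10, 10],
       [10, 90, 10],   # (1, 1) is an extraneous bright feature
       [10, 10, 10]]
print(measure_in_roi(img, (0, 3, 0, 3), exclude={(1, 1)}))  # 10.0
```

Without the exclusion set, the extraneous feature would bias the measurement (the mean would be about 18.9); with it, the measuring operation behaves exactly as it would on a clean region, which is the robustness property the abstract emphasizes.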