
3212 results for "vision sensor" patented technology

Visual recognition and positioning method for robot intelligent capture application

The invention relates to a visual recognition and positioning method for robot intelligent capture applications. An RGB-D scene image is collected, a deep convolutional neural network trained with supervision is used to recognize the category of a target in the color image and its corresponding position region, the pose of the target is analyzed in combination with the depth image, and the pose information required by the controller is obtained through coordinate transformation, completing visual recognition and positioning. With this method, both recognition and positioning are achieved with a single visual sensor, simplifying the existing target-detection process and reducing application cost. Because the deep convolutional neural network learns image features, the method is highly robust to many kinds of environmental interference, such as random target placement, viewing-angle changes, and illumination and background interference, improving recognition and positioning accuracy under complicated working conditions. In addition, the positioning method yields exact pose information beyond the object's spatial position distribution, facilitating strategy planning for intelligent capture.
Owner:合肥哈工慧拣智能科技有限公司
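
The positioning step described in this abstract reduces to back-projecting a detected pixel through the camera intrinsics using the depth value, then transforming the point into the robot base frame. Below is a minimal sketch of that step; the detector output, intrinsics, and camera-to-base transform are illustrative assumptions, not values from the patent.

```python
# Minimal sketch: combine a 2D CNN detection with the depth image and
# camera intrinsics to recover a 3D grasp point in the robot base frame.
import numpy as np

K = np.array([[615.0,   0.0, 320.0],   # assumed pinhole intrinsics
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_base_cam = np.eye(4)                 # assumed camera-to-base extrinsic (from calibration)

def pixel_to_base(u, v, depth_m):
    """Back-project a pixel with known depth into the robot base frame."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = ray * depth_m              # 3D point in the camera frame
    p_hom = np.append(p_cam, 1.0)
    return (T_base_cam @ p_hom)[:3]

# Hypothetical CNN detection: class label plus bounding-box centre pixel.
det = {"label": "bolt", "u": 352, "v": 261}
depth = 0.84                           # metres, read from the depth image at (u, v)
print(det["label"], pixel_to_base(det["u"], det["v"], depth))
```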

Flexible visually directed medical intubation instrument and method

A flexible medical intubation instrument for placement into an animal or human patient comprises a catheter with at least a pair of longitudinally extending lumens or channels: a sensor/actuator channel and a working channel. The sensor/actuator channel carries a fixed or slidably removable sensor cable having a sensor for sensing a characteristic or condition, including any of the following: a visual sensor for optical viewing, a chemical sensor, a pH sensor, a pressure sensor, an infection sensor, an audio sensor, or a temperature sensor. The sensors are coupled by the sensor/actuator cable, through light transmission, electric current, or radio transmission, to a viewing instrument or other output device such as a meter or video screen that displays the condition sensed within the patient's body, while the flexibility of the composite catheter-and-cable structure enables the entire instrument to flex laterally as it moves through curved passages or around obstructions during insertion or removal. While observations are made through the sensor channel, the working channel simultaneously functions as a drain, an irrigation duct, or a feeding tube, or provides a passage for inserting one or a succession of surgical devices, so that the catheter serves as a protective artificial tract or liner as surgical devices are inserted and removed through it, minimizing tissue trauma, infection, and pain for the patient. The instrument can be used in urology, as a visually directed nasogastric tube, as a visually directed external gastrostomy tube, or as a visually directed internal gastric or percutaneous endoscopic gastrostomy tube, among other applications.
Owner:PERCUVISION

Safe state recognition system for people on the basis of machine vision

The invention discloses a machine-vision-based system for recognizing the safe state of people, aiming to solve the prior-art problem that intelligent control decisions for vehicle driving behaviour cannot be formulated according to the safe state of nearby people. The method comprises the following steps: collecting a vehicle-mounted dynamic video image; detecting and recognizing a pedestrian in a region of interest in front of the vehicle; tracking the moving pedestrian; detecting and calculating the distance to the pedestrian in front of the vehicle; obtaining the real-time vehicle speed; and recognizing the safe state of the pedestrian. Recognizing the safe state of the pedestrian comprises: building a critical conflict area; judging the safe state when the pedestrian is outside the conflict area during relative motion; and judging the safe state when the pedestrian is inside the conflict area during relative motion. Whether the pedestrian will enter the dangerous area is predicted from the relative speed and relative position of the motor vehicle and the pedestrian, obtained by the vision sensor in the above steps. The system can assist drivers in taking measures to avoid colliding with pedestrians.
Owner:JILIN UNIV
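
The danger-prediction step amounts to extrapolating the pedestrian's relative track and testing it against the critical conflict area. The sketch below illustrates this under a constant-velocity assumption; the zone dimensions and sample values are assumptions for illustration, not figures from the patent.

```python
# Minimal sketch: predict whether a pedestrian track enters a rectangular
# conflict area in front of the vehicle, given relative position/velocity.
import numpy as np

CONFLICT_X = (0.0, 15.0)   # metres ahead of the vehicle (assumed zone depth)
CONFLICT_Y = (-1.5, 1.5)   # metres lateral, roughly one lane width (assumed)

def time_to_conflict(p_rel, v_rel, horizon=3.0, dt=0.1):
    """Return the first time within `horizon` seconds at which the predicted
    pedestrian position lies inside the conflict area, or None if it never does."""
    p = np.asarray(p_rel, float)
    v = np.asarray(v_rel, float)
    for t in np.arange(0.0, horizon, dt):
        x, y = p + v * t               # constant-velocity prediction
        if CONFLICT_X[0] <= x <= CONFLICT_X[1] and CONFLICT_Y[0] <= y <= CONFLICT_Y[1]:
            return t
    return None

t = time_to_conflict(p_rel=(12.0, 3.0), v_rel=(-6.0, -1.2))
print("unsafe, enters conflict area in %.1f s" % t if t is not None else "safe")
```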

Robot system having image processing function

A robot system having an image processing function capable of detecting the position and/or posture of individual workpieces randomly arranged in a stack, in order to determine the posture, or the posture and position, of a robot operation suited to the detected position and/or posture of the workpiece. Reference models are created and stored from two-dimensional images of a reference workpiece captured from a plurality of directions by a first visual sensor. Also stored are the relative position/posture of the first visual sensor with respect to the workpiece at each image capture, and the relative position/posture at which a second visual sensor is to be situated with respect to the workpiece. Matching is performed between an image of a stack of workpieces captured by the camera and the reference models, and the image of a workpiece matched with one reference model is selected. A three-dimensional position/posture of the workpiece is determined from the selected workpiece image, the selected reference model, and the position/posture information associated with that reference model. The position/posture at which the second visual sensor is to be situated for measurement is determined from the determined workpiece position/posture and the stored relative position/posture of the second visual sensor, and the precise position/posture of the workpiece is then measured by the second visual sensor at that location. Based on the measuring results of the second visual sensor, a robot can pick individual workpieces out of a randomly arranged stack.
Owner:FANUC LTD
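
The core of the matching step is that each reference model pairs a template image with the sensor-to-workpiece pose recorded when it was captured, so the best-matching template indicates which stored pose applies. The sketch below illustrates this with synthetic templates and normalised cross-correlation as a stand-in scoring function; the data and scoring choice are assumptions, not the patent's specific matching algorithm.

```python
# Minimal sketch: select the reference model whose template best matches the
# observed image, then read off the workpiece pose stored with that model.
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equal-sized image patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

rng = np.random.default_rng(0)
reference_models = [
    # (template image, 4x4 workpiece pose relative to the first sensor)
    (rng.random((32, 32)), np.eye(4)),
    (rng.random((32, 32)), np.diag([1.0, -1.0, -1.0, 1.0])),  # flipped pose (assumed)
]

observed = reference_models[1][0] + 0.05 * rng.random((32, 32))  # stand-in capture
scores = [ncc(observed, tmpl) for tmpl, _ in reference_models]
best = int(np.argmax(scores))
workpiece_pose = reference_models[best][1]
print("matched model", best, "with score %.2f" % scores[best])
# The second sensor would then be moved to its stored pose relative to
# `workpiece_pose` for the precise measurement described in the abstract.
```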

Train operation fault automatic detection system and method based on binocular stereoscopic vision

The invention discloses a train operation fault automatic detection system and method based on binocular stereoscopic vision. The method comprises the steps of: collecting left and right camera images of different parts of a train with a binocular stereoscopic vision sensor; synchronously and precisely locating the various types of target regions where faults are liable to occur, using either a multi-layer convolutional neural network (deep learning) or a conventional machine learning method, combined with a left/right image consistency constraint on the fault (or no-fault) state of the same part; performing preliminary fault classification and recognition on each located region; in non-fault regions, synchronously and precisely locating multiple parts by combining prior information on the number of parts in the target regions; and matching feature points between the left and right images of the same part using binocular stereoscopic vision, performing three-dimensional reconstruction, calculating key dimensions, and quantitatively describing fine faults and gradually developing hidden faults such as loosening or play. The method achieves synchronous, precise detection of deformation, displacement, and falling-off faults of all major train parts, or three-dimensional quantitative description of fine and gradually developing hidden defects, and is more complete, timely, and accurate.
Owner:BEIHANG UNIV
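
The quantitative-measurement step rests on standard stereo triangulation: for rectified images, a matched point pair with disparity d lies at depth Z = f·B/d, and a key dimension is the distance between two reconstructed points. The sketch below illustrates this; the focal length, baseline, and matched points are assumed example values, not calibration data from the patent.

```python
# Minimal sketch: triangulate matched left/right feature points and measure a
# key dimension on a train part from the reconstructed 3D coordinates.
import numpy as np

FOCAL_PX = 1200.0    # assumed focal length in pixels (rectified images)
BASELINE_M = 0.30    # assumed stereo baseline in metres
CX, CY = 640.0, 360.0

def reconstruct(u_left, v, disparity):
    """Triangulate one matched point pair into camera coordinates."""
    z = FOCAL_PX * BASELINE_M / disparity   # depth from disparity: Z = f*B/d
    x = (u_left - CX) * z / FOCAL_PX
    y = (v - CY) * z / FOCAL_PX
    return np.array([x, y, z])

# Two hypothetical matched feature points on a part (e.g. the ends of a bolt).
p1 = reconstruct(700.0, 400.0, disparity=48.0)
p2 = reconstruct(760.0, 404.0, disparity=47.5)
key_size = np.linalg.norm(p2 - p1)          # metres; compare against nominal size
print("measured key size: %.3f m" % key_size)
```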

Systematic calibration method of welding robot guided by line structured light vision sensor

The invention relates to a system calibration method for a welding robot guided by a line-structured-light vision sensor, which comprises the following steps: first, controlling the robot arm to change pose, acquiring images of a circular target with the camera, matching the circular target image to world coordinates, and obtaining the camera's intrinsic parameter matrix and extrinsic parameter matrix RT; second, solving the line equation of the laser stripe by Hough transform, and using the extrinsic matrix RT obtained in the first step to derive the equation of the laser-stripe plane in the camera coordinate system; third, computing the transformation matrix between the robot arm's end coordinate system and its base coordinate system using a quaternion method; and fourth, computing the coordinates of an end point of the welding workpiece in the robot arm coordinate system, and then computing the pose offset of the workpiece by combining it with the pose of the robot arm. The calibration method is flexible, simple, and fast, with high precision and generality, good stability and timeliness, and a small amount of computation.
Owner:JIANGNAN UNIV +1
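
In the second step, once stripe points observed on the target have been lifted into 3D camera coordinates using the extrinsics RT from the first step, the light-plane equation ax + by + cz + d = 0 can be recovered by a least-squares fit. The sketch below shows one common way to do that fit (SVD on centred points); the sample points are synthetic stand-ins, and the patent does not prescribe this particular fitting routine.

```python
# Minimal sketch: fit the laser light-plane equation to 3D stripe points.
import numpy as np

def fit_plane(points):
    """Fit ax+by+cz+d=0 to an Nx3 point set via SVD; returns (normal, d)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    d = -float(normal @ centroid)
    return normal, d

# Synthetic stripe points lying near the plane z = 0.5x + 0.2y + 1.0
rng = np.random.default_rng(1)
xy = rng.uniform(-0.1, 0.1, size=(50, 2))
z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 1.0 + rng.normal(0, 1e-4, 50)
pts = np.column_stack([xy, z])

normal, d = fit_plane(pts)
print("plane normal:", np.round(normal / normal[2], 3), " d/c: %.3f" % (d / normal[2]))
```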

System and method for enhancing computer-generated images of terrain on aircraft displays

A system and method are disclosed for enhancing the visibility and ensuring the correctness of terrain and navigation information on aircraft displays, such as continuous, three-dimensional perspective-view displays conformal to the visual environment. More specifically, an aircraft display system is disclosed that includes a processing unit, a navigation system, a database storing high-resolution terrain data, a graphics display generator, and a visual display. One or more independent, higher-precision databases with localized position data, such as navigation or position data, are carried onboard. One or more onboard vision sensor systems associated with the navigation system provide real-time spatial position data for display, and one or more data links are available to receive precision spatial position data from ground-based stations. Before terrain and navigational objects (e.g., runways) are displayed, the terrain data is corrected and augmented in real time for the regions relevant and/or critical to flight operations, ensuring that the correct terrain data is displayed with the highest possible integrity. These corrections and augmentations are based on higher-precision but localized onboard data, such as navigational object data, sensor data, or data up-linked from ground stations. Whenever discrepancies exist, terrain data of lower integrity is corrected in real time using data from a higher-integrity source. A predictive data-loading approach substantially reduces computational workload, enabling the processing unit to perform these augmentation and correction operations in real time.
Owner:HONEYWELL INT INC
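
The correction idea can be illustrated as a simple overwrite rule: terrain-database elevations in a flight-critical region are compared against a sparser but higher-integrity source (e.g., surveyed runway data), and cells whose discrepancy exceeds a tolerance are replaced. The sketch below is a toy illustration of that rule; the grid, tolerance, and sample values are assumptions, not values from the patent.

```python
# Minimal sketch: correct lower-integrity terrain cells using sparse
# higher-integrity elevation samples when the discrepancy is too large.
import numpy as np

terrain = np.full((4, 4), 120.0)           # stored terrain elevations, metres
terrain[1, 2] = 95.0                        # stale/erroneous database cell

# Higher-integrity localized data: (row, col, elevation) samples.
precise_samples = [(1, 2, 121.5), (2, 3, 120.2)]
TOLERANCE_M = 5.0

for r, c, z in precise_samples:
    if abs(terrain[r, c] - z) > TOLERANCE_M:
        terrain[r, c] = z                   # overwrite with higher-integrity value

print(terrain)
```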