
72 results about "Visual capture" patented technology

In psychology, visual capture is the dominance of vision over the other sense modalities in forming a percept. In this process, visual input overrides information from the other sensory systems, producing a perceived environment that is not congruent with the actual stimuli. Through this phenomenon, the visual system can disregard what another sensory system is conveying and still supply a coherent explanation of the environment. Visual capture allows one to interpret the location of a sound or the sensation of a touch without relying directly on those stimuli, instead constructing an interpretation that lets the individual perceive a coherent environment.

Visual capture method and device based on depth image and readable storage medium

Inactive · CN107748890A · Effects: improved robustness; texture features · Classifications: character and pattern recognition, cluster algorithms, pattern recognition
The invention discloses a visual capture method and device based on a depth image, and a readable storage medium. The method comprises the following steps: a point cloud image is acquired through a Kinect depth camera; the acquired point cloud image is segmented with the RANSAC (random sample consensus) algorithm and a Euclidean clustering algorithm, and the target object to be identified is obtained through segmentation; 3D global features and color features of the object are extracted and fused to form a new global feature; a multi-class support vector machine (SVM) classifier is trained offline on the new global feature of the object, and the category of the target object is identified by the trained classifier from the new global feature; the category and the grasping position of the target object are then determined; finally, according to the category and the grasping position of the target object, a manipulator and a gripper are controlled to grasp the target object and place it at the specified position. The advantage of the method is that the target object can be accurately identified and grasped.
Owner:SHANTOU UNIV
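A minimal sketch of the pipeline described in this abstract, under several assumptions: open3d and scikit-learn stand in for whatever libraries the patent uses, DBSCAN approximates the Euclidean clustering step, and the fused descriptor below is a deliberately simplified stand-in for the patent's 3D global and color features.

```python
# Sketch only: RANSAC plane removal, Euclidean-style clustering, feature fusion,
# and multi-class SVM classification. Library choices and feature definitions
# are illustrative assumptions, not taken from the patent.
import numpy as np
import open3d as o3d
from sklearn.svm import SVC

def segment_objects(cloud: o3d.geometry.PointCloud):
    """Remove the dominant plane (e.g. a tabletop) and cluster the remainder."""
    _, plane_idx = cloud.segment_plane(distance_threshold=0.01,
                                       ransac_n=3,
                                       num_iterations=1000)
    objects = cloud.select_by_index(plane_idx, invert=True)
    # DBSCAN stands in for Euclidean clustering: nearby points share a label.
    labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=50))
    n_clusters = int(labels.max()) + 1 if labels.size else 0
    return [objects.select_by_index(np.where(labels == k)[0].tolist())
            for k in range(n_clusters)]

def fused_feature(cluster: o3d.geometry.PointCloud) -> np.ndarray:
    """Fuse a crude 3D shape descriptor with a color histogram (a stand-in for
    the patent's 3D global features and color features)."""
    pts = np.asarray(cluster.points)
    shape = np.hstack([pts.mean(axis=0), pts.std(axis=0)])       # 6-D shape cue
    colors = np.asarray(cluster.colors)
    hist, _ = np.histogram(colors.mean(axis=1), bins=16, range=(0, 1))
    return np.hstack([shape, hist / max(hist.sum(), 1)])         # 22-D fused vector

def train_classifier(features: np.ndarray, labels: np.ndarray) -> SVC:
    """Offline training of a multi-class SVM on previously labeled clusters."""
    clf = SVC(kernel="rbf", decision_function_shape="ovr")
    clf.fit(features, labels)
    return clf
```

At runtime, each segmented cluster would be passed through fused_feature and the trained classifier to obtain the object category before a grasping position is chosen.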

Floor sweeping robot having visual navigation function and navigation method thereof

The invention provides a floor sweeping robot having a visual navigation function and a navigation method thereof. The floor sweeping robot comprises a sweeping host and a visual navigation controller. The sweeping host comprises a sensing unit, a driving unit and a driving control unit. The visual navigation controller includes a first visual capture unit, a second visual capture unit, a WIFI communication unit, a storage unit, an indoor simultaneous localization and mapping unit, and a path navigation and control unit. The first and second visual capture units capture the relevant visual information, which includes the map information of the local map and the global map of the room in which the floor sweeping robot is located. The path navigation and control unit sends out a path navigation instruction according to the visual information, the map information and the sensing information, and the driving control unit controls the driving unit according to the path navigation instruction. Advantageously, the visual navigation and path planning functions can be realized through a simple modification of an ordinary floor sweeping robot that has no camera.
Owner:LOOQ SYST
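The patent describes the architecture but not a specific planner, so the following is only an illustrative sketch of how a path navigation instruction could be derived from a mapped room: a simple A* search over an occupancy grid (0 = free, 1 = obstacle) built from the local and global map information.

```python
# Illustrative A* path search on an occupancy grid; the patent does not name
# a planning algorithm, so this is an assumption for demonstration only.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[cur] + 1
                if (nr, nc) not in cost or new_cost < cost[(nr, nc)]:
                    cost[(nr, nc)] = new_cost
                    heuristic = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(frontier, (new_cost + heuristic, (nr, nc)))
                    came_from[(nr, nc)] = cur

# Example: plan across a 4x4 room map with one obstacle block.
room = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(astar(room, (0, 0), (3, 3)))
```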

Visual perception device and control method thereof

The invention relates to a visual perception device comprising a processing unit, a camera unit, a display unit, a display screen and a storage unit. The processing unit comprises an image processing module, a visual calibration module, a cursor control module and an object control module. The image processing module controls the camera unit to capture visual images of the user's eyes while the user stares at a target object on the display screen, processes the visual images to obtain a visual focal position, and computes a visual calibration offset. The visual calibration module carries out coordinate calibration of the visual focal position according to the visual calibration offset. The cursor control module selects the area around the visual focal point as a visual cursor and judges whether the dwell time of the visual cursor is longer than a set time. The object control module controls the visual cursor to select the target object when the dwell time is longer than the set time, and allows the next target object to enter the visual cursor area when the dwell time is shorter than or equal to the set time. The invention reduces the unreliability of visual capture and the number of manual interactions, and at the same time saves electricity and energy.
Owner:HENGQIN INT INTPROP EXCHANGE CENT CO LTD
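A hedged sketch of the dwell-time selection rule described above: a gaze focal point, corrected by the calibration offset, selects the object under the cursor only after dwelling there longer than the set time. All names, thresholds and data shapes below are illustrative; the patent defines the behavior, not this API.

```python
# Dwell-time gaze cursor: illustrative logic only, not the patent's implementation.
import time

DWELL_SECONDS = 1.0          # the "set time" threshold (assumed value)
CURSOR_RADIUS = 40           # pixels around the focal point treated as the cursor

class GazeCursor:
    def __init__(self, offset=(0, 0)):
        self.offset = offset          # visual calibration offset (dx, dy)
        self.target = None            # object currently under the cursor
        self.enter_time = None

    def calibrate(self, focal_point):
        """Apply the coordinate calibration to a raw focal position."""
        return (focal_point[0] + self.offset[0], focal_point[1] + self.offset[1])

    def update(self, focal_point, objects):
        """Return the selected object once dwell time exceeds the threshold,
        otherwise None (letting the next candidate enter the cursor area)."""
        x, y = self.calibrate(focal_point)
        hit = next((o for o in objects
                    if abs(o["x"] - x) <= CURSOR_RADIUS
                    and abs(o["y"] - y) <= CURSOR_RADIUS), None)
        now = time.monotonic()
        if hit is not self.target:
            self.target, self.enter_time = hit, now    # cursor moved to a new target
            return None
        if hit is not None and now - self.enter_time > DWELL_SECONDS:
            return hit                                  # dwelled long enough: select
        return None
```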

Visual capture based unmanned aerial vehicle transplanting system

The invention relates to the technical field of unmanned aerial vehicle applications, in particular to a visual capture based unmanned aerial vehicle transplanting system. The system comprises an aircraft body and a transplanting mechanism. The transplanting mechanism is slidably arranged at one end of the bottom of the body to be connected with seedlings for transplanting, and includes a first sliding table and a transplanting assembly as well as a material channel and a clamping mechanism. The material channel is horizontally arranged at the other end of the base of the body to accommodate the seedlings to be transplanted; the clamping mechanism is arranged above the material channel at the other end of the base to take seedlings from the material channel, and includes a second sliding table and a clamping assembly. By replacing the current mainstream manual transplanting mode with an automatic one, the working efficiency is improved, time is saved, and the labor intensity is low; compared with a vehicle transplanter in the prior art, the unmanned aerial vehicle transplanting system has a simple structure, a small size, less destructive power to farmland and lower cost.
Owner:倪晋挺

Large-view-field real-time deformation measurement system and method for spacecraft structure static test

Pending · CN113513999A · Effects: dynamic real-time measurement, real-time measurement · Classifications: using optical means, strength properties, classical mechanics, uncrewed vehicle
The invention relates to a large-view-field real-time deformation measurement system and method for spacecraft structure static tests, and belongs to the field of structural deformation measurement in large spacecraft static load tests. The system comprises a test piece module, a calibration module, a multi-camera networking vision measurement module and a self-adaptive view field adjustment module. A test supporting tool fixes the test piece and provides a loading boundary for it, and the self-adaptive view field adjustment module adjusts the measurement view field and measurement position of the multi-camera networking system to achieve global visual capture of all measured points on measured pieces of various sizes and shapes. The measurement method combines a fixed calibration-target system with a calibration system based on a calibration ruler suspended from a mobile unmanned aerial vehicle to calibrate the multi-camera measurement system. High-precision measurement by the multi-camera networking measurement system is achieved through real-time synchronous calibration and correction of the camera group before the test and before each measurement during the test.
Owner:BEIJING SATELLITE MFG FACTORY
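As a minimal sketch of the per-camera calibration step mentioned above, the following uses OpenCV checkerboard calibration. The board dimensions, square size and file paths are assumptions; the patent's actual fixed target and UAV-suspended calibration ruler are not modeled here.

```python
# Per-camera intrinsic calibration from images of a fixed checkerboard target.
# Sketch only; parameters and paths are illustrative assumptions.
import glob
import cv2
import numpy as np

BOARD = (9, 6)        # inner corners per row/column of the checkerboard (assumed)
SQUARE = 0.025        # square edge length in meters (assumed)

def calibrate_camera(image_glob):
    """Return (camera_matrix, dist_coeffs) estimated from checkerboard views."""
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE
    obj_points, img_points, size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, BOARD)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            size = gray.shape[::-1]
    _, mtx, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
    return mtx, dist

# Each camera in the network would be calibrated (and re-checked before each
# measurement) from its own image set, e.g.:
# K1, d1 = calibrate_camera("cam1/*.png")
```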

Robot Vision Grasping Method

The invention discloses a robot vision grasping method. The method includes the following steps: a product information tag is arranged on a to-be-grasped product, the tag comprising the size information of the product and the position of the tag on the product; the product is conveyed by a conveying device; the conveying device stops conveying the product; a picture of the product in the static state, including an image of the product information tag, is obtained through an image obtaining device; the image obtaining device transmits the picture to an image processing control device, which obtains from the image of the tag the position of the tag relative to the image obtaining device, the size information of the product and the position of the tag on the product; and the image processing control device transmits corresponding indication information to a robot to control its grasping action on the product in the static state.
Owner:FREESENSE IMAGE TECH
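The patent does not specify the tag format, so the sketch below assumes the product information tag is a machine-readable code such as a QR code whose payload carries the product size and the tag's position on the product; the field names and JSON payload are purely illustrative.

```python
# Decode a product tag and report its pixel position in the captured picture.
# Assumption: the tag is a QR code with a JSON payload; not stated in the patent.
import json
import cv2

def read_product_tag(image_path):
    img = cv2.imread(image_path)
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(img)
    if not data:
        return None                                   # tag not found or not decoded
    tag_center = points.reshape(-1, 2).mean(axis=0)   # tag position in the image
    info = json.loads(data)                           # e.g. {"w": 120, "h": 80,
                                                      #       "tag_offset": [10, 5]}
    return {"tag_center_px": tag_center.tolist(),
            "product_size": (info["w"], info["h"]),
            "tag_offset_on_product": info["tag_offset"]}

# The image processing control device would map tag_center_px into robot
# coordinates (via a camera-to-robot calibration) before commanding the grasp.
```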

Intelligent system for obtaining state of power equipment based on computer vision

The invention aims to provide an intelligent system for obtaining the state of power equipment based on computer vision that offers good safety, high reliability, high adaptability, simple and convenient digital management, convenient use and low production cost. The system comprises an equipment identification module, a visual capture module and a state identification module. The equipment identification module marks a state indication area on the power equipment; when the operation state of the power equipment changes, the state indication area changes. The visual capture module captures image information of the equipment identification module, pre-processes the image information after determining the identity and the effective area of the power equipment, and then transmits the pre-processed image information to the state identification module. The state identification module verifies the pre-processed image information with a computer vision algorithm in cooperation with an internal preset image identification model, and extracts the operation state of the power equipment. The system is applied to the technical field of power equipment state acquisition.
Owner:ZHUHAI LIXIANG TECH +1
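A hedged sketch of the state identification step: here the "preset image identification model" is approximated with OpenCV template matching of the cropped state indication area against reference images of each operation state. The region coordinates, template file names and matching threshold are assumptions for illustration.

```python
# Classify the state indication area by template matching; illustrative only,
# not the patent's actual recognition model.
import cv2

STATE_TEMPLATES = {            # preset reference images per operation state (assumed)
    "on":  cv2.imread("templates/indicator_on.png", cv2.IMREAD_GRAYSCALE),
    "off": cv2.imread("templates/indicator_off.png", cv2.IMREAD_GRAYSCALE),
}
MATCH_THRESHOLD = 0.8          # minimum normalized correlation to accept a state

def identify_state(frame_gray, roi):
    """Crop the state indication area (x, y, w, h) and return the best-matching
    operation state, or None if nothing matches confidently."""
    x, y, w, h = roi
    patch = frame_gray[y:y + h, x:x + w]      # patch must be larger than templates
    best_state, best_score = None, MATCH_THRESHOLD
    for state, template in STATE_TEMPLATES.items():
        result = cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED)
        score = float(result.max())
        if score > best_score:
            best_state, best_score = state, score
    return best_state
```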