77 results about "Visual matching" patented technology

Binocular visible light camera and thermal infrared camera-based target identification method

The invention discloses a binocular visible light camera and thermal infrared camera-based target identification method. The method comprises the following steps: calibrating the internal and external parameters of the two cameras of a binocular visible light camera through the position relationship, in a world coordinate system, between an image collected by the binocular visible light camera and a pseudo-random array stereoscopic target, and obtaining the rotation and translation matrix relating the world coordinate systems of the two cameras; calibrating the internal and external parameters of a thermal infrared camera according to an image it collects; calibrating the position relationship between the binocular visible light camera and the thermal infrared camera; performing binocular stereoscopic visual matching on the images collected by the two cameras of the binocular visible light camera with a SIFT feature detection algorithm, and calculating a visible light binocular three-dimensional point cloud from the matching result; fusing the temperature information of the thermal infrared camera with the three-dimensional point cloud of the binocular visible light camera; and inputting the fusion result into a trained deep neural network for target identification.
Owner:SOUTHWEST UNIV OF SCI & TECH
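The point cloud calculation above can be sketched as follows. This is a minimal illustration assuming an already rectified binocular rig with SIFT matches supplied as pixel pairs (the matching and calibration steps themselves are not shown); the function name and parameters are illustrative, not taken from the patent.

```python
import numpy as np

def disparity_to_points(pts_left, pts_right, f, baseline, cx, cy):
    """Triangulate matched pixel pairs from a rectified binocular rig.

    pts_left, pts_right: (N, 2) pixel coordinates of the SIFT matches;
    f: focal length in pixels; baseline: camera separation in metres;
    (cx, cy): principal point. Returns an (N, 3) point cloud.
    """
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    disparity = pts_left[:, 0] - pts_right[:, 0]  # horizontal shift per match
    Z = f * baseline / disparity                  # depth from disparity
    X = (pts_left[:, 0] - cx) * Z / f
    Y = (pts_left[:, 1] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)

# A single match on the optical axis with a 10-pixel disparity:
cloud = disparity_to_points([[320, 240]], [[310, 240]],
                            f=800.0, baseline=0.025, cx=320.0, cy=240.0)
```

The temperature-fusion step would then attach a per-point value sampled from the calibrated thermal image before feeding the result to the network.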

Automatic spraying system and automatic spraying method based on point cloud and image matching

The invention provides an automatic spraying system based on point cloud and image matching. The system comprises a three-dimensional scanning module, an automatic trajectory planning module, a visual matching module and a spraying module. The three-dimensional scanning module scans the spraying object and builds a point cloud model from the scanned three-dimensional point cloud data; the automatic trajectory planning module plans spraying trajectories in the point cloud space; the visual matching module acquires the transformation relation between the point cloud coordinate system and the spraying robot coordinate system; and the spraying module automatically sprays the object. The system has the advantages that the robot spraying path is planned automatically by an algorithm in the point cloud coordinate system, and the point cloud coordinate system is correlated with the robot coordinate system by the point cloud and image matching algorithm, so that automatic spraying of the object is realized; while spraying efficiency is guaranteed, the spraying quality is greatly improved, the computation required for trajectory planning is reduced, and the trajectory planning quality is improved.
Owner:CHONGQING UNIV
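The visual matching module's core task, recovering the rigid transform between the point cloud frame and the robot frame, is commonly solved with the Kabsch algorithm once corresponding points in both frames are available; the sketch below assumes such correspondences (the function name is illustrative, and the patent's own matching procedure is not shown).

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate R, t mapping src points onto dst (Kabsch algorithm).

    src, dst: (N, 3) corresponding points in the point-cloud and robot
    coordinate systems. Returns rotation R and translation t such that
    dst ~= src @ R.T + t.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 90-degree yaw plus translation from five points:
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = src @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_transform(src, dst)
```

With R and t in hand, every planned trajectory point in the point cloud frame maps directly into robot coordinates.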

Double exposure implementation method for inhomogeneous illumination image

Inactive · CN103530848A · Keep details · Remove color distortion · Image enhancement · Visual matching · Illuminance
The invention discloses a double exposure implementation method for an inhomogeneously illuminated image. The method comprises the following steps: obtaining an illumination image; obtaining a reflection image; overlaying the illumination image with the reflection image to obtain a globally enhanced result; fusing the globally enhanced result with the original image; and carrying out color correction on the enhancement result to obtain a visually matched image. According to the method, the smoothness of the illumination image is constrained, and the reflection image is sharpened using the visual threshold characteristic to preserve the detail information of the image. With the image fusion method, the luminance, contrast ratio and color information of the original image's luminance range are effectively kept. Because the characteristic of human visual perception of average background brightness is introduced, the fused image effectively eliminates color distortion near shadow boundaries. The color of the low-illuminance zone is restored by the color correction technique; the colors of the low-illuminance zone and the luminance range show no obvious distortion, the continuity is good, and the visual effect is more natural.
Owner:AIR FORCE UNIV PLA
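As one loose illustration of the decomposition-and-fusion idea (not the patent's actual estimator), the illumination layer can be approximated with a broad mean filter standing in for the smoothness-constrained estimate, the reflection layer taken as the ratio image, and the brightened exposure fused with the original; every constant and name below is an assumption.

```python
import numpy as np

def enhance(img, kernel_ratio=0.1, eps=1e-6):
    """Retinex-style split and double-exposure fusion of a grayscale
    image with values in [0, 1]. The mean filter is a crude stand-in
    for a smoothness-constrained illumination estimate.
    """
    k = max(3, int(min(img.shape) * kernel_ratio) | 1)  # odd kernel size
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    illum = np.zeros_like(img)
    for i in range(img.shape[0]):                # box filter = illumination
        for j in range(img.shape[1]):
            illum[i, j] = padded[i:i + k, j:j + k].mean()
    reflect = img / (illum + eps)                # detail-carrying layer
    brightened = 1.0 - (1.0 - illum) ** 2        # lift the dark exposure
    return np.clip((brightened * reflect + img) / 2.0, 0.0, 1.0)

# A uniformly underexposed patch is lifted toward mid-gray:
dark = np.full((8, 8), 0.2)
out = enhance(dark)
```

A real implementation would add the visual-threshold sharpening and the color-correction pass described in the abstract.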

Scenic spot scenery moving augmented reality method based on space relationship and image analysis

The invention discloses a scenic spot scenery mobile augmented reality method based on spatial relationships and image analysis. The method comprises the following steps: an object-oriented geographical space data model is adopted to build a scenic spot geographical database according to the mobile augmented reality guide demands and the spatial data organization characteristics of the scenic spot scenery; the built-in sensors of a smartphone are used to obtain the current position coordinates and spatial orientation, a multi-sensor camera sight-line model is built, and the correspondence between real scenery images and the actual geographical space is generated; key frames are extracted from the video image stream shot by the smartphone and rapidly segmented through binarization and mathematical morphology methods; visual matching between the real scenery space and the information space is realized by combining image analysis with the spatial relationships, and the scenic spot scenery is identified; and the identified scenery is tracked and registered by a motion detection method. The method overcomes the poor precision of purely spatial-relationship approaches and the low efficiency of image recognition technology.
Owner:HANGZHOU NORMAL UNIVERSITY
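The key-frame segmentation step, binarization followed by mathematical morphology, can be sketched with a threshold and one 3x3 morphological opening; this pure-NumPy illustration uses assumed parameters and is not the patent's specific pipeline.

```python
import numpy as np

def segment(frame, thresh=0.5):
    """Binarise a grayscale key frame (values in [0, 1]) and clean it
    with one morphological opening (erosion then dilation, 3x3 square
    structuring element)."""
    binary = (frame > thresh).astype(np.uint8)

    def erode(b):
        p = np.pad(b, 1, constant_values=1)
        out = np.ones_like(b)
        for di in (-1, 0, 1):                    # AND over the 3x3 window
            for dj in (-1, 0, 1):
                out &= p[1 + di:1 + di + b.shape[0], 1 + dj:1 + dj + b.shape[1]]
        return out

    def dilate(b):
        p = np.pad(b, 1, constant_values=0)
        out = np.zeros_like(b)
        for di in (-1, 0, 1):                    # OR over the 3x3 window
            for dj in (-1, 0, 1):
                out |= p[1 + di:1 + di + b.shape[0], 1 + dj:1 + dj + b.shape[1]]
        return out

    return dilate(erode(binary))  # opening removes isolated noise pixels

# An isolated bright pixel is removed; a solid region survives:
frame = np.zeros((10, 10))
frame[2, 2] = 1.0
frame[5:9, 5:9] = 1.0
mask = segment(frame)
```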

Vehicle re-recognition method based on space-time constraint model optimization

The invention discloses a vehicle re-recognition method based on space-time constraint model optimization. The method comprises the following steps: 1) obtaining a to-be-queried vehicle image; 2) for a given vehicle query image and a plurality of candidate pictures, extracting vehicle attitude features through a vehicle attitude classifier and outputting a vehicle attitude category; 3) fusing the vehicle attitude feature and the fine-grained identity feature of the vehicle to obtain a fused feature based on visual information, and obtaining a visual matching probability; 4) estimating the relative driving direction of the vehicle, and establishing a vehicle space-time transfer model; 5) obtaining a vehicle space-time matching probability; 6) based on a Bayesian probability model, combining the visual matching probability and the space-time matching probability to obtain the final vehicle matching joint probability; and 7) arranging the joint probabilities of the queried vehicle and all candidate vehicles in descending order to obtain a vehicle re-recognition ranking table. The method greatly reduces the false recognition rate of vehicles and improves the accuracy of the final recognition result.
Owner:WUHAN UNIV OF TECH
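Under an independence assumption (one simple reading of the Bayesian combination in steps 6 and 7, not stated explicitly by the patent), the joint probability reduces to the product of the two cues per candidate, followed by a descending sort:

```python
def rank_candidates(visual_probs, st_probs):
    """Fuse per-candidate visual and spatio-temporal matching
    probabilities into a joint score; return candidate indices in
    descending order of the joint probability, plus the scores."""
    joint = [v * s for v, s in zip(visual_probs, st_probs)]
    order = sorted(range(len(joint)), key=lambda i: joint[i], reverse=True)
    return order, joint

# A visually similar but spatio-temporally implausible candidate drops:
order, joint = rank_candidates([0.9, 0.6, 0.3], [0.1, 0.8, 0.9])
```

Here candidate 0 looks most similar (0.9) but its space-time probability (0.1) demotes it below the two plausible candidates, which is exactly the false-match suppression the abstract claims.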

Comprehensive evaluation optimization method and comprehensive evaluation optimization system for light source vision and non-vision effect performance

The invention is applicable to the technical field of illumination spectra, and provides a comprehensive evaluation and optimization method for the visual and non-visual effect performance of a light source. The method comprises the following steps: obtaining a non-visual biological effect matching function c̄(lambda), a scotopic vision matching function s̄(lambda) and a photopic vision matching function p̄(lambda) from the spectral sensitivity curves of the non-visual biological effect C(lambda), scotopic vision V'(lambda) and photopic vision V(lambda); according to c̄(lambda), s̄(lambda), p̄(lambda) and the arrangement of the light source spectrum, obtaining the radiation efficiency functions of C(lambda), V'(lambda) and V(lambda), and normalizing them to obtain the three stimulus values C, S and P corresponding to C(lambda), V'(lambda) and V(lambda); calculating c, s and p, and establishing a CSP spectrum locus diagram from c, s and p; drawing the coordinates corresponding to equal-energy white and black-body radiation light sources on the CSP diagram; and drawing an S/P-value spectrum and a C/P-value spectrum on the CSP diagram. The method integrates and unifies the related visual responses, thereby facilitating comprehensive performance improvement across the different visual channels and enabling accurate calculation; furthermore, it facilitates light source spectrum optimization to satisfy the required visual performance.
Owner:SHENZHEN UNIV
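The stimulus values C, S and P are integrals of the light source's spectral power distribution against the three matching functions, normalized to give the coordinates c, s and p. A sketch with placeholder flat matching functions follows; real c̄(lambda), s̄(lambda), p̄(lambda) curves would be substituted, and the function names are illustrative.

```python
import numpy as np

def _integrate(x, y):
    # Trapezoidal rule over an arbitrary wavelength grid.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def csp_coordinates(wl, spd, c_bar, s_bar, p_bar):
    """Integrate an SPD against the three matching functions to get the
    stimulus values (C, S, P), then normalise to coordinates (c, s, p)
    with c + s + p = 1."""
    C = _integrate(wl, spd * c_bar)
    S = _integrate(wl, spd * s_bar)
    P = _integrate(wl, spd * p_bar)
    total = C + S + P
    return (C, S, P), (C / total, S / total, P / total)

# Equal-energy spectrum against flat (placeholder) matching functions:
wl = np.linspace(380.0, 780.0, 81)
flat = np.ones_like(wl)
(C, S, P), (c, s, p) = csp_coordinates(wl, flat, flat, flat, flat)
```

With identical placeholder curves the equal-energy source lands at the centre of the CSP locus (c = s = p = 1/3), mirroring the role of the equal-energy white point in the diagram.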

Method of realizing real-time visual tracking on target based on image color and texture analysis

The invention relates to the technical field of unmanned aerial vehicles, and particularly discloses a method of realizing real-time visual tracking of a target based on image color and texture analysis. The method comprises three steps. In the image feature extraction and analysis step, suitable visual features are extracted according to the texture and color of the image, salient features of the target in the scene are analysed and extracted, and a reference is provided for the design of the tracking algorithm. In the tracking-algorithm design step, a suitable target tracking algorithm, either a color tracking algorithm or a texture tracking algorithm, is designed according to the visual feature analysis result, and its application conditions are set out. In the target matching and recovery step, suitable target features are used for visual matching to judge the accuracy of the tracking result. When target occlusion or image loss happens, the target can be recovered through a full-image search. A visual tracking method is thus tailored to the specific application situation, realizing highly reliable tracking of the target.
Owner:西安因诺航空科技有限公司
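The recovery-by-full-image-search step can be illustrated with a sliding-window histogram comparison using the Bhattacharyya coefficient; the window size, bin count, and the single-channel stand-in for a color histogram are all assumptions, not the patent's parameters.

```python
import numpy as np

def patch_histogram(patch, bins=8):
    """Normalised histogram of a single-channel patch with values in [0, 1]."""
    h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def recover_target(template_hist, frame, win, bins=8):
    """Full-image search after tracking loss: slide a window over the
    frame and return the top-left corner whose histogram is closest to
    the template under the Bhattacharyya coefficient."""
    best, best_pos = -1.0, (0, 0)
    H, W = frame.shape
    for i in range(H - win + 1):
        for j in range(W - win + 1):
            h = patch_histogram(frame[i:i + win, j:j + win], bins)
            sim = float(np.sum(np.sqrt(h * template_hist)))  # Bhattacharyya
            if sim > best:
                best, best_pos = sim, (i, j)
    return best_pos

# A bright 4x4 target lost at (5, 6) is recovered by exhaustive search:
frame = np.full((12, 12), 0.1)
frame[5:9, 6:10] = 0.9
template = patch_histogram(np.full((4, 4), 0.9))
pos = recover_target(template, frame, win=4)
```

A production tracker would replace the exhaustive scan with a coarse-to-fine search or an OpenCV mean-shift step, but the matching criterion is the same.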