78 results about How to "Improve fusion quality" patented technology

Multi-focus image fusion method using kernel Fisher classification and redundant wavelet transformation

The invention discloses a multi-focus image fusion method using kernel Fisher classification and redundant wavelet transformation. The method comprises the following steps: first, the source images are partitioned into blocks and a sharpness (definition) feature is computed for each block; second, parts of the source images are used as training samples to learn the parameters of a kernel Fisher classifier; third, the trained kernel Fisher classifier is used to obtain a preliminary fused image; and finally, redundant wavelet transformation and spatial correlation coefficients are used to fuse the blocks located at the boundary between the clear and blurred regions of the source images, yielding the final fused image. The invention offers good image fusion performance, produces no obvious blocking or other artifacts in the fusion result, achieves a good trade-off between improving fusion quality and reducing computational cost, and can be used in subsequent image processing and display. With a small number of wavelet decomposition levels, the method is well suited to applications with strict real-time requirements.
Owner:CHONGQING SURVEY INST
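The block-selection step described above can be sketched as follows. This is a minimal illustration, not the patented method: it assumes variance of a discrete Laplacian as the block sharpness feature, and skips the kernel Fisher classifier and the redundant-wavelet boundary refinement.

```python
import numpy as np

def block_sharpness(block):
    # Focus measure: variance of a discrete Laplacian (an assumed stand-in
    # for the patent's unspecified "definition characteristics").
    lap = (-4.0 * block
           + np.roll(block, 1, axis=0) + np.roll(block, -1, axis=0)
           + np.roll(block, 1, axis=1) + np.roll(block, -1, axis=1))
    return lap.var()

def fuse_by_blocks(img_a, img_b, bs=8):
    # For each bs x bs block, keep the source block with higher sharpness.
    fused = np.empty_like(img_a)
    for i in range(0, img_a.shape[0], bs):
        for j in range(0, img_a.shape[1], bs):
            a = img_a[i:i + bs, j:j + bs]
            b = img_b[i:i + bs, j:j + bs]
            fused[i:i + bs, j:j + bs] = (
                a if block_sharpness(a) >= block_sharpness(b) else b)
    return fused
```

In the full method, only the blocks near the clear/blurred boundary would get the more expensive wavelet-domain treatment.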

Omnidirectional three-dimensional laser color scanning system and method thereof

The invention relates to the technical field of three-dimensional color point cloud data processing and three-dimensional scene reconstruction, and provides an omnidirectional three-dimensional laser color scanning system and method. The system comprises a laser scanner, a high-speed camera, an electrically controlled rotary table, a conductive slip ring, an encoder, a data acquisition module, a switch, a servo drive, a computer and an equipment support. The method comprises the following steps: 1, the electrically controlled rotary table is driven to rotate at a uniform speed; 2, data are synchronized by a combined hardware and software method; and 3, the synchronously acquired data are fused. Using the combined software and hardware data synchronization method, laser point clouds, two-dimensional images and rotation angles are acquired synchronously in real time with high synchronization precision; the laser point cloud and the two-dimensional image at each synchronization moment are fused to obtain the three-dimensional color point cloud data of a scene in real time, offering rich image information and high fusion quality.
Owner:BEIJING HANDE IMAGE EQUIPMENT CO LTD
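The per-moment fusion of a laser point cloud with a two-dimensional image amounts to projecting each 3-D point into the camera and sampling a color. A minimal sketch, assuming pinhole intrinsics `K`, points already expressed in the camera frame with positive depth, and nearest-pixel sampling (the patent's calibration and synchronization machinery is out of scope):

```python
import numpy as np

def colorize_points(points, image, K):
    # Project 3-D points (N, 3) through pinhole intrinsics K and attach
    # the color of the nearest pixel; points outside the image stay black.
    uvw = points @ K.T                                  # homogeneous pixels
    uv = np.rint(uvw[:, :2] / uvw[:, 2:3]).astype(int)  # nearest pixel
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(points), 3), dtype=float)
    colors[valid] = image[uv[valid, 1], uv[valid, 0]]
    return np.hstack([points, colors]), valid           # (N, 6) colored cloud
```

In the real system the rotary-table angle at each synchronization moment would first rotate the points into the camera frame.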

Dual-light image fusion model based on a deep convolutional generative adversarial network (DCGAN)

The invention discloses a dual-light image fusion model based on a deep convolutional generative adversarial network (DCGAN). The model extracts image features of the same target under visible and infrared light through a deep discriminative convolutional network, and sparsely encodes the two sets of features against a shared feature dictionary. The encoded features are then fused and used as the input to a deep convolutional generative network, which generates the fused image. Finally, the model is trained on the error between the fused features and the fused encoded features to produce the dual-light fused image. Because the model uses deep learning networks to extract and encode the features of the visible and infrared images, feature points of the two images can be matched automatically by fusing the encoded features. After training, the model can be invoked at any time: given a visible light image and an infrared image of the same scene, it automatically generates a dual-light image of high fusion quality.
Owner:STATE GRID GANSU ELECTRIC POWER CORP +1
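The coding-feature fusion step can be illustrated with a toy example. This assumes a max-absolute-value rule over sparse codes taken against a shared dictionary (the abstract does not specify the fusion rule) and skips the generative network entirely:

```python
import numpy as np

def fuse_sparse_codes(code_vis, code_ir, dictionary):
    # Max-absolute-value fusion of sparse codes over a shared dictionary:
    # per coefficient, keep whichever modality responded more strongly,
    # then decode the fused code with the dictionary.
    keep_vis = np.abs(code_vis) >= np.abs(code_ir)
    fused = np.where(keep_vis, code_vis, code_ir)
    return dictionary @ fused, fused
```

In the patented model the decoded/fused features would be fed to the DCGAN generator rather than reconstructed linearly.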

Multi-modal medical image fusion method based on low-rank decomposition and sparse representation

The present invention discloses a multi-modal medical image fusion method based on low-rank decomposition and sparse representation. The method includes the following steps: the two multi-modal medical images to be fused are each subjected to low-rank decomposition, yielding a low-rank part image and a sparse part image for each; the K-SVD (K-means singular value decomposition) algorithm is used to train a selected non-medical image set to obtain a low-rank dictionary, and low-rank decomposition of the same image set yields a sparse part image set, which is trained to obtain a sparse dictionary; sparse representation is used to reconstruct the low-rank part images and the sparse part images, yielding a low-rank reconstructed image and a sparse reconstructed image; sparse representation is then used to fuse the low-rank reconstructed image and the sparse reconstructed image into a fused image; the differences between the two multi-modal medical images and the sparse and low-rank reconstructed images are calculated; and these differences are added to the fused image to obtain the final sparse fused image. Both the subjective and objective evaluation indexes of the method are better than those of traditional fusion methods.
Owner:YUNNAN UNITED VISION TECH CO LTD
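The low-rank/sparse pipeline can be sketched with a truncated SVD standing in for the dictionary-based decomposition, an averaging rule for the low-rank (base) parts and a max-absolute rule for the sparse (detail) parts; both fusion rules are assumptions, since the abstract leaves them unspecified:

```python
import numpy as np

def lowrank_sparse_split(img, rank=2):
    # Crude low-rank / sparse split via truncated SVD: the top-`rank`
    # singular components form the base layer, the residual is "sparse".
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return low, img - low

def fuse_medical(img_a, img_b, rank=2):
    la, sa = lowrank_sparse_split(img_a, rank)
    lb, sb = lowrank_sparse_split(img_b, rank)
    low_fused = 0.5 * (la + lb)                               # average base layers
    sparse_fused = np.where(np.abs(sa) >= np.abs(sb), sa, sb)  # max-abs details
    return low_fused + sparse_fused
```

The patented method additionally reconstructs each part against learned K-SVD dictionaries and adds back the residual differences, which this sketch omits.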

Remote sensing panchromatic and multispectral image distributed fusion method based on residual network

The invention provides a distributed fusion method for remote sensing panchromatic and multispectral images based on a residual network, which mainly addresses the problems of spectral distortion, low spatial resolution and low fusion quality in the prior art. The method comprises the following steps: collecting an original image of a target area by satellite and preprocessing it; constructing a simulated training set and test set from the preprocessed panchromatic and multispectral images according to the Wald protocol; constructing a residual-network-based distributed fusion model consisting of three branches, and fully training the network with the panchromatic and multispectral images of the training set as input; and inputting the panchromatic and multispectral images to be fused into the trained network to obtain the fused image. Because features of different scales from different branches are used for fusion, more spectral and spatial information is retained; the method performs well at improving spatial resolution while preserving spectral information, and the fusion quality is improved.
Owner:HAINAN UNIVERSITY
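The Wald-protocol construction of the training set — degrading both inputs so the original multispectral image becomes the ground truth at reduced scale — can be sketched as follows, assuming simple block-mean downsampling (the actual degradation filter is not specified in the abstract):

```python
import numpy as np

def wald_training_pair(pan, ms, ratio=4):
    # Wald protocol: downsample both inputs by `ratio` so that the
    # original multispectral image can serve as the fusion target.
    def downsample(x):
        h = x.shape[0] - x.shape[0] % ratio
        w = x.shape[1] - x.shape[1] % ratio
        blocks = x[:h, :w].reshape(h // ratio, ratio, w // ratio, ratio, -1)
        return blocks.mean(axis=(1, 3)).squeeze()
    return downsample(pan), downsample(ms), ms  # (input PAN, input MS, target)
```

The degraded pair is fed to the network and the loss is computed against the original multispectral image.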

SAR image fusion method based on multiscale geometric analysis

Active · CN101441766A · Improve fusion quality · Little detail · Image enhancement · Image processing · Multiscale geometric analysis
The invention discloses an SAR image fusion method based on multiscale geometric analysis tools, belonging to the field of image processing; the method mainly solves the problem that prior fusion methods produce blurred and blocky detail components. The method is realized through the following steps: (1) the two source images are fused using wavelet transformation to obtain a fusion result img_wave; (2) the two source images are fused using the Contourlet transform to obtain a fusion result img_cont; (3) the information entropy, average gradient and standard deviation of the two fusion results img_wave and img_cont are calculated; (4) these three metrics of img_wave and img_cont are compared to judge the quality of the two fusion results; and (5) according to the judgment, a secondary fusion is selected. The method increases the information content of the fused image, preserves image definition, and can be used for fusion of SAR images, natural images and medical images.
Owner:XIDIAN UNIV
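Steps (3) and (4) — computing and comparing the three quality metrics — can be sketched as follows. The majority-vote decision is an assumption, since the abstract does not state how the three comparisons are combined:

```python
import numpy as np

def entropy(img, bins=64):
    # Shannon entropy of the grey-level histogram.
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def avg_gradient(img):
    # Mean magnitude of horizontal/vertical finite differences.
    gx = np.diff(img.astype(float), axis=1)[:-1, :]
    gy = np.diff(img.astype(float), axis=0)[:, :-1]
    return float(np.sqrt((gx ** 2 + gy ** 2) / 2.0).mean())

def pick_better(img_wave, img_cont):
    # Majority vote over the three metrics named in the abstract.
    metrics = (entropy, avg_gradient, lambda x: float(np.std(x)))
    votes = sum(m(img_wave) >= m(img_cont) for m in metrics)
    return img_wave if votes >= 2 else img_cont
```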

Method for evaluating validity of estimated value of radar relative system error

Active · CN110045342A · Scientific principle and method · Reasonable implementation steps · Wave-based measurement systems · Observation data · Radar observations
The invention belongs to the technical field of multi-radar data fusion, and particularly relates to a method for evaluating the validity of an estimated value of a radar relative system error. According to the method, a group of observations of a typical route target by primary-station and secondary-station radars is selected; after conversion to a unified central rectangular coordinate system, linear parameter iterative estimation is performed on the two radar observation track lines; a system-error-corrected track line of the secondary-station radar is obtained using an unweighted and then a weighted linear track-line parameter estimation model; and a validity index for the estimated system error of the corrected secondary-station track line is constructed by calculating the relative entropies between the primary-station observation track line and the secondary-station track line before and after the system error correction, so as to evaluate the validity of the system error estimate and the correction effect. The method is scientifically principled and its implementation steps are reasonable. Compared with traditional system error validity evaluation methods, it is simpler while improving accuracy and operability, and facilitates engineering implementation.
Owner:STRATEGIC EARLY WARNING RES INST OF THE PEOPLES LIBERATION ARMY AIR FORCE RES INST
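The relative-entropy validity index can be illustrated on one-dimensional residuals. Reducing each track line to scalar residuals, the histogram binning, and the add-one smoothing are all illustrative assumptions:

```python
import numpy as np

def kl_divergence(hist_p, hist_q):
    # Discrete relative entropy with add-one smoothing on the counts.
    p = (hist_p + 1.0) / (hist_p + 1.0).sum()
    q = (hist_q + 1.0) / (hist_q + 1.0).sum()
    return float(np.sum(p * np.log(p / q)))

def validity_index(primary, before, after, bins=20):
    # Histogram all three residual tracks on a common support, then compare
    # relative entropies against the primary station before vs after the
    # bias correction: a smaller value after indicates an effective estimate.
    lo = min(primary.min(), before.min(), after.min())
    hi = max(primary.max(), before.max(), after.max())
    hp, edges = np.histogram(primary, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(before, bins=edges)
    ha, _ = np.histogram(after, bins=edges)
    kl_before = kl_divergence(hb.astype(float), hp.astype(float))
    kl_after = kl_divergence(ha.astype(float), hp.astype(float))
    return kl_before, kl_after
```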

Real-time visible light image and infrared image multi-channel fusion method and device

The invention discloses a real-time multi-channel fusion method and device for visible light and infrared images. The method comprises the following steps: decomposing the visible light matrix corresponding to a visible light image into a visible light low-frequency matrix and a visible light high-frequency matrix, and decomposing the infrared matrix corresponding to an infrared image into an infrared low-frequency matrix and an infrared high-frequency matrix; fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to visible light and infrared low-frequency weights to obtain a low-frequency fusion matrix, and fusing the visible light high-frequency matrix and the infrared high-frequency matrix according to visible light and infrared high-frequency weights to obtain a high-frequency fusion matrix; and fusing the low-frequency fusion matrix and the high-frequency fusion matrix in a preset fusion mode to generate the fusion matrix corresponding to the visible light and infrared images. By fusing the visible light image and the infrared image in this way, the resolution of both the subject content and the background regions of the image is improved, as is the fineness of the image texture.
Owner:SHENZHEN DINGJIANG TECH CO LTD
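The decompose-weight-recombine pipeline can be sketched with a box blur standing in for the (unspecified) low/high-frequency decomposition and a weighted sum as the preset fusion mode; the weight values are placeholders:

```python
import numpy as np

def split_frequencies(img, k=5):
    # Low frequency via a k x k box blur (an assumed stand-in for the
    # patent's decomposition); high frequency is the residual.
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    low = np.zeros(img.shape, dtype=float)
    for di in range(k):
        for dj in range(k):
            low += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    low /= k * k
    return low, img - low

def fuse_channels(vis, ir, w_low=0.5, w_high=0.6):
    # Weighted low- and high-frequency fusion, then recombination by sum.
    v_low, v_high = split_frequencies(vis)
    i_low, i_high = split_frequencies(ir)
    low = w_low * v_low + (1.0 - w_low) * i_low
    high = w_high * v_high + (1.0 - w_high) * i_high
    return low + high
```

Raising `w_high` favors the modality with the crisper texture, matching the abstract's emphasis on texture fineness.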