
78 results for "Fusion image" patented technology

Image fusion method based on deep learning

The present invention relates to an image fusion method, in particular to an image fusion method based on deep learning. The method comprises: constructing basic units from an autoencoder using convolutional layers; stacking a plurality of basic units and training them to obtain a deep stacked neural network, which is then fine-tuned end to end; using the stacked network to decompose the input images into high-frequency and low-frequency feature maps, and merging those feature maps using a local-variance-maximum rule and a region matching degree; and feeding the fused high-frequency and low-frequency feature maps back into the last layer of the network to obtain the final fused image. The method performs adaptive decomposition and reconstruction of images; only one high-frequency and one low-frequency feature map are needed for fusion; the types and number of filters, the number of decomposition levels, and the number of filtering directions do not have to be defined manually; and the fusion algorithm's dependence on prior knowledge is greatly reduced.
Owner:ZHONGBEI UNIV
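The high-frequency merging rule described above (keeping, pixel by pixel, whichever feature map has the larger local variance) can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the patented network; the function names and the 5x5 window are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(x: np.ndarray, size: int = 5) -> np.ndarray:
    """Per-pixel variance inside a size x size window."""
    mean = uniform_filter(x, size)
    mean_sq = uniform_filter(x * x, size)
    return mean_sq - mean * mean

def fuse_high_freq(hf_a: np.ndarray, hf_b: np.ndarray, size: int = 5) -> np.ndarray:
    """Keep, per pixel, the high-frequency response with the larger local variance."""
    mask = local_variance(hf_a, size) >= local_variance(hf_b, size)
    return np.where(mask, hf_a, hf_b)
```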

Multi-feature-based classification method for laser point cloud and image fusion data

A multi-feature-based classification method for laser point cloud and image fusion data comprises the following steps: 1, data preprocessing: preprocessing the aerial images and the UAV laser point cloud data; 2, sample extraction: making full use of the geometric features of the point cloud data and the spectral features of the aerial images to extract samples of each class; 3, multi-feature fusion-data classification: classifying the sample data with a vector description model; 4, accuracy evaluation: evaluating the accuracy of the classification result. The method extracts ground objects completely and classifies them with high accuracy. Starting from the spectral information of the fused image, it fuses the data according to the application purpose and the ground-object classification requirements, sets corresponding classification rules for the main ground-object classes, and builds a correspondence between classification classes and classification features, so that complete ground-object regions are extracted and misclassification is reduced.
Owner:NORTH CHINA UNIV OF WATER RESOURCES & ELECTRIC POWER
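As a rough illustration of step 3, classification on the fused feature vector might look like the sketch below. A random forest is substituted for the patent's vector description model, and all array names and sizes are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-sample features: geometric features from the LiDAR point cloud
# (e.g. height, planarity) and spectral values sampled from the aerial image.
geom_feats = np.random.rand(1000, 4)      # placeholder geometric features
spec_feats = np.random.rand(1000, 3)      # placeholder spectral band values
labels = np.random.randint(0, 4, 1000)    # placeholder class labels from step 2

# Step 3: classify on the fused (concatenated) feature vector.
X = np.hstack([geom_feats, spec_feats])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
pred = clf.predict(X)

# Step 4 would evaluate accuracy on held-out data; training accuracy shown only as a placeholder.
print("training accuracy:", (pred == labels).mean())
```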

Space-time fusion method, system and device for remote sensing image data

The invention provides a spatio-temporal fusion method, system and device for remote sensing image data. The method includes: obtaining a change detection image by computation on two low-resolution remote sensing images of different time phases; extracting the edge region of the high-resolution image of the first time phase and calculating the abundance corresponding to the number of each class of high-resolution pixels; calculating the temporal change value of each pixel class from the edge-region extraction result and the abundance; calculating a temporal prediction value and a spatial prediction value; distributing a residual value according to the degree of surface homogeneity, the temporal prediction value and the spatial prediction value, combined with neighborhood information, to obtain a preliminary fusion image; and correcting the changed pixels in the preliminary fusion image with an established optimization model to obtain the spatio-temporal data fusion result. The method comprehensively considers the applicability of different change detection algorithms to different scenes, improves the overall spectral accuracy of the fusion, preserves more spatial detail, and yields a better spatio-temporal data fusion result.
Owner:THE HONG KONG POLYTECHNIC UNIV SHENZHEN RES INST
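A minimal sketch of one way the temporal and spatial predictions could be blended according to surface homogeneity. The weighting scheme is an assumption standing in for the patent's residual-distribution step, and the function name is hypothetical.

```python
import numpy as np

def combine_predictions(time_pred: np.ndarray,
                        space_pred: np.ndarray,
                        homogeneity: np.ndarray) -> np.ndarray:
    """
    Blend the temporal and spatial predictions pixel by pixel.
    `homogeneity` lies in [0, 1]: homogeneous areas trust the temporal prediction
    more, heterogeneous areas lean on the spatial prediction.
    """
    w = np.clip(homogeneity, 0.0, 1.0)
    return w * time_pred + (1.0 - w) * space_pred
```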

Three-dimensional virtual-real fusion experiment system

The invention discloses a three-dimensional virtual-real fusion experiment system. The system comprises: an object tracking device, which recognizes the type of a real experiment object and locates its three-dimensional coordinates in real time, yielding the object type and the corresponding real-time three-dimensional coordinates; a hand tracking device, which identifies the joints of the operating hand and locates their three-dimensional coordinates in real time, yielding the real-time three-dimensional coordinates of the hand joints; and a virtual-real fusion processing device, which generates a corresponding virtual-real fusion image from the type of the real experiment object, its real-time three-dimensional coordinates, and the real-time three-dimensional coordinates of the hand joints. The invention makes full use of virtual-real fusion technology to merge virtual objects with the real experiment object: the real object can be tracked in real time while the user moves it naturally, the user's experimental skills and interest in learning are cultivated, and experimental risk is reduced.
Owner:SHANGHAI JIAO TONG UNIV
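A small sketch of the per-frame data flow implied by the three devices, with hypothetical types; the actual rendering of the virtual-real fusion image is out of scope here.

```python
from dataclasses import dataclass
from typing import List, Tuple, Dict

Vec3 = Tuple[float, float, float]

@dataclass
class TrackedObject:          # output of the object tracking device
    kind: str                 # recognised type of the real experiment object
    position: Vec3            # real-time 3-D coordinates of the object

@dataclass
class HandPose:               # output of the hand tracking device
    joints: List[Vec3]        # real-time 3-D coordinates of the hand joints

def fuse_frame(obj: TrackedObject, hand: HandPose) -> Dict:
    """Assemble the inputs the virtual-real fusion processing device needs for one frame."""
    return {"object_type": obj.kind,
            "object_pos": obj.position,
            "hand_joints": hand.joints}
```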

Remote sensing image fusion quality evaluation method

The invention discloses a remote sensing image fusion quality evaluation method. In the training stage, LBP statistical histograms, edge histograms and spectral features of sub-blocks of the different band images are extracted as feature vectors, and an original multivariate Gaussian model is constructed. In the test stage, a corresponding multivariate Gaussian model is constructed, and a full-resolution quality prediction value is calculated from the two multivariate Gaussian models built during training and testing. The spectral and spatial similarity is then calculated between each band of the reference multispectral image and the corresponding downsampled band of the test remote sensing fusion image, giving a reduced-resolution quality prediction value, and the two are combined into an objective quality prediction value. Because the extracted feature vectors reflect the quality changes of the remote sensing fusion image well, the correlation between the objective evaluation result and subjective human perception is effectively improved.
Owner:NINGBO UNIV
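The abstract does not spell out how the two multivariate Gaussian models are compared; a common NIQE-style distance, shown below as an assumption, is one plausible form of that comparison.

```python
import numpy as np

def mvg_fit(features: np.ndarray):
    """Fit a multivariate Gaussian (mean vector, covariance) to row-wise feature vectors."""
    return features.mean(axis=0), np.cov(features, rowvar=False)

def mvg_distance(mu1: np.ndarray, cov1: np.ndarray,
                 mu2: np.ndarray, cov2: np.ndarray) -> float:
    """NIQE-style distance between two multivariate Gaussian models."""
    diff = mu1 - mu2
    pooled = (cov1 + cov2) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```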

A method of target change detection

A method for detecting changes of a target. First, the salient region of the image is extracted with Gabor texture features; a fusion image containing the salient target is then obtained by guided-filter fusion and segmented by mean shift; the HOG texture feature is then used to compute the texture variance of the segments, and the results of the two dates are compared to obtain the final change detection result. The technical solution applies guided-filter image fusion and texture-feature analysis to high-resolution remote sensing satellite images for military target change detection; the image fusion maintains change detection accuracy while improving efficiency, truly realizing automatic, fast and accurate change detection of military targets.
Owner:BEIJING AEROSPACE AUTOMATIC CONTROL RES INST +1
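A compressed sketch of the fusion-and-segmentation core of the pipeline using OpenCV (`cv2.ximgproc.guidedFilter` requires opencv-contrib-python). The averaging fusion rule, the filter parameters, and the omission of the Gabor saliency and HOG steps are simplifications, not the patented procedure.

```python
import cv2
import numpy as np

def fuse_and_segment(img_t1: np.ndarray, img_t2: np.ndarray) -> np.ndarray:
    """
    Guided-filter fusion of two co-registered 8-bit BGR images of the same scene,
    followed by mean-shift segmentation of the fused image.
    """
    # Guided filtering of one date with the other as guide, then a simple average
    # as the fused image (a stand-in for the patent's fusion rule).
    filtered = cv2.ximgproc.guidedFilter(guide=img_t1, src=img_t2, radius=8, eps=100.0)
    fused = cv2.addWeighted(img_t1, 0.5, filtered, 0.5, 0)

    # Mean-shift segmentation of the fused image (expects an 8-bit, 3-channel image).
    return cv2.pyrMeanShiftFiltering(fused, sp=21, sr=51)
```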

Field site straw extraction method and device based on aerospace remote sensing data fusion

The invention provides a method and device for extracting straw in fields based on the fusion of aerial and satellite remote sensing data. The method comprises the following steps: acquiring a satellite image and an unmanned aerial vehicle (UAV) image of a target area; fusing the satellite image and the UAV image to generate a fused image; and, using a spatial-spectral fusion conditional random field, classifying and labelling ground features from the vegetation index and texture parameters computed from the fused image to generate a straw distribution map. The method combines the high spatial resolution of the UAV image with the rich spectral information, such as short-wave infrared, of the satellite image, fusing the two to obtain a fused image with centimeter-level spatial resolution and short-wave infrared spectral information; ground features are then classified with the spatial-spectral fusion conditional random field from the vegetation index and texture features of the fused image, realizing rapid and high-accuracy extraction of straw on the ground.
Owner:北京市农林科学院信息技术研究中心
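The vegetation index is not named in the abstract; NDVI, computed from the red and near-infrared bands of the fused image, is a typical choice and is shown here only as an assumed example.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index computed from two fused-image bands."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-6)
```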

Intelligent surface water body mapping method based on wide-field-of-view Gaofen-6 (GF-6) satellite imagery

The invention discloses an intelligent surface water body mapping method based on wide-field-of-view Gaofen-6 (GF-6) satellite imagery, which comprises the following steps: on the basis of selecting high-quality wide-field-of-view GF-6 remote sensing image data, obtaining wide-field-of-view 2 m resolution eight-band GF-6 fused image data through grid-based geometric fine correction and image fusion; performing multi-scale rapid image segmentation that combines principal component analysis with an object-oriented approach; constructing a feature classification vector at the scale of the segmented objects; selecting water/non-water training samples with good spatio-temporal representativeness based on a spatial grid; and constructing an object-oriented deep-neural-network surface water mapping model to realize automatic surface water mapping from wide-field-of-view GF-6 remote sensing images. The invention mainly improves the efficiency and accuracy of geometric fine correction, multi-scale segmentation and intelligent surface water extraction for wide-field-of-view imagery, and has good application potential in flood monitoring, river and lake supervision and water ecology surveys.
Owner:CHINA INST OF WATER RESOURCES & HYDROPOWER RES
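One plausible object-level feature for the water/non-water classification is a standard water index such as NDWI. The sketch below, including the crude threshold mask, is an illustration under that assumption, not the patent's object-oriented deep-network classifier.

```python
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Water Index, a standard feature for water extraction."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    return (green - nir) / np.maximum(green + nir, 1e-6)

def water_mask(green: np.ndarray, nir: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Crude pixel-level water mask; the patent instead classifies segmented objects with a DNN."""
    return ndwi(green, nir) > threshold
```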

Object-oriented remote sensing image data space-time fusion method, system and device

The invention discloses an object-oriented spatio-temporal fusion method, system and device for remote sensing image data, suitable for use in the technical field of remote sensing. The method comprises the following steps: first, acquiring a high-resolution image and a low-resolution image of a first time phase and a low-resolution image of a second time phase; downscaling the two low-spatial-resolution images to the same resolution as the first-phase high-resolution image with a bicubic interpolation model to obtain interpolated images; segmenting the ground objects of the first-phase high-resolution image by image segmentation; within each segment, feeding the interpolated image and the first-phase high-resolution image into a pre-established linear interpolation model to obtain a preliminary fusion result; within each segment, searching pixel by pixel for the spectrally similar pixels of each target pixel and taking the intersection over the two images as the final set of spectrally similar pixels; and performing spatial filtering by inverse distance weighting combined with the spectrally similar pixel information to obtain the final fused image. The steps are simple, and the resulting spatio-temporal data fusion result is better.
Owner:CHINA UNIV OF MINING & TECH
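Two of the building blocks, bicubic interpolation to the high-resolution grid and inverse-distance weights for the spectrally similar pixels, can be sketched directly. Function names and the distance power are assumptions.

```python
import cv2
import numpy as np

def upsample_bicubic(low_res: np.ndarray, shape_hw: tuple) -> np.ndarray:
    """Bicubic interpolation of the low-resolution image to the high-resolution grid."""
    h, w = shape_hw
    return cv2.resize(low_res, (w, h), interpolation=cv2.INTER_CUBIC)

def idw_weights(distances: np.ndarray, power: float = 1.0) -> np.ndarray:
    """Normalized inverse-distance weights for the spectrally similar pixels of a target pixel."""
    w = 1.0 / np.power(distances + 1e-6, power)
    return w / w.sum()
```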

Movable carrier auxiliary system and vehicle auxiliary system

A movable carrier auxiliary system includes at least two optical image capturing systems, disposed on the left and right portions of a movable carrier respectively, at least one image fusion output device, and at least one displaying device. Each optical image capturing system includes an image capturing module and an operation module. The image capturing module captures and produces an environmental image of the surroundings of the movable carrier. The operation module, electrically connected to the image capturing module, detects at least one moving object in the environmental image to generate a detecting signal and at least one tracking mark. The image fusion output device is disposed inside the movable carrier and is electrically connected to the optical image capturing systems, so as to receive the environmental images and generate a fusion image. The displaying device is electrically connected to the image fusion output device to display the fusion image and the at least one tracking mark.
Owner:ABILITY OPTO ELECTRONICS TECH
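A toy sketch of the fusion-and-display step: the left and right environmental images are joined (plain horizontal concatenation stands in for the image fusion output device) and the tracking marks are drawn as rectangles. The box format, color and equal image heights are assumptions.

```python
import cv2
import numpy as np

def fuse_and_mark(left: np.ndarray, right: np.ndarray, boxes: list) -> np.ndarray:
    """
    Join two 8-bit BGR images of equal height into one fused view and draw
    the tracking marks returned by the operation modules.
    """
    fused = np.hstack([left, right])
    for (x, y, w, h) in boxes:            # each tracking mark as an (x, y, w, h) box
        cv2.rectangle(fused, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return fused
```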

Hyperspectral and multispectral remote sensing information fusion method and system

The invention relates to a hyperspectral and multispectral remote sensing information fusion method and system. The method comprises the following steps: performing hyperspectral atmospheric correction on pre-acquired hyperspectral remote sensing image data; performing multispectral atmospheric correction on pre-acquired multispectral remote sensing image data; establishing a band mapping model based on the surface hyperspectral reflectance image and multispectral reflectance image generated by the two atmospheric corrections; carrying out a weighted calculation of spectral reflectance on the virtual hyperspectral reflectance image, taking the original hyperspectral reflectance values as the reference, to generate a hyperspectral reflectance fusion result; and, based on that fusion result, using an atmospheric radiative transfer model to simulate the atmospheric radiative transfer conversion of the reflectance fusion image and generate the fusion result of the hyperspectral and multispectral remote sensing images.
Owner:自然资源部国土卫星遥感应用中心
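The band mapping model is not specified in the abstract; a per-band linear regression from multispectral to hyperspectral reflectance, followed by a simple weighted blend with the original hyperspectral values, is shown below purely as an assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_band_mapping(ms_pixels: np.ndarray, hs_pixels: np.ndarray) -> LinearRegression:
    """
    Fit a band mapping model predicting hyperspectral reflectance from the
    multispectral bands, using co-located pixels (rows) after atmospheric correction.
    ms_pixels: (n_pixels, n_ms_bands); hs_pixels: (n_pixels, n_hs_bands).
    """
    return LinearRegression().fit(ms_pixels, hs_pixels)

def weighted_fusion(virtual_hs: np.ndarray, original_hs: np.ndarray,
                    weight: float = 0.5) -> np.ndarray:
    """Blend the predicted (virtual) hyperspectral reflectance with the original as reference."""
    return weight * original_hs + (1.0 - weight) * virtual_hs
```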

Large-area high-fidelity satellite remote sensing image uniform-color mosaic processing method and device

The invention provides a large-area, high-fidelity, uniform-color mosaic processing method and device for satellite remote sensing images. The method comprises the following steps: screening the fused images, and inspecting, supplementing and replacing them as needed; performing mosaic preprocessing on the initial fused images; editing the seamlines on the preprocessed images and balancing the colors along the seamlines; mosaicking; adjusting image sharpness and removing atmospheric effects; finely adjusting the local brightness and color of ground features; reducing the bit depth of the image; and outputting the image product. The method addresses the problems in the prior art that, because imaging conditions, acquisition times and atmospheric environments differ between satellite images, the color differences between images of the same area acquired at different times are large, and differences between sensors make color balancing and mosaicking of multiple satellite data sources very complex and the data quality poor, with no effective, fixed solution or workflow.
Owner:自然资源部国土卫星遥感应用中心
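A minimal stand-in for the seam blending part of the workflow: a linear feather across a vertical seam between two color-balanced images of identical shape. The seam position and feather width are assumptions, and the manual seamline editing and color adjustment steps are outside this sketch.

```python
import numpy as np

def feather_blend(img_a: np.ndarray, img_b: np.ndarray, width: int = 64) -> np.ndarray:
    """Blend two overlapping images across a vertical seam at the image centre."""
    h, w = img_a.shape[:2]
    # Linear ramp from 0 to 1 over `width` pixels centred on the seam column.
    alpha = np.clip((np.arange(w) - (w // 2 - width // 2)) / float(width), 0.0, 1.0)
    alpha = alpha.reshape(1, w, *([1] * (img_a.ndim - 2)))
    return ((1.0 - alpha) * img_a + alpha * img_b).astype(img_a.dtype)
```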

Panchromatic and multispectral image real-time fusion method based on cooperative processing of CPU and GPU

The invention discloses a real-time panchromatic and multispectral image fusion method based on cooperative CPU-GPU processing, which quickly fuses panchromatic and multispectral data to generate a fused image with high spatial resolution and high spectral resolution. All steps of the panchromatic-multispectral fusion are completed in memory, and the computationally heavy steps are mapped to the GPU, so fusion efficiency is greatly improved while fusion accuracy is guaranteed. The strict geometric correspondence between the panchromatic and multispectral images is handled through a rational function model (RFM), giving the method high robustness. The geometric deviation that may exist between the panchromatic and multispectral images is limited as far as possible through differential rectification, so the registration is optimal, and the final fused image is obtained through a multi-scale SFIM fusion method combined with a panchromatic spectral decomposition image fusion method. The invention is suitable for fast fusion of large volumes of sensor data.
Owner:中国人民解放军61646部队
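The SFIM fusion rule itself is standard and can be sketched in a few lines; the window size is an assumption, and the RFM-based differential rectification, the multi-scale scheme and the GPU mapping are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sfim_fuse(ms_band: np.ndarray, pan: np.ndarray, window: int = 7) -> np.ndarray:
    """
    Smoothing-Filter-based Intensity Modulation: modulate the (upsampled)
    multispectral band by the ratio of the panchromatic image to its low-pass
    version, injecting spatial detail while preserving spectral ratios.
    ms_band and pan must already be co-registered on the same grid.
    """
    pan = pan.astype(np.float64)
    pan_low = uniform_filter(pan, size=window)
    return ms_band.astype(np.float64) * pan / np.maximum(pan_low, 1e-6)
```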

Multi-source heterogeneous remote sensing image fusion method

The invention discloses a multi-source heterogeneous remote sensing image fusion method, which comprises the following steps: first, selecting a high-resolution remote sensing image and an unmanned aerial vehicle (UAV) aerial image of the same area, performing orthorectification and image registration preprocessing on both, and fusing the multispectral image (MSS) and panchromatic image (PAN) of the high-resolution remote sensing image with the Gram-Schmidt (GS) algorithm; applying an IHS transform to the GS image to obtain the intensity (I1), hue (H1) and saturation (S1) components, and then decomposing the UAV aerial image with the à trous wavelet algorithm to obtain wavelet planes of different resolutions; superimposing each wavelet plane on the PAN image to obtain a fused image UAP; applying an IHS transform to the UAP image to obtain the corresponding intensity (I2), hue (H2) and saturation (S2) components; replacing the intensity component I1 of the GS image with the intensity component I2 of the UAP image; and applying the inverse IHS transform to the I2, H1 and S1 components to obtain the fused image.
Owner:浙江大学德清先进技术与产业研究院
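The component-substitution step can be approximated with OpenCV's HSV space as a stand-in for the IHS transform. This sketch illustrates the idea (replace I1 with I2, then invert the transform), not the patent's exact transform or its à trous decomposition.

```python
import cv2
import numpy as np

def substitute_intensity(gs_image: np.ndarray, uap_image: np.ndarray) -> np.ndarray:
    """
    Take the value (intensity) channel from the UAP image, keep hue and
    saturation from the GS image, and convert back. Both inputs are 8-bit BGR
    images on the same grid; HSV is used as a rough substitute for IHS.
    """
    gs_hsv = cv2.cvtColor(gs_image, cv2.COLOR_BGR2HSV)
    uap_hsv = cv2.cvtColor(uap_image, cv2.COLOR_BGR2HSV)
    gs_hsv[..., 2] = uap_hsv[..., 2]          # replace I1 with I2
    return cv2.cvtColor(gs_hsv, cv2.COLOR_HSV2BGR)
```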