212 results about "Remote sensing image fusion" patented technology

Remote sensing image fusion method based on sparse representation

The invention discloses a remote sensing image fusion method based on sparse representation. The method comprises the following steps: first, establishing a linear regression model between a multispectral image and its brightness component; second, performing sparse representation of the panchromatic image and the multispectral image using high- and low-resolution dictionaries respectively, and obtaining the sparse representation coefficients of the brightness component of the multispectral image from the linear regression model; third, extracting detail components from the sparse representation coefficients of the panchromatic image and the brightness component, and injecting the detail components into the sparse representation coefficients of each band of the multispectral image under a general component-substitution fusion framework; and finally, performing image restoration to obtain a multispectral image with high spatial resolution. The method introduces the sparse representation technique into the field of remote sensing image fusion, overcoming the defect of the prior art that high spatial resolution and spectral information cannot be preserved simultaneously; its fusion results are superior to those of conventional remote sensing image fusion methods in terms of spectral preservation and spatial-resolution improvement.
Owner:SHANGHAI JIAO TONG UNIV
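
As a rough illustration of the component-substitution framework the abstract invokes (not the patented sparse-coding pipeline itself), the sketch below estimates the brightness component by linear regression of the multispectral bands against the panchromatic image, extracts the detail as their difference, and injects it into every band; the function name and the per-band gains are assumptions.

```python
# Minimal sketch of a generic component-substitution fusion (assumed, simplified):
# the sparse-coding stage of the patent is abstracted away here.
import numpy as np

def component_substitution_fusion(ms, pan):
    """ms: (B, H, W) upsampled multispectral, pan: (H, W) panchromatic."""
    B, H, W = ms.shape
    X = ms.reshape(B, -1).T                      # (H*W, B) regressors
    y = pan.reshape(-1)                          # target: panchromatic pixels
    w, *_ = np.linalg.lstsq(X, y, rcond=None)    # regression weights per band
    intensity = (X @ w).reshape(H, W)            # brightness component of MS
    detail = pan - intensity                     # spatial detail to inject
    gains = w / (w.sum() + 1e-12)                # hypothetical per-band gains
    return ms + gains[:, None, None] * detail[None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ms = rng.random((4, 64, 64))
    pan = ms.mean(axis=0) + 0.05 * rng.random((64, 64))
    print(component_substitution_fusion(ms, pan).shape)   # (4, 64, 64)
```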

Remote sensing image fusion method, system and related components

The invention discloses a remote sensing image fusion method. The fusion method comprises the following steps: preprocessing training samples to obtain a training data set; inputting the training data set into a feature extraction neural network to obtain panchromatic feature information and multispectral feature information; training a feature fusion neural network with all of the panchromatic and multispectral feature information to obtain fusion features and coded feature maps; inputting the fusion features and the coded feature maps into an image reconstruction neural network and performing reconstruction with its deconvolution (transposed convolution) layers to obtain a high-resolution multispectral image, thereby training the image reconstruction neural network; and constructing a deep neural network model and carrying out the remote sensing image fusion operation with it. The method improves the quality of the fused panchromatic and multispectral images. The invention further discloses a remote sensing image fusion system, a computer-readable storage medium and an electronic device, which have the same beneficial effects.
Owner:GUANGDONG UNIV OF TECH
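
A rough PyTorch sketch of the three-stage architecture named in the abstract (feature extraction, feature fusion, reconstruction with a transposed-convolution layer); the class name, channel counts and kernel sizes are assumptions, not the patented network.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, ms_bands=4, feat=32):
        super().__init__()
        # separate feature extractors for the PAN and MS inputs
        self.pan_feat = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
        self.ms_feat = nn.Sequential(nn.Conv2d(ms_bands, feat, 3, padding=1), nn.ReLU())
        # feature fusion stage
        self.fuse = nn.Sequential(nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU())
        # transposed convolution reconstructs the high-resolution MS image
        self.reconstruct = nn.ConvTranspose2d(feat, ms_bands, 3, padding=1)

    def forward(self, ms_up, pan):
        f = torch.cat([self.ms_feat(ms_up), self.pan_feat(pan)], dim=1)
        return self.reconstruct(self.fuse(f))

if __name__ == "__main__":
    net = FusionNet()
    ms_up = torch.rand(1, 4, 64, 64)   # MS upsampled to PAN resolution
    pan = torch.rand(1, 1, 64, 64)
    print(net(ms_up, pan).shape)       # torch.Size([1, 4, 64, 64])
```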

Compressive sensing theory-based satellite remote sensing image fusion method

The invention discloses a satellite remote sensing image fusion method based on compressive sensing theory. The method comprises the following steps: vectorizing a panchromatic image with high spatial resolution and a multispectral image with low spatial resolution; constructing an over-complete atom dictionary for sparse representation of high-spatial-resolution image blocks; establishing, according to the imaging principle of each Earth observation satellite, a model relating the high-spatial-resolution multispectral image to the high-spatial-resolution panchromatic image and the low-spatial-resolution multispectral image; solving the compressive-sensing sparse signal recovery problem with a basis pursuit algorithm to obtain the sparse representation of the high-spatial-resolution multispectral image in the over-complete dictionary; and multiplying the sparse representation by the preset over-complete dictionary to obtain the vector representation of the high-spatial-resolution multispectral image block, which is converted back into an image block to give the fusion result. By introducing compressive sensing theory into image fusion, the image quality after fusion is obviously improved and an ideal fusion effect is achieved.
Owner:HUNAN UNIV
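
A minimal sketch of the sparse-recovery core only, under the assumed formulation y = A D α with a random over-complete dictionary D and a random observation operator A standing in for the satellite imaging model; orthogonal matching pursuit is used here as a stand-in for the basis pursuit solver named in the abstract, and all sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_atoms, patch_dim, n_obs = 256, 64, 48
D = rng.standard_normal((patch_dim, n_atoms))        # assumed over-complete dictionary
D /= np.linalg.norm(D, axis=0)
A = rng.standard_normal((n_obs, patch_dim))          # stand-in observation operator (PAN + LR MS)

alpha_true = np.zeros(n_atoms)                       # sparse code of the HR MS patch
alpha_true[rng.choice(n_atoms, 5, replace=False)] = rng.standard_normal(5)
y = A @ D @ alpha_true                               # compressive measurements

# OMP stands in for basis pursuit: recover the sparse code from y
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False).fit(A @ D, y)
x_hat = D @ omp.coef_                                # reconstructed HR MS patch
print(np.linalg.norm(x_hat - D @ alpha_true) / np.linalg.norm(D @ alpha_true))
```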

City impervious surface extraction method based on fusion of SAR image and optical remote sensing image

Provided is a city impervious surface extraction method based on fusion of an SAR image and an optical remote sensing image. The method comprises the following steps: a general sample set formed from samples of the research area is selected in advance, and a classifier training set, a classifier test set and a precision verification set for the impervious surface extraction results are generated from the general sample set by random sampling; the optical remote sensing image is co-registered with the SAR image of the research area, and features are extracted from both images; classifiers are trained and the urban impervious surface is preliminarily extracted with a random forest (RF) classifier, yielding preliminary RF extraction results for the optical remote sensing data and the SAR data; decision-level fusion is carried out using the D-S (Dempster-Shafer) evidence theory combination rule to obtain the final impervious surface extraction result for the research area; and the precision of each extraction result is verified with the precision verification set. The method makes full use of the advantages of the optical remote sensing image and SAR image data sources, provides an SAR and optical remote sensing image fusion method based on RF and D-S evidence theory, and obtains a higher-precision urban impervious surface extraction result.
Owner:WUHAN UNIV
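
A minimal sketch of the decision-level fusion step: two random-forest classifiers (one per sensor) produce class probabilities that are combined with Dempster's rule over singleton classes. The toy feature sets, the two-class setup and the helper name are assumptions; the full sampling, registration and feature-extraction workflow is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dempster_combine(m1, m2):
    """Combine two mass vectors defined over mutually exclusive singleton classes."""
    joint = m1 * m2                      # agreement on each singleton class
    conflict = 1.0 - joint.sum()         # mass assigned to conflicting class pairs
    return joint / (1.0 - conflict + 1e-12)

rng = np.random.default_rng(0)
X_opt, X_sar = rng.random((200, 6)), rng.random((200, 4))   # toy optical / SAR features
y = rng.integers(0, 2, 200)                                 # impervious vs. pervious
rf_opt = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_opt, y)
rf_sar = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_sar, y)

p_opt = rf_opt.predict_proba(X_opt[:1])[0]   # per-class masses from the optical classifier
p_sar = rf_sar.predict_proba(X_sar[:1])[0]   # per-class masses from the SAR classifier
print(dempster_combine(p_opt, p_sar))        # fused class masses
```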

Remote sensing image fusion method based on contourlet transform and guided filter

The invention discloses a remote sensing image fusion method based on the contourlet transform and a guided filter, mainly to solve the problems of reduced image contrast and poorly expressed image edge features caused by existing image fusion methods. The method specifically comprises the following steps: a multispectral image and a panchromatic image of the same target to be fused are each subjected to the contourlet transform, and the corresponding high-frequency and low-frequency coefficients are obtained; a guided-filter-based weighted fusion method is applied to the high-frequency coefficients of the two source images to obtain the high-frequency coefficients of the fused image; a region-energy-maximum rule is applied to the low-frequency coefficients of the two source images to obtain the low-frequency coefficients of the fused image; and the inverse contourlet transform is applied to the fused high-frequency and low-frequency coefficients to obtain the fused image of the target. The method combines the contourlet transform with the guided filter, the fusion effect is obvious, the image evaluation indices are high, and the method can be applied to image analysis and processing, surveying and mapping, geology and other fields.
Owner:XIDIAN UNIV
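
A small sketch of just the region-energy-maximum rule applied to low-frequency coefficients; the contourlet transform and the guided-filter weighting of the high-frequency coefficients are not shown, and the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def region_energy_max(low_a, low_b, win=3):
    """Pick, per pixel, the low-frequency coefficient with the larger local energy."""
    energy_a = uniform_filter(low_a ** 2, size=win)   # local energy in a win x win window
    energy_b = uniform_filter(low_b ** 2, size=win)
    return np.where(energy_a >= energy_b, low_a, low_b)

rng = np.random.default_rng(0)
low_ms, low_pan = rng.random((64, 64)), rng.random((64, 64))
print(region_energy_max(low_ms, low_pan).shape)   # (64, 64)
```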

Adaptive remote sensing image panchromatic sharpening method

The invention discloses an adaptive remote sensing image panchromatic sharpening method, belonging to the technical field of remote sensing image fusion, which reduces the spectral distortion of fusion results and improves their sharpening effect. The method comprises the following steps: performing interpolation on a low-resolution multispectral image so that it has the same resolution as the corresponding panchromatic image; filtering the panchromatic image and the multispectral image of the same resolution to obtain their low-frequency components; subtracting the low-frequency components from the panchromatic image and the multispectral image of the same resolution to obtain the corresponding high-frequency components; estimating the missing initial detail images of the multispectral image on the basis of the high-frequency components of the two images; building an optimization function for the initial detail images and optimizing them with a steepest-descent method to obtain final detail images adapted to the different channels of the multispectral image; and injecting the final detail images into the corresponding channels of the multispectral image to obtain a high-resolution multispectral image. The adaptive method of the invention is used for panchromatic sharpening of remote sensing images.
Owner:SICHUAN UNIV
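
A simplified sketch of the detail-extraction and injection idea (not the patented optimization): Gaussian filtering stands in for the low-pass step, the steepest-descent refinement of the detail images is skipped, and the per-band gains are an assumed heuristic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def detail_injection_pansharpen(ms_lr, pan, sigma=2.0):
    """ms_lr: (B, h, w) low-res MS; pan: (H, W) with H = r*h and W = r*w."""
    r = pan.shape[0] // ms_lr.shape[1]
    ms_up = np.stack([zoom(b, r, order=3) for b in ms_lr])        # interpolation to PAN size
    pan_detail = pan - gaussian_filter(pan, sigma)                # PAN high frequencies
    ms_detail = ms_up - gaussian_filter(ms_up, (0, sigma, sigma)) # MS high frequencies
    # hypothetical per-band gains from the ratio of detail energies
    gains = ms_detail.std(axis=(1, 2)) / (pan_detail.std() + 1e-12)
    return ms_up + gains[:, None, None] * pan_detail[None]

rng = np.random.default_rng(0)
ms_lr, pan = rng.random((4, 32, 32)), rng.random((128, 128))
print(detail_injection_pansharpen(ms_lr, pan).shape)   # (4, 128, 128)
```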

Structure sparse representation-based remote sensing image fusion method

The invention discloses a remote sensing image fusion method based on structure sparse representation. An adaptive weight coefficient calculation model is used to obtain the luminance component of a multispectral image; similar image blocks are grouped into structure groups, and a structure-group sparse model is used to solve for the structure-group dictionaries and group sparse coefficients of the luminance component and the panchromatic image; an absolute-value-maximum rule is applied to partially replace the sparse coefficients of the panchromatic image, generating new sparse coefficients; the group dictionary of the panchromatic image and the new sparse coefficients are used to reconstruct a high-spatial-resolution luminance image; and finally a general component-substitution model is used for fusion to acquire a high-resolution multispectral image. The method introduces structure-group sparse representation into remote sensing image fusion, overcoming the limitation that typical sparse-representation fusion methods consider only single image blocks; compared with the typical sparse representation method, it offers excellent spectral preservation and spatial-resolution improvement, and greatly shortens the dictionary training time in the remote sensing image fusion process.
Owner:SOUTH CHINA AGRI UNIV
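
A minimal sketch of the absolute-value-maximum coefficient replacement rule mentioned in the abstract; the structure-group dictionary learning and the component-substitution fusion stage are not shown, and the array shapes are illustrative.

```python
import numpy as np

def abs_max_replace(coef_pan, coef_lum):
    """Element-wise selection of the sparse coefficient with the larger absolute value."""
    return np.where(np.abs(coef_pan) >= np.abs(coef_lum), coef_pan, coef_lum)

rng = np.random.default_rng(0)
coef_pan = rng.standard_normal((128, 16))   # group sparse codes of the PAN image
coef_lum = rng.standard_normal((128, 16))   # group sparse codes of the luminance component
print(abs_max_replace(coef_pan, coef_lum).shape)   # (128, 16)
```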

Correlation weighted remote-sensing image fusion method and fusion effect evaluation method thereof

The invention discloses a correlation-weighted remote-sensing image fusion method and a fusion effect evaluation method for it, and relates to the technical field of remote-sensing image processing. The correlation-weighted remote-sensing image fusion method comprises: a first step of preprocessing the original images to be fused; a second step of calculating the correlation between each band of the processed multispectral image and the panchromatic image; a third step of adjusting the weight of the multispectral image to obtain its optimal weight coefficient and building a correlation-weighted fusion model from that coefficient; and a fourth step of fusing the multispectral image and the panchromatic image according to the weighting algorithm. The fusion effect evaluation method comprises: a first step of acquiring a fused image with the correlation-weighted remote-sensing image fusion method; and a second step of evaluating the fused image by a quantitative mathematical-statistics method that assesses the images before and after fusion with selected fusion effect evaluation indices, the indices being variance, information entropy and degree of distortion.
Owner:NORTHEAST INST OF GEOGRAPHY & AGRIECOLOGY C A S
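
A rough sketch of the correlation-weighting idea: each band is blended with the panchromatic image using a weight derived from the band-to-PAN correlation coefficient. The mapping from correlation to weight is an assumption, and the iterative search for the optimal weight coefficient is omitted.

```python
import numpy as np

def correlation_weighted_fusion(ms, pan):
    """ms: (B, H, W) multispectral bands, pan: (H, W) panchromatic image."""
    fused = np.empty_like(ms)
    for b, band in enumerate(ms):
        r = np.corrcoef(band.ravel(), pan.ravel())[0, 1]   # band / PAN correlation
        w = (1.0 + r) / 2.0                                 # assumed map from [-1, 1] to [0, 1]
        fused[b] = w * band + (1.0 - w) * pan               # weighted blend per band
    return fused

rng = np.random.default_rng(0)
ms = rng.random((4, 64, 64))
pan = ms.mean(axis=0)
print(correlation_weighted_fusion(ms, pan).shape)   # (4, 64, 64)
```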

Method for gully erosion extraction based on landform and remote sensing image fusion technology

The invention provides a method for gully erosion extraction based on a landform and remote sensing image fusion technique. The method is based on conversion between the RGB (red, green, blue) and HIS (hue, intensity, saturation) color spaces of the remote sensing image: the linearly standardized surface roughness ω̄′ is taken as the weight of a northwest-illuminated shaded relief model (SRM), 1 − ω̄′ is taken as the weight of the color intensity component, and a new color intensity component I′ is obtained as their weighted sum; the conversion from HIS back to RGB color space is then performed with the new intensity image I′, realizing the fusion of landform information with the remote sensing image, so that gullies appear as depressions and ridges as convex features in the remote sensing image; and gully shoulder lines are interpreted on the fused remote sensing image in combination with gully shoulder line gradient threshold data calculated from a DEM (digital elevation model). Compared with traditional remote-sensing-based gully interpretation methods, the method provides two-dimensional remote sensing images carrying landform information that accord with human visual habits, the remote sensing image characteristics of gully bottoms are clear, and the interpretation accuracy of gully shoulder lines is high.
Owner:LUDONG UNIVERSITY
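
A minimal sketch of the intensity-replacement step, using scikit-image's HSV transform as a stand-in for the HIS transform in the abstract; the shaded relief and roughness arrays are placeholders, and the DEM-based gradient thresholding of gully shoulder lines is not shown.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def terrain_intensity_fusion(rgb, shaded_relief, roughness_weight):
    """rgb: (H, W, 3) in [0, 1]; shaded_relief, roughness_weight: (H, W) in [0, 1]."""
    hsv = rgb2hsv(rgb)
    intensity = hsv[..., 2]
    # new intensity I' = w * SRM + (1 - w) * I, as in the abstract
    hsv[..., 2] = roughness_weight * shaded_relief + (1.0 - roughness_weight) * intensity
    return hsv2rgb(hsv)

rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))
srm = rng.random((64, 64))   # placeholder hillshade computed from a DEM
w = rng.random((64, 64))     # placeholder normalized surface roughness
print(terrain_intensity_fusion(rgb, srm, w).shape)   # (64, 64, 3)
```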

Remote sensing image fusion method based on multi-scale morphological component analysis

The invention discloses a remote sensing image fusion method based on multi-scale morphological component analysis, belonging to the intersecting field of signal processing and remote sensing image processing. In the method, morphological component analysis is carried out at different scales on a high-resolution remote sensing image and a multispectral remote sensing image respectively; sparse decomposition is performed with an iterative shrinkage method; the target images to be fused are decomposed into texture components and cartoon components over several scales; the cartoon and noise components of the high-resolution image and the texture and noise components of the multispectral image are discarded; and the texture components of the high-resolution image at the effective scales and the cartoon components of the multispectral image are retained and subjected to sparse reconstruction to obtain the fused image. With this method, the high-resolution remote sensing image and the multispectral remote sensing image are fused effectively; compared with existing fusion methods, the spatial resolution is improved and the spectral distortion is reduced; in addition, the speed is greatly increased compared with existing sparse reconstruction methods.
Owner:YANTAI UNIV
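
A simplified sketch of the cartoon/texture recombination described in the abstract, with a Gaussian low-pass standing in for the multi-scale morphological component analysis and its iterative-shrinkage sparse decomposition; the smoothing width and the function name are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cartoon_texture_fuse(ms_band, hr_image, sigma=3.0):
    cartoon_ms = gaussian_filter(ms_band, sigma)                # smooth (cartoon) part of MS band
    texture_hr = hr_image - gaussian_filter(hr_image, sigma)    # detail (texture) part of HR image
    return cartoon_ms + texture_hr                              # keep MS cartoon + HR texture

rng = np.random.default_rng(0)
ms_band, hr = rng.random((64, 64)), rng.random((64, 64))
print(cartoon_texture_fuse(ms_band, hr).shape)   # (64, 64)
```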

Method for evaluating remote-sensing image fusion effect

The invention relates to remote-sensing image fusion effect evaluation, in particular to a method for evaluating the fusion effect of remote-sensing images. The method solves the problems that the subjective visual evaluation used by existing methods is strongly affected by human factors and prone to error, that there is no uniform standard for index selection in objective mathematical-statistical analysis, and that image quality is hard to evaluate comprehensively. The method includes the following steps: first, the original images to be fused are preprocessed; second, fusion processing is applied to the preprocessed multispectral image and the panchromatic image; third, object-oriented segmentation is performed on the images before and after fusion; fourth, a classification rule is used to classify the segmented remote-sensing images; and fifth, the classification results are assessed in terms of producer's accuracy, user's accuracy and the Kappa coefficient, and the classification accuracies of the images before and after fusion are compared so that the fusion effect can be evaluated. The method can be applied to the technical field of remote-sensing image processing.
Owner:NORTHEAST INST OF GEOGRAPHY & AGRIECOLOGY C A S
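
A minimal sketch of the accuracy-assessment step only: producer's accuracy, user's accuracy and the Kappa coefficient computed from a confusion matrix, run once per (pre-fusion and post-fusion) classification result; the object-oriented segmentation and classification stages are not shown, and the toy labels are placeholders.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def accuracy_report(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)        # rows: reference, columns: predicted
    producers = np.diag(cm) / cm.sum(axis=1)     # correct / reference totals per class
    users = np.diag(cm) / cm.sum(axis=0)         # correct / predicted totals per class
    kappa = cohen_kappa_score(y_true, y_pred)
    return producers, users, kappa

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, 500)                                     # reference classes
y_pred = np.where(rng.random(500) < 0.8, y_true, rng.integers(0, 3, 500))
print(accuracy_report(y_true, y_pred))
```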

A remote sensing image fusion method based on difference image sparse representation

The invention discloses a remote sensing image fusion method based on difference-image sparse representation, which mainly addresses the problem of spectral distortion in conventional remote sensing image fusion methods. The implementation steps are: inputting an image set and extracting blocks from it to obtain an image block data set; constructing a high-/low-resolution image training set from the image block data set; training on the high-/low-resolution image training set with a semi-symmetric dictionary training method to obtain a training dictionary; inputting the low-resolution multispectral image to be fused and the high-resolution panchromatic image and computing the low-resolution difference image to be fused; and applying semi-symmetric-dictionary image super-resolution to the low-resolution difference image to obtain a high-resolution difference image, which is inverse-transformed to obtain the high-resolution multispectral image. Compared with classic remote sensing image fusion methods, the method reduces spectral distortion because of its fusion model based on difference images, and it can be used in object identification.
Owner:XIDIAN UNIV
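
A sketch of the coupled-dictionary super-resolution step applied to a difference-image patch, with random matrices standing in for the trained semi-symmetric dictionary pair; the patch and dictionary sizes and the OMP sparsity level are assumptions.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
n_atoms, lr_dim, hr_dim = 128, 25, 100
D_lr = rng.standard_normal((n_atoms, lr_dim))       # stand-in low-resolution dictionary
D_lr /= np.linalg.norm(D_lr, axis=1, keepdims=True)
D_hr = rng.standard_normal((n_atoms, hr_dim))       # coupled high-resolution dictionary

coder = SparseCoder(dictionary=D_lr, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)
lr_patch = rng.standard_normal((1, lr_dim))         # patch of the LR difference image
code = coder.transform(lr_patch)                    # sparse code over D_lr
hr_patch = code @ D_hr                              # synthesized HR difference-image patch
print(hr_patch.shape)                               # (1, 100)
```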

Multispectral remote sensing image fusion method and device based on residual learning

The invention discloses a multispectral remote sensing image fusion method and device based on residual learning. The method comprises the following steps: (1) acquiring a number of original multispectral remote sensing images I_MS and the corresponding original panchromatic-band remote sensing images I_PAN; (2) computing the interpolated image I_MSI of I_MS, and the gradient image G_PAN and difference image D_PAN of I_PAN; (3) constructing a convolutional neural network fusion model comprising a feature extraction layer, a nonlinear mapping layer, a residual-image reconstruction layer and an output layer connected in sequence, taking I = [I_MSI, I_PAN, G_PAN, D_PAN] as the training input, with the training loss being a mean-squared-error function that introduces residual learning; and (4) processing the multispectral remote sensing image I'_MS to be fused and the corresponding original panchromatic-band remote sensing image I'_PAN to obtain the corresponding data [I'_MSI, I'_PAN, G'_PAN, D'_PAN], inputting the data into the trained convolutional neural network fusion model, and outputting the fused image. The method is fast in fusion, and the spectral and spatial quality of the fused image is higher.
Owner:HOHAI UNIV
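
A rough PyTorch sketch of the residual-learning fusion network described in steps (3)-(4): the stacked inputs [I_MSI, I_PAN, G_PAN, D_PAN] pass through feature-extraction, nonlinear-mapping and residual-reconstruction layers, and the predicted residual is added to the interpolated multispectral image; the layer sizes are assumptions, and the MSE loss is only indicated.

```python
import torch
import torch.nn as nn

class ResidualFusionNet(nn.Module):
    def __init__(self, ms_bands=4, feat=32):
        super().__init__()
        in_ch = ms_bands + 3                                       # I_MSI + I_PAN + G_PAN + D_PAN
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),       # feature extraction layer
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),        # nonlinear mapping layer
            nn.Conv2d(feat, ms_bands, 3, padding=1),               # residual-image reconstruction
        )

    def forward(self, imsi, ipan, gpan, dpan):
        x = torch.cat([imsi, ipan, gpan, dpan], dim=1)
        return imsi + self.body(x)                                 # residual added at the output

if __name__ == "__main__":
    net = ResidualFusionNet()
    imsi = torch.rand(1, 4, 64, 64)
    ipan, gpan, dpan = (torch.rand(1, 1, 64, 64) for _ in range(3))
    out = net(imsi, ipan, gpan, dpan)
    loss = nn.functional.mse_loss(out, torch.rand_like(out))       # MSE training loss
    print(out.shape, loss.item())
```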